
Arriscado, mas útil: O novo desafio do Direito Penal na Era dos Sistemas de IA Generativa

Andrea R. Castaldo; Fabio Coppola
2024

Abstract

While the development of generative AI systems promises significant societal benefits, it also introduces new challenges in the legal realm, especially in Criminal Law. On one hand, emerging tools such as self-driving cars and autonomous AI agents are poised to assist humans in activities ranging from commuting and everyday work to surgery, artistic creation, and text proofreading. On the other hand, generative AI systems create a new risk zone within society that demands legal consideration. These systems are trained on extensive datasets and can learn from other AI systems or from themselves. As they evolve, it becomes difficult to understand how they make decisions, since the data they rely on is often inscrutable, akin to a 'black box.' In practical terms, an advanced chatbot might defame someone by spreading fake news, even without human input. Similarly, if a producer fails to program a self-driving car to navigate ethical dilemmas in emergencies, the AI might make autonomous choices with irreversible consequences for human lives. Finally, while AI could enhance surgical procedures, reducing errors and injuries, it cannot eliminate risks entirely. These scenarios present Criminal Law with a new set of risks arising from the development of AI systems. Determining the acceptable threshold of risk, even when well-designed AI systems make mistakes, becomes crucial to ensure that progress is not hindered by the looming threat of criminal consequences. Further questions arise about responsibility for damages or risks that fall outside the "accepted risk zone" of AI activities: should the producer or the user bear the risk of criminal offenses committed by an AI? Can an AI system itself be held liable for such offenses?
In this presentation, I propose a potential solution to these legal dilemmas, one that aims to enable future progress while ensuring protection against possible crimes committed by generative AI systems.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4893499
