
A game theoretical approach to safe decision making system development for autonomous machines / Gianpiero Negri, 20 Mar 2021. Academic Year 2019-2020. [10.14273/unisa-4535].

A game theoretical approach to safe decision making system development for autonomous machines

Negri, Gianpiero
2021

Abstract

One of the major technological and scientific challenges in developing autonomous machines and robots is ensuring their ethical and safe behaviour towards human beings. When a machine operates autonomously, no human operator is present, so the full complexity of risk management falls on the machine's artificial intelligence and decision-making systems, which must be conceived and designed to guarantee safe and ethical behaviour. This work proposes a possible approach to the development of decision-making systems for autonomous machines, based on the definition of general ethical criteria and principles. These principles concern the need to avoid or minimize harm to human beings while the machine executes the task it has been designed for. Within this scope, four fundamental problems can be introduced:

1. First Problem: Machine Ethics Principles or Laws Identification
2. Second Problem: Incorporating Ethics in the Machine
3. Third Problem: Human-Machine Interaction Degree Definition
4. Fourth Problem: Machine Misdirection Avoidance

This Ph.D. research activity has focused mainly on the First and Second Problems, with specific reference to safety aspects. Regarding the First Problem, the main scope of this work is ensuring that an autonomous machine acts safely, that is:

• No harm is caused to surrounding human beings (non-maleficence ethical principle)
• If a human being approaches a potential source of harm, the machine must act so as to minimize that harm with the best possible and available action (non-inaction ethical principle)

and, when possible and not in conflict with the above principles:

• The machine must act so as to preserve its own integrity (self-preservation)

Concerning the Second Problem, a simplified version of the ethical principles above has been used to build a mathematical model of a safe decision system based on a game theoretical approach.
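The game theoretical decision model itself is developed in the thesis; as a minimal illustrative sketch (not the thesis model), one can frame the choice as a game in which the machine treats the environment as an adversary and selects the action whose worst-case harm to the human is lowest, i.e. a minimax (security-strategy) choice. The action names and harm values below are illustrative assumptions.

```python
# harm[action][environment_move] = harm to the human (lower is better).
# Values and action names are illustrative, not taken from the thesis.
harm = {
    "continue_task":  {"human_stays_clear": 0.0, "human_approaches": 9.0},
    "slow_down":      {"human_stays_clear": 0.5, "human_approaches": 3.0},
    "emergency_stop": {"human_stays_clear": 1.0, "human_approaches": 1.0},
}

def safest_action(harm_matrix):
    """Minimax choice: the action minimizing worst-case harm over environment moves."""
    return min(harm_matrix, key=lambda a: max(harm_matrix[a].values()))

print(safest_action(harm))  # emergency_stop: worst case 1.0, vs 9.0 and 3.0
```

Note how the minimax criterion encodes non-maleficence directly: the machine never gambles on the human staying clear if a safer guaranteed outcome exists.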
When dealing with safety alone rather than with general ethics, some well-defined criteria can be adopted to ensure that the machine's behaviour causes no harm to human beings, such as:

• Always ensure the machine keeps a proper safety distance at a given operating velocity
• Always ensure that, within a certain range, the machine can detect the distance between a human being and the location of a potential harm

[edited by Author]
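As a minimal sketch of the first criterion, the velocity-dependent safety margin can be modelled with a standard kinematic stopping distance (reaction travel plus braking distance); the formula and parameter values here are illustrative assumptions, not the thesis model.

```python
def safety_distance(v, t_react=0.5, a_brake=2.0):
    """Minimum separation [m] at speed v [m/s]:
    reaction travel (v * t_react) plus braking distance (v^2 / (2 * a_brake))."""
    return v * t_react + v**2 / (2.0 * a_brake)

def is_safe(separation, v):
    """True if the human-machine separation exceeds the velocity-dependent margin."""
    return separation > safety_distance(v)

print(safety_distance(2.0))  # 2.0 m at 2 m/s: 1.0 m reaction + 1.0 m braking
print(is_safe(3.0, 2.0))     # True
print(is_safe(1.5, 2.0))     # False
```

Because the margin grows quadratically with velocity, such a check also implies an upper bound on the safe operating speed for any given detection range.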
20 Mar 2021
Matematica, Fisica ed Applicazioni
Robotics
Game theory
Artificial intelligence
Tibullo, Vincenzo
Attanasio, Carmine

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4924532
Warning: the displayed data have not been validated by the university.
