To trust or not to trust? An assessment of trust in AI-based systems: Concerns, ethics and contexts

Fiore U.;
2022

Abstract

Artificial intelligence (AI) characterizes a new generation of technologies capable of interacting with the environment and aiming to simulate human intelligence. The success of integrating AI into organizations critically depends on workers' trust in AI technology. Trust is a central component of the interaction between people and AI, as inappropriate levels of trust may cause misuse, abuse, or disuse of the technology. The European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and cultivate trustworthy AI. This article investigates the links between trust in AI, concerns related to AI use, and the ethics of such use. We use data collected in 2019 from more than 30,000 individuals across the EU28, covering living conditions, trust, and AI uses and concerns. We estimate an ordered logit model whose endogenous variable is an ordered measure of trust in AI, to highlight the factors associated with higher levels of trust in AI in Europe. The results show that many concerns related to AI use are linked to trust in AI, and that the ability to try out AI applications also affects initial trust. To enhance trust, practitioners can emphasize the technological features of AI systems; representing the AI as a humanoid or a loyal pet (e.g., a dog) facilitates initial trust formation. Moreover, the findings reveal an unequal degree of trust in AI across countries.
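The ordered logit model mentioned in the abstract maps a continuous linear predictor onto an ordered categorical outcome (here, a trust scale) via cutpoints on a latent logistic variable. The sketch below shows how category probabilities arise in such a model; the cutpoints and linear predictor are purely illustrative and are not taken from the paper's estimates.

```python
import math

def logistic_cdf(z):
    """Standard logistic CDF, the link function of an ordered logit."""
    return 1.0 / (1.0 + math.exp(-z))

def ordered_logit_probs(xb, cutpoints):
    """Category probabilities for an ordered logit model.

    P(Y = k) = F(c_k - xb) - F(c_{k-1} - xb),
    with c_0 = -inf and c_K = +inf, where xb is the linear predictor.
    """
    bounds = [float("-inf")] + list(cutpoints) + [float("inf")]
    cdf = [0.0 if b == float("-inf")
           else 1.0 if b == float("inf")
           else logistic_cdf(b - xb)
           for b in bounds]
    return [cdf[k + 1] - cdf[k] for k in range(len(bounds) - 1)]

# Hypothetical example: trust measured on a 4-point ordered scale,
# with an illustrative linear predictor and three cutpoints.
probs = ordered_logit_probs(xb=0.8, cutpoints=[-1.0, 0.5, 2.0])
```

In practice such a model is fitted by maximum likelihood (e.g., with `OrderedModel` in statsmodels, using `distr='logit'`); the snippet only illustrates how the cutpoints partition the latent scale into ordered response categories.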
Files attached to this item:
No files are associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11386/4794169

Note: the displayed data have not been validated by the university.

Citations
  • Scopus: 0