Believe in Artificial Intelligence? A User Study on the ChatGPT’s Fake Information Impact

Amaro I.; Francese R.; Tucci C.
2023-01-01

Abstract

Technological evolution has enabled the development of new artificial intelligence (AI) models with generative capabilities. Among them, one of the most discussed is the virtual agent ChatGPT. This chatbot may occasionally produce fake information, as acknowledged by its producer, OpenAI. Such a model may provide very useful support in several tasks, ranging from text summarization to programming. The research community has only marginally investigated the impact that fake information created by AI models has on users' perceptions and on their belief in AI. We analyzed the impact of fake information produced by AI on user perceptions, specifically trust and satisfaction, by performing a user study on ChatGPT. A further question is whether discovering early or late in the interaction that the tool can generate fake information affects users' perceptions differently. We conducted an experiment involving 62 university students, a category of users who may employ tools such as ChatGPT extensively. The experiment consisted of a guided interaction with ChatGPT. Some of the participants experienced the failure of the chatbot, while a control group received only correct and reliable answers. We collected participants' perceptions of trust, satisfaction, and usability, together with the net promoter score (NPS). The results demonstrated a statistically significant difference in trust and satisfaction between the users who encountered fake information early and those who discovered ChatGPT's faulty behavior later in the interaction. There was no statistically significant difference between the users who received fake information late and the control group (no fake information). Usability and the NPS were also higher when the fake information was encountered late in the interaction. When users become aware of the fake information generated by ChatGPT, their trust and satisfaction decrease, especially when they encounter it at an early stage of using the chatbot. Nevertheless, perceived trust and satisfaction remain high: some users are still enthusiastic, while others adopt a more conscious use of the tool, treating its output as support that must be verified. A useful strategy could be to encourage critical use of ChatGPT, prompting young people to verify the information it provides. This could become a new way to carry out learning activities.
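For context, the net promoter score (NPS) cited above is conventionally derived from a 0–10 likelihood-to-recommend rating (the study's exact questionnaire is not reported in this record):

NPS = (% of promoters, ratings 9–10) − (% of detractors, ratings 0–6)

so the resulting score ranges from −100 to +100.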
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4841235
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: 8
  • Web of Science (ISI): 10