
Cognitive Filter Bubble: Investigating Bias and Neutrality Vulnerabilities of LLMs in Sensitive Contexts

Maria Di Gisi, Giuseppe Fenza, Mariacristina Gallo, Vincenzo Loia, Claudio Stanzione
2025

Abstract

Although Large Language Models (LLMs) are frequently used to generate text, their ability to maintain neutrality when addressing sensitive topics remains a concern. By supplying inputs with predetermined positions and examining the replies produced, this study explores the positions taken by three LLMs (Mixtral-8x7B, Gemma-2-9B, and LLaMA-3.1-8B) on particular issues, including abortion, the death penalty, marijuana legalization, nuclear energy, and feminism. The stance of each response was measured, revealing that the models exhibit polarization toward specific positions on these topics. The results point to a serious vulnerability in the models' ability to remain neutral, since their answers frequently reflect a prevailing viewpoint in sensitive contexts. This behavior highlights bias and raises questions about how it can affect users, who might become trapped in a cognitive filter bubble shaped by the model's polarized responses. This work sheds light on the challenges posed by LLM bias, emphasizing the need for strategies to ensure neutrality and mitigate the risks of reinforcing distorted perspectives during user interactions.
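The abstract does not specify how the stance of each response was measured. One common technique for this kind of analysis is zero-shot stance classification with an NLI model; the sketch below is a hypothetical illustration of that technique, not the authors' actual pipeline, and the classifier model, candidate labels, and sample response are all assumptions.

```python
# Hypothetical sketch: scoring the stance of an LLM response on a sensitive
# topic via zero-shot NLI classification. This is NOT the paper's method;
# the classifier, labels, and sample text are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Example response an LLM might produce when prompted on one of the topics.
response = ("Legalizing marijuana would reduce incarceration rates and "
            "allow regulation of product safety.")

# Candidate stance labels for the topic under test.
labels = ["supports marijuana legalization",
          "opposes marijuana legalization",
          "neutral on marijuana legalization"]

result = classifier(response, candidate_labels=labels)

# The highest-scoring label is the predicted stance; aggregating these
# predictions over many prompts would expose polarization toward one side.
print(result["labels"][0], round(result["scores"][0], 3))
```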

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4920489