Unreliable Users Detection in Social Media: Deep Learning Techniques for Automatic Detection

D'Aniello, Giuseppe;
2020-01-01

Abstract

Since the harmful consequences of the online publication of fake news have clearly emerged, many research groups worldwide have started working on the design and development of systems able to detect fake news and the entities that knowingly share it. Accordingly, manifold automatic, manual, and hybrid solutions have been proposed by industry and academia. In this article, we present an in-depth investigation of the features that, from both an automatic and a human point of view, are most predictive for identifying the social network profiles responsible for spreading fake news online. To this end, we extracted the features of the monitored users from Twitter, including social and personal information as well as their interactions with content and with other users. Subsequently, we performed (i) an offline analysis based on deep learning techniques and (ii) an online analysis that involved real users in the classification of reliable/unreliable user profiles. The experimental results, validated from a statistical point of view, show which information best enables machines and humans to detect malicious users. We hope that our work will provide useful insights for building ever more effective tools to counter misinformation and those who spread it intentionally.
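The abstract does not specify the model used in the offline analysis. As a rough illustration only, the sketch below shows a minimal feed-forward binary classifier over a hypothetical set of Twitter profile and interaction features; the feature names, the choice of PyTorch, and all hyperparameters are assumptions for demonstration and are not taken from the paper.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical per-user feature vector (illustrative names, not from the paper):
# [followers_count, friends_count, statuses_count, account_age_days,
#  avg_retweets_received, avg_mentions_made, url_share_ratio, verified_flag]
NUM_FEATURES = 8

class UserReliabilityClassifier(nn.Module):
    """Small feed-forward network mapping profile/interaction features
    to a single logit for the account being unreliable."""
    def __init__(self, num_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 32),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(32, 16),
            nn.ReLU(),
            nn.Linear(16, 1),  # sigmoid is applied in the loss / at inference
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Synthetic stand-in data: in the study these would be features extracted
# from monitored Twitter accounts, labeled reliable (0) or unreliable (1).
X = torch.randn(256, NUM_FEATURES)
y = torch.randint(0, 2, (256, 1)).float()

model = UserReliabilityClassifier(NUM_FEATURES)
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(20):
    optimizer.zero_grad()
    loss = criterion(model(X), y)
    loss.backward()
    optimizer.step()

# At inference time, a sigmoid turns the logit into a probability of unreliability.
with torch.no_grad():
    probs = torch.sigmoid(model(X[:5]))
    print(probs.squeeze(1))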

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4754782

Citations
  • PMC: N/A
  • Scopus: 66
  • Web of Science (ISI): 30