DSGNN: Dual-Shield Defense for Robust Graph Neural Networks

Siano P.
2025

Abstract

Graph Neural Networks (GNNs) have demonstrated outstanding capabilities in processing graph-structured data and are increasingly being integrated into large-scale pre-trained models, such as Large Language Models (LLMs), to enhance structural reasoning, knowledge retrieval, and memory management. This expanding application scope imposes higher robustness requirements on GNNs. However, as GNNs are applied to more dynamic and heterogeneous environments, they become increasingly vulnerable to real-world perturbations. In particular, graph data frequently encounters joint adversarial perturbations that simultaneously affect both the graph structure and node features, which are significantly more challenging than isolated attacks. These disruptions, caused by incomplete data, malicious attacks, or inherent noise, pose substantial threats to the stable and reliable performance of traditional GNN models. To address this issue, this study proposes the Dual-Shield Graph Neural Network (DSGNN), a defense model that simultaneously mitigates structural and feature perturbations. DSGNN utilizes two parallel GNN channels to independently process structural noise and feature noise, and introduces an adaptive fusion mechanism that integrates information from both pathways to generate robust node representations. Theoretical analysis demonstrates that DSGNN achieves a tighter robustness bound under joint perturbations than conventional single-channel methods. Experimental evaluations across the Cora, CiteSeer, and Industry datasets show that DSGNN achieves the highest average classification accuracy under various adversarial settings, reaching 81.24%, 71.94%, and 81.66%, respectively, outperforming GNNGuard, GCN-Jaccard, GCN-SVD, RGCN, and NoisyGNN. These results underscore the importance of multi-view perturbation decoupling in constructing resilient GNN models for real-world applications.
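The dual-channel design described in the abstract might be sketched roughly as follows. This is a minimal illustrative NumPy sketch, not the paper's actual formulation: the symmetric GCN-style normalization, the feature-clipping stand-in for feature denoising, and the per-node sigmoid gate used for "adaptive fusion" are all assumptions, since the abstract does not specify the channel architectures or the fusion rule.

```python
import numpy as np

def sym_norm(A):
    # GCN-style symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def dual_channel_forward(A, X, Ws, Wf, wg, clip=3.0):
    """One hypothetical dual-channel layer with gated fusion.

    A:  (n, n) adjacency matrix
    X:  (n, f) node features
    Ws, Wf: (f, h) channel weights; wg: (2h,) gate weights
    """
    A_norm = sym_norm(A)
    # Structure channel: propagate the raw features over the graph
    Hs = np.tanh(A_norm @ X @ Ws)
    # Feature channel: denoise features first (clipping is a toy stand-in)
    Hf = np.tanh(A_norm @ np.clip(X, -clip, clip) @ Wf)
    # Adaptive fusion: a per-node sigmoid gate weighs the two views
    g = 1.0 / (1.0 + np.exp(-np.concatenate([Hs, Hf], axis=1) @ wg))
    H = g[:, None] * Hs + (1.0 - g[:, None]) * Hf
    return H, g

# Tiny 4-node example with random weights
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
H, g = dual_channel_forward(A, X,
                            Ws=rng.normal(size=(3, 2)),
                            Wf=rng.normal(size=(3, 2)),
                            wg=rng.normal(size=(4,)))
```

Because the gate is a convex combination of two tanh-bounded channels, each output coordinate stays in (-1, 1); a corrupted view can be down-weighted per node rather than contaminating the whole representation, which is the intuition behind the dual-shield decoupling.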
Files for this record: no files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4927067
Warning: the displayed data have not been validated by the university.

Citations
  • PMC: n/a
  • Scopus: n/a
  • Web of Science: n/a