SoK: Gradient Inversion Attacks in Federated Learning

Vincenzo Carletti; Pasquale Foggia; Carlo Mazzocca; Giuseppe Parrella; Mario Vento
2025

Abstract

Federated Learning (FL) is a promising paradigm for collaboratively training Machine Learning (ML) models while preserving the privacy of data owners. By allowing participants to keep their data on-site, FL avoids sending clients' local data to a central server for model training. However, despite its evident privacy benefits, it is not immune to security and privacy threats. Among these, Gradient Inversion Attacks (GIAs) stand out as one of the most critical, as they exploit clients' model updates to reconstruct local training data, breaking participants' privacy. This work presents a comprehensive systematization of GIAs in FL. First, we identify the threat models defining the adversary's knowledge and capabilities to perform these attacks. Then, we propose a systematic taxonomy to categorize GIAs, providing practical insights into their methods and applicability. Additionally, we explore defensive mechanisms designed to mitigate these attacks. We also systematize the evaluation metrics used to measure the success of GIAs and to assess a model's vulnerability before an attack. Finally, based on a thorough analysis of the existing literature, we identify key challenges and outline promising future research directions.
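To make the attack idea concrete, the following is a minimal sketch of optimization-based gradient inversion in the spirit of DLG ("Deep Leakage from Gradients"), assuming a PyTorch setting. The toy model (SimpleNet), the synthetic sample, and all hyperparameters are illustrative assumptions for exposition, not the implementation surveyed in the paper: the attacker optimizes dummy data so that its gradients match the gradients shared by a client.

```python
# Illustrative DLG-style gradient inversion sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)

class SimpleNet(nn.Module):
    """Tiny classifier standing in for the shared FL model."""
    def __init__(self, in_dim=64, n_classes=10):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.fc(x)

model = SimpleNet()
criterion = nn.CrossEntropyLoss()

# --- Client side: one private sample produces the gradient that gets shared ---
x_true = torch.rand(1, 64)          # private input (e.g., a flattened 8x8 "image")
y_true = torch.tensor([3])          # private label
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())
true_grads = [g.detach() for g in true_grads]

# --- Attacker side: only the model and the shared gradients are observed -----
x_dummy = torch.rand(1, 64, requires_grad=True)   # dummy input to optimize
y_dummy = torch.rand(1, 10, requires_grad=True)   # soft dummy label to optimize
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

for step in range(50):
    def closure():
        optimizer.zero_grad()
        pred = model(x_dummy)
        # Soft-label cross-entropy so the label can be optimized jointly.
        dummy_loss = torch.sum(torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1))
        dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
        # Objective: make the dummy gradients match the observed (shared) gradients.
        grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    optimizer.step(closure)

print("reconstruction error:", torch.norm(x_dummy.detach() - x_true).item())
```

As the gradient-matching loss decreases, the dummy input converges toward the client's private sample, which is the privacy breach the surveyed defenses (e.g., perturbing or compressing updates) aim to prevent.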

Use this identifier to cite or link to this document: https://hdl.handle.net/11386/4915861