I Trained That! Client-Side Proof of Participation in Federated Learning
Mazzocca, Carlo
2025
Abstract
In Federated Learning (FL), clients collaboratively train a Machine Learning (ML) model by sharing updates computed over their private data. These updates are aggregated by a central server to form a global model, which requires clients to trust that server. However, clients have no direct means to verify whether their updates were incorporated into the global model. This lack of transparency raises challenges that may discourage participation in the federation. For instance, a malicious server might exclude legitimate clients to deny them rewards or recognition. Even in benign scenarios, updates may be disregarded due to network failures or predefined aggregation conditions (e.g., a quorum). This highlights the need for mechanisms that let clients independently verify their inclusion. To this end, we propose Membership Proof in Federated Learning (MPFL), a novel approach that enables client-side verifiability of participation. In MPFL, model updates are aggregated via a smart contract, which also generates a unique proof of participation for each update using cryptographic accumulators. By leveraging blockchain and smart contracts, our approach enhances system trustworthiness, while cryptographic proofs provide an efficient and privacy-preserving method for clients to verify inclusion. We have implemented MPFL and extensively evaluated it across three diverse datasets and ML architectures, demonstrating its effectiveness and practical viability.
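
The abstract does not specify which accumulator construction MPFL uses or how the smart contract exposes it. As a minimal illustrative sketch only, the Python snippet below uses a Merkle-tree accumulator as a stand-in: the aggregator commits to the hashes of all client updates by publishing a single root (assumed here to be posted on-chain by the contract), and each client later verifies inclusion of its own update with a short witness. All function names and the placeholder payloads are assumptions for illustration, not the paper's actual design.

import hashlib

def h(data: bytes) -> bytes:
    """SHA-256, used as the accumulator's hash function (assumed)."""
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns the list of levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        cur, nxt = levels[-1], []
        for i in range(0, len(cur), 2):
            left = cur[i]
            right = cur[i + 1] if i + 1 < len(cur) else cur[i]  # duplicate last node if odd
            nxt.append(h(left + right))
        levels.append(nxt)
    return levels

def membership_proof(levels, index):
    """Collect sibling hashes from leaf to root: the client's witness."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        if sibling >= len(level):
            sibling = index  # odd node was paired with itself
        proof.append((level[sibling], sibling > index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Client-side check: recompute the root from the leaf and its witness."""
    node = leaf
    for sibling, sibling_is_right in proof:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

# Usage sketch: each client hashes its serialized model update; the aggregator
# commits to all of them and hands each client a witness for its own leaf.
updates = [f"client-{i}-update".encode() for i in range(5)]  # placeholder payloads
leaves = [h(u) for u in updates]
levels = build_tree(leaves)
root = levels[-1][0]                     # commitment published by the contract (assumed)
proof = membership_proof(levels, 2)      # witness returned to client 2
assert verify(leaves[2], proof, root)    # client confirms its update was included

A Merkle tree is only one possible accumulator; RSA or bilinear-pairing accumulators would offer constant-size witnesses at the cost of different trust assumptions, and the paper's evaluation would determine which trade-off MPFL actually adopts.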


