A decision-support framework for data anonymization with application to machine learning processes
Caruccio, L.; Desiato, D.; Polese, G.; Tortora, G.
2022-01-01
Abstract
The application of machine learning techniques to large and distributed data archives might result in the disclosure of sensitive information about the data subjects. Data often contain sensitive identifiable information, and even if these are protected, the excessive processing capabilities of current machine learning techniques might facilitate the identification of individuals, raising privacy concerns. To this end, we propose a decision-support framework for data anonymization, which relies on a novel approach that exploits data correlations, expressed in terms of relaxed functional dependencies (RFDs), to identify data anonymization strategies providing suitable trade-offs between privacy and data utility. Moreover, we investigate how to generate anonymization strategies that leverage multiple data correlations simultaneously to increase the utility of anonymized datasets. In addition, our framework provides support in the selection of the anonymization strategy to apply by enabling an understanding of the trade-offs between privacy and data utility offered by the obtained strategies. Experiments on real-life datasets show that our approach achieves promising results in terms of data utility while guaranteeing the desired privacy level, and it allows data owners to select anonymization strategies balancing their privacy and data utility requirements. (c) 2022 Elsevier Inc. All rights reserved.
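The abstract centers on the trade-off between privacy (e.g., the anonymity level reached after generalizing quasi-identifying attributes) and data utility (how much information the anonymized data still carries). The Python sketch below only illustrates that trade-off on a toy table; the attribute names, records, generalization rule, and utility proxy are invented for illustration and do not reproduce the paper's RFD-based strategy-generation algorithm.

```python
# Hypothetical sketch of the privacy/utility trade-off: generalize
# quasi-identifiers in a toy table and report the resulting k-anonymity
# level together with a crude utility proxy. All data are illustrative.
from collections import Counter

# Toy records: (zip_code, age, disease); quasi-identifiers: zip_code, age.
records = [
    ("84084", 34, "flu"),
    ("84085", 35, "cold"),
    ("84084", 36, "flu"),
    ("84131", 51, "asthma"),
    ("84132", 52, "flu"),
    ("84131", 53, "cold"),
]

def generalize(record, zip_digits=3, age_bucket=10):
    """Generalize quasi-identifiers: truncate the ZIP code and bucket the age."""
    zip_code, age, _ = record
    low = (age // age_bucket) * age_bucket
    return (zip_code[:zip_digits] + "*" * (5 - zip_digits),
            f"[{low}-{low + age_bucket - 1}]")

def k_anonymity(generalized):
    """Minimum equivalence-class size over the generalized quasi-identifiers."""
    return min(Counter(generalized).values())

def utility(original_qi, generalized):
    """Crude utility proxy: distinct generalized tuples / distinct original tuples."""
    return len(set(generalized)) / len(set(original_qi))

original_qi = [(z, a) for z, a, _ in records]
for digits in (5, 3, 1):
    gen = [generalize(r, zip_digits=digits) for r in records]
    print(f"zip_digits={digits}: k={k_anonymity(gen)}, utility={utility(original_qi, gen):.2f}")
```

Coarser generalizations raise the minimum equivalence-class size (the k of k-anonymity) while reducing the number of distinct tuples, which is the kind of privacy/utility trade-off the proposed framework is meant to expose to data owners when selecting an anonymization strategy.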