Balancing technological advancements with individual rights: ethical issues in managing digital data and algorithmic systems
Confetto, Maria Giovanna
2024-01-01
Abstract
This study explores the ethical challenges posed by digital technology applications (DTAs), with a particular focus on artificial intelligence (AI) in algorithmic systems, within a structured framework grounded in the nine principles established by the Association for Computing Machinery (ACM) in 2022. As algorithmic technologies advance at a rapid pace, there is a pressing need to ensure that these systems are not only technically proficient but also ethically sound—secure, transparent, and responsible throughout their entire lifecycle. The ethical concerns surrounding AI and algorithms are becoming more urgent as they are increasingly embedded in decision-making processes that affect individuals, communities, and society at large. This paper presents the early stages of an ongoing research project, comprising a systematic literature review (SLR) and the development of a theoretical framework. The primary objective of the SLR is to critically analyse existing literature by mapping each ACM principle onto specific issues identified in the literature, ultimately forming a matrix that links key ethical concerns—such as transparency, fairness, and accountability—with actionable guidelines for the design and maintenance of algorithmic systems.

The systematic literature review employs a rigorous search strategy across major academic databases, including IEEE Xplore, Web of Science, and Scopus. These databases were selected for their broad coverage of multidisciplinary research, ensuring access to high-quality peer-reviewed studies across the fields of computing, ethics, social sciences, and data science. To collect studies, we employed a set of carefully selected keywords based on each of the ACM ethical principles. The ACM Code of Ethics is widely regarded as a comprehensive and rigorous ethical framework that has been adopted across technological domains, offering a structured approach to addressing critical issues such as transparency, privacy, fairness, and accountability in digital systems. The search was filtered to include only peer-reviewed articles published within the last ten years, ensuring that the review reflects the most relevant and current debates in this field. Over the course of this review, the team collected over 300 peer-reviewed articles that, to ensure rigor, were subjected to a three-stage screening procedure: articles were initially filtered based on the relevance of their titles to the ACM principles; in a second stage, abstracts were reviewed to determine whether the articles specifically addressed the ethical dimensions of data management and algorithmic systems; the final stage involved a full-text review of the selected articles to ensure that they directly engaged with the ethical issues identified in the ACM Code of Ethics and provided meaningful insights into the challenges posed by algorithmic technologies. Through this screening process, a total of 80 articles were selected for in-depth analysis. These articles were thoroughly examined to extract key information, including the specific ethical principle(s) addressed, the context in which the ethical challenges arose (e.g., algorithmic processes, data management strategies), and the practical or theoretical contributions each study made to the field.

The initial analysis identifies key themes in the literature. Transparency is a major concern, with a focus on making AI and other digital technologies understandable and interpretable to users. Fairness is also highlighted, with studies addressing biases in algorithmic decision-making, particularly regarding minority representation and socio-economic disparities, and stressing the need for more inclusive data in AI training. Accountability is another critical issue, driven by the "black box" problem in AI, which makes algorithmic decision-making opaque and difficult to trace. Privacy concerns are prevalent, especially in relation to big data, with calls for stronger regulations and data protection frameworks to safeguard individual rights.

Building on these preliminary findings, the next phase of this research will involve the development of a theoretical framework that maps each of the ACM principles onto the ethical challenges identified in the literature. This framework will be constructed iteratively, with the goal of creating a matrix that links each ethical principle to actionable guidelines for the design, implementation, and governance of digital technologies. These guidelines will be designed to assist organizations in developing responsible, transparent, and ethical digital systems. In addition to the theoretical framework, the research will proceed in two further stages. First, in-depth case studies of companies deploying AI and other digital technologies will be conducted. These case studies will examine the ethical challenges faced by these companies, their implementation practices, and the policies they have adopted to mitigate biases and uphold privacy. Through interviews and document analysis, this phase will explore how companies operationalize ethical principles in real-world contexts. Second, the research will culminate in the development and validation of a practical model for ethical digital technology governance. Ultimately, this research aspires to establish a foundational ethical framework that bridges theoretical and practical aspects of responsible digital innovation, providing policymakers with actionable strategies and guidelines for the creation of ethical, transparent, and sustainable algorithmic systems.