GOVERNANCE AND INSTITUTIONS
Lanzara Olindo
2025
Abstract
The governance system of artificial intelligence. Artificial intelligence has become part of the horizon of our daily lives and will increasingly become a constitutive element of it. Its significance goes beyond the enormous, readily apparent advantages it offers, reaching values closely tied to the freedoms and fundamental rights of the person that may be compromised: the safety of individuals, their health, privacy and the protection of personal data, integrity, human dignity, self-determination and non-discrimination. There is no doubt that the AI Act represents, as the GDPR did eight years ago, Europe's most advanced attempt to outline an anthropocentric strategy for the governance of a technology which today increasingly sheds its instrumental character to become an end in itself: a technology that does not merely propose solutions, but raises new problems, unsettles axiological coordinates, and redraws the geography of power and its system of checks and balances. The objective is to promote the spread of anthropocentric AI that is trustworthy and consistent with European values, protecting people while, at the same time, supporting innovation and improving the functioning of the internal market. In outlining the governance system for artificial intelligence, the new Regulation establishes a specific reserve of competence in favor of the data protection authorities, in particular in sectors (immigration, law enforcement, justice, democratic processes) in which algorithmic power risks amplifying the structural asymmetry of the relationships in which it operates, or the vulnerabilities that, owing to their subjective condition or circumstances, affect the data subjects. Data protection law governs (and will continue to govern, even after the AI Act) the fulcrum of artificial intelligence, having long since introduced some essential guarantees: from the principle of knowability to the prohibition of algorithmic discrimination; from a general principle of transparency, which imposes precise information obligations towards the user, to a standard of quality and accuracy of the data to be used, which is particularly relevant for avoiding the biases that arise when an algorithm is trained on inaccurate or insufficiently representative information. Precisely these principles, indispensable prerequisites for preventing the harmful implications of artificial intelligence for individuals and the community, are the cornerstones around which the AI Act is built. The Regulation thus embodies an important choice, not only regulatory but above all political and axiological, expressing the need to redraw the perimeter of what is technically possible on the basis of what is considered legally and ethically acceptable. Against this background, the different approach taken by other legal systems remains evident: although the United States has established "minimum standards", abandoning, at least in part, self-regulation by operators in the sector, a predominantly domestic perspective still emerges, focused mainly on national security, cybersecurity and consumer protection in the various fields in which AI is to be deployed, largely neglecting the more universal dimension linked to the protection of human rights.
China's general plan, for its part, configures a regulatory approach that is formally "agile", making extensive use of the dynamic rules of the market, but in substance "strong", amounting to a rigid model of centralized regulation designed to protect security and general national interests.