Announcement
Shaping Europe’s digital future: AI Act (secondary publication)
European Commission
Science Editing 2024;11(2):168-171.
DOI: https://doi.org/10.6087/kcse.341
Published online: August 20, 2024
Correspondence to: European Commission, Eric.Mamer@ec.europa.eu
The article was first published by the European Commission, available at https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai. This is a secondary publication aimed at propagating the spirit of this special declaration, with permission from the European Commission.
• Received: June 5, 2024   • Accepted: June 5, 2024

Copyright © 2024 Korean Council of Science Editors

This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The AI Act is the first-ever legal framework on artificial intelligence (AI), which addresses the risks of AI and positions Europe to play a leading role globally.
The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises.
The AI Act is part of a wider package of policy measures to support the development of trustworthy AI, which also includes the AI Innovation Package [1] and the Coordinated Plan on AI [2]. Together, these measures will guarantee the safety and fundamental rights of people and businesses when it comes to AI. They will also strengthen uptake, investment and innovation in AI across the European Union (EU).
The AI Act is the first-ever comprehensive legal framework on AI worldwide. The aim of the new rules is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.
The AI Act ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes.
For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
Although existing legislation provides some protection, it is insufficient to address the specific challenges AI systems may bring. Thus, the proposed new rules are the following:
• Address risks specifically created by AI applications.
• Prohibit AI practices that pose unacceptable risks.
• Determine a list of high-risk applications.
• Set clear requirements for AI systems for high-risk applications.
• Define specific obligations for deployers and providers of high-risk AI applications.
• Require a conformity assessment before a given AI system is put into service or placed on the market.
• Put enforcement in place after a given AI system is placed on the market.
• Establish a governance structure at European and national level.
The regulatory framework defines four levels of risk for AI systems (Fig. 1). All AI systems considered a clear threat to the safety, livelihoods and rights of people will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behavior.
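To make the four-tier structure concrete, here is a minimal Python sketch of the risk pyramid as this article describes it; the enum and function names are illustrative assumptions, not terms defined by the Act itself.

```python
from enum import Enum, auto

class RiskTier(Enum):
    """The four risk levels of the AI Act's regulatory framework."""
    UNACCEPTABLE = auto()  # banned outright, e.g., social scoring by governments
    HIGH = auto()          # permitted only under strict pre-market obligations
    LIMITED = auto()       # permitted with transparency obligations, e.g., chatbots
    MINIMAL = auto()       # free use, e.g., spam filters or AI-enabled video games

def may_be_placed_on_market(tier: RiskTier) -> bool:
    # Only unacceptable-risk systems are banned; every other tier is
    # permitted, subject to its tier-specific obligations.
    return tier is not RiskTier.UNACCEPTABLE
```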
AI systems identified as high-risk include AI technology used in the following:
• Critical infrastructures (e.g., transport), which could put the life and health of citizens at risk.
• Educational or vocational training, which may determine access to education and the professional course of someone’s life (e.g., scoring of exams).
• Safety components of products (e.g., AI application in robot-assisted surgery).
• Employment, management of workers, and access to self-employment (e.g., CV-sorting software for recruitment procedures).
• Essential private and public services (e.g., credit scoring denying citizens the opportunity to obtain a loan).
• Law enforcement that may interfere with people’s fundamental rights (e.g., evaluation of the reliability of evidence).
• Migration, asylum, and border control management (e.g., automated examination of visa applications).
• Administration of justice and democratic processes (e.g., AI solutions to search for court rulings).
High-risk AI systems will be subject to the following strict obligations before they can be put on the market (see the illustrative checklist after this list):
• Adequate risk assessment and mitigation systems.
• High quality of the datasets feeding the system to minimize risks and discriminatory outcomes.
• Logging of activity to ensure traceability of results.
• Detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance.
• Clear and adequate information to the deployer.
• Appropriate human oversight measures to minimize risk.
• High level of robustness, security, and accuracy.
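The sketch below renders those obligations as a simple all-or-nothing checklist, consistent with the pre-market logic described above; the field names are illustrative shorthand, not the Act’s terminology.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskObligations:
    """Illustrative pre-market checklist for a high-risk AI system."""
    risk_assessment_and_mitigation: bool = False
    high_quality_datasets: bool = False
    activity_logging: bool = False
    detailed_documentation: bool = False
    clear_information_to_deployer: bool = False
    human_oversight_measures: bool = False
    robustness_security_accuracy: bool = False

    def ready_for_market(self) -> bool:
        # Every obligation must be satisfied; a single gap blocks placement.
        return all(getattr(self, f.name) for f in fields(self))
```

For example, HighRiskObligations(activity_logging=True).ready_for_market() returns False until all seven fields are set to True.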
All remote biometric identification systems are considered high-risk and subject to strict requirements. The use of remote biometric identification in publicly accessible spaces for law enforcement purposes is, in principle, prohibited.
Narrow exceptions are strictly defined and regulated, such as when necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence.
Those uses are subject to authorization by a judicial or other independent body and to appropriate limits in time, geographic reach, and the databases searched.
Limited risk refers to the risks associated with a lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine so they can make an informed decision to continue or step back. Providers will also have to ensure that AI-generated content is identifiable. In addition, AI-generated text published with the purpose of informing the public on matters of public interest must be labelled as artificially generated. This also applies to audio and video content constituting deep fakes.
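As a toy illustration of these two transparency obligations, the following sketch prepends a machine-interaction disclosure to a chatbot reply and labels AI-generated text intended to inform the public. The wording and mechanism are assumptions: the Act prescribes the obligation, not the implementation.

```python
def disclose_ai_interaction(reply: str) -> str:
    # Limited-risk obligation: users must know they are talking to a machine.
    return "[You are interacting with an AI system.] " + reply

def label_generated_text(text: str, informs_public: bool) -> str:
    # AI-generated text informing the public on matters of public interest
    # must be labelled as artificially generated.
    if informs_public:
        return text + "\n\n(This text was artificially generated.)"
    return text  # must still be identifiable, e.g., via provenance metadata
```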
The AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
Once an AI system is on the market, authorities are in charge of market surveillance, deployers ensure human oversight and monitoring, and providers have a post-market monitoring system in place (Fig. 2). Providers and deployers will also report serious incidents and malfunctioning.
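A compact summary of that division of post-market responsibilities, using illustrative labels rather than the Act’s defined terms:

```python
# Post-market responsibilities by actor, as described above.
POST_MARKET_ROLES = {
    "market surveillance": ["authorities"],
    "human oversight and monitoring": ["deployers"],
    "post-market monitoring system": ["providers"],
    "reporting serious incidents and malfunctioning": ["providers", "deployers"],
}
```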
More and more, general-purpose AI models are becoming components of AI systems. These models can perform, and be adapted to, countless different tasks.
While general-purpose AI models can enable better and more powerful AI solutions, it is difficult to oversee all capabilities.
Therefore, the AI Act introduces transparency obligations for all general-purpose AI models, to enable a better understanding of these models, and additional risk management obligations for very capable and impactful models. These additional obligations include self-assessment and mitigation of systemic risks, reporting of serious incidents, conducting tests and model evaluations, and cybersecurity requirements.
As AI is a fast-evolving technology, the proposal has a future-proof approach, allowing rules to adapt to technological change. AI applications should remain trustworthy even after they have been placed on the market. This requires ongoing quality and risk management by providers.
The European AI Office, established in February 2024 within the European Commission, oversees the AI Act’s enforcement and implementation with the member states [3]. It aims to create an environment where AI technologies respect human dignity, rights, and trust. It also fosters collaboration, innovation, and research in AI among various stakeholders. Moreover, it engages in international dialogue and cooperation on AI issues, acknowledging the need for global alignment on AI governance. Through these efforts, the European AI Office strives to position Europe as a leader in the ethical and sustainable development of AI technologies.
In December 2023, the European Parliament and the Council of the EU reached a political agreement on the AI Act. The text is in the process of being formally adopted and translated. The AI Act will enter into force 20 days after its publication in the Official Journal and will be fully applicable 2 years later, with some exceptions: prohibitions will take effect after 6 months, the governance rules and the obligations for general-purpose AI models become applicable after 12 months, and the rules for AI systems embedded into regulated products will apply after 36 months. To facilitate the transition to the new regulatory framework, the European Commission has launched the AI Pact [4], a voluntary initiative that seeks to support the future implementation and invites AI developers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.
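Since the timeline is just date arithmetic, a short sketch can compute the key milestones from a hypothetical Official Journal publication date; the 30-day month approximation, the function name, and the example date are all assumptions for illustration.

```python
from datetime import date, timedelta

def ai_act_milestones(publication: date) -> dict:
    """Approximate the AI Act's applicability dates from a hypothetical
    Official Journal publication date, per the transition periods
    described above. Months are approximated as 30 days."""
    entry_into_force = publication + timedelta(days=20)

    def after(months: int) -> date:
        return entry_into_force + timedelta(days=30 * months)

    return {
        "entry_into_force": entry_into_force,
        "prohibitions_take_effect": after(6),
        "governance_and_gpai_rules_apply": after(12),
        "fully_applicable": after(24),
        "rules_for_embedded_regulated_products": after(36),
    }

print(ai_act_milestones(date(2024, 7, 1)))  # hypothetical publication date
```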

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Funding

The author received no financial support for this work.

Data Availability

Data sharing is not applicable to this article as no new data were created or analyzed.

The author did not provide any supplementary materials for this article.
Fig. 1.
The regulatory framework. AI, artificial intelligence. Reprinted with the permission of the European Commission.
kcse-341f1.jpg
Fig. 2.
How does it all work in practice for providers of high-risk artificial intelligence (AI) systems? EU, European Union; CE, European conformity. Reprinted with the permission of the European Commission.
kcse-341f2.jpg
