A new era for AI: Europe adopts a pioneering law
- celinegainet
- Apr 1, 2024
- 10 min read
On March 13, 2024, the European Parliament approved the Artificial Intelligence Act (the "AI Act"), aimed at establishing a safe, transparent and fair framework for the development and use of AI. The legislation passed by a large majority of 523 votes to 46, reflecting a strong consensus on the need to regulate this sector. It marks the end of the "Wild West" era for the AI industry in Europe and sets a precedent for other countries wishing to implement similar rules.
Public opinion and industry experts are divided over the legislation. On the one hand, some fear that the regulations will hinder technological progress and make it more difficult to achieve artificial general intelligence. On the other, some consider that the measures do not go far enough in imposing stricter protections against threats such as disinformation.
1. Several levels of risk
The legislation introduces a classification system for AI products based on their anticipated threat level. Applications are classified into four risk categories (unacceptable, high, limited and minimal), together with an additional identification of risks specific to general-purpose models.
Minimal risk:
The vast majority of AI systems currently used or likely to be used in the EU fall into this category. These AI systems can be developed and used subject to the existing legislation without additional legal obligations. Examples include spam filters and inventory management systems.
Providers of these systems may nevertheless choose to adhere to voluntary codes of conduct.
Limited / specific transparency risk:
This category covers AI systems that must meet specific transparency obligations, for example where there is a clearly identified risk of manipulation. A person interacting with a chatbot, for instance, must be informed that they are talking to a machine, so that they can decide whether to continue or ask to speak to a human instead.
High-risk:
This category includes a limited number of AI systems defined in the legislation, potentially creating an adverse impact on people's safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights). Annexed to the Act is the list of high-risk AI systems, which can be reviewed to align with the evolution of AI use cases.
This category encompasses applications related to transportation, education, employment and welfare, among others. Before placing a high-risk AI system on the market or putting it into service in the EU, companies must carry out a prior conformity assessment and meet a long list of requirements to guarantee the system's safety. In addition, the European Commission will create and maintain a publicly accessible database in which providers will be obliged to record information on their high-risk AI systems, ensuring transparency for all stakeholders.
Unacceptable risk:
This category includes a very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights, and which will therefore be banned:
Social scoring for public and private purposes,
Exploitation of vulnerabilities of persons,
Use of subliminal techniques,
Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions,
Biometric categorisation of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation. Filtering of datasets based on biometric data in the area of law enforcement will still be possible,
Individual predictive policing,
Emotion recognition in the workplace and education institutions, unless for medical or safety reasons (e.g. monitoring the tiredness levels of a pilot),
Untargeted scraping of the internet or CCTV footage for facial images to build up or expand databases.
Systemic Risks:
In addition, the AI Act addresses systemic risks that could arise from general-purpose AI models, including large generative AI models. These models can be used for a variety of tasks and are becoming the basis for many AI systems in the EU. Some could carry systemic risks if they are very capable or widely used: powerful models could cause serious accidents or be misused for far-reaching cyberattacks, and many individuals could be affected if a model propagates harmful biases across many applications.
Generative AI developers such as OpenAI and Google will be required to provide documentation and data explaining how their models work, and to comply with European copyright law when training their LLMs (large language models).
2. To whom does the AI Act apply?
The legal framework will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the Union market or its use affects people located in the EU. The law therefore has an extensive extraterritorial scope, applying to providers, deployers, importers and distributors, with the possibility of significant fines in the event of non-compliance.
It can concern:
providers (e.g. a developer of a CV-screening tool),
deployers of the AI systems concerned (e.g. a bank buying this screening tool), and
importers of AI systems: for a high-risk AI system, the importer must also ensure that the foreign provider has already carried out the appropriate conformity assessment procedure, that the system bears a European Conformity (CE) marking and that it is accompanied by the required documentation and instructions for use.
In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models.
Providers of free and open-source models are exempted from most of these obligations. This exemption does not cover obligations for providers of general purpose AI models with systemic risks.
Obligations also do not apply to research, development and prototyping activities preceding the release on the market.
The regulation also does not apply to AI systems used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
3. Focus on High-Risk AI
How to determine whether an AI system is high-risk?
Together with a clear definition of 'high-risk', the Act sets out a methodology that helps identify high-risk AI systems within the legal framework. The risk classification is based on the intended purpose of the AI system, in line with the existing EU product safety legislation. It means that the classification of the risk depends on the function performed by the AI system and on the specific purpose and modalities for which the system is used.
Annexed to the Act is a list of use cases which are considered to be high-risk:
Certain critical infrastructures, for instance in the fields of road traffic and the supply of water, gas, heating and electricity,
Education and vocational training, e.g. to evaluate learning outcomes, steer the learning process and monitor cheating,
Employment, workers management and access to self-employment, e.g. to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates,
Access to essential private and public services and benefits (e.g. healthcare), creditworthiness evaluation of natural persons, and risk assessment and pricing in relation to life and health insurance,
Certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes,
Evaluation and classification of emergency calls,
Biometric identification, categorisation and emotion recognition systems (outside the prohibited categories),
Recommender systems of very large online platforms are not included, as they are already covered in other legislation (DMA/DSA).
The Commission will ensure that this list is kept up to date and relevant. Systems on the high-risk list that perform narrow procedural tasks, merely improve the result of previous human activities, do not influence human decisions or carry out purely preparatory tasks are not considered high-risk. However, an AI system shall always be considered high-risk if it performs profiling of natural persons. A simplified sketch of this decision logic follows.
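To make this screening logic concrete, here is a minimal, purely illustrative Python sketch. The area names and boolean flags are simplified assumptions standing in for the Act's legal tests; this is a sketch of the reasoning, not a compliance tool.

```python
# Illustrative sketch of the Annex III screening logic described above.
# Field names and categories are simplified assumptions, not legal tests.
from dataclasses import dataclass

# Simplified stand-ins for the Annex III use-case areas listed above.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystem:
    annex_iii_area: str | None              # which Annex III area, if any
    narrow_procedural_task: bool = False    # performs a narrow procedural task
    improves_prior_human_work: bool = False # only improves earlier human work
    no_influence_on_decisions: bool = False # does not influence human decisions
    purely_preparatory: bool = False        # does purely preparatory tasks
    profiles_natural_persons: bool = False  # performs profiling of persons

def is_high_risk(system: AISystem) -> bool:
    """Rough approximation of the screening described in the text."""
    if system.annex_iii_area not in ANNEX_III_AREAS:
        return False  # not an Annex III use case at all
    if system.profiles_natural_persons:
        return True   # profiling of natural persons is always high-risk
    # Exemptions for systems that do not materially shape outcomes.
    exempt = (system.narrow_procedural_task
              or system.improves_prior_human_work
              or system.no_influence_on_decisions
              or system.purely_preparatory)
    return not exempt

# Example: a CV-screening tool (employment area) that profiles candidates.
print(is_high_risk(AISystem("employment", profiles_natural_persons=True)))
# -> True
```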
What are the obligations for providers of high-risk AI systems?
Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This allows them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). The assessment has to be repeated if the system or its purpose is substantially modified.
When AI systems are safety components of essential products regulated by specific EU regulations, and they require third-party conformity assessment under the relevant specific EU regulation, they will always be deemed high-risk. This applies, for example, to products such as toys, elevators and medical equipment.
A third-party conformity assessment is also always required for biometric systems.
Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure their compliance with the new requirements and minimize risks for users and affected persons, even after a product is placed on the market.
High-risk AI systems that are deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database, unless those systems are used for law enforcement and migration. The latter will have to be registered in a non-public part of the database that will be only accessible to relevant supervisory authorities.
Market surveillance authorities will support post-market monitoring through audits and by offering providers the possibility to report on serious incidents or breaches of fundamental rights obligations of which they have become aware. Any market surveillance authority may authorise placing on the market of specific high-risk AI for exceptional reasons.
In case of a breach, the requirements will allow national authorities to have access to the information needed to investigate whether the use of the AI system complied with the law.
4. Implications for companies
For companies operating in the AI field or considering entering the European market, this legislation highlights the importance of:
Assessing the risk level of their AI products and systems, and preparing for a possible regulatory assessment of those classified as high-risk,
Documenting and disclosing how their models work, especially for generative AI developers, to ensure transparency and compliance with regulatory requirements,
Adapting development strategies to include compliance with EU copyright protection standards when training their models.
This legislation represents a significant paradigm shift, ushering in an era in which safety, transparency and accountability become paramount in the development and deployment of AI in Europe. Companies will need to navigate this new regulatory landscape carefully to ensure compliance and to seize the opportunities of a more structured and secure environment.
To accelerate the transition to the new regulatory framework, the Commission has launched the AI Pact, a voluntary initiative that seeks to support the Act's future implementation and invites AI developers from Europe and beyond to comply with its key obligations ahead of time.
An AI Act compliance checker is also available online.
5. Fines
The Act provides for potentially significant fines, in each case capped at the higher of a fixed amount and a percentage of worldwide annual turnover (a short sketch of the arithmetic follows the list):
Placing a prohibited system on the market: violating the Article 5 prohibitions can attract a fine of up to the higher of 35 million euros or 7% of worldwide annual turnover,
Violating GPAI obligations or failing to comply with enforcement measures (such as requests for information): most breaches of the obligations imposed on providers, importers, distributors and deployers in relation to high-risk AI systems, GPAI and foundation models can attract a fine of up to the higher of 15 million euros or 3% of worldwide annual turnover,
Providing incorrect, incomplete or misleading information to regulators: a fine of up to the higher of 7.5 million euros or 1% of worldwide annual turnover.
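To make the cap arithmetic concrete, here is a minimal Python sketch. The tier labels are illustrative shorthand rather than the Act's wording, and the example turnover figure is hypothetical; this is not legal advice.

```python
# Sketch of the fine ceilings described above: each tier is capped at the
# higher of a fixed amount and a share of worldwide annual turnover.
# Tier names are illustrative shorthand, not the Act's terminology.
FINE_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # Article 5 violations
    "most_other_breaches":   (15_000_000, 0.03),  # GPAI / provider obligations
    "incorrect_information": (7_500_000,  0.01),  # misleading info to regulators
}

def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
    """Return the fine ceiling for a tier given worldwide annual turnover."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * worldwide_turnover_eur)

# Hypothetical example: a company with EUR 2 billion turnover placing a
# prohibited system on the market faces up to 7% of turnover.
print(f"EUR {max_fine('prohibited_practices', 2_000_000_000):,.0f}")
# -> EUR 140,000,000
```

For smaller companies the fixed amount dominates; for large ones the turnover percentage does, which is what ties the ceiling to company size.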
6. Entry into force
Application of the law will be phased and is likely to be complex.
The AI Act is expected to be published in the EU Official Journal in May. It will enter into force 20 days after publication and will become fully applicable two years later, with some exceptions, as detailed in the timeline below.
Compliance deadlines
By 6 months after entry into force:
Prohibitions on unacceptable risk AI. (Article 85)
By 9 months after entry into force:
Codes of practice for General Purpose AI (GPAI) must be finalized (Article 85)
By 12 months after entry into force:
GPAI rules apply. (Article 85)
Appointment of Member State competent authorities. (Article 59)
Annual Commission review and possible amendments on prohibitions. (Article 84)
By 18 months after entry into force:
Commission issues implementing acts creating a template for high-risk AI providers' post-market monitoring plan. (Article 6)
By 24 months after entry into force:
Obligations apply to high-risk AI systems specifically listed in Annex III, which covers AI systems in biometrics, critical infrastructure, education, employment, access to essential public services, law enforcement, immigration and the administration of justice. (Article 83)
Member states to have implemented rules on penalties, including administrative fines. (Article 53)
Member state authorities to have established at least one operational AI regulatory sandbox. (Article 53)
Commission review, and possible amendment of, the list of high-risk AI systems (Article 84)
By 36 months after entry into force:
Obligations on Annex II high-risk AI systems apply. (Article 85)
Obligations apply to high-risk AI systems that are not listed in Annex III but are intended to be used as a safety component of a product, or are themselves a product, where the product is required to undergo a third-party conformity assessment under existing specific EU laws, for example toys, radio equipment, in vitro diagnostic medical devices, civil aviation security and agricultural vehicles. (Article 85)
By the end of 2030:
Obligations go into effect for certain AI systems that are components of the large-scale IT systems established by EU law in the areas of freedom, security and justice, such as the Schengen Information System (Article 83).
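Since every deadline above is an offset in months from entry into force, the concrete calendar dates follow mechanically once that date is known. Here is a small sketch using a hypothetical entry-into-force date (the actual date depends on the Official Journal publication, which was not yet known at the time of writing):

```python
# Deriving concrete compliance dates from the entry-into-force date.
# The offsets mirror the timeline above; the date below is hypothetical.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 6, 1)  # placeholder assumption

MILESTONES = [
    (6,  "prohibitions on unacceptable-risk AI apply"),
    (9,  "GPAI codes of practice finalized"),
    (12, "GPAI rules apply; competent authorities appointed"),
    (18, "post-market monitoring template issued"),
    (24, "Annex III high-risk obligations, penalties, sandboxes"),
    (36, "Annex II high-risk obligations apply"),
]

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; day-of-month clamping is unnecessary here
    # because the placeholder date falls on the first of the month.
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

for months, label in MILESTONES:
    print(f"{add_months(ENTRY_INTO_FORCE, months).isoformat()}  {label}")
```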
Secondary legislation
The Commission can introduce delegated acts on:
Definition of an AI system (Article 82a)
Criteria that exempt AI systems from high-risk rules (Article 6)
High-risk AI use cases (Article 7)
Thresholds classifying general-purpose AI models as systemic (Article 52a)
Technical documentation requirements for high-risk AI systems and GPAI (Article 11)
Conformity assessments (Article 43)
EU declaration of conformity (Article 48)
The Commission's power to issue delegated acts lasts for an initial and extendable period of five years (Article 73).
The AI Office is to draw up codes of practice covering, but not necessarily limited to, the obligations for providers of general-purpose AI models. The codes should be ready at the latest nine months after entry into force and should provide for a period of at least three months before taking effect. (Article 73)
The Commission can introduce implementing acts on:
Approving codes of practice for GPAI and generative AI watermarking (Article 52e)
Establishing the scientific panel of independent experts (Article 58b)
Conditions for AI Office evaluations of GPAI compliance (Article 68j)
Operational rules for AI regulatory sandboxes (Article 53)
Information required in real-world testing plans (Article 54a)
Common specifications, where standards do not cover the rules (Article 41)
Commission guidelines
The Commission can provide guidance on:
By 12 months after entry into force: High risk AI serious incident reporting (Article 62)
By 18 months after entry into force: Practical guidance on determining whether an AI system is high-risk, with a list of practical examples of high-risk and non-high-risk use cases (Article 6).
With no specific timeline, the Commission will provide guidelines on: (Article 82a)
The application of the definition of an AI system.
High risk AI provider requirements.
Prohibitions.
Substantial modifications.
Transparency disclosures to end-users.
Detailed information on the relationship between the AI Act and other EU laws.
The Commission is to report on its delegated powers no later than nine months before the end of the five-year period following entry into force. (Article 84)
Disclaimer
This article is for informational purposes only and is not intended to be a comprehensive analysis of the European AI Regulation. The views and opinions expressed herein are those of the author. No legal liability or other responsibility is accepted for any errors, omissions, or statements made within this article. Readers should not rely solely on the information provided in this article for making legal or other significant decisions, but should consult a lawyer or, as the case may be, an appropriate professional for specific advice tailored to their situation.