The European Union's AI Act (Regulation (EU) 2024/1689) introduces a legal framework to regulate artificial intelligence systems across Europe. The AI Act establishes harmonized rules for developing, deploying, and using AI systems to ensure that these technologies are safe, transparent, and respectful of fundamental rights.
The regulation takes a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited systems), high-risk, limited-risk, and minimal-risk systems. Each classification comes with its own set of obligations, with the most stringent requirements applied to high-risk systems that can significantly affect people's lives, such as those used in critical infrastructure, law enforcement, or education.
Implementation Timeline
2024
12 July 2024: The AI Act is officially published in the Official Journal of the European Union.
1 August 2024: The AI Act enters into force, but its requirements won't apply immediately—they will gradually be phased in over time.
2 November 2024: Member States are required to identify and publicly list the authorities responsible for fundamental rights protection.
2025
2 February 2025: Prohibitions on certain AI systems begin to apply, as set out in Chapters I and II of the Act. These prohibitions concern AI systems deemed to pose unacceptable risks to fundamental rights and safety (the prohibited AI practices are described below).
2 May 2025: By this date, the European Commission must ensure that codes of practice are ready. These codes are expected to provide guidance on complying with various parts of the AI Act, specifically ensuring that AI providers and other stakeholders adhere to the required standards.
2 August 2025: Several critical provisions take effect on this date:
Notified bodies, General Purpose AI (GPAI) models, and governance rules (Chapter III, Section 4; Chapter V; Chapter VII) begin to apply.
Provisions around confidentiality (Article 78) and penalties (Articles 99 and 100) also start.
Providers of GPAI models placed on the market before this date must comply with the AI Act by 2 August 2027.
Member States are required to submit their first reports on the financial and human resources of their national competent authorities by this date, and every two years thereafter.
Member States must designate national competent authorities responsible for the oversight of AI systems, such as notifying and market surveillance authorities, and make their contact details publicly available.
Member States are also expected to establish and implement rules on penalties and fines related to violations of the AI Act.
If the codes of practice have not been finalized or are found inadequate by the AI Office, the European Commission can establish common rules for the implementation of obligations for providers of GPAI models via implementing acts.
The European Commission will also begin its annual review of prohibitions and may amend them if necessary.
2026
2 February 2026: The European Commission is required to provide guidelines that specify the practical implementation of Article 6, which sets out the rules for classifying AI systems as high-risk.
2 August 2026:
The majority of the EU AI Act's provisions begin to apply to AI systems across the EU, with the exception of Article 6(1), which classifies an AI system as high-risk when it is a safety component of a product (or is itself a product) covered by Union harmonization legislation and subject to third-party conformity assessment; that provision applies from 2 August 2027.
Operators of high-risk AI systems (other than those covered under Article 111(1)) that were placed on the market or put into service before this date are subject to the Act only if those systems subsequently undergo significant changes in their design.
Member States are required to ensure that they have at least one AI regulatory sandbox operational at the national level by this date.
2027
2 August 2027:
The obligations outlined in Article 6(1) of the AI Act start to apply.
Providers of General Purpose AI (GPAI) models placed on the market or put into service before 2 August 2025 are required to fully comply with the obligations laid out in the EU AI Act by this date.
AI systems that are components of the large-scale IT systems listed in Annex X and that were placed on the market or put into service before 2 August 2027 must be brought into compliance with the AI Act by 31 December 2030.
2028
2 August 2028: The European Commission is tasked with evaluating the functioning of the AI Office.
2 August 2028 (and every three years thereafter): The Commission will evaluate the impact and effectiveness of voluntary codes of conduct related to AI systems. These evaluations will help determine if further regulatory action is needed for voluntary codes to align with the goals of the Act.
2 August 2028 (and every four years thereafter): The Commission must evaluate and report on the necessity for amendments in several critical areas:
Annex III: This annex lists the use cases for which AI systems are classified as high-risk; the review considers whether its area headings need to be amended or extended.
Article 50: Pertains to transparency requirements for certain AI systems.
Supervision and Governance Systems: The governance and supervision mechanisms are reviewed for potential adjustments or improvements.
2 August 2028 (and every four years thereafter): A report will be submitted to the European Parliament and the Council regarding the energy-efficient development of general-purpose AI models. This report aims to ensure that AI models are designed in a sustainable manner.
1 November 2028 (nine months before 1 August 2029): By this date, the Commission must produce a report on the delegation of powers, as specified in Article 97 of the Act.
2029
1 August 2029: The Commission’s powers to adopt delegated acts (as defined in various Articles such as 6, 7, 11, 43, 47, 51, 52, 53) will expire unless extended by the European Parliament or the Council. The default is for these powers to be extended for recurring five-year periods unless opposed.
2 August 2029 (and every four years thereafter): The Commission is required to submit a report on the evaluation and review of the AI Act to the European Parliament and the Council.
Beyond 2029
2 August 2030: Providers and deployers of high-risk AI systems that are intended for use by public authorities must comply with the obligations and requirements of the Act by this date.
31 December 2030: AI systems that are components of large-scale IT systems (as listed in Annex X) and were placed on the market before 2 August 2027 must be brought into compliance with the Act by this date.
2 August 2031: The Commission will assess the enforcement of the AI Act and submit a report to the European Parliament, the Council, and the European Economic and Social Committee.
Purpose of the Regulation
Harmonized Rules for AI Systems: The regulation establishes uniform rules for placing AI systems on the market, putting them into service, and using them across the EU.
Prohibited AI Practices: The Act outright bans certain AI practices deemed unacceptable, such as manipulative or harmful AI.
High-Risk AI Systems: Special provisions apply to AI systems classified as "high-risk," imposing stricter requirements on their development and deployment to mitigate potential harm.
Transparency Requirements: The regulation mandates clear transparency rules for AI systems, particularly those that could significantly affect individuals, such as AI that interacts with humans or collects sensitive data.
General-Purpose AI Models: The Act also covers general-purpose AI models, ensuring their safe placement in the market.
Market Monitoring and Enforcement: The regulation sets out how AI systems will be monitored and regulated.
Innovation Support: The Act specifically includes measures to foster innovation, with a focus on small and medium-sized enterprises (SMEs) and start-ups.
Entities Covered
Providers of AI systems: The regulation applies to any individual or company that places AI systems on the market or puts them into service within the EU, regardless of whether the provider is established in the EU or in a third country.
Deployers of AI systems: Entities using AI systems that have their place of establishment, or are located, within the EU.
Third-country providers and deployers: The regulation applies even if AI systems are deployed or provided from outside the EU if their output is used within the EU.
Importers and distributors: Entities involved in importing or distributing AI systems within the EU.
Manufacturers: Companies that integrate AI systems into their products and market them under their own name or trademark.
Authorized representatives: Where a provider is not established in the Union, the authorized representative appointed to act on their behalf within the EU must comply with the regulation.
Affected persons: The regulation includes protections for individuals in the EU affected by AI systems.
An AI system is a machine-based system that operates with varying levels of autonomy and may adapt after deployment. From the inputs it receives, it infers how to generate outputs such as predictions, decisions, recommendations, or content, which can influence physical or virtual environments.
A general-purpose AI model is an AI model that is capable of performing a wide range of tasks and is typically trained on a large amount of data. These models exhibit a high degree of adaptability and can be integrated into various downstream applications or systems. A general-purpose AI system is based on a general-purpose AI model but serves multiple purposes. It can be used directly by end-users or integrated into other AI systems for diverse applications. This term captures AI systems with broader utility beyond a single, specialized function.
A provider is any individual or entity (such as a public authority, agency, or other legal body) that develops, or has developed, an AI system or general-purpose AI model and places it on the market under its own name or trademark. This applies whether the AI system is offered for payment or free of charge.
A deployer is any person or organization that uses an AI system under their control. Deployers are different from providers, as they are responsible for using the AI system rather than creating or placing it on the market.
Exemptions
AI systems related to national security, defense, or military purposes are exempt from the regulation. This applies to systems regardless of the type of entity (public or private) developing or using the AI system.
A further exemption covers public authorities of third countries and international organizations using AI systems within frameworks of international cooperation or agreements for law enforcement and judicial cooperation with the EU or its Member States. However, such entities must offer adequate safeguards for the protection of individuals' fundamental rights and freedoms.
The regulation does not affect the liability provisions related to intermediary service providers as outlined in Chapter II of Regulation (EU) 2022/2065 (the Digital Services Act). These providers typically act as platforms or hosts for third-party content or services.
AI systems developed and used solely for scientific research and development are not subject to the regulation. However, this exclusion applies only if these systems are not marketed or used for commercial purposes within the Union. Real-world testing, though, is not covered by this exclusion, meaning such systems would still need to comply with relevant rules if tested in live environments.
AI systems or models in the research, testing, or development stages are not covered by the regulation unless they are placed on the market or put into service. However, testing AI in real-world conditions would require compliance with the Act.
Natural persons using AI systems purely for personal and non-professional purposes (e.g., using AI tools at home) are exempt from the regulation.
AI systems released under free and open-source licenses are generally excluded unless they are placed on the market or deemed high-risk AI systems (or systems falling under Articles 5 or 50, which cover prohibited AI practices and transparency requirements).
Prohibited AI Practices
Subliminal or Manipulative Techniques: AI systems cannot be used if they deploy subliminal techniques (i.e., beyond an individual’s conscious awareness) to distort behavior in a way that significantly impairs a person’s ability to make informed decisions. This could involve psychological manipulation or deception, leading individuals or groups to make decisions they wouldn’t have otherwise made, causing harm. The prohibition covers both the intent and the effect of such techniques, especially if they cause significant harm, whether physical, emotional, or financial.
Exploitation of Vulnerabilities: AI systems are banned if they exploit the vulnerabilities of certain people or groups based on factors like age, disability, or social and economic circumstances. For instance, using AI to take advantage of an elderly person’s possible cognitive decline to push them into harmful decisions would fall under this prohibition. This applies to any situation where the AI distorts behavior and leads to significant harm.
Social Scoring: AI systems used to evaluate or classify individuals based on their social behavior or personal characteristics over time, similar to a "social credit" system, are prohibited. Such systems cannot lead to unfair or detrimental treatment of people based on their social behavior, especially if the treatment is unrelated to the context in which the data was collected or if it is disproportionate to the actual behavior.
Unrelated detrimental treatment: If someone is treated unfairly based on social behavior from one context being applied to another unrelated context (e.g., being denied a service based on past behavior in a different setting).
Disproportionate treatment: If the treatment is excessively negative or unfair given the nature of the behavior being evaluated.
Predicting Criminal Behavior: AI systems cannot be used solely to predict the risk of someone committing a crime based on profiling or personality traits. However, AI can be used to support human decision-making in criminal investigations as long as it is based on objective facts tied directly to the crime rather than assumptions from personality traits.
Facial Recognition Database Creation: AI systems that build or expand facial recognition databases by scraping images from the Internet or CCTV without consent are prohibited. This also applies to the collection of facial images without clear, targeted authorization, especially for law enforcement or surveillance purposes.
Emotion Inference in Sensitive Contexts: AI systems that infer emotions in workplaces or educational settings are generally prohibited unless they serve a medical or safety purpose.
Biometric Categorization for Sensitive Attributes: AI systems are prohibited from categorizing people based on biometric data (like facial features) to infer sensitive personal attributes such as race, political beliefs, or sexual orientation. However, this prohibition doesn’t apply to law enforcement uses where biometric data has been lawfully acquired, such as for filtering or categorizing within law enforcement datasets.
Real-Time Biometric Identification in Public Spaces: The use of AI for real-time remote biometric identification (such as facial recognition) in public spaces by law enforcement is generally prohibited, except in highly specific cases like:
(i) Searching for specific victims of crimes like abduction or trafficking.
(ii) Preventing imminent, severe threats like a terrorist attack.
(iii) Identifying or locating individuals involved in serious criminal offenses punishable by at least four years in prison.
The deployment of real-time biometric identification by law enforcement in public spaces must be strictly necessary and proportional to the harm that would occur without its use. It also requires an assessment of the consequences on individual rights and freedoms.
High-Risk AI Systems
Criteria
An AI system is classified as high-risk if both of the following conditions are met:
AI system as a safety component or regulated product: The AI system is either:
A safety component of a product covered by Union harmonization legislation (the laws listed in Annex I that govern product safety in sectors such as medical devices or machinery), or
Itself a product covered by that legislation.
Third-party conformity assessment: The product or AI system must undergo a third-party assessment to ensure it complies with safety standards. This process is mandatory before the product or system can be sold or used, as required by the harmonization legislation.
The AI Act also lists specific areas where AI systems are automatically considered high-risk due to their direct impact on people's lives and fundamental rights:
Biometrics:
AI systems used for remote biometric identification (e.g., facial recognition).
AI categorizing individuals based on sensitive attributes like race, gender, or political beliefs.
AI for emotion recognition.
Critical Infrastructure: AI systems involved in managing essential services (e.g., electricity, water, road traffic).
Education and Vocational Training: AI systems used in student admissions, evaluating learning outcomes, or monitoring student behavior.
Employment: AI systems for recruitment, performance evaluation, promotion, or monitoring employee behavior.
Access to Essential Services:
AI determining eligibility for public services (e.g., healthcare, social benefits).
AI assessing creditworthiness or evaluating life and health insurance risks.
AI prioritizing emergency service responses or triaging patients in healthcare.
Law Enforcement:
AI systems used in criminal investigations for risk assessments (e.g., predicting reoffending).
AI systems assessing evidence reliability or criminal profiling.
Migration and Border Control:
AI assessing security or health risks for individuals entering a country.
AI assisting in asylum or visa applications and evaluating eligibility for immigration services.
Judiciary and Elections:
AI used by courts to assist in legal decision-making or evidence interpretation.
AI systems influencing election outcomes or voting behavior.
Exceptions to High-Risk Classification
Certain AI systems listed above may not be classified as high-risk if they do not pose a significant risk of harm to health, safety, or fundamental rights. These exceptions apply if the AI system:
Performs narrow procedural tasks (specific, limited functions).
Improves the result of a previously completed human activity (assists but does not replace human decisions).
Detects decision-making patterns but does not influence or replace human judgments without proper review.
Performs preparatory tasks for assessments but does not directly influence decision-making.
Despite these exceptions, any AI system used for profiling individuals (assessing characteristics like personality traits or behavior) is always classified as high-risk.
If a provider believes an AI system listed in Annex III of the AI Act does not meet the high-risk criteria, they must document their assessment and be prepared to present this justification to relevant authorities.
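To make the classification logic above concrete, the two routes (the regulated-product route and the Annex III route) together with the exceptions can be sketched as a simple decision function. This is an illustrative simplification, not legal advice; the boolean inputs are hypothetical stand-ins for assessments that, under the Act, require documented analysis.

```python
# Illustrative sketch only -- not legal advice. The boolean inputs below are
# hypothetical simplifications of the assessments required by Article 6 and Annex III.

def is_high_risk(
    safety_component_or_regulated_product: bool,  # route 1: safety component / regulated product
    requires_third_party_assessment: bool,        # route 1: third-party conformity assessment needed
    listed_in_annex_iii: bool,                    # route 2: Annex III use case
    performs_profiling: bool,                     # profiling is always high-risk
    only_narrow_or_preparatory_task: bool,        # documented exception applies
) -> bool:
    """Rough decision flow for the high-risk classification described above."""
    # Route 1: safety component / regulated product plus mandatory third-party assessment.
    if safety_component_or_regulated_product and requires_third_party_assessment:
        return True
    # Route 2: Annex III use case, unless a documented exception applies.
    if listed_in_annex_iii:
        if performs_profiling:
            return True
        return not only_narrow_or_preparatory_task
    return False


# Example: an Annex III recruitment tool that profiles candidates is high-risk.
print(is_high_risk(False, False, True, True, False))  # True
```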
Compliance Obligations for High-Risk AI Systems
Compliance should consider:
Intended Purpose: How the AI system is intended to be used.
State of the Art: Current technological standards and best practices in AI and related technologies.
Risk Management System: A risk management system must be in place as described below in detail.
The risk management system is defined as a continuous, iterative process that spans the entire lifecycle of the AI system: all potential risks are identified, assessed, and mitigated throughout, with regular reviews and updates to adapt to evolving risks and technological change. A schematic sketch of a risk register follows the steps below.
Steps Involved:
Identification and Analysis of Risks:
Known Risks: Risks that are already identified and understood.
Reasonably Foreseeable Risks: Potential risks that can be anticipated based on current knowledge and usage scenarios.
Focus Areas: Health, safety, and fundamental rights.
Estimation and Evaluation of Risks:
Assess the likelihood and impact of identified risks.
Consider both intended use and conditions of reasonably foreseeable misuse.
Evaluation of Other Risks:
Analyze additional risks using data from post-market monitoring systems (Article 72 of the regulation).
Ensure comprehensive risk assessment beyond initial identification.
Adoption of Risk Management Measures:
Implement targeted measures to address identified risks.
Ensure that these measures are appropriate and effective.
Risk management measures should:
Account for Interactions: How different requirements and measures interact with each other.
Minimize Risks Effectively: Achieve a balance that maximizes risk reduction while maintaining functionality.
After implementing risk management measures, the remaining risks (residual risks) must be:
Judged Acceptable: Based on established criteria and standards.
Overall Residual Risk: The cumulative risk from all residual hazards should remain within acceptable limits.
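The iterative process described above (identify, estimate, mitigate, then judge residual risk) is typically tracked in a risk register. The following is a minimal sketch with hypothetical field names and scoring scales; the Act prescribes the process, not any particular format.

```python
# Hypothetical risk-register entry mirroring the lifecycle steps above.
# Field names and scoring scales are illustrative, not prescribed by the AI Act.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class RiskEntry:
    risk_id: str
    description: str                    # known or reasonably foreseeable risk
    affected_area: str                  # "health", "safety", or "fundamental rights"
    likelihood: int                     # e.g. 1 (rare) .. 5 (almost certain)
    severity: int                       # e.g. 1 (negligible) .. 5 (critical)
    foreseeable_misuse: bool            # risk arises from misuse rather than intended use
    mitigations: list[str] = field(default_factory=list)
    residual_likelihood: int | None = None
    residual_severity: int | None = None
    acceptable: bool = False            # judged against documented acceptance criteria
    last_reviewed: date | None = None   # the process is continuous and iterative

    def residual_score(self) -> int | None:
        if self.residual_likelihood is None or self.residual_severity is None:
            return None
        return self.residual_likelihood * self.residual_severity


entry = RiskEntry(
    risk_id="R-042",
    description="Model under-performs for applicants under 18",
    affected_area="fundamental rights",
    likelihood=3,
    severity=4,
    foreseeable_misuse=False,
    mitigations=["re-balance training data", "add human review for minors"],
    residual_likelihood=1,
    residual_severity=4,
    acceptable=True,
    last_reviewed=date(2025, 6, 1),
)
print(entry.residual_score())  # 4
```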
High-risk AI systems must undergo testing to identify appropriate risk management measures, ensure consistent performance for their intended purpose, and confirm compliance with the specified requirements. Testing procedures may include real-world conditions, providing a practical assessment of AI systems in environments that mimic actual usage scenarios and allowing for the identification of unforeseen risks that may not surface in controlled testing environments. Testing should be conducted throughout the development process and before the system is placed on the market or deployed.
When implementing the risk management system, providers must consider:
Impact on Minors: Potential adverse effects on individuals under 18.
Other Vulnerable Groups: Groups that may be more susceptible to harm due to specific characteristics or circumstances.
Data Governance Requirements for High-Risk AI Systems
High-risk AI systems that involve training AI models must be developed using training, validation, and testing data sets that meet specified quality standards. These data sets must follow strict guidelines to ensure the AI system functions accurately and safely.
Data governance refers to the set of policies and procedures governing how data is collected, processed, and managed. High-risk AI systems must have data management practices tailored to their specific purposes (a schematic example follows this list), including:
(a) Design Choices: The AI system's design choices must reflect best practices in data handling and governance.
(b) Data Collection and Origin: Clear documentation of how data is collected, its origin, and the original purpose of collection (especially for personal data).
(c) Data Preparation: Data processing steps like annotation, labelling, cleaning, and aggregation must be documented.
(d) Assumptions: The assumptions made about the data, especially regarding what it represents or measures, must be articulated.
(e) Data Suitability: The quantity, availability, and relevance of the data must be assessed to ensure it is appropriate for the AI system's purpose.
(f) Bias Assessment: Potential biases in the data that could impact safety or lead to discrimination must be thoroughly examined.
(g) Bias Mitigation: Measures must be taken to detect, prevent, and correct biases.
(h) Data Gaps: Any shortcomings or gaps in the data that could hinder regulatory compliance must be identified and addressed.
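The eight documentation items above can be thought of as a dataset record or "data card". The sketch below is purely illustrative: the keys, values, and example dataset are hypothetical assumptions, and the Act prescribes the topics to document, not a schema.

```python
# Hypothetical dataset record loosely mapped to items (a)-(h) above.
# Keys, values, and the example dataset are illustrative only.
dataset_record = {
    "design_choices": "stratified sampling by region and age band",         # (a)
    "collection_and_origin": {                                              # (b)
        "source": "customer-service transcripts, 2021-2024",
        "original_purpose": "quality assurance",
        "contains_personal_data": True,
    },
    "preparation_steps": ["de-duplication", "annotation", "cleaning"],       # (c)
    "assumptions": "transcripts approximate real support queries",           # (d)
    "suitability": {"rows": 1_200_000, "relevance": "high"},                 # (e)
    "bias_assessment": {                                                     # (f)
        "examined_attributes": ["age", "gender", "region"],
        "findings": "under-representation of rural users",
    },
    "bias_mitigation": ["re-weighting", "targeted additional collection"],   # (g)
    "known_gaps": ["limited coverage of accessibility-related queries"],     # (h)
}
print(list(dataset_record))  # the eight documented topics
```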
Data Quality Standards for High-Risk AI Systems
The data sets used for training, validation, and testing must meet several key criteria:
Relevance: The data must be applicable to the AI system’s intended purpose.
Representation: The data should be representative of the population or environment where the system will be deployed.
Accuracy: Data must be as free of errors as possible and complete.
Statistical Properties: Data must have suitable statistical characteristics, especially for systems intended to affect particular groups of people.
Data sets must reflect the geographical, contextual, behavioral, or functional settings in which the AI system is intended to operate. This ensures that the AI system performs as expected in its real-world context.
Handling Special Categories of Personal Data by High-Risk AI Systems
In cases where it is necessary to detect and correct biases, special categories of personal data (e.g., racial or ethnic origin, political opinions, health data) may be processed, but only under strict conditions, such as:
Necessity: No other data (e.g., anonymized or synthetic data) can achieve the same result.
Security Measures: The data must be protected with state-of-the-art security measures (e.g., pseudonymization, strict access controls).
Deletion: Special data must be deleted once biases are corrected or once the data retention period ends.
Documented Justification: A clear record must explain why processing special data was necessary, including reasons why alternative data couldn’t be used.
If an AI system doesn't rely on training data (e.g., rule-based systems), the governance and management practices described above apply only to testing data sets.
Technical Documentation for High-Risk AI Systems
Before placing a high-risk AI system on the market, providers must create technical documentation. This documentation must be continuously updated to demonstrate the system's compliance with the regulatory requirements. Small and medium-sized enterprises (SMEs) and start-ups may provide simplified versions of the required technical documentation.
Record-Keeping (Logging)
High-risk AI systems must allow for automatic event logging over the entire lifetime of the system. Logs provide an audit trail, helping to monitor the system’s performance and trace any malfunctions or issues. The logging capabilities should provide sufficient traceability to help in:
Risk Identification: Detecting situations where the system may pose a risk or where substantial modifications have been made.
Post-Market Monitoring: Supporting ongoing monitoring of the system once it is deployed.
Operational Monitoring: Ensuring the system operates as intended during its use.
For AI systems related to biometric identification (e.g., facial recognition), additional logging requirements apply (a schematic log entry is sketched after this list):
Usage Period: Record the start and end times of each use of the system.
Reference Database: Document the database against which input data is checked.
Input Data: Log the data that resulted in a match during the system’s use.
Human Verification: Record the identity of any persons involved in verifying the system’s output, to ensure accountability and transparency.
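The four items above could be captured in a structured log entry such as the following sketch. The field names, identifiers, and JSON layout are hypothetical assumptions, not requirements of the Act.

```python
# Hypothetical log record for one biometric identification event, covering the
# four items listed above. Field names and identifiers are illustrative.
import json
from datetime import datetime, timezone


def biometric_log_entry(reference_db: str, matched_input_ref: str,
                        verifiers: list[str],
                        started_at: datetime, ended_at: datetime) -> str:
    entry = {
        "usage_period": {                      # start and end of each use
            "start": started_at.isoformat(),
            "end": ended_at.isoformat(),
        },
        "reference_database": reference_db,    # database the input was checked against
        "matched_input": matched_input_ref,    # pointer to the input that produced a match
        "verified_by": verifiers,              # persons who verified the output
    }
    return json.dumps(entry)


print(biometric_log_entry(
    reference_db="watchlist-v7",
    matched_input_ref="frame-20250301T101203Z-cam14",
    verifiers=["officer-0081", "officer-0145"],
    started_at=datetime(2025, 3, 1, 10, 0, tzinfo=timezone.utc),
    ended_at=datetime(2025, 3, 1, 10, 30, tzinfo=timezone.utc),
))
```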
Transparency and Provision of High-Risk AI Systems
High-risk AI systems must be designed to be transparent enough for deployers (those using the AI system) to understand and interpret the system’s outputs. The degree of transparency should align with the requirements and obligations of both the AI provider (the one who developed the system) and the deployer.
AI systems must come with clear, concise, and accurate instructions for use. These should be provided in a digital or otherwise suitable format and must be easy for deployers to understand. The instructions must cover the following key areas (a schematic example follows the list):
(a) Provider’s Information: The name and contact details of the provider and, if applicable, their authorized representative.
(b) Performance Characteristics:
Purpose: The intended use of the AI system.
Accuracy, Robustness, and Cybersecurity: The levels of accuracy (including the relevant metrics), robustness, and cybersecurity against which the system has been tested and validated, as well as any circumstances that might affect performance.
Risks: Any known risks to health, safety, or fundamental rights when the system is used as intended or under foreseeable misuse.
Output Explanation: Technical capabilities that allow deployers to understand and explain the system's outputs.
Performance with Specific Groups: How well the AI system performs for specific groups or individuals it is intended to serve.
Input Data: Information about the data used in training, validation, and testing, particularly if it is relevant to the system's performance.
Output Interpretation: Guidance on interpreting the system's output appropriately.
(c) Predetermined Changes: Any changes in the system’s performance or design anticipated by the provider.
(d) Human Oversight: Measures that help deployers interpret and monitor the system’s outputs effectively.
(e) Resources and Maintenance: Information on the necessary hardware, computational resources, and maintenance schedules, including software updates.
(f) Log Collection: Instructions on how to collect, store, and interpret logs.
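One way to picture the instructions-for-use content is as a structured record mirroring items (a) to (f). The example below is a hypothetical sketch: the provider, metrics, and values shown are invented, and the Act does not mandate any particular format.

```python
# Hypothetical instructions-for-use record mirroring items (a)-(f) above.
# Provider, metrics, and values are invented; the Act does not mandate a format.
instructions_for_use = {
    "provider": {"name": "ExampleAI GmbH", "contact": "compliance@example.eu"},   # (a)
    "performance": {                                                              # (b)
        "intended_purpose": "triage of customer support tickets",
        "accuracy_metrics": {"macro_f1": 0.91},
        "robustness_and_cybersecurity": "tested against input-perturbation suite v3",
        "known_risks": ["degraded accuracy for non-EU-language input"],
        "output_explanation": "per-class confidence scores exposed via API",
        "specific_groups": "validated separately for SME and enterprise customers",
        "input_data": "trained on anonymized 2021-2024 support tickets",
        "output_interpretation": "scores below 0.6 require manual routing",
    },
    "predetermined_changes": ["quarterly model refresh within the declared accuracy band"],  # (c)
    "human_oversight": ["review queue for low-confidence outputs"],                          # (d)
    "resources_and_maintenance": {"hardware": "1 GPU, 16 GB RAM", "updates": "monthly"},     # (e)
    "log_collection": "export logs daily as JSON; retain for at least six months",           # (f)
}
print(sorted(instructions_for_use))
```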
Human Oversight of High-Risk AI Systems
High-risk AI systems must be designed with tools that allow natural persons (human operators) to oversee and intervene in the system's operation effectively.
The goal of human oversight is to minimize risks related to health, safety, and fundamental rights that might arise during the AI system's use, including risks that could occur despite the application of other regulatory safeguards.
Human oversight measures should match the risk level, the system’s autonomy, and its context of use. Oversight can be achieved through:
(a) Built-in Measures: Measures integrated directly into the AI system by the provider to facilitate human oversight.
(b) Deployable Measures: Measures that the provider specifies for the deployer to implement.
The AI system must be designed so that human operators can do the following (a minimal oversight interface is sketched after this list):
(a) Understand System Capabilities and Limitations: Operators should have a clear understanding of the system’s functions and be able to monitor its operation for anomalies or malfunctions.
(b) Be Aware of Automation Bias: Operators should avoid over-relying on AI outputs, particularly for high-stakes decisions (e.g., medical diagnoses or legal judgments).
(c) Interpret Outputs Correctly: The system should provide interpretation tools to help operators understand and assess the AI's output.
(d) Override the System: Operators must be able to disregard or reverse the AI’s output if necessary, ensuring that the system doesn’t make irreversible decisions without human intervention.
(e) Interrupt the System: Operators must have access to a ‘stop’ function that allows them to safely halt the system’s operation if required.
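The oversight capabilities above, in particular the ability to confirm, override, disregard, or halt the system's output, can be illustrated with a minimal interface sketch. The class and method names are hypothetical, and the trivial scoring rule merely stands in for a real model.

```python
# Minimal sketch of oversight hooks: the operator can confirm, override,
# disregard, or halt the system. Class, method names, and the rule are hypothetical.
from dataclasses import dataclass


@dataclass
class Decision:
    recommendation: str
    confidence: float


class OverseenSystem:
    def __init__(self) -> None:
        self.stopped = False

    def propose(self, features: dict) -> Decision:
        # A trivial rule stands in for the actual model.
        score = 0.9 if features.get("complete_application") else 0.4
        return Decision("approve" if score > 0.5 else "refer", score)

    def finalize(self, proposal: Decision, operator_action: str) -> str:
        """The human operator decides what happens with the system's output."""
        if self.stopped:
            return "halted"
        if operator_action == "confirm":
            return proposal.recommendation
        if operator_action == "override":
            return "operator decision applied instead"
        return "output disregarded"

    def stop(self) -> None:
        """'Stop' control: safely halt further operation."""
        self.stopped = True


system = OverseenSystem()
proposal = system.propose({"complete_application": True})
print(system.finalize(proposal, "confirm"))   # approve
system.stop()
print(system.finalize(proposal, "confirm"))   # halted
```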
Special Oversight for Biometric Systems
For biometric AI systems, such as facial recognition, the Regulation requires additional verification steps:
Before any action or decision is made based on the AI system’s identification, at least two qualified humans must verify the identification separately.
Exceptions to this requirement apply in cases of law enforcement, migration, or border control where such a procedure is deemed disproportionate under national or EU law.
Accuracy, Robustness, and Cybersecurity of High-Risk AI Systems
High-risk AI systems must be designed to achieve and maintain high levels of accuracy, robustness, and cybersecurity throughout their lifecycle. The systems should be able to perform reliably under varying conditions and resist errors or faults. The accuracy and robustness of the system should be measurable. The system’s level of accuracy, along with relevant accuracy metrics, must be declared in the instructions for use.
AI systems must be resilient to errors, faults, or inconsistencies in their environment or interactions with humans or other systems. This can be achieved through measures like technical redundancy, where the system includes backup plans or failsafe mechanisms to ensure continuous operation.
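As a rough illustration of the technical-redundancy idea, a provider might wrap the primary component in a fallback path that takes over when the component fails or returns an implausible result. The function names and plausibility check below are hypothetical; real failsafe design depends on the system and its intended purpose.

```python
# Illustrative fallback pattern for the technical-redundancy idea above: if the
# primary component fails or returns an implausible result, a conservative
# backup path takes over. Function names and the plausibility check are hypothetical.
from typing import Callable


def resilient_predict(primary: Callable[[dict], float],
                      backup: Callable[[dict], float],
                      features: dict) -> tuple[float, str]:
    try:
        score = primary(features)
        if not 0.0 <= score <= 1.0:           # plausibility check on the output
            raise ValueError("out-of-range score")
        return score, "primary"
    except Exception:
        return backup(features), "backup"     # failsafe path keeps the system operating


def primary_model(features: dict) -> float:
    raise RuntimeError("model service unavailable")   # simulated fault


def conservative_backup(features: dict) -> float:
    return 0.0                                         # defer to human review


print(resilient_predict(primary_model, conservative_backup, {"x": 1}))  # (0.0, 'backup')
```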
For AI systems that continue to learn after deployment, safeguards must be in place to prevent feedback loops, where biased outputs could affect future inputs. Proper mitigation measures are required to avoid such biases.
High-risk AI systems must be resilient against attacks or attempts by unauthorized third parties to exploit vulnerabilities, alter outputs, or manipulate system performance. These attacks might include:
Data Poisoning: Attempts to corrupt the training data to alter the AI’s behavior.
Model Poisoning: Manipulating pre-trained models used by the AI.
Adversarial Examples: Feeding the system deceptive input designed to make it fail.
Confidentiality Attacks: Attempts to exploit weaknesses in the system’s data handling to access sensitive information.
Providers must implement measures to prevent, detect, and respond to these security risks, ensuring that the system remains secure and performs reliably throughout its lifecycle.
Obligations of Providers of High-Risk AI Systems
Compliance: Providers must ensure that their high-risk AI systems meet all regulatory requirements for safety, reliability, and ethical use.
Provider Information: Providers must clearly display their name, trade name or trademark, and contact details either on the AI system, its packaging, or in accompanying documentation.
Quality Management System (QMS): Providers must put in place a QMS, which should include:
A clear strategy for following regulations and handling assessments to prove compliance.
Methods for system design, control, and verification.
Procedures to test and validate the AI system throughout its development.
Clear technical specifications and standards to ensure the system functions as required.
Comprehensive data management processes for collecting, storing, and analyzing data used in the AI system.
A risk management system to identify and mitigate potential risks.
Systems to monitor the AI system after it’s released to the market, including reporting any serious incidents.
Systems for managing communication with authorities, clients, and other stakeholders.
Efficient documentation retention and resource management, including strategies to ensure continuity in the supply chain.
A responsibility framework, clearly defining who is accountable within the organization.
Documentation Keeping: Providers must retain essential documentation for 10 years after the AI system is made available. This includes technical details, quality management records, and any changes approved by regulatory bodies. If the provider goes out of business, arrangements must be made to keep this documentation accessible to authorities.
Log Keeping: Providers must keep automatically generated logs from their AI systems for at least six months, or longer if required.
Conformity Assessment: Before placing the AI system on the market or putting it into use, it must undergo a conformity assessment to ensure it meets the necessary legal and regulatory standards.
EU Declaration of Conformity: Providers must create a declaration of conformity, confirming that the AI system complies with the relevant EU rules and standards.
CE Marking: Providers must affix the CE marking on the AI system or its packaging. The CE mark shows that the system conforms to EU safety and performance regulations.
Registration of AI System: Providers must register the AI system in the EU database before offering it in the market.
Corrective Actions: If the AI system is found to be non-compliant or poses any risks, providers must take corrective actions immediately. This could involve fixing, recalling, or disabling the system. They must also notify distributors, clients, and relevant parties about the issue and the actions taken.
Cooperation with Authorities: Providers must fully cooperate with national authorities by providing any necessary documentation or access to logs to prove the AI system’s compliance.
Accessibility Compliance: High-risk AI systems must be designed to ensure accessibility, meaning they must be usable by people with disabilities in accordance with relevant EU directives.
Incident Reporting and Post-Market Monitoring: Providers must monitor the AI system after it’s released to the market. If serious incidents occur, they must report these immediately and investigate any risks.
Authorized Representatives of Providers of High-Risk AI Systems
Providers of high-risk AI systems that are established in third countries (i.e., outside the EU) must appoint an authorized representative within the EU before making their high-risk AI systems available in the Union market. This appointment must be formalized through a written mandate, a legal document that defines the role and tasks of the representative.
The representative acts on behalf of the provider and must fulfill the following key responsibilities:
(a) Verify Conformity Documentation and Procedures
The representative must ensure that:
The EU declaration of conformity has been drawn up, which certifies that the high-risk AI system meets the requirements set out in relevant EU regulations.
The technical documentation has been prepared, which provides detailed information about the design, development, and functionality of the AI system.
The appropriate conformity assessment procedures have been carried out, ensuring the AI system complies with the legal standards before it is made available on the EU market.
(b) Retain Documents for 10 Years
The representative is responsible for keeping important documents, including:
Contact details of the provider.
A copy of the EU declaration of conformity.
Technical documentation.
If applicable, the certificate issued by a notified body (an organization designated to assess conformity).
(c) Provide Information to Authorities
Upon a reasoned request by a competent authority, the representative must provide all necessary information and documentation to demonstrate that the high-risk AI system is compliant with the relevant requirements.
(d) Cooperate with Authorities to Reduce Risks
The representative is required to cooperate with authorities if they take any actions to reduce or mitigate risks posed by the high-risk AI system.
(e) Ensure Compliance with Registration Obligations
The representative must ensure that the high-risk AI system is registered according to the regulations in Article 49(1), which require registration in the EU database for high-risk AI systems. If the provider carries out the registration, the representative must ensure that the information is accurate.
The authorized representative has the right to terminate the mandate if they believe that the provider is acting in violation of the regulations. If this happens, the representative must immediately notify:
The market surveillance authority (the body responsible for enforcing compliance with regulations).
The notified body, if applicable, which would be involved if a certificate of conformity was issued.
Obligations of Importers of High-Risk AI Systems
Conformity Before Market Placement: Importers must verify that a high-risk AI system meets the requirements of the EU AI Act before placing it on the market. This includes ensuring the provider has followed the appropriate conformity assessment procedure outlined in Article 43. This procedure involves checking that the system complies with standards set in the regulation. If the provider applies harmonized standards (Article 40) or common specifications (Article 41), the system may undergo internal checks or a third-party evaluation (via a notified body).
Technical Documentation: The importer needs to confirm that the provider has prepared the necessary technical documentation as required by Article 11 and Annex IV.
Marking and Declaration: Importers must ensure the AI system bears the CE marking, which indicates it complies with EU safety standards, and is accompanied by the EU declaration of conformity as required by Article 47.
Authorized Representative: Importers must verify that a provider based outside the EU has appointed an authorized representative within the EU to handle regulatory matters.
Non-Conformity and Falsified Documentation: If an importer suspects a system is non-compliant or that its documentation is falsified, they must prevent its market entry until it is corrected. In cases where the system poses a risk (as outlined in Article 79), the importer must notify the provider, representative, and market surveillance authorities.
Importer Identification: The importer must ensure that their name and contact details are visible on the AI system, its packaging, or accompanying documents. This is crucial for traceability.
Storage and Transport: Importers are responsible for ensuring that the system's storage or transport conditions don’t compromise its compliance with the regulation.
Retention of Documentation: Importers must retain a copy of the EU declaration of conformity, technical documentation, and certificate from the notified body (if applicable) for at least 10 years after the product enters the market.
Cooperation with Authorities: Upon request from regulatory authorities, importers must provide all relevant information and documentation demonstrating compliance, including technical details.
Obligations of Distributors of High-Risk AI Systems
Before a distributor makes a high-risk AI system available on the market, they must ensure:
The system has the CE marking—a sign that it complies with EU safety and legal standards.
The system comes with a copy of the EU declaration of conformity (Article 47), which confirms that it meets the requirements set by EU regulations.
The system has appropriate instructions for use.
Both the provider and importer have complied with their responsibilities under the Regulation.
If a distributor considers, based on the information available to them, that a high-risk AI system does not conform to the core requirements of the Regulation, they must not make it available on the market until it has been brought into conformity. If the system poses risks to health, safety, or fundamental rights, the distributor must also notify the provider or importer.
Distributors must ensure that, while the AI system is under their control (e.g., during storage or transport), its compliance with the safety and legal requirements is not compromised.
If the distributor finds, based on information available to them, that a high-risk AI system already placed on the market does not conform to the requirements, they must take the necessary corrective actions. These actions may include:
Bringing the system into compliance,
Withdrawing it from the market,
Recalling it from consumers.
If the system poses a risk, the distributor must immediately inform the provider or importer and the relevant authorities, detailing the issue and any corrective measures taken.
Upon request from a competent authority, distributors must supply all relevant information and documentation proving their actions regarding the conformity of the high-risk AI system. Distributors must cooperate with relevant authorities in any action they take concerning high-risk AI systems they’ve made available on the market.
Assumption of Provider Responsibilities
Any distributor, importer, deployer, or other third party can be classified as a "provider" of a high-risk AI system, and thus subject to the obligations of a provider under Article 16, in the following situations:
(a) If they put their name or trademark on an already marketed high-risk AI system, they take on the role of the provider. Even if a contract assigns responsibilities differently, for regulatory purposes, they become the provider.
(b) If they make a substantial modification to a high-risk AI system already on the market, such that it remains a high-risk AI system as defined by Article 6, they are considered the new provider.
(c) If they modify the intended purpose of an AI system (including general-purpose systems) that was not classified as high-risk, but as a result of the modification, the system becomes classified as high-risk under Article 6, they assume the provider role.
When one of the above circumstances occurs, the original provider who first placed the AI system on the market is no longer considered the provider for that specific system. The original provider must cooperate with the new provider by:
Supplying the necessary technical information and access to ensure the new provider can meet their obligations under the regulation, especially for compliance assessments.
However, if the original provider has explicitly stated that their AI system should not be modified to become high-risk, they are not obligated to provide this documentation.
If a high-risk AI system forms part of a product covered by Union harmonization laws (as listed in Annex I, Section A), the product manufacturer is considered the provider under these circumstances:
(a) The system is marketed together with the product under the manufacturer’s name or trademark.
(b) The system is put into service under the manufacturer’s name or trademark after the product is already on the market. This means that the product manufacturer must assume all obligations as the provider of the high-risk AI system.
The provider of a high-risk AI system and any third party supplying AI tools, services, components, or processes used in the system must formalize an agreement that:
Specifies the necessary information, technical capabilities, and assistance required to comply with the regulation.
This rule does not apply to third parties who provide tools or services under a free and open-source license, unless the AI model itself is a general-purpose AI model.
Obligations of Deployers of High-Risk AI Systems
Compliance with Instructions for Use
Deployers of high-risk AI systems must take necessary technical and organizational measures to ensure that the system is used according to the instructions provided by the system's creator or supplier.
Human Oversight and Competence
Deployers must assign natural persons (humans) to oversee the operation of these systems. These overseers must have the appropriate competence, training, authority, and support to handle the AI system responsibly.
Control Over Input Data
When deployers control the input data used by the AI system, they must ensure that the data is relevant and sufficiently representative for the intended purpose of the AI.
Monitoring, Incident Reporting, and Risk Mitigation
Deployers are obligated to monitor the operation of the high-risk AI system in line with the instructions provided. If there is any indication that the system may pose risks, they must inform the system provider or distributor and relevant authorities without delay. If a serious incident occurs, the deployer must immediately report it to the provider, importer, distributor, and market surveillance authorities.
Log Keeping
Deployers must retain automatically generated logs from the AI system for an appropriate period, with a minimum of six months, unless specified otherwise by national or Union law (particularly in data protection legislation).
Workplace Information
When a high-risk AI system is introduced into the workplace, deployers who are also employers must inform the workers and their representatives that such a system is being used. This transparency should align with relevant labor laws and practices.
Registration for Public Authorities
Deployers who are public authorities or entities within the Union must ensure that their high-risk AI systems are registered in the EU database referred to in Article 71. If the system is not registered, they cannot use it and must inform the provider or distributor.
Data Protection Compliance
Deployers must use the information provided under Article 13 of this regulation to comply with data protection impact assessments as required by GDPR (Article 35 of Regulation (EU) 2016/679) or law enforcement directives.
Biometric Identification in Law Enforcement
When law enforcement deploys a high-risk AI system for post-remote biometric identification (such as facial recognition), they must obtain judicial or administrative authorization before or shortly after its use. This authorization must occur within 48 hours, unless it’s for the initial identification of a potential suspect.
If authorization is rejected, the use of the biometric system must stop immediately, and any related personal data must be deleted.
The system cannot be used indiscriminately for law enforcement purposes without a specific link to a criminal case, investigation, or genuine threat.
Law enforcement decisions cannot be based solely on AI output.
Each use of such systems must be documented in the relevant police files and made available to market surveillance or data protection authorities upon request, excluding sensitive law enforcement data. Deployers must also submit annual reports on their use of these systems, although aggregated reports can cover more than one deployment.
Informing Affected Persons
When high-risk AI systems make decisions affecting individuals, deployers must inform the affected persons that they are subject to such AI decisions. For law enforcement use, this must comply with Article 13 of Directive (EU) 2016/680, ensuring transparency and protecting individuals' rights.
Cooperation with Authorities
Deployers are required to cooperate with competent authorities in any action related to the AI system's operation, helping authorities implement regulations and investigate compliance.
Testing High-Risk AI Systems in Real-World Environments, Outside of Regulatory Sandboxes
Scope of Testing in Real-World Conditions
High-risk AI systems can be tested in real-world conditions, outside of regulatory sandboxes. Providers or prospective providers of these systems may conduct such testing on the basis of a real-world testing plan.
However, these tests must comply with Article 5, which may include prohibitions on certain uses of AI (for example, potentially harmful applications). The European Commission will further define what the real-world testing plan should include through "implementing acts" (which are detailed legal measures to implement legislation). National or Union law concerning product testing (e.g., products covered by EU harmonization laws) still applies to these systems.
Timing and Conduct of Testing
Providers can conduct real-world tests before the AI system is placed on the market or put into service. Testing can be done either by the provider alone or in partnership with deployers (entities or individuals who implement or use the system).
The testing must respect any ethical review requirements laid down by national or Union law, ensuring ethical standards are maintained.
Conditions for Testing in Real-World Conditions
Testing can only proceed if the following conditions are met:
(a) Testing Plan Submission: A real-world testing plan must be drawn up and submitted to the market surveillance authority in the country where the testing will occur.
(b) Approval by Authorities: The surveillance authority must approve both the testing and the plan. If they don’t respond within 30 days, the plan is considered automatically approved. Some national laws may prevent such "tacit approval," in which case explicit authorization is required.
(c) Registration Requirements: Testing must be registered with a Union-wide unique identification number. Specific systems, such as those related to law enforcement, migration, and border control (Annex III points 1, 6, 7), must be registered in a secure non-public section of the EU database for privacy and security reasons.
(d) Union-Based Legal Representation: Providers must be established in the EU or appoint a legal representative within the EU.
(e) Data Transfers: Data collected during the testing can only be transferred to non-EU countries if they comply with appropriate Union law safeguards.
(f) Duration of Testing: Testing can last up to six months, with a possible extension of another six months, but only if justified and pre-notified to the market surveillance authority.
(g) Protection of Vulnerable Groups: Extra care must be taken to protect individuals belonging to vulnerable groups, such as those with disabilities or age-related vulnerabilities.
(h) Deployers' Awareness and Agreement: If testing involves deployers, they must be informed of all relevant details. A formal agreement between the provider and deployer must specify roles and responsibilities, ensuring compliance with applicable laws.
(i) Informed Consent: Subjects involved in the testing must give informed consent (unless it's law enforcement-related testing where consent could interfere with the test). In such cases, the test must not negatively impact individuals, and any personal data must be deleted afterward.
(j) Oversight: Testing must be overseen by qualified personnel from the provider and deployer, ensuring compliance with testing regulations.
(k) Reversibility of AI Predictions: The outcomes of the AI system (predictions, recommendations, decisions) must be capable of being reversed or disregarded.
Rights of Subjects in Testing
Testing requires obtaining informed consent from individuals participating:
Consent must be freely given and informed.
Participants must receive clear, concise information about the testing's nature, objectives, and any inconveniences.
Participants must be informed about their rights, such as the ability to refuse participation or withdraw without facing any detriment.
They must be told how to request a reversal or disregarding of the AI system’s outputs.
Consent must be documented, dated, and a copy provided to the participant or their legal representative.
Participants in the testing, or their representatives, have the right to withdraw consent at any time without facing any consequences. They can also request the deletion of their personal data, but withdrawal does not affect activities already conducted.
Incident Reporting
Any serious incidents occurring during testing must be reported to the market surveillance authority. Providers must take immediate mitigation measures, or, if necessary, suspend or terminate the testing. Providers must also have a procedure for recalling the AI system in case of such terminations.
Notifying Authorities
Providers must notify the national market surveillance authority about any suspension or termination of the testing and provide the final outcomes.
Fundamental Rights Impact Assessment (FRIA) for High-Risk AI Systems
Who is required to perform the FRIA?
Deployers of high-risk AI systems referred to in Article 6(2), such as public bodies or private entities providing public services, are obligated to perform an FRIA. High-risk AI systems in areas such as biometrics, education, law enforcement, and administration of justice are specifically targeted. However, certain AI systems, like those used in critical infrastructure (e.g., energy, water, traffic), are exempt.
What does the FRIA involve?
A description of how the AI system will be used and the context.
Identification of the individuals or groups likely to be affected.
Evaluation of risks, particularly concerning harm to fundamental rights, and measures for human oversight.
A plan for mitigating risks and handling complaints.
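An FRIA covering these four elements might be outlined as a structured record like the sketch below. The deployment scenario, affected groups, and risks shown are invented for illustration; the template questionnaire from the AI Office, once available, should be used instead.

```python
# Hypothetical outline of an FRIA record covering the four elements above.
# The deployment scenario, groups, and risks are invented for illustration.
fria = {
    "system_and_context": {
        "system": "eligibility scoring for a housing benefit",
        "deployer": "municipal social services department",
        "frequency_and_duration": "continuous use, reviewed annually",
    },
    "affected_persons": ["benefit applicants", "members of their households"],
    "risks_to_fundamental_rights": [
        {"right": "non-discrimination", "risk": "lower scores for single-parent households"},
        {"right": "good administration", "risk": "opaque refusal reasons"},
    ],
    "human_oversight_measures": ["caseworker review of every refusal"],
    "mitigation_and_complaints": {
        "mitigations": ["bias audit before each model update"],
        "complaint_channel": "existing municipal appeals procedure",
    },
}
print(sorted(fria))
```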
When must the FRIA be updated?
FRIA applies to the first deployment of a high-risk AI system. However, if circumstances change—such as updates to the system or changes in its use—the FRIA must be revised to reflect the new situation.
Data Protection Impact Assessments (DPIA)
If a Data Protection Impact Assessment (DPIA) has already been conducted under GDPR (which covers data protection rights), the FRIA will complement it, focusing on a broader set of fundamental rights beyond just data protection.
Notification and template use
Once the FRIA is completed, the deployer must notify the market surveillance authority. The European AI Office will provide a template questionnaire to streamline this process for deployers.
Conformity Assessment for High-Risk AI Systems
Options for Conformity Assessment (Annex III, Point 1)
Harmonized Standards (Article 40): Where providers have applied harmonized standards or common specifications (Article 41), they may choose between two options:
Internal Control: This method is described in Annex VI, allowing the provider to internally assess compliance through predefined procedures.
Quality Management System (QMS): If providers opt for this route, it involves a notified body to evaluate the system’s quality management and technical documentation, as detailed in Annex VII.
Exceptions: If harmonized standards are unavailable or only partially applied, the provider must follow the procedure in Annex VII, which mandates a third-party notified body to ensure compliance.
Conformity for Other High-Risk Systems (Annex III, Points 2 to 8)
For AI systems in sectors such as biometrics, education, and law enforcement, the internal control process (as outlined in Annex VI) applies, without the need for external notified bodies.
Substantial Modifications and Learning Systems
A new conformity assessment is required if the AI system undergoes significant changes. However, if the system continues learning within predefined limits, no additional assessment is necessary.
Exceptional Authorization for Public Security or Health Reasons
Market Surveillance Authorities can authorize high-risk AI systems to be placed on the market for exceptional reasons (e.g., public security, protection of life, environmental protection, or safeguarding critical infrastructure) within a Member State. This is a temporary measure while the full conformity assessment is completed. The derogation is allowed only for a limited time, and the assessment process must proceed without undue delay.
In urgent situations, such as an imminent threat to public safety, law enforcement or civil protection authorities can use a high-risk AI system without prior authorization. They must, however, apply for the required authorization either during or immediately after the AI system’s use. If the authorization is denied, the use of the system must cease immediately, and any data or results from its usage must be discarded.
Market surveillance authorities can only issue the authorization if they conclude that the high-risk AI system complies with the fundamental requirements of Section 2 of the AI Act, which covers safety and fundamental rights.
After granting authorization, the market surveillance authority must notify the European Commission and other Member States about the authorization. Sensitive operational data, particularly from law enforcement, is excluded from this reporting. If no Member State or the Commission objects to the authorization within 15 days, the authorization is considered justified. If a Member State or the Commission objects within 15 days, consultations between the Commission and the Member State that issued the authorization are initiated. The relevant stakeholders, including AI system providers, are allowed to present their views. The Commission then decides if the authorization is justified based on the consultations and informs the relevant parties of its decision. If the Commission finds that the authorization was unjustified, the market surveillance authority of the Member State must withdraw the authorization.
EU Declaration of Conformity of High-Risk AI System
The provider of a high-risk AI system is required to draw up a written EU declaration of conformity. The declaration must be machine-readable, physically or electronically signed, and retained for 10 years after the system is placed on the market.
The declaration must state that the AI system complies with the requirements in Section 2 of the Act. It should contain specific information as outlined in Annex V and be translated into a language that is easily understood by the authorities in the Member States where the system is marketed.
If the AI system is also subject to other Union harmonisation legislation, the provider can prepare a single EU declaration of conformity covering all applicable legal frameworks. This helps streamline compliance by consolidating all relevant regulatory requirements into one document.
By issuing the EU declaration, the provider assumes full responsibility for ensuring that the AI system meets the compliance standards set out in Section 2. The declaration must be kept up to date, reflecting any changes in the system's status or updates to its compliance.
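For illustration only, the sketch below shows one way such a machine-readable declaration could be represented. The mandatory content is defined in Annex V of the Act; the field names and values here are hypothetical placeholders for discussion, not the official template.

```python
# A purely illustrative, machine-readable rendering of an EU declaration of
# conformity. The required content is defined in Annex V of the AI Act; the
# field names and values below are hypothetical placeholders for discussion.
import json

declaration = {
    "ai_system": {"name": "ExampleSystem", "version": "1.0"},   # hypothetical
    "provider": "Example Provider Ltd.",                        # hypothetical
    "statement": (
        "This high-risk AI system complies with the requirements of "
        "Chapter III, Section 2 of Regulation (EU) 2024/1689."
    ),
    "harmonised_standards_applied": [],    # references, where relevant
    "date_of_issue": "2026-08-02",
    "signed_by": "Authorised signatory",   # physical or electronic signature
}

print(json.dumps(declaration, indent=2, ensure_ascii=False))
```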
Registration of AI Systems
Before placing a high-risk AI system on the market, the provider (or their authorized representative) must register themselves and their system in the EU database (as specified in Article 71). This applies to high-risk AI systems listed in Annex III, except for those in point 2 of Annex III (critical infrastructure).
Providers who conclude that their AI system is not high-risk (under Article 6(3)) must also register both the system and themselves in the EU database.
For specific high-risk AI systems related to law enforcement, migration, asylum, and border control (Annex III, points 1, 6, and 7), the registration process must take place in a non-public section of the EU database.
High-risk AI systems listed under point 2 of Annex III (critical infrastructure) must be registered at the national level, rather than in the EU database.
Post-Market Monitoring of High-Risk AI Systems
Providers of high-risk AI systems must create and document a post-market monitoring system. The system needs to be proportional to the nature of the AI system and the specific risks associated with it. The "proportionate" aspect means the complexity of the monitoring system should match the complexity and risk of the AI system. For example, an AI used in medical diagnostics may need more detailed and continuous monitoring compared to an AI used for less critical tasks like customer service automation.
Key Points:
The monitoring system must actively and systematically collect, document, and analyze relevant data.
Data can come from deployers (those who actually use the AI system in real-world applications) or from other sources.
This data collection spans the entire lifetime of the AI system.
Providers must ensure the AI system’s continuous compliance with the legal requirements laid out in Chapter III, Section 2 (which refers to specific safety, transparency, and ethical standards).
Monitoring should also include analysis of interactions between the AI system and other AI systems, if relevant.
There is an exemption for law enforcement authorities—providers do not need to monitor sensitive operational data from these bodies.
The idea is that providers should not just launch their AI system and forget about it. They need to constantly gather information about how well the system is performing, whether it continues to meet safety and compliance standards, and if it interacts with other AI systems in a way that could affect safety or performance.
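As a rough illustration of what "actively and systematically collect, document, and analyze" might look like in practice, the sketch below models a single monitoring record. The schema is hypothetical, since the Act leaves the concrete format to the Commission's forthcoming template.

```python
# A minimal, hypothetical sketch of what a post-market monitoring record
# might capture for a high-risk AI system. The AI Act does not prescribe a
# data schema; the Commission's template will define the required elements
# of the monitoring plan.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MonitoringRecord:
    system_id: str
    deployer_id: str
    timestamp: datetime
    event_type: str            # e.g. "performance_drift", "user_complaint"
    description: str
    affects_compliance: bool   # relates to Chapter III, Section 2 requirements
    interacting_systems: list[str] = field(default_factory=list)

def records_needing_review(records: list[MonitoringRecord]) -> list[MonitoringRecord]:
    """Filter records that indicate a potential compliance issue."""
    return [r for r in records if r.affects_compliance]
```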
The monitoring system must be backed by a post-market monitoring plan, which is included in the technical documentation (described in Annex IV). The European Commission will adopt an implementing act (a type of regulatory document) by 2 February 2026. This will provide a template for the monitoring plan and list all the elements that need to be included in it.
For high-risk AI systems that are already covered by Union harmonization legislation (other EU laws that require monitoring), providers have the option to integrate their AI monitoring system into the already existing monitoring frameworks, where possible. Providers can incorporate the requirements described above into their existing monitoring systems, provided they achieve an equivalent level of protection.
This flexibility also applies to high-risk AI systems used by financial institutions, which are already subject to specific governance and compliance rules under EU financial services law.
Transparency in AI Systems
AI systems interacting with natural persons
Providers of AI systems designed to directly interact with people must ensure that users know they are interacting with an AI system, unless it is obvious to a reasonably well-informed, observant, and cautious person based on the circumstances.
AI systems used for law enforcement purposes (e.g., detecting, preventing, investigating, or prosecuting criminal offenses) are exempt from this transparency rule, as long as safeguards for individual rights and freedoms are in place. However, if these systems are used to allow the public to report crimes, the obligation to inform users applies.
AI-generated or manipulated content
Providers of AI systems that generate synthetic audio, image, video, or text content must:
Mark the outputs of these systems as artificially generated or manipulated in a machine-readable format, ensuring they are detectable as such.
Ensure the marking technology is effective, interoperable, robust, and reliable, considering the nature of the content, cost, and current technological standards.
This obligation does not apply if the AI system is used for minor edits or enhancements (e.g., AI-assisted photo filters) that do not significantly alter the content. It also does not apply to AI systems used for law enforcement purposes like criminal investigations.
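The Act does not prescribe a particular marking technology. As a minimal sketch only, the snippet below embeds a simple disclosure into PNG metadata using the Pillow library; real deployments would more likely rely on robust watermarking or provenance standards such as C2PA content credentials. The metadata key names are illustrative assumptions, not a standard.

```python
# A minimal illustration of machine-readable marking of AI-generated images
# via PNG text metadata (using Pillow). This is NOT the technique mandated by
# the AI Act, which is technology-neutral.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Embed an 'AI-generated' disclosure into the PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")   # illustrative key names
    metadata.add_text("generator", generator)
    image.save(dst_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Check whether the illustrative marker is present in a PNG file."""
    text_chunks = getattr(Image.open(path), "text", {})
    return text_chunks.get("ai-generated") == "true"
```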
Transparency in emotion recognition and biometric categorization systems
Deployers of AI systems that recognize emotions or categorize individuals based on biometric data must inform individuals that these systems are in use. Additionally, they must comply with relevant privacy laws:
Regulation (EU) 2016/679 (GDPR)
Regulation (EU) 2018/1725
Directive (EU) 2016/680
AI systems used for law enforcement purposes (e.g., criminal investigations) are exempt from this transparency obligation, provided safeguards for rights and freedoms are maintained.
Disclosure of deep fakes and AI-generated text
For AI systems generating or manipulating deep fakes (synthetic or manipulated image, audio, or video content), deployers must disclose that the content is artificially created.
Exceptions:
This obligation does not apply to law enforcement purposes.
For artistic, creative, satirical, or fictional works (e.g., a movie using AI-generated special effects), transparency obligations are relaxed. The only requirement is that there be some disclosure of AI-generated content, but it must not interfere with the artistic experience.
For AI systems generating text meant to inform the public on important matters, deployers must disclose that the text was generated or manipulated by AI unless:
The text is subject to human review or editorial control, and someone takes legal responsibility for the publication.
The system is used for law enforcement purposes.
Timing and clarity of information disclosure
The required information must be communicated clearly and in a manner that is easy to distinguish for the individuals concerned. This disclosure must happen at the first interaction or exposure to the AI system or its content. Additionally, any accessibility requirements (e.g., for individuals with disabilities) must be taken into account.
General-Purpose AI Models
A general-purpose AI model is defined as an AI model that:
Is trained with a large amount of data, often using self-supervision (an approach where the AI learns from unlabeled data at scale).
Shows significant generality, meaning it can perform a wide range of distinct tasks competently, regardless of how it is distributed or marketed.
Can be integrated into various downstream systems or applications, making it highly flexible and adaptable for different uses.
This definition excludes AI models that are only used for research, development, or prototyping purposes before being placed on the market.
General-purpose AI models, such as those behind language models (like GPT), image generators, and recommendation engines, are versatile and can be adapted for numerous applications across industries.
A general-purpose AI system is based on a general-purpose AI model and can serve a variety of direct purposes or be integrated into other AI systems. This system could, for example, power applications like customer service chatbots, image recognition systems, or predictive analytics. The key distinction here is that a general-purpose AI system is an actual functional system built upon a general-purpose AI model, making the model more specific or tailored to particular tasks or industries.
Obligations for Providers of General-Purpose AI Models
(a) Technical Documentation
Providers must maintain up-to-date technical documentation of the AI model. This includes details on the training and testing processes, as well as the results of evaluations. The documentation must meet the requirements laid out in Annex XI and be made available upon request to the AI Office or national authorities. All general-purpose AI model providers must include the following:
General Description of the Model, covering:
Tasks the model is designed for and types of AI systems it can be integrated into.
Acceptable use policies.
Release date and distribution methods.
Architecture and number of parameters.
Input/output modalities (e.g., text, image).
Licensing information.
Detailed Model Development Information, including:
Technical means required for integration into other systems.
Model design specifications, training methodologies, key design choices, and objectives.
Data used for training, testing, and validation, including its source, characteristics, and measures to mitigate biases.
Computational resources used (e.g., floating point operations), training time, and relevant details.
Known or estimated energy consumption during training.
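To make the listing above more concrete, the sketch below collects those items into a single structured record. The field names are informal paraphrases of the Annex XI headings chosen for readability, not an official schema or wording.

```python
# An informal structure mirroring the Annex XI items listed above. Field names
# are paraphrases for illustration, not the official format.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class GPAIModelDocumentation:
    # General description of the model
    intended_tasks: list[str]
    integrable_system_types: list[str]
    acceptable_use_policy: str
    release_date: str
    distribution_methods: list[str]
    architecture: str
    parameter_count: int
    io_modalities: list[str]          # e.g. ["text", "image"]
    licence: str
    # Detailed development information
    integration_requirements: str
    design_and_training_methodology: str
    training_data_description: str    # sources, characteristics, bias mitigation
    compute_used_flops: float
    training_time_hours: float
    energy_consumption_kwh: float | None = None  # known or estimated
    other_notes: dict[str, str] = field(default_factory=dict)
```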
(b) Information for AI System Providers
Providers must prepare and update documentation that helps other providers who intend to integrate the general-purpose AI model into their own AI systems. This documentation should:
Enable these AI system providers to understand the capabilities and limitations of the AI model.
Help the AI system providers comply with their own regulatory obligations.
Contain the minimum information as required in Annex XII:
Tasks the model is designed to perform and types of AI systems it can be integrated into.
Acceptable use policies.
Release date and distribution methods.
Interaction with external hardware or software (if applicable).
Relevant software versions (if applicable).
Model architecture and number of parameters.
Input/output modalities (e.g., text, image) and formats.
Licensing information for the model.
Technical means (instructions, tools, infrastructure) required for integration.
Inputs and outputs modalities, formats, and maximum size (e.g., context window length).
Information on data used for training, testing, and validation, including data type, provenance, and curation methods.
(c) Copyright Policy
Providers must have a policy in place to comply with EU copyright laws, especially ensuring that their AI systems respect any reservation of rights expressed under Article 4(3) of Directive (EU) 2019/790, which deals with copyright and related rights in the digital single market.
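As one informal illustration of honouring machine-readable opt-outs during data collection, the sketch below checks a site's robots.txt before fetching a page. Directive (EU) 2019/790 does not name robots.txt; rights holders may express reservations through other machine-readable means (for example, the W3C TDM Reservation Protocol), so a real copyright policy would need to cover those as well.

```python
# A minimal sketch of one practical (and informal) signal a training-data
# crawler might honour: the site's robots.txt. This is not the legal
# mechanism defined by the Directive, only an illustration of respecting a
# machine-readable reservation before fetching content.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

def may_crawl(url: str, user_agent: str = "example-training-crawler") -> bool:
    """Return True if the site's robots.txt permits fetching this URL."""
    parts = urlsplit(url)
    parser = RobotFileParser()
    parser.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    parser.read()   # fetches and parses the robots.txt file
    return parser.can_fetch(user_agent, url)
```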
(d) Public Summary of Training Data
Providers must publish a summary of the content used to train the AI model, using a template provided by the AI Office. This summary should give sufficient details about the training data, providing transparency while protecting proprietary data where necessary.
Exemptions for Open-Source AI Models
The obligations under points (a) and (b) above do not apply to AI models released under a free and open-source license, provided:
The AI model’s parameters, architecture, and usage information are publicly available.
Note: This exemption does not apply to general-purpose AI models that pose systemic risks (as defined under Article 51).
Use of Codes of Practice and Harmonized Standards
Providers may rely on codes of practice (Article 56) or European harmonized standards to demonstrate compliance with the obligations under this regulation. Until a harmonized standard is published, providers can use these codes to show conformity. Providers who do not adhere to an approved code or standard must demonstrate compliance through alternative means, subject to Commission review.
Authorized Representatives of Providers of General-Purpose AI Models
Providers of general-purpose AI models established outside the EU must appoint an authorized representative based in the EU before placing their models on the Union market. The provider must empower the authorized representative to perform tasks outlined in the mandate, such as ensuring the model complies with relevant regulations.
The authorized representative must carry out specific tasks as per the mandate, including:
Ensuring that the technical documentation (Annex XI) is properly drawn up and that all regulatory obligations under Article 53 are fulfilled.
Keeping a copy of the technical documentation for 10 years after the AI model has been placed on the market, together with the provider’s contact details.
Providing documentation to the AI Office or national authorities upon reasoned request to demonstrate compliance with the regulation.
Cooperating with the AI Office and other competent authorities in any actions related to the AI model, including its integration into downstream AI systems.
The authorized representative can be addressed by the AI Office or authorities, in addition to or instead of the provider, on all issues related to ensuring compliance with this regulation.
If the authorized representative believes the provider is acting contrary to their obligations, they must terminate the mandate and immediately inform the AI Office with reasons for the termination.
The obligation to appoint an authorized representative does not apply to providers of general-purpose AI models released under a free and open-source license, unless the model presents systemic risks.
General-Purpose AI Models with Systemic Risk
An AI model will be classified as having systemic risk if it meets one of the following conditions:
(a) High Impact Capabilities: The model is evaluated using technical tools and methodologies, including benchmarks and indicators, to determine if it has a high impact. A high impact model can affect significant societal areas like privacy, safety, democracy, or economic systems.
(b) Commission Decision: The European Commission, either on its own or after receiving an alert from a scientific panel, can designate an AI model as having systemic risk if it deems the model to have capabilities or impacts similar to those described in point (a). The criteria for this assessment are laid out in Annex XIII of the regulation.
Criteria for Designating General-Purpose AI Models with Systemic Risk
(a) The Number of Parameters of the Model
Parameters are the variables within an AI model that are learned during the training process. The number of parameters is a key indicator of the model’s complexity and capacity to learn and process information.
Large models, like modern large language models (e.g., GPT-4), can have billions or even trillions of parameters, making them highly powerful and capable of handling a wide variety of tasks.
More parameters often mean the model can have broader impacts, as it is capable of understanding and generating more nuanced or complex outputs.
(b) The Quality or Size of the Dataset
This refers to the dataset used to train the AI model, specifically its size and quality.
Size can be measured in terms of the number of tokens (e.g., words or data points) used in training.
Quality refers to the relevance, accuracy, and diversity of the data. High-quality data can make a model more effective and versatile.
A large, high-quality dataset generally enables the model to generalize better across different tasks, potentially increasing its impact and risk due to broader applicability.
(c) The Amount of Computation Used for Training
This criterion looks at the computational resources required to train the AI model, which are measured in floating point operations (FLOPs)—a standard metric for computational intensity.
Other indicators of computational effort include:
Estimated cost of training: Training large AI models often requires significant financial resources.
Training time: Long training periods imply that the model is processing vast amounts of data and computations.
Energy consumption: Training large models can consume enormous amounts of energy, raising concerns about environmental impact.
An AI model is presumed to have high-impact capabilities if the amount of computation used for training exceeds 10^25 floating-point operations (FLOPs), which indicates large-scale computational resources and complexity.
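For a sense of scale, the sketch below applies a common back-of-the-envelope heuristic (roughly 6 * parameters * training tokens for dense transformer training) against the 10^25 FLOP presumption. The heuristic is not part of the regulation; real notifications would rely on measured or vendor-reported training figures.

```python
# A rough, illustrative estimate of training compute against the Act's
# 10^25 FLOP presumption threshold. The 6 * parameters * tokens approximation
# is a common heuristic for dense transformer training, NOT a rule from the
# regulation.
THRESHOLD_FLOPS = 1e25  # presumption threshold for high-impact capabilities

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate training compute for a dense transformer (heuristic)."""
    return 6.0 * parameters * training_tokens

def presumed_high_impact(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= THRESHOLD_FLOPS

# Example: a hypothetical 500-billion-parameter model trained on 10 trillion
# tokens -> 6 * 5e11 * 1e13 = 3e25 FLOPs, above the presumption threshold.
print(presumed_high_impact(5e11, 1e13))  # True
```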
(d) Input and Output Modalities
This criterion focuses on the modalities (types of inputs and outputs) the AI model can handle. Modalities include:
Text to text: Like large language models that take text input and generate text output (e.g., GPT models).
Text to image: Models that generate images based on text prompts (e.g., DALL·E).
Multi-modality: Models capable of handling different types of inputs and outputs simultaneously, such as combining text, image, audio, and video processing.
Biological sequences: Specialized AI models that process biological data, such as genetic sequences.
(e) Benchmarks and Evaluations of the Model’s Capabilities
This criterion refers to the performance benchmarks used to evaluate the AI model’s capabilities, including:
The number of tasks it can perform without needing further training (showing versatility and generality).
Adaptability to new tasks: How easily the model can be fine-tuned or retrained to handle new, distinct tasks.
Autonomy: The model’s ability to operate independently without continuous human oversight.
Scalability: How well the model’s capabilities scale as it is deployed in different environments or across different industries.
Tools access: If the model has access to external tools (e.g., APIs), it may enhance its capabilities further, increasing its potential impact.
(f) High Impact on the Internal Market
This criterion assesses the model’s reach within the European Union, particularly its availability to businesses.
A model will be presumed to have a high impact on the EU’s internal market if it has been made available to at least 10,000 registered business users.
(g) The Number of Registered End-Users
The number of end-users is another important factor in assessing the model’s overall impact.
A large user base indicates that the model has extensive reach, which means it could affect a broad range of people, businesses, or industries. This widespread adoption heightens the model's potential to create societal or economic risks.
The European Commission is empowered to adopt delegated acts (legally binding acts) to:
Amend the thresholds (such as the 10^25 FLOP requirement) based on advances in technology, like algorithmic improvements or hardware efficiency.
Update benchmarks and indicators to ensure that risk assessments keep up with the evolving capabilities of AI systems.
Procedures for Managing Systemic Risk AI Models
If a provider develops a general-purpose AI model that meets the high-impact criteria, they must notify the European Commission within two weeks of becoming aware that the criteria have been met. This notification must include evidence that the model meets the high-impact requirement.
Additionally, if the Commission becomes aware of a general-purpose AI model that meets these criteria but has not been notified, it has the authority to classify it as a model with systemic risk on its own initiative.
The provider can present arguments to show that, despite meeting the technical threshold for systemic risk (e.g., the FLOP threshold), the model does not pose systemic risks due to its specific characteristics. This could happen, for example, if the model is tightly controlled or used in a manner that mitigates the risk. However, these arguments must be well-substantiated.
If the Commission finds that the provider’s arguments are not sufficiently convincing, the model will remain classified as having systemic risk. The decision to classify the model will be based on the failure to demonstrate that the model's unique characteristics mitigate the potential risks.
Providers can request a reassessment of the systemic risk designation, but this can only happen six months after the initial designation. The provider must present new and objective reasons for the reassessment.
The European Commission will maintain and publish a list of general-purpose AI models that are classified as having systemic risk. This list will be kept up to date, but the publication must respect intellectual property rights, business confidentiality, and trade secrets, as required by EU and national laws.
Obligations for Providers of AI Models with Systemic Risk
Beyond the general duties mentioned above, the specific obligations are:
Model Evaluation with Adversarial Testing
AI providers must assess their models using standardized protocols and state-of-the-art tools. This involves testing the AI system against potential attacks or attempts to manipulate it. For example, an adversarial test might simulate a scenario where someone tries to trick an AI system into making incorrect decisions. The goal is to identify vulnerabilities and mitigate the risks associated with them.
Risk Assessment at Union Level
Providers must assess systemic risks at the EU level, considering the impact the AI model might have within the European Union. These could range from disruptions in critical infrastructure (e.g., energy grids) to widespread misinformation or privacy violations.
Incident Reporting
Providers must keep track of serious incidents and report them without delay to relevant authorities, including the AI Office and, if necessary, national authorities. Serious incidents could involve unexpected failures, malicious use, or significant impacts on public safety. The provider must also document and implement corrective measures to address these incidents.
Cybersecurity Protection
Providers are responsible for ensuring that the AI model and its physical infrastructure (e.g., servers and databases) are adequately protected from cyber threats. This could include encryption, access controls, regular security audits, and intrusion detection systems.
Codes of Practice and Harmonised Standards
Providers can rely on codes of practice (voluntary guidelines or industry standards) to meet their obligations until formal EU-wide harmonised standards are published. Harmonised standards are official, EU-endorsed technical specifications that provide a benchmark for compliance; once published, providers who follow them are presumed to comply with the corresponding obligations.
If a provider doesn't follow a code of practice or a harmonized standard, they must demonstrate an alternative way of complying with the requirements, subject to approval by the European Commission.
Additional Information for Models with Systemic Risk
For general-purpose AI models classified as having systemic risk, additional details are required in the technical documentation of the AI model:
Evaluation Strategies: Description of evaluation criteria, results, and limitations, using public or internal evaluation methods.
Adversarial Testing: Details on internal or external testing (e.g., red teaming), model adaptations, alignment, and fine-tuning processes.
System Architecture: Explanation of how software components work together within the model.
AI Regulatory Sandboxes
Establishment of AI Regulatory Sandboxes at the National Level
Each Member State is required to set up at least one AI regulatory sandbox by 2 August 2026. These sandboxes provide controlled environments for AI developers to experiment with new AI systems under regulatory supervision before entering the market.
A sandbox can be established jointly with other Member States' competent authorities. This collaboration helps smaller states or regions pool resources and knowledge to support AI development. Participation in an existing sandbox is acceptable if it provides national coverage comparable to a standalone sandbox.
Regional and Cross-Border Sandboxes
Beyond the national level, additional sandboxes may be established at regional or local levels or in cooperation with other Member States.
AI Regulatory Sandboxes for EU Institutions
The European Data Protection Supervisor (EDPS) has the authority to create AI regulatory sandboxes specifically for EU institutions, bodies, offices, and agencies. The EDPS can fulfill the roles and tasks of national competent authorities in these cases, ensuring compliance with the AI Act for Union-level entities.
Structure and Purpose of AI Regulatory Sandboxes
AI regulatory sandboxes provide a controlled environment where AI systems can be developed, tested, and validated under supervision. These tests can involve real-world conditions and aim to foster innovation while identifying and mitigating risks, particularly regarding fundamental rights, health, and safety. The sandbox operates for a limited time under a pre-agreed plan between the AI provider and the supervising authority.
Documentation and Exit Reports
Upon completing their participation in the sandbox, AI providers will receive written proof of the activities carried out. Authorities will issue an exit report, detailing the results and lessons learned during the sandbox process. These documents can then be used by providers to demonstrate their compliance with the AI Act in the conformity assessment process or other market surveillance activities. The reports may also accelerate regulatory approvals.
Access to Exit Reports and Confidentiality
While exit reports are generally confidential, the European Commission and the AI Board may access them to aid in regulatory oversight. If both the AI provider and the national authority agree, exit reports may also be made public to promote transparency and share knowledge within the AI ecosystem.
Objectives of the AI Regulatory Sandboxes
The sandboxes aim to:
Improve legal certainty by helping AI developers understand and comply with the AI Act.
Foster best practice sharing among authorities.
Encourage innovation and strengthen the AI ecosystem within the EU.
Contribute to evidence-based regulatory learning to improve future AI regulations.
Facilitate faster market access for AI systems, especially for SMEs and start-ups.
Coordination with Data Protection Authorities
If AI systems in the sandbox involve the processing of personal data or require supervision from other national authorities, data protection agencies and other relevant bodies must be involved in the sandbox to ensure compliance with applicable data protection laws.
Risk Mitigation and Authority Supervision
Competent authorities have the power to suspend sandbox activities if significant risks to health, safety, or fundamental rights are detected, especially if no effective mitigation measures can be implemented.
Liability and Protection for AI Providers
AI providers participating in sandboxes remain liable under applicable Union and national laws for any damages caused during sandbox testing. However, if providers follow the agreed sandbox plan and comply with guidance from the supervising authority, they will not face administrative fines for infringements related to the AI Act or other laws overseen within the sandbox.
Centralized Platform and Stakeholder Interaction
The European Commission will create a centralized interface for AI regulatory sandboxes. This platform will provide relevant information and allow stakeholders to interact with authorities, seek regulatory guidance, and monitor sandbox activities. It will help streamline communication and foster a collaborative environment for AI innovation across the EU.
Uniformity Across the Union
The European Commission will adopt implementing acts to ensure that the setup, operation, and supervision of AI regulatory sandboxes are consistent across all Member States, to prevent fragmentation and confusion.
Eligibility and selection criteria will be transparent and fair. This means any provider or prospective provider of an AI system who meets the set criteria can apply for participation in a sandbox. National authorities will inform applicants of their decision within three months, ensuring a predictable timeline.
Broad Access
AI sandboxes will be open to partnerships between providers, deployers, and other relevant third parties. This broadens opportunities to collaborate with other stakeholders in the AI ecosystem, such as SMEs, researchers, and testing labs. Importantly, participation in one Member State’s sandbox will be mutually recognized across the EU.
SMEs and start-ups can participate in the sandbox free of charge, except for any exceptional costs that authorities may recover fairly.
Focus on Testing Tools and Risk Mitigation
The sandboxes will facilitate the development of tools to assess key aspects of AI systems, such as accuracy, robustness, cybersecurity, and other dimensions important for regulatory learning. Authorities will assist in developing measures to mitigate risks to fundamental rights and societal impacts, helping ensure that a system aligns with EU values and safety standards.
If an AI system needs testing in real-world conditions, this can be arranged within the sandbox. However, such testing will require specific safeguards agreed upon with national authorities to protect fundamental rights, health, and safety. Cross-border cooperation may also be required to ensure consistent practices in real-world testing.
Supervision of Testing in Real World Conditions
Surveillance authorities are responsible for ensuring that real-world testing of AI systems is conducted in compliance with this regulation. When AI systems are tested in regulatory sandboxes (controlled environments for testing new technologies), surveillance authorities ensure compliance with specific rules and may allow certain exceptions during testing.
Authorities can suspend, terminate, or modify real-world testing if serious issues are detected or if the testing does not comply with Articles 60 and 61 (concerning testing conditions and risk management). These decisions must be justified and can be challenged by the provider.
Support Measures for Small and Medium-sized Enterprises (SMEs), including Start-Ups
Member State Actions
Priority Access to AI Regulatory Sandboxes: SMEs, including start-ups, with a registered office or branch in the EU, are given priority access to AI regulatory sandboxes, assuming they meet the eligibility conditions and selection criteria. This priority, however, does not exclude other SMEs or start-ups from access, provided they also meet the criteria.
Awareness Raising and Training: Member States are tasked with organizing specific awareness campaigns and training programs on how this regulation applies to SMEs and start-ups.
Communication Channels: Existing channels or newly created ones should be used to facilitate communication between SMEs, start-ups, deployers, and local authorities.
Standardisation Process Participation: Member States should help SMEs participate in standardisation processes. Standardisation refers to the creation of uniform technical specifications, which helps ensure that products and services are consistent across the EU, fostering innovation and safety.
Fee Adjustments for SMEs
When it comes to conformity assessments (referred to in Article 43 of the regulation), the fees are adjusted to account for the specific needs and characteristics of SMEs and start-ups. Factors such as the size of the company, market presence, and other relevant indicators are used to proportionally reduce the fees.
AI Office Actions
Standardised Templates: The AI Office should provide standardized templates that help SMEs and others meet their regulatory obligations.
Information Platform: A single, user-friendly information platform should be developed for all operators in the EU.
Communication Campaigns: The AI Office is tasked with raising awareness through campaigns to inform companies about their obligations under the AI regulation.
Public Procurement Best Practices: The office is also responsible for promoting best practices in public procurement processes when it comes to acquiring AI systems.
Simplified Compliance for Microenterprises
A microenterprise, as defined by Commission Recommendation 2003/361/EC, is an enterprise that employs fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million. These companies are granted a simplified approach to complying with certain elements of the quality management system required by the regulation under Article 17.
These microenterprises can adopt a more straightforward version of the quality management system to meet the regulation's requirements. The European Commission will issue guidelines outlining which elements of the system can be simplified, making it easier for smaller businesses to comply without reducing the required protection standards, especially for high-risk AI systems.
While microenterprises are allowed to follow a simplified process for certain parts of the quality management system, they are not exempt from other regulatory obligations. Specifically, they must still comply with the following key Articles from the regulation:
Article 9: Relates to the risk management system, requiring companies to identify and mitigate risks associated with AI systems.
Article 10: Concerns the data and data governance requirements that ensure the quality, relevance, and accuracy of the data used to train and test AI systems.
Article 11: Requires providers to draw up and keep up to date the technical documentation for high-risk AI systems.
Article 12: Involves record-keeping requirements, which obligate businesses to maintain logs related to the operation of high-risk AI systems.
Article 13: Discusses transparency and the obligation to provide adequate information to users and deployers of high-risk AI systems.
Article 14: Requires human oversight to ensure that AI systems are used appropriately, especially in high-risk environments.
Article 15: Establishes requirements for accuracy, robustness, and cybersecurity of AI systems.
Articles 72 and 73: Cover post-market monitoring and the reporting of serious incidents, ensuring ongoing oversight of AI systems after they are placed on the market.
European Artificial Intelligence Board
The European Artificial Intelligence Board (EAI Board) is a key governance mechanism under the AI Act, created to ensure consistent and effective oversight of AI technologies across Member States and to coordinate the application of the EU's AI rules.
The Board consists of one representative from each Member State of the EU. Additionally, the European Data Protection Supervisor participates as an observer, and the AI Office attends but does not participate in voting. Other authorities, bodies, or experts from national or EU levels may be invited to meetings on relevant issues, but they do not have voting rights.
Each representative is appointed by their Member State for a term of three years, renewable once. These representatives are responsible for ensuring that their country's AI regulations align with the broader EU framework and for coordinating AI activities across national authorities:
Representatives must have the skills and authority to contribute to the Board’s work.
Each representative is the primary contact for the Board and possibly for national stakeholders, depending on the Member State’s needs.
They are responsible for ensuring consistent application of AI regulations within their country and for gathering necessary data to inform the Board’s activities.
The Board operates based on rules adopted by a two-thirds majority vote among the representatives. These rules define the procedures for electing the Chair, setting mandates, voting protocols, and organizing the Board’s activities.
The Board establishes two standing sub-groups:
Market Surveillance: Acts as an Administrative Cooperation Group (ADCO), overseeing AI systems' compliance and market regulations.
Notifying Authorities: Facilitates coordination among authorities responsible for notifying and certifying AI systems.
The Board can create additional standing or temporary sub-groups for specific issues, and representatives from the advisory forum can be invited as observers.
The Board's primary function is to assist and advise the European Commission and Member States to ensure the consistent and effective application of AI regulations. Key tasks include:
Coordination of National Authorities: Promoting cooperation among national bodies responsible for AI regulation.
Expertise Sharing: Gathering and distributing technical and regulatory knowledge across Member States, especially in emerging AI areas.
Advice on Enforcement: Offering guidance on enforcing AI-related rules, particularly for general-purpose AI models.
Harmonization of Practices: Supporting the alignment of administrative procedures, such as the functioning of AI regulatory sandboxes and real-world testing environments.
Recommendations and Opinions: The Board can issue recommendations on any aspect of AI regulation, including codes of conduct, standards, and updates to the regulation itself.
Promotion of AI Literacy: The Board helps raise awareness of AI risks, benefits, and safeguards among the public and stakeholders.
Cooperation with Other Bodies: Working with other EU institutions, agencies, and international organizations to ensure a unified approach to AI regulation.
An advisory forum is established to provide technical expertise and advice to the Board and the Commission.
Composition: The forum includes a balanced group of stakeholders representing industry (including startups and SMEs), civil society, and academia. It also includes permanent members such as the EU Agency for Cybersecurity (ENISA) and the European standardization organizations CEN, CENELEC, and ETSI.
Tasks:
The forum advises the Board and the Commission on AI matters and can prepare opinions and recommendations.
It meets at least twice a year and may invite experts to its meetings for specific issues.
Governance: The forum elects two co-chairs for a two-year term (renewable), and it can create sub-groups to focus on specific topics. The forum also prepares an annual report on its activities, which is publicly available.
National Competent Authorities
Each Member State is required to designate at least two types of authorities for the purposes of the regulation:
A notifying authority: Responsible for overseeing the conformity and compliance of AI systems that are notified or certified under EU law.
A market surveillance authority: In charge of monitoring and ensuring AI systems in the market comply with regulations, particularly in relation to safety, health, and standards.
These authorities must act independently and impartially, meaning they cannot be influenced by external factors, and they should focus solely on the proper implementation of the regulation.
Member States have flexibility in how they organize these authorities. They can appoint multiple authorities to perform these tasks, or consolidate the responsibilities within one or more authorities, depending on their internal organizational needs, as long as they adhere to the principles of independence and objectivity.
By 2 August 2025, Member States are required to make information about these competent authorities and single points of contact publicly available, especially through electronic means.
Each Member State must designate a market surveillance authority as the single point of contact for the regulation. This authority will be the central entity responsible for liaising with both the Commission and other stakeholders on AI regulatory matters. The Commission will make this list public, allowing easy access to the designated contact points in each country.
By 2 August 2025, and every two years thereafter, Member States must report to the Commission on the financial and human resources available to their national competent authorities. This reporting includes an assessment of whether those resources are adequate. The Commission will then pass this information to the European Artificial Intelligence Board (EAI Board) for review and possible recommendations on how to address any deficiencies.
Market Surveillance and Control of AI Systems in the Union Market
Market surveillance authorities (MSAs) must annually report to the European Commission and national competition authorities about AI market activity that may affect competition law. They also report annually on the prohibited practices they encountered and actions taken. For high-risk AI systems linked to products covered by existing EU harmonization laws, the same authorities designated under those laws will act as surveillance authorities.
Member States can assign other authorities to manage AI system surveillance, provided they ensure coordination with sectoral authorities.
If existing sectoral laws already provide adequate safety and surveillance procedures for certain products (such as medical devices), these procedures will apply, rather than the new AI-specific regulations.
Market surveillance authorities are empowered to carry out remote inspections and enforcement actions to ensure compliance with AI regulations, such as accessing data from manufacturers.
Surveillance of high-risk AI used by financial institutions falls under the authority of national financial regulators. Other relevant authorities may also be involved, provided coordination is ensured. For banks involved in the Single Supervisory Mechanism, surveillance findings relevant to financial supervision must be reported to the European Central Bank.
High-risk AI systems used in sensitive areas like law enforcement or border management must be supervised by data protection authorities or other relevant authorities.
The European Data Protection Supervisor is the surveillance authority for EU institutions, except the European Court of Justice when acting in a judicial capacity.
Market surveillance authorities and the European Commission can propose joint investigations or activities to promote AI compliance and identify non-compliance across multiple Member States. The AI Office helps coordinate these efforts.
Surveillance authorities must have access to the documentation, training data, and validation datasets used to develop high-risk AI systems, possibly through APIs or other technical means, subject to security measures. In specific cases, where necessary for assessing compliance, authorities can request access to the source code of AI systems after other verification methods have been exhausted.
Procedure for AI Systems that Present a Risk
AI systems presenting a risk are treated as "products presenting a risk" under Article 3, point 19 of Regulation (EU) 2019/1020. These systems are flagged if they endanger health, safety, or fundamental rights.
When a national market surveillance authority (MSA) identifies an AI system as risky, they evaluate whether it complies with EU AI regulations. Special attention is given to systems affecting vulnerable groups, and the authority must cooperate with other relevant national bodies, particularly where risks to fundamental rights are involved.
If the AI system does not comply, the authority demands corrective actions (e.g., withdrawal, recall, or compliance adjustments), which must happen within 15 working days or sooner if required by harmonized laws.
If non-compliance is not limited to one country, the national MSA must notify the European Commission and other EU Member States about the risk and actions being taken.
The operator (entity deploying the AI system) is responsible for taking necessary corrective actions across all markets in the EU if an issue is identified. If the operator fails to do so within the prescribed time, the MSA can implement provisional measures such as prohibiting the sale or use of the AI system within the country.
When corrective measures are imposed, the MSA must share detailed information about the AI system's risks with the Commission and other Member States, including data about non-compliance, origin, and the supply chain. The notification must specify whether the non-compliance stems from:
Prohibited AI practices (e.g., AI systems manipulating behavior).
High-risk AI systems failing to meet obligations (covered in Chapter III, Section 2).
Failures in meeting standards for presumed compliance.
Breaches of transparency requirements.
Other national MSAs will share any additional information they have on the AI system and notify the Commission of their own measures. If they disagree with the initial MSA’s actions, they must raise objections. If no objections are raised within three months, the corrective measures are considered justified and enforced across the EU.
Special Considerations for AI Systems Misclassified as Non-high-risk
If a market surveillance authority believes a system classified as non-high-risk should be considered high-risk, it will evaluate the system based on Annex III (which lists criteria for high-risk AI). If reclassification to high-risk is necessary, the provider is required to take corrective actions to bring the system in compliance with regulations.
The market surveillance authority must inform the Commission and other EU Member States of the results if the reclassification impacts AI systems deployed across borders.
Providers that intentionally misclassify AI systems to evade high-risk requirements face fines as outlined below.
Enforcement of General-Purpose AI Model Obligations
The European Commission is the main authority responsible for supervising and enforcing rules related to general-purpose AI models. To handle these tasks, the Commission will delegate responsibilities to a specialized body called the AI Office. This does not interfere with how tasks are divided between the EU and its Member States.
If a national market surveillance authority (like a country's consumer safety body) needs help enforcing the AI rules, it can request that the Commission step in. This is only done if it is necessary and proportionate to the task.
The AI Office is responsible for monitoring whether providers of general-purpose AI models are complying with the AI Act. This includes checking if they follow approved codes of practice, which are guidelines they voluntarily agree to follow.
Any business or individual that uses a general-purpose AI model (referred to as a “downstream provider”) can file a complaint if they believe the AI model provider has violated the regulations. A valid complaint must:
Include the contact details of the AI provider,
Provide a clear description of the violation,
Offer any additional relevant information to support the claim.
The Commission can ask AI providers to provide documentation and information, such as details about how their models are tested for safety and how they comply with regulations. Before formally requesting information, the AI Office may first engage the provider in a structured dialogue to clarify any concerns or gather preliminary information.
When the Commission requests information, they must explain:
The legal basis for the request,
The purpose of the request,
What specific information is needed,
The deadline for providing the information, and
The penalties for providing incorrect or incomplete information.
The AI provider must supply the requested information. If the provider is a legal entity (like a corporation), its authorized representative or lawyer can handle the submission, but the provider remains responsible for the accuracy.
If the information provided by the AI provider is insufficient, or if the AI model is believed to pose a systemic risk, the AI Office can conduct its own evaluation of the AI model to check compliance with the rules.
The Commission can hire independent experts (including those from the scientific panel) to conduct the evaluation on its behalf. The Commission can request technical access to an AI model, such as through its APIs (application programming interfaces) or even access to its source code, in order to perform the evaluation. The request for access must include:
The legal basis and reasons for the request,
The deadline for providing access, and
The penalties for non-compliance.
Like with information requests, the AI provider (or its legal representative) must comply with the access request. The Commission will issue further detailed rules on how these evaluations should take place and how independent experts are involved.
If necessary, the Commission can ask AI providers to take specific corrective actions, such as:
Ensuring compliance with legal obligations,
Implementing risk mitigation measures if a serious risk is identified,
Removing the AI model from the market if it poses significant risks.
If the AI provider offers to implement appropriate measures to mitigate identified risks, the Commission can make these commitments legally binding, and no further action would be necessary.
Penalties
Member States are responsible for setting penalties and enforcement measures for violations of the regulation by AI operators. These measures can include both monetary and non-monetary penalties (such as warnings). Penalties must be effective, proportionate, and dissuasive. This means they should effectively discourage non-compliance without being unnecessarily harsh. Member States should also consider the impact on SMEs, including startups, ensuring that penalties do not disproportionately harm their economic viability.
Member States must notify the European Commission of their penalty rules by 2 August 2025, when the penalty provisions start to apply. Any future changes to these rules must also be promptly communicated to the Commission.
Non-compliance with the prohibition of AI practices in Article 5 (covering the prohibited AI practices, explained above) can result in administrative fines of up to EUR 35 million or 7% of the violator's total worldwide annual turnover, whichever is higher.
Non-compliance with other obligations (e.g., transparency, obligations of providers or distributors, etc.) can lead to fines up to EUR 15 million or 3% of total worldwide turnover, whichever is higher.
Supplying incorrect, incomplete, or misleading information to authorities or notified bodies may result in fines up to EUR 7.5 million or 1% of global annual turnover.
For SMEs (including startups), each of the fines above is capped at whichever is lower: the fixed amount or the percentage of worldwide turnover.
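As a simple arithmetic illustration of the "whichever is higher" rule (and its reversal for SMEs), consider the sketch below. The thresholds are those stated above; the turnover figure is invented for the example.

```python
# Illustrative calculation of the administrative fine caps described above.
# The thresholds are those stated in the text; this is a simplification for
# discussion only, not legal advice.

def fine_cap(fixed_cap_eur: float, turnover_share: float,
             worldwide_turnover_eur: float, is_sme: bool) -> float:
    """Return the applicable maximum fine for one infringement tier."""
    percentage_cap = turnover_share * worldwide_turnover_eur
    if is_sme:
        return min(fixed_cap_eur, percentage_cap)  # SMEs: whichever is lower
    return max(fixed_cap_eur, percentage_cap)      # others: whichever is higher

# Example: prohibited-practice tier (EUR 35 million or 7% of turnover) for a
# large provider with EUR 1 billion worldwide annual turnover.
print(fine_cap(35_000_000, 0.07, 1_000_000_000, is_sme=False))  # 70000000.0
```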
When deciding on fines, authorities will consider factors such as:
The nature, gravity, and duration of the infringement.
Whether the infringement affected people and to what extent.
Whether the operator cooperated with authorities to remedy the issue.
The economic benefit gained from non-compliance.
The intentional or negligent character of the violation.
This ensures that penalties are tailored to the specific context of the violation.
Depending on the legal system, Member States may allow fines to be imposed by national courts or other bodies. The mechanism used must have the same effect as the fines imposed under this regulation.
* * *
Prokopiev Law Group offers expert guidance on the AI regulatory landscape, ensuring full compliance with complex frameworks like the EU AI Act. With our experience and a global network of partners, we help clients meet AI compliance requirements in every jurisdiction, avoiding costly fines and operational setbacks. Whether you are developing or deploying AI systems across borders, our firm has the expertise to advise on regulatory obligations, from data protection to AI risk management. Contact us today to ensure your business is fully compliant and prepared for the future of AI regulation.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.