The European Union has enacted a regulation on artificial intelligence (AI) designed to stimulate innovation, ensure the trustworthiness of AI systems, and safeguard fundamental rights (the Regulation or the AI Act). The Regulation lays down harmonized rules and obligations for providers, deployers, importers, and distributors of AI systems within the EU. It also extends to third-country entities whose AI systems impact the EU market or individuals within the EU. Additionally, the Regulation establishes governance structures, enforcement mechanisms, and penalties for non-compliance at both the EU and national levels.
Legal Basis and Scope
The AI Act is founded on Articles 16 and 114 of the Treaty on the Functioning of the European Union (TFEU). It aims to improve the functioning of the internal market by creating a legal framework specifically for the development, placing on the market, and use of AI systems within the Union.
Uniform Legal Framework
AI systems can be deployed across various sectors and regions and easily circulate throughout the Union. Diverging national rules can fragment the internal market and reduce legal certainty for operators. Therefore, the AI Act ensures a consistently high level of protection across the Union, promoting trustworthy AI while preventing obstacles to free circulation, innovation, deployment, and uptake of AI systems.
Complementarity with Existing Laws
The Regulation complements Union laws on data protection, consumer protection, fundamental rights, employment, and product safety. It does not affect the rights and remedies such acts provide, including compensation for damages and social policy laws related to employment and working conditions.
Exclusions
AI systems developed solely for scientific research and development are excluded from the Regulation's scope until they are placed on the market or put into service. AI systems used exclusively for military, defense, or national security purposes are also excluded. However, if such systems are used for civilian purposes, they must comply with the AI Act.
Data Protection Compliance
The Regulation complements existing data protection laws, ensuring AI systems processing personal data adhere to the General Data Protection Regulation (GDPR) and other relevant regulations. It does not seek to alter the application of existing Union laws governing personal data processing but rather facilitates the effective implementation and exercise of data subjects' rights and remedies.
Third-Country Entities
The Regulation applies to AI systems that are not placed on the market within the European Union but whose output is used within the Union. This includes scenarios where:
Contractual Agreements: An operator established in the EU contracts services involving an AI system from an operator established in a third country. The AI system processes data lawfully collected within the EU and transfers the output back to the EU operator for use within the Union.
Impact on Individuals: The AI Act applies to AI systems used in a third country that produce outputs affecting individuals within the EU, regardless of the system's physical location or the operator's establishment.
The Regulation does not apply to public authorities of third countries or international organizations when acting within the framework of cooperation or international agreements concluded at the Union or national level for law enforcement and judicial cooperation. These entities are exempted provided they offer adequate safeguards for the protection of fundamental rights and freedoms. This includes:
Bilateral Agreements: Agreements established between Member States and third countries or between the EU, its agencies, and international organizations.
Adequate Safeguards: The relevant authorities assess whether these agreements include sufficient safeguards for the protection of fundamental rights and freedoms.
Prohibited AI Practices
1. Manipulative Techniques
AI systems that employ subliminal components or other manipulative techniques designed to distort human behavior in a manner that causes or is likely to cause significant harm are strictly prohibited. These techniques include, but are not limited to, the use of stimuli beyond human perception to nudge individuals toward specific behaviors, significantly impairing their autonomy, decision-making, and free choice.
2. Exploitation of Vulnerabilities
AI systems that exploit the vulnerabilities of specific groups due to their age, disability, or social and economic conditions, resulting in behaviors that materially distort their actions and cause significant harm, are banned. This includes AI systems that exploit individuals' lack of understanding or capacity to resist specific influences, leading to detrimental outcomes.
3. Social Scoring by Public Authorities
AI systems utilized by public authorities for social scoring, which leads to discriminatory outcomes or unjustly limits individuals' access to essential services, are prohibited. Examples include systems that evaluate or classify individuals based on their social behavior, personal characteristics, or predicted behavior across various contexts, resulting in detrimental treatment unrelated to the context in which the data was originally collected.
4. Remote Biometric Identification in Public Spaces for Law Enforcement
Using real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is generally prohibited. Exceptions are strictly limited to narrowly defined situations where such use is necessary to achieve a substantial public interest that outweighs the risks. These situations include:
Locating or identifying missing persons, including victims of crime.
Preventing imminent threats to life or physical safety, such as terrorist attacks.
Identifying perpetrators or suspects of serious criminal offenses listed in an annex to the AI Act, where the offense is punishable by a custodial sentence of at least four years.
The use of such systems must be subject to prior judicial or independent administrative authorization, except in cases of urgency where obtaining prior authorization is impractical. In such urgent cases, the use must be limited to the minimum necessary duration, and the reasons for not obtaining prior authorization must be documented and submitted for approval as soon as possible.
5. Biometric Categorization and Emotion Recognition
AI systems used for biometric categorization are prohibited where they assign individuals to categories on the basis of their biometric data in order to deduce or infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Additionally, AI systems intended for emotion recognition in sensitive contexts such as workplaces or educational settings are banned, except for medical or safety reasons, due to their potential for misuse and the significant privacy risks involved.
Risk Assessment and Mitigation
Providers and deployers of AI systems must conduct risk assessments to ensure their systems do not fall into the prohibited categories. This includes evaluating the potential impact on individuals' autonomy, decision-making, and fundamental rights.
Transparency and accountability measures must be in place to ensure compliance with these prohibitions, including maintaining documentation of AI system design, development, and deployment processes, allowing for effective monitoring and enforcement by relevant authorities.
High-Risk AI Systems
1. General Criteria for Classification of High-Risk AI Systems
An AI system is classified as high-risk if it meets specific conditions relating to safety components and conformity assessments. These conditions are detailed with reference to the Union harmonization legislation listed in Annex I of the Regulation. The legislation includes:
Regulation (EC) No 300/2008: On common rules in the field of civil aviation security.
Regulation (EU) No 167/2013: Regarding the approval and market surveillance of agricultural and forestry vehicles.
Regulation (EU) No 168/2013: Relating to the approval and market surveillance of two- or three-wheel vehicles and quadricycles.
Directive 2014/90/EU: On marine equipment, ensuring the compliance of equipment used on EU ships.
Directive (EU) 2016/797: On the interoperability of the rail system within the European Union.
Regulation (EU) 2018/858: On the approval and market surveillance of motor vehicles and their trailers, and systems, components, and separate technical units intended for such vehicles.
Regulation (EU) 2018/1139: Establishing common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency.
Regulation (EU) 2019/2144: On type-approval requirements for motor vehicles and their trailers, and systems, components, and separate technical units intended for such vehicles, with a focus on general safety and the protection of vehicle occupants and vulnerable road users.
2. Additional Criteria
In addition to the criteria mentioned above, AI systems listed in Annex III are also classified as high-risk. These systems include those used in:
Biometrics: Remote biometric identification systems, biometric categorization, and emotion recognition systems.
Critical Infrastructure: AI systems used in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity.
Education and Vocational Training: Systems determining access or admission to educational institutions, evaluating learning outcomes, and monitoring prohibited behavior during tests.
Employment and Workforce Management: AI systems used for recruitment, selection, monitoring, and performance evaluation of employees.
Essential Services and Benefits: Systems used by public authorities for evaluating eligibility for public assistance, creditworthiness, risk assessment in life and health insurance, and emergency response services.
3. Exemptions
An AI system will not be considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This exemption applies where one or more of the following conditions are met:
The AI system is intended to perform a narrow procedural task.
It is designed to improve the result of a previously completed human activity.
It detects decision-making patterns or deviations without replacing or influencing the human assessment.
It performs a preparatory task relevant to the assessment purposes listed in Annex III. However, AI systems referred to in Annex III that perform profiling of natural persons are always considered high-risk.
Providers who consider that an AI system listed in Annex III is not high-risk must document their assessment before placing the system on the market or putting it into service. These providers remain subject to the registration obligation set out in Article 49(2) and must provide the assessment documentation to national competent authorities upon request.
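To illustrate how these conditions combine, here is a minimal sketch of the exemption logic in Python. The function and parameter names are our own illustration, not terms from the Regulation, and the sketch is for discussion purposes only:

```python
def is_high_risk_annex_iii(
    performs_profiling: bool,
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_human_assessment: bool,
    preparatory_task_only: bool,
) -> bool:
    """Sketch of the Annex III exemption logic: profiling of natural
    persons is always high-risk; otherwise the system is exempt if at
    least one of the four exemption conditions applies."""
    if performs_profiling:
        # Annex III systems that profile natural persons are always high-risk.
        return True
    exemption_applies = any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_replacing_human_assessment,
        preparatory_task_only,
    ])
    return not exemption_applies


# Example: a recruitment tool that profiles applicants remains high-risk
# even if it also performs a narrow procedural task.
assert is_high_risk_annex_iii(True, True, False, False, False) is True
```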
Compliance and Enforcement
1. General Obligations
Providers of high-risk AI systems must ensure that their systems comply with the requirements set out in the AI Act before they are placed on the market or put into service. These obligations include:
Risk Management System: Providers must establish and implement a risk management system that identifies, analyzes, and mitigates risks associated with the AI system throughout its lifecycle. This includes both pre-market and post-market activities.
Quality Management System: Providers must establish a quality management system that ensures the AI system consistently meets the requirements of the Regulation. This system must include documented policies and procedures for design, development, testing, and monitoring.
Technical Documentation: Providers must prepare and maintain detailed technical documentation for each AI system. This documentation must include information on the system's design, development, testing, and risk management measures.
Conformity Assessment: Providers must ensure that the AI system undergoes the appropriate conformity assessment procedure before it is placed on the market or put into service. This includes ensuring that the system meets all applicable requirements and standards.
Post-Market Monitoring: Providers must establish and maintain a post-market monitoring system to continuously assess the AI system's performance and safety. This includes collecting and analyzing data on the system's operation and any incidents or malfunctions.
2. Specific Requirements
Providers must also ensure compliance with the following specific requirements for high-risk AI systems:
Human Oversight: Providers must design AI systems to enable effective human oversight, ensuring that individuals can intervene in the system's operation and prevent or mitigate potential harm.
Accuracy, Robustness, and Cybersecurity: Providers must ensure that the AI system is accurate, robust, and secure. This includes implementing measures to protect the system from cybersecurity threats and ensuring that it can withstand foreseeable operating conditions.
Transparency and Traceability: Providers must ensure that the AI system operates transparently, providing clear information on its capabilities, limitations, and decision-making processes. This includes maintaining detailed records to ensure traceability and accountability.
Data Governance: Providers must implement data governance measures to ensure the quality and integrity of data used by the AI system. This includes procedures for data collection, storage, and processing, as well as measures to protect data privacy and security.
3. Obligations of Importers
Importers must ensure that AI systems they place on the market comply with the requirements of the AI Act. This includes:
Verification of Conformity: Importers must verify that the provider has conducted the appropriate conformity assessment procedure and that the AI system meets all applicable requirements.
Technical Documentation and Information: Importers must ensure that the provider has prepared the necessary technical documentation and made it available upon request by national authorities.
Post-Market Monitoring and Reporting: Importers must monitor the performance of AI systems they place on the market and report any incidents or non-compliance to the relevant national authorities.
Contact Information: Importers must include their name, registered trade name or trademark, and contact address on the AI system or its packaging, ensuring that end-users and authorities can easily identify and contact them.
Storage and Transport: Importers must ensure that the AI system is stored and transported under conditions that do not affect its compliance with the requirements of the AI Act.
4. Obligations of Distributors
Distributors must verify that the AI systems they make available on the market comply with the requirements of the AI Act. This includes:
Verification of Compliance: Distributors must verify that the provider and importer have fulfilled their obligations under the Regulation, including the completion of the conformity assessment procedure and the availability of technical documentation.
Information to Authorities: Distributors must provide relevant information to national authorities upon request and cooperate with them to ensure compliance with the AI Act.
Storage and Transport: Distributors must ensure that AI systems are stored and transported under conditions that do not affect their compliance with the requirements of the Regulation.
Post-Market Monitoring: Distributors must participate in post-market monitoring activities and report any incidents or non-compliance to the relevant national authorities.
Penalties
The Regulation mandates that Member States establish penalties for non-compliance that are effective, proportionate, and dissuasive. Specific measures include:
1. Fines
Non-compliance with the prohibition of AI practices referred to in Article 5 shall result in administrative fines of up to 35,000,000 EUR or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
Non-compliance with other provisions related to operators or notified bodies (excluding those laid down in Article 5) shall be subject to administrative fines of up to 15,000,000 EUR or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher. This includes obligations under:
Article 16 (obligations of providers),
Article 22 (obligations of authorised representatives),
Article 23 (obligations of importers),
Article 24 (obligations of distributors),
Article 26 (obligations of deployers),
Articles 31, 33(1), 33(3), 33(4), or 34 (requirements and obligations of notified bodies),
Article 50 (transparency obligations for providers and deployers).
Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request shall result in administrative fines of up to 7,500,000 EUR or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
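To make the "whichever is higher" mechanics concrete, here is a minimal worked sketch in Python for the undertaking case. The turnover figure and the helper function are illustrative assumptions only; actual fines are set by the competent authorities within these ceilings:

```python
def fine_ceiling_eur(fixed_cap_eur: int, turnover_share: float,
                     worldwide_annual_turnover_eur: int) -> float:
    """Upper limit of the administrative fine for an undertaking: the
    higher of the fixed cap and the stated share of its total worldwide
    annual turnover for the preceding financial year."""
    return max(fixed_cap_eur, turnover_share * worldwide_annual_turnover_eur)


# Illustrative figures: an undertaking with EUR 2 billion worldwide
# annual turnover infringing the Article 5 prohibitions.
ceiling = fine_ceiling_eur(35_000_000, 0.07, 2_000_000_000)
print(f"Fine ceiling: EUR {ceiling:,.0f}")  # Fine ceiling: EUR 140,000,000
```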
2. Suspension or Withdrawal
In cases of serious non-compliance, Member States may suspend or withdraw AI systems from the market to prevent further infringements and mitigate any ongoing risks.
3. Corrective Actions
Providers of non-compliant AI systems may be required to undertake mandatory corrective actions to ensure conformity with the AI Act. This may involve updating system functionalities, revising operational processes, or enhancing data protection measures.
Formal Non-Compliance Measures:
The market surveillance authority of a Member State may require providers to remedy instances of formal non-compliance, such as improper CE marking, an incorrect EU declaration of conformity, failure to register in the EU database, the absence of an authorized representative, or unavailable technical documentation. Persistent non-compliance can lead to further restrictions, prohibition, recall, or withdrawal of the high-risk AI system from the market.
4. Union AI Testing Support Structures
The Commission designates Union AI testing support structures to provide independent technical or scientific advice to market surveillance authorities.
Remedies
The Regulation ensures that individuals and entities affected by non-compliant AI systems have access to appropriate remedies, which include:
1. Complaints
Any natural or legal person who believes there has been an infringement of the Regulation can submit reasoned complaints to the relevant market surveillance authority. These complaints must be considered in the course of market surveillance activities and handled according to established procedures.
2. Judicial Redress
Affected individuals have the right to seek judicial redress for damages caused by non-compliant AI systems. This includes the right to obtain clear and meaningful explanations from the deployer of high-risk AI systems when a decision significantly affects their health, safety, or fundamental rights.
3. Right to Explanation
Individuals significantly affected by decisions based on high-risk AI systems listed in Annex III, with certain exceptions, are entitled to an explanation of the role of the AI system in the decision-making process and the main elements of the decision taken.
Protection of Whistleblowers
Persons reporting infringements of the Regulation are protected under Directive (EU) 2019/1937 on the protection of persons who report breaches of Union law.
European Artificial Intelligence Board
The European Artificial Intelligence Board (the Board) is established to support the consistent application of the AI Regulation across the Union. The Board comprises representatives from:
National supervisory authorities responsible for the implementation of the Regulation.
The European Data Protection Supervisor.
The European Commission, which chairs the Board.
The Board's primary responsibilities include:
Advising and Assisting the Commission: The Board advises and assists the European Commission in matters related to AI regulation, including providing opinions and recommendations.
Promoting Cooperation: The Board promotes cooperation between national supervisory authorities to ensure consistent application and enforcement of the AI Act across Member States.
Issuing Guidelines and Recommendations: The Board issues guidelines, recommendations, and best practices to facilitate the implementation of the Regulation, ensuring a harmonized approach to AI governance.
Facilitating Exchange of Information: The Board facilitates the exchange of information among national authorities, enhancing the effectiveness of supervision and enforcement actions.
The Board operates based on internal rules of procedure, which detail its functioning, including decision-making processes and meeting schedules. The rules of procedure are adopted by a simple majority vote of the Board members.
The Board may establish subgroups to address specific issues or tasks. These subgroups are composed of Board members or external experts as needed. The establishment of subgroups must be approved by the Board.
National Supervisory Authorities
Each Member State must designate one or more national supervisory authorities responsible for monitoring the application of the AI Act. The responsibilities of national supervisory authorities include:
Monitoring and Enforcement: Ensuring that AI systems placed on the market or put into service in their jurisdiction comply with the Regulation.
Investigations and Inspections: Conducting investigations and inspections to verify compliance, including the power to access premises and documents.
Handling Complaints: Receiving and handling complaints from individuals and entities regarding potential non-compliance with the AI Act.
Imposing Penalties: Imposing administrative penalties and corrective measures for non-compliance, as outlined in the Regulation.
National supervisory authorities must operate independently and be free from external influence. Member States must ensure that these authorities have adequate resources, including financial, technical, and human resources, to effectively perform their duties.
* * * For more information on how the AI Regulation can ensure compliance and foster innovation within the web3 landscape, please reach out to us. Prokopiev Law Group, with its broad global network of partners, ensures your compliance worldwide. Popular legal inquiries in the web3 sector include regulatory compliance for decentralized finance (DeFi), NFT marketplaces, and blockchain gaming platforms. Our team is well-equipped to address these complexities and provide tailored legal solutions to navigate the evolving regulatory environment of web3 technologies. Contact us to ensure your web3 projects align with current legal standards and maximize their potential within the global market.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.