Search Results
- ESMA Guidelines on the Conditions and Criteria for the Qualification of Crypto-Assets as Financial Instruments
The Guidelines apply to competent authorities, financial market participants, and any individual or entity engaged in crypto-asset activities. They seek to clarify how Article 2(5) of MiCA should be applied when determining whether a crypto-asset qualifies as a financial instrument. They enter into force sixty days after their publication in all official EU languages on ESMA’s website, which took place on March 19, 2025.

Legislative References, Abbreviations, and Definitions

Underpinning these Guidelines are several core pieces of legislation. The most central of these are MiFID II (Directive 2014/65/EU), AIFMD (Directive 2011/61/EU), MiCA (Regulation (EU) 2023/1114), UCITSD (Directive 2009/65/EC), the Money Market Fund Regulation (Regulation (EU) 2017/1131), and the ESMA Regulation (Regulation (EU) 1095/2010). The Guidelines also refer to DLTR (Regulation (EU) 2022/858), which governs the pilot regime for market infrastructures based on distributed ledger technology. Relevant abbreviations include AIF for Alternative Investment Fund, ART for Asset-Referenced Token, CASP for Crypto-Asset Service Provider, DLT for Distributed Ledger Technology, EMT for Electronic Money Token, and NFT for Non-Fungible Token.

Classification of Crypto-Assets as Transferable Securities (Guideline 2)

To determine whether a crypto-asset qualifies as a transferable security, it is important to verify whether the crypto-asset grants rights equivalent to those attached to shares, bonds, or other forms of securitised debt. The text of MiFID II (Article 4(1)(44)) underpins this assessment. According to the Guidelines, three main criteria must be cumulatively fulfilled. First, the crypto-asset must not be an instrument of payment, so if its sole use is as a medium of exchange, it does not qualify as a transferable security. Second, the crypto-asset must belong to a “class” of securities, meaning that it confers the same rights and obligations on all holders or else belongs to a distinct class within the issuance.
Third, it must be “negotiable on the capital market,” which generally means that it can be freely transferred or traded, including on trading platforms equivalent to those covered by MiFID. If all these points are satisfied, then the crypto-asset should be classified as a transferable security and treated under the same rules that govern traditional instruments.

Classification as Other Types of Financial Instruments

Money-Market Instruments (Guideline 3)

A crypto-asset that would be considered a money-market instrument must normally be traded on the money market and should not serve merely as an instrument of payment. The crypto-asset should exhibit features akin to short-term negotiable debt obligations, such as treasury bills or commercial paper, and typically have a short maturity or a fixed redemption date. An example might be a token representing a certificate of credit balance repayable within a short timeframe, though it must be clearly distinguishable from mere payment tools.

Units in Collective Investment Undertakings (Guideline 4)

A crypto-asset qualifies as a unit or share in a collective investment undertaking if it involves pooling capital from multiple investors, follows a predefined investment policy managed by a third party, and pursues a pooled return for the benefit of those investors. The focus is on whether participants lack day-to-day discretion over how the capital is managed and whether the project is not purely commercial or industrial in purpose. An example would be a token representing ownership in a fund-like structure that invests in a portfolio of other digital or traditional assets; if it meets the criteria from existing definitions in AIFMD and UCITSD (excluding pure payment or operational tools), it may be deemed a collective investment undertaking.
Derivative Contracts (Guideline 5)

The Guidelines recognize two broad scenarios for derivatives: crypto-assets can serve as the underlying asset for a derivative contract, or they can themselves be structured as derivative contracts. In both cases, reference must be made to Annex I, Section C(4)-(10) of MiFID II, which identifies features such as a future commitment (forward, option, swap, or similar) and a value derived from an external reference point, such as a commodity price, interest rate, or another crypto-asset. Whether the derivative settles in fiat or crypto is not decisive if the essential characteristics of a derivative are present. This includes perpetual futures or synthetic tokens that track an index or basket of assets, provided they fit into one of MiFID II’s derivative categories.

Emission Allowances (Guideline 6)

A crypto-asset may be considered an emission allowance if it represents a right to emit a set amount of greenhouse gases recognized under the EU Emissions Trading Scheme, in line with Directive 2003/87/EC. If the token is interchangeable with official allowances and can be used to meet compliance obligations, it should then be regulated under MiFID II as an emission allowance. On the other hand, self-declared carbon credits or voluntary offsets that are not recognized by EU authorities do not fall under this category.

Background on the Notion of Crypto-Assets

Classification as Crypto-Assets (Guideline 7)

The Guidelines reiterate that a crypto-asset, in general, is a digital representation of value or rights, transferred and stored via DLT. If it cannot be transferred beyond the issuer or if it is purely an instrument of payment, it typically falls outside the scope of these financial-instrument rules. Moreover, the fact that a holder anticipates profit from a token’s appreciation is not by itself sufficient to qualify the token as a financial instrument.
Crypto-Assets That Are Unique and Non-Fungible (NFTs) (Guideline 8)

Non-fungible tokens, which are unique and not interchangeable with each other, are excluded from MiCA provided they genuinely fulfill the requirement of uniqueness. This means having distinctive characteristics or rights that cannot be matched by any other asset in the same issuance. Merely assigning a unique technical identifier to each token is not enough to establish non-fungibility if the tokens effectively grant identical rights and are indistinguishable in economic reality. Fractionalizing an NFT into multiple tradable pieces typically renders those fractional parts non-unique unless each part has distinct attributes of its own.

Hybrid Crypto-Assets (Guideline 9)

Some tokens combine features typical of multiple crypto-asset categories, such as partial investment features (like profit participation) alongside a utility function (like access to a digital service). If, on closer assessment, any component of the token fits the definition of a financial instrument under MiFID II, the financial instrument classification applies, taking precedence over other labels. The Guidelines thus underline that hybrid tokens must be evaluated under a substance-over-form approach, with a focus on their actual rights, obligations, and economic features rather than how the issuer labels them.

Conclusion

Taken as a whole, the Guidelines demonstrate ESMA’s intention to ensure that all tokens conferring rights equivalent to conventional financial instruments are appropriately supervised under MiFID II. Although labels such as “utility” or “NFT” may be used by issuers, the ultimate question is whether the token’s real-world function and associated rights align with those of a security, a derivative, or another regulated category. By following this approach, authorities and market participants can maintain consistent, technology-neutral regulation in the fast-evolving crypto-asset space.
Prokopiev Law Group stays at the forefront of Web3 compliance and regulatory intelligence, offering strategic support across NFT legal solutions, DAO governance, DeFi compliance, token issuance, crypto KYC, and smart contract audits. Leveraging a broad global network of partners, we ensure your project meets evolving regulations worldwide, including in the EU, US, Singapore, Switzerland, and the UK. If you want tailored guidance to protect your interests and remain future-proof, write to us for more information.

Reference: Guidelines on the conditions and criteria for the qualification of crypto-assets as financial instruments

The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- E-Money and Electronic Money Tokens (EMTs)
How do electronic money (e-money) and electronic money tokens (EMTs) differ, and what are the regulatory frameworks governing them within the European Economic Area (EEA)?

Definition and Regulation of E-Money Tokens (EMTs)

E-Money Tokens (EMTs): EMTs are a specific type of crypto-asset whose value is typically pegged to a single fiat currency such as the Euro or US Dollar. These crypto-assets represent digital value or rights that can be transferred and stored electronically through distributed ledger technology (DLT) or similar systems. DLT operates as a synchronized information repository shared across multiple network nodes.

Regulatory Framework: The Markets in Crypto-Assets Regulation (EU) 2023/1114 (MiCA) outlines stringent conditions for the issuance of EMTs. Key points include: EMTs can only be issued by credit institutions or Electronic Money Institutions (EMIs) regulated by an EEA regulator; MiCA came into effect in June 2023 and will be fully applicable from December 30, 2024.

Issuer Obligations Under MiCA

Prudential, Organizational, and Conduct Requirements: Issuers must adhere to specific prudential standards, organizational requirements, and business conduct rules, including issuing EMTs at par value, granting holders redemption rights at par value, and a prohibition on granting interest on EMTs.

White Paper Requirements: Issuers are mandated to publish a white paper with detailed information such as: issuer details (name, address, registration date, parent company if applicable, and potential conflicts of interest); EMT specifics (name, description, and details of developers); public offer details (total number of units offered); rights and obligations (redemption rights and complaints handling procedures); the underlying technology; and associated risks and mitigation measures.

Significant e-money tokens (EMTs) are subject to higher capital requirements and enhanced oversight by the European Banking Authority (EBA).
Significant EMTs are defined as those which can scale up significantly, potentially impacting financial stability, monetary sovereignty, and monetary policy within the EU. The EBA mandates that issuers of significant EMTs hold additional capital reserves. Specifically, significant issuers must maintain capital that is the higher of €2 million or 3% of the average reserve assets. The EBA monitors these issuers closely, requiring detailed reports on their financial health and risk management practices. Issuers of significant EMTs must also adhere to comprehensive reporting obligations. They need to provide regular updates on their liquidity positions, stress testing results, and compliance with redemption obligations.

Definition and Regulation of Electronic Money

Electronic Money (E-Money): E-money is defined as electronically or magnetically stored monetary value representing a claim on the issuer. Its characteristics include: it is issued upon receipt of funds for the purpose of payment transactions; it is accepted by entities other than the issuer; and it is not excluded by Regulation 5 of the European Communities (Electronic Money) Regulations 2011 (EMI Regulations).

Exclusions Under Regulation 5: The EMI Regulations exclude monetary value stored on specific payment instruments with limited use, as well as monetary value used for specific payment transactions by electronic communications service providers.

Electronic Money Institutions (EMIs): An EMI is an entity that has been authorized to issue e-money under the EMI Regulations; such authorization is necessary for any e-money issuance within the EEA.

Comparative Analysis of E-Money and EMTs

Definition: E-Money: electronically stored monetary value represented by a claim on the issuer. EMTs: crypto-assets whose value is usually linked to a single fiat currency.

Issuers: E-Money: issued by EMIs upon receipt of funds for making payment transactions. EMTs: issued by EMIs and/or credit institutions.
Legal Regime: E-Money: governed by the European Communities (Electronic Money) Regulations 2011. EMTs: governed by MiCA.

Status: E-Money: not necessarily an EMT, but can be, depending on how it is transferred and stored. EMTs: all EMTs are also considered e-money.

To ensure compliance with the latest regulations and navigate the Web3 legal landscape, please contact Prokopiev Law Group. Our expertise in cryptocurrency law, smart contracts, and regulatory compliance, combined with our extensive global network of partners, guarantees that your business adheres to both local and international standards. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
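The significant-issuer capital figure quoted in this article (the higher of €2 million or 3% of average reserve assets) can be sketched as a one-line calculation. This illustrates only the figures as stated above; the full MiCA own-funds methodology has further components and is a matter for the issuer's advisors:

```python
def significant_emt_capital_floor(avg_reserve_assets_eur: float) -> float:
    """Illustrative capital floor for a significant EMT issuer, per the
    figures quoted in the article: the higher of EUR 2,000,000 or 3% of
    average reserve assets. A sketch, not the full MiCA calculation."""
    return max(2_000_000.0, 0.03 * avg_reserve_assets_eur)

# With EUR 500m in average reserves, 3% (EUR 15m) exceeds the EUR 2m floor;
# with EUR 10m in reserves, 3% is only EUR 300k, so the EUR 2m floor applies.
print(significant_emt_capital_floor(500_000_000))
print(significant_emt_capital_floor(10_000_000))
```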
- Cyprus Opens Applications for the MiCA License
The Cyprus Securities and Exchange Commission (CySEC) has initiated a preliminary assessment phase for Crypto-Asset Service Providers (CASPs) applying under the forthcoming EU Markets in Crypto-Assets Regulation (MiCA). Effective today, November 13, 2024, CASPs in Cyprus can submit applications to CySEC in preparation for MiCA’s full implementation on December 30, 2024. This step by CySEC aligns with the MiCA framework, a regulation setting standardized rules for crypto-asset markets across the EU. As part of this preliminary phase, CySEC has made application and notification forms accessible on its website for CASPs and other financial entities authorized under Article 60 of MiCA, including investment firms, UCITS managers, and alternative investment fund managers, to submit notifications or seek authorization under Article 63.

Important Points for this Preliminary Phase

During this phase, CySEC will receive applications from both entities currently regulated under Cyprus’s national crypto-asset laws and new market entrants aiming for MiCA compliance. While accepting applications early, CySEC retains the discretion to prioritize applications, particularly for entities already regulated under Cyprus’s existing crypto-asset rules. Submissions during this preliminary phase will be officially considered upon completion of formalities, including fee payment and verification of information accuracy, by December 30, 2024. CySEC will make final decisions on granting or refusing authorization, as well as on the completeness of submitted notifications, after MiCA officially applies to CASPs on December 30, 2024.

Reminder of Transitional Measures and Applicability Dates

CySEC also reminds interested parties of a recent announcement regarding MiCA's phased applicability. MiCA became effective for issuers of Asset-Referenced Tokens (ARTs) and E-Money Tokens (EMTs) on June 30, 2024, and will extend to CASPs on December 30, 2024.
Under MiCA’s transitional measures, CASPs registered under National Rules before December 30, 2024, may continue to provide their services until July 1, 2026, or until CySEC grants or refuses authorization per Article 63, whichever is sooner. Additionally, as of October 17, 2024, CySEC ceased accepting any CASP applications for registration under National Rules in view of MiCA becoming applicable to CASPs on December 30, 2024. In short, CySEC’s early application phase for MiCA is helping crypto service providers in Cyprus get ready for the new EU rules, making the transition easier and clearer for everyone involved. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- MiCA comes fully into force: the interim MiCA register is published
The EU's Markets in Crypto-Assets Regulation (MiCA) came into full effect on 30 December 2024, following its initial entry into force on 29 June 2023. MiCA establishes the EU as the first major jurisdiction to regulate crypto-assets comprehensively. It creates a harmonized framework for crypto-asset issuance and services, covering various types of crypto-assets such as asset-referenced tokens (ARTs), electronic-money tokens (EMTs), and other crypto-assets (this blanket category covers utility tokens and other crypto-assets that don't qualify as ARTs or EMTs). This regulation also introduces a pan-European licensing and supervisory system for issuers, platforms, and crypto-asset service providers (CASPs). Notably, Titles III and IV, dealing with ARTs and EMTs, applied from 30 June 2024.

As of 30 December 2024, the European Securities and Markets Authority (ESMA) is empowered under Articles 109 and 110 of the MiCA Regulation to maintain and publish a central register of crypto-asset white papers, authorized crypto-asset service providers (CASPs), and non-compliant entities. This register will be sourced from the relevant National Competent Authorities (NCAs) and the European Banking Authority (EBA). To meet the legal deadline, ESMA has created an interim MiCA register, which will be updated and republished regularly. This interim register, accessible on the MiCA webpage and the Databases and Registers page, will be available as a collection of CSV files until mid-2026, when it will be formally integrated into ESMA’s IT systems.
The interim register includes five CSV files, which cover:

· White papers for crypto-assets other than asset-referenced tokens (ARTs) and e-money tokens (EMTs) (Title II)
· Issuers of asset-referenced tokens (Title III)
· Issuers of e-money tokens (Title IV)
· Authorized crypto-asset service providers (Title V)
· Non-compliant entities providing crypto-asset services

Although four of the five files in the interim MiCA register are currently empty, the file related to issuers of EMTs contains crucial information. As of January 6, 2025, this file lists companies that have obtained authorization to issue e-money tokens under MiCA. Companies including Membrane Finance Oy, Circle Internet Financial Europe SAS, Société Générale – Forge, Banking Circle S.A., Quantoz Payments B.V., and Fiat Republic Netherlands B.V. appear in this file, showcasing their official approval status and providing access to their relevant white papers, authorization dates, and other key details. ESMA will update the register on a monthly basis, and while information will be reported by competent authorities on a rolling basis, it will not appear in the register immediately. Records in the interim MiCA register will reflect the information provided by the relevant authorities. If an authorization is withdrawn by a competent authority, the record will remain in the register, noting the date when the withdrawal took effect. With the establishment of the interim MiCA register and its regular updates, the European Union continues to lead the way in creating a transparent and compliant digital finance environment. We will continue to monitor and report on further updates to the MiCA framework and its impact on the crypto industry. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters.
The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
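Because the interim register is distributed as plain CSV files, it can be inspected with a few lines of standard tooling. The sketch below parses a made-up sample; the actual ESMA file names and column headers are not published here, so the layout and field names in this example are assumptions:

```python
import csv
import io

# Hypothetical sample mimicking the EMT-issuers CSV of ESMA's interim
# MiCA register; the real file layout and column names may differ.
SAMPLE = """entity_name,home_member_state,authorisation_date
Membrane Finance Oy,FI,2024-12-30
Quantoz Payments B.V.,NL,2024-12-30
"""

def load_register(csv_text: str) -> list[dict]:
    """Parse a register CSV export into a list of row dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

issuers = load_register(SAMPLE)
# Filter by home Member State, one of many checks a compliance team might run.
dutch_issuers = [row["entity_name"] for row in issuers
                 if row["home_member_state"] == "NL"]
print(dutch_issuers)  # ['Quantoz Payments B.V.']
```

The same pattern extends to the other four files once their real column headers are known.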
- The Commission published Guidelines on the AI system definition
The European Commission (the ‘Commission’) issued the Guidelines on the definition of an artificial intelligence system under Regulation (EU) 2024/1689 (the “AI Act”). The AI Act entered into force on 1 August 2024; it lays down harmonised rules for the development, placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the Union. The Guidelines focus on clarifying Article 3(1) AI Act, which defines an “AI system” and therefore determines the scope of the AI Act. They are meant to help providers and other relevant persons (including market and institutional stakeholders) decide whether a specific system meets the definition of an AI system. They emphasize that the definition took effect on 2 February 2025, alongside relevant provisions (Chapters I and II, including prohibited AI practices under Article 5). The Guidelines are not legally binding; the ultimate interpretation belongs to the Court of Justice of the European Union.

Key Elements of the AI System Definition

Article 3(1) of the AI Act defines an AI system as follows: ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;’

According to the Guidelines, this definition comprises seven main elements:

· A machine-based system;
· Designed to operate with varying levels of autonomy;
· That may exhibit adaptiveness after deployment;
· For explicit or implicit objectives;
· Infers, from the input it receives, how to generate outputs;
· Such outputs include predictions, content, recommendations, or decisions;
· Which can influence physical or virtual environments.
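As a reading aid only (classification under the AI Act is a legal assessment, not a boolean test), the seven elements above can be modelled as a checklist. The field names are our own informal shorthand, and, following the Guidelines' reading of "may exhibit", adaptiveness is treated as optional:

```python
from dataclasses import dataclass, fields

@dataclass
class AISystemElements:
    """One flag per element enumerated by the Guidelines. Field names
    are informal shorthand, not terms from the AI Act itself."""
    machine_based: bool
    varying_autonomy: bool
    may_adapt_after_deployment: bool  # optional: the definition says 'may exhibit'
    explicit_or_implicit_objectives: bool
    infers_outputs_from_input: bool
    outputs_are_predictions_content_recommendations_or_decisions: bool
    outputs_can_influence_environments: bool

# Adaptiveness is not mandatory for a system to meet the definition.
OPTIONAL = {"may_adapt_after_deployment"}

def meets_definition(e: AISystemElements) -> bool:
    """All mandatory elements must hold together (cumulative reading)."""
    return all(getattr(e, f.name) for f in fields(e) if f.name not in OPTIONAL)

# A non-adaptive system can still meet the definition:
static_model = AISystemElements(True, True, False, True, True, True, True)
print(meets_definition(static_model))  # True
```

The checklist simply mirrors the cumulative structure of the definition; any real assessment would rest on the Guidelines' discussion of each element, not on flags.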
These elements should be interpreted with an understanding that AI systems exhibit machine-driven functionality, some autonomy, and possibly self-learning capabilities, but always within a context of producing outputs that “can influence” their surroundings.

Machine-Based System

The term ‘machine-based’ refers to the fact that AI systems are developed with and run on machines. The hardware components refer to the physical elements of the machine, while the software components encompass computer code, instructions, programs, operating systems, and applications. The Guidelines clarify that “all AI systems are machine-based” to emphasize computational processes (model training, data processing, large-scale automated decisions). This covers a wide variety of computational systems, including advanced quantum ones.

Autonomy

The second element of the definition refers to the system being ‘designed to operate with varying levels of autonomy’. Recital 12 of the AI Act clarifies that the terms ‘varying levels of autonomy’ mean that AI systems are designed to operate with ‘some degree of independence of actions from human involvement and of capabilities to operate without human intervention’. Full manual human involvement excludes a system from being considered AI. A system needing manual inputs to generate an output can still have “some degree of independence of action,” making it an AI system. Autonomy and risk considerations become particularly important in high-risk use contexts (as listed in Annex I and Annex III of the AI Act).

Adaptiveness

“(22) The third element… is that the system ‘may exhibit adaptiveness after deployment’. … ‘adaptiveness’ refers to self-learning capabilities, allowing the behaviour of the system to change while in use.” The word “may” means adaptiveness is not mandatory for a system to be classified as AI. Even if a system does not automatically adapt post-deployment, it may still qualify if it meets the other criteria.
AI System Objectives

“(24) The fourth element… AI systems are designed to operate according to one or more objectives. The objectives… may be different from the intended purpose of the AI system in a specific context.” Objectives are internal to the system, such as maximizing accuracy. The intended purpose (Article 3(12) AI Act) is external, reflecting the practical use context.

Inferencing How to Generate Outputs

“(26) The fifth element of an AI system is that it must be able to infer, from the input it receives, how to generate outputs. … This capability to infer is therefore a key, indispensable condition that distinguishes AI systems from other types of systems.” The capacity to derive models, algorithms, and outputs from data sets AI apart from simpler software that “automatically execute[s] operations” via predefined rules alone.

AI Techniques that Enable Inference

“(30) Focusing specifically on the building phase… ‘machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.’ Those techniques should be understood as ‘AI techniques’.”

Machine learning approaches:
· Supervised (e.g., spam detection)
· Unsupervised (e.g., drug discovery)
· Self-supervised (e.g., predicting missing pixels, language models)
· Reinforcement (e.g., autonomous vehicles, robotics)
· Deep learning (e.g., large neural networks)

Logic- and knowledge-based approaches use encoded knowledge, symbolic rules, and reasoning engines. The Guidelines cite examples such as classical natural language processing models based on grammatical logic, expert systems for medical diagnosis, etc.
Systems Outside the Scope

“(40) Recital 12 also explains that the AI system definition should distinguish AI systems from ‘simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.’”

Systems aimed at improving mathematical optimization (e.g., accelerating well-established linear regression methods, or parameter tuning in satellite telecommunication systems) remain outside the scope if they do not “transcend ‘basic data processing’.” Basic data processing (sorting, filtering, static descriptive analysis, or visualizations) with no learning or reasoning also does not qualify. “Systems based on classical heuristics” (experience-based problem-solving that is not data-driven learning) are excluded. Simple prediction systems employing trivial estimations or benchmarks (e.g., always predicting the mean) do not meet the threshold for “AI system” performance.

Outputs That Can Influence Physical or Virtual Environments

“(52) The sixth element… the system infers ‘how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments’. … The capacity to generate outputs… is fundamental to what AI systems do and what distinguishes those systems from other forms of software.”

The Guidelines detail four output categories: predictions, content, recommendations, and decisions. Each type represents an increasing level of automatic functionality. Systems that produce these outputs from learned or encoded approaches generally fit the AI criteria.

Interaction with the Environment

“(60) The seventh element of the definition of an AI system is that system’s outputs ‘can influence physical or virtual environments’.
That element should be understood to emphasise the fact that AI systems are not passive, but actively impact the environments in which they are deployed.” Influence may be physical (like controlling a robot arm) or digital (e.g., altering a user interface or data flows).

Concluding Remarks

“(61) The definition of an AI system encompasses a wide spectrum of systems. The determination of whether a software system is an AI system should be based on the specific architecture and functionality of a given system…” “(63) Only certain AI systems are subject to regulatory obligations and oversight under the AI Act. … The vast majority of systems, even if they qualify as AI systems… will not be subject to any regulatory requirements under the AI Act.”

This underscores the risk-based approach of the AI Act: most AI systems face no or minimal obligations, while others fall under the prohibitions (Article 5), the conformity requirements for high-risk systems (Article 6), or the transparency rules (Article 50). The Guidelines highlight that general-purpose AI models also fall under the AI Act (Chapter V), but detailed distinctions between them and “AI systems” exceed the scope of these Guidelines. Overall, these Guidelines precisely delineate what qualifies as an AI system. They serve as a structured reference for developers, providers, and other stakeholders to assess whether a given solution falls under Regulation (EU) 2024/1689.

Link: Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act)

If you need further guidance on AI compliance, DeFi compliance, NFT compliance, DAO governance, Metaverse regulations, MiCA regulation, stablecoin regulation, or any other web3 legal matters, write to us. Prokopiev Law Group has a broad global network of partners, ensuring your compliance worldwide, including in the EU, US, Singapore, Switzerland, Hong Kong, and Dubai.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Guidelines on prohibited artificial intelligence (AI) practices
The Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) (“the Guidelines”) were officially published on 4 February 2025. They provide an interpretation of the practices banned by Article 5 AI Act. These Guidelines are non-binding but form a crucial reference for providers, deployers, and authorities tasked with implementing the AI Act’s rules.

SCOPE, RATIONALE, AND ENFORCEMENT

Scope of the Guidelines

“(1) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (‘the AI Act’) entered into force on 1 August 2024. The AI Act lays down harmonised rules for the placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the Union.” (Section (1) of the Guidelines)

“(5) These Guidelines are non-binding. Any authoritative interpretation of the AI Act may ultimately only be given by the Court of Justice of the European Union (‘CJEU’).” (Section (5) of the Guidelines)

According to section (1) of the Guidelines, the AI Act follows a risk-based approach, classifying AI systems into four risk categories: unacceptable risk, high risk, transparency risk, and minimal/no risk. Article 5 AI Act deals exclusively with “AI systems posing unacceptable risks to fundamental rights and Union values” (section (2) of the Guidelines). Additionally, the Guidelines clarify in section (6) that they are “regularly reviewed in light of the experience gained from the practical implementation of Article 5 AI Act and technological and market developments.” Their material scope and addressees are set out in sections (11)–(14) and (15)–(20) respectively.
Rationale for Prohibiting Certain AI Practices

“(8) Article 5 AI Act prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values.” (Section (8) of the Guidelines)

In section (9), the Guidelines enumerate eight distinct prohibitions in Article 5(1), grounded in the AI Act’s principle that certain technologies or uses “contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter” (section (8) of the Guidelines). As explained in section (28) of the Guidelines, the rationale is that unlawful AI-based surveillance, manipulative or exploitative systems, and unfair scoring or profiling schemes “are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law.” These prohibitions also respond to rapid AI developments that can facilitate large-scale data processing, possibly leading to heightened surveillance, discrimination, and erosion of autonomy. Section (4) of the Guidelines states that the Guidelines “should serve as practical guidance to assist competent authorities under the AI Act in their enforcement activities, as well as providers and deployers of AI systems in ensuring compliance.”

Enforcement of Article 5 AI Act

“(53) Market surveillance authorities designated by the Member States as well as the European Data Protection Supervisor (as the market surveillance authority for the EU institutions, agencies and bodies) are responsible for the enforcement of the rules in the AI Act for AI systems, including the prohibitions.”
(Section (53) of the Guidelines)

“(54) …Those authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint, which every affected person or any other natural or legal person having grounds to consider such violations has the right to lodge. …Member States must designate their competent market surveillance authorities by 2 August 2025.” (Section (54) of the Guidelines)

As explained in section (53) of the Guidelines, enforcement occurs under the structure laid down by Regulation (EU) 2019/1020, adapted for AI. National market surveillance authorities will supervise compliance and “can take enforcement actions … or following a complaint” (section (54) of the Guidelines). Where cross-border implications arise, “the authority of the Member State concerned must inform the Commission and the market surveillance authorities of other Member States,” triggering a possible Union safeguard procedure (sections (54)–(55) of the Guidelines).

Penalties for Violations

“(57) Since violations of the prohibitions in Article 5 AI Act interfere the most with the freedoms of others and give rise to the highest fines, their scope should be interpreted narrowly.” (Section (57) of the Guidelines)

“(55) …Providers and deployers engaging in prohibited AI practices may be fined up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.” (Section (55) of the Guidelines)

Section (55) of the Guidelines notes that Article 99 AI Act sets out “a tiered approach … with the highest fines” reserved for breaches of Article 5. This penalty regime underscores the crucial nature of compliance with the prohibitions. Furthermore, according to section (56) of the Guidelines, the “principle of ne bis in idem should be respected” if the same prohibited conduct infringes multiple AI Act provisions.
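The “whichever is higher” rule quoted from section (55) is a simple maximum of two caps. A minimal arithmetic sketch, assuming a hypothetical helper name (`max_article5_fine` is illustrative, not a term from the Guidelines):

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for fines for Article 5 breaches: up to
    EUR 35 000 000 or, for an undertaking, up to 7 % of total worldwide
    annual turnover for the preceding financial year, whichever is higher."""
    fixed_cap = 35_000_000.0
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return max(fixed_cap, turnover_cap)

# For an undertaking with EUR 1 billion turnover, the 7 % cap (EUR 70 million)
# exceeds the fixed cap, so it sets the ceiling.
print(max_article5_fine(1_000_000_000))  # 70000000.0
# For EUR 100 million turnover, 7 % is only EUR 7 million, so the fixed cap governs.
print(max_article5_fine(100_000_000))  # 35000000.0
```

The figures are maximums, not mandated amounts; the actual fine in any case would be set by the competent authority within these ceilings.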
Applicability Timeline and Legal Effect

“(430) According to Article 113 AI Act, Article 5 AI Act applies as from 2 February 2025. The prohibitions in that provision will apply in principle to all AI systems regardless of whether they were placed on the market or put into service before or after that date.” (Section (430) of the Guidelines)

As stated in section (431) of the Guidelines, enforcement and penalties become fully applicable twelve months after entry into force (on 2 August 2025). Section (432) of the Guidelines clarifies that even though certain aspects of the enforcement framework only take effect on 2 August 2025, “the prohibitions themselves have direct effect” as from 2 February 2025. Affected persons may seek relief in national courts against prohibited AI practices even in the interim period.

Cooperation with Other Union Legislation

According to sections (42)–(52) of the Guidelines, the prohibitions interact with other EU measures, such as consumer law, data protection, and non-discrimination instruments. In particular, data protection authorities may issue guidance or take enforcement actions for personal data infringements “alongside or in addition to” AI Act breaches. In short, enforcement is a multi-level process:

Providers must ensure compliance prior to placing AI systems on the market.

Deployers must ensure compliance during use, refraining from prohibited practices.

Market surveillance authorities coordinate oversight, able to impose fines and other measures against infringements.
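The staggered timeline described in sections (430)–(432) — prohibitions with direct effect from 2 February 2025, the penalty and market surveillance framework from 2 August 2025 — can be sketched as a simple date comparison (the function and its return strings are illustrative assumptions, not wording from the Guidelines):

```python
from datetime import date

# Key dates taken from sections (430)-(432) of the Guidelines.
PROHIBITIONS_APPLY_FROM = date(2025, 2, 2)  # Article 5 prohibitions (direct effect)
PENALTIES_APPLY_FROM = date(2025, 8, 2)     # penalty / market surveillance framework

def regime_on(day: date) -> str:
    """Illustrative helper: which part of the Article 5 framework applies on a date."""
    if day < PROHIBITIONS_APPLY_FROM:
        return "not yet applicable"
    if day < PENALTIES_APPLY_FROM:
        return "prohibitions apply; penalty framework pending"
    return "prohibitions and penalty framework both apply"

print(regime_on(date(2025, 5, 1)))  # prohibitions apply; penalty framework pending
```

During the interim window, as the Guidelines note, affected persons could still seek relief in national courts even though the fining regime was not yet operational.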
PROHIBITED AI PRACTICES (ARTICLE 5 AI ACT) Article 5 AI Act: General Prohibition and Rationale “(8) Article 5 AI Act prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values.” (Section (8) of the Guidelines) “Recital 28 AI Act clarifies that such practices are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.” (Section (8) of the Guidelines) According to section (8) of the Guidelines, the legislator identified certain “unacceptable risks” posed by specific AI uses — practices deemed inherently incompatible with fundamental rights, including the rights to privacy, autonomy, non-discrimination, and human dignity. Prohibitions Listed in Article 5 AI Act According to section (9) of the Guidelines, the AI Act enumerates eight prohibitions in Article 5(1). The Guidelines emphasize that these prohibitions “apply to the placing on the market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values” (section (8)). Unless a specific exception applies, these AI systems cannot be provided or deployed in the Union. 
Below is the full text of each prohibition as presented in the Guidelines:

Article 5(1)(a) – Harmful manipulation and deception

“AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or with the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.”

Article 5(1)(b) – Harmful exploitation of vulnerabilities

“AI systems that exploit vulnerabilities due to age, disability or a specific social or economic situation, with the objective or with the effect of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”

Article 5(1)(c) – Social scoring

“AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment in unrelated social contexts and/or unjustified or disproportionate treatment to the gravity of the social behaviour, regardless of whether provided or used by public or private persons.”

Article 5(1)(d) – Individual criminal offence risk assessment and prediction

“AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; except to support the human assessment of the involvement of a person in a criminal activity, which
is already based on objective and verifiable facts directly linked to that criminal activity.” Article 5(1)(e) – Untargeted scraping of facial images “AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.” Article 5(1)(f) – Emotion recognition “AI systems that infer emotions of a natural person in the areas of workplace and education institutions, except where the use is intended to be put in place for medical or safety reasons.” Article 5(1)(g) – Biometric categorisation “AI systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation; except any labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement.” Article 5(1)(h) – Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes “The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. 
…” These eight prohibitions, as clarified in section (9) of the Guidelines, constitute “unacceptable risks” under Article 5 AI Act. Providers and deployers must refrain from making available or using AI systems that meet any of these descriptions, unless the AI Act itself provides for a narrowly interpreted exception (e.g., certain uses of real-time RBI for law enforcement). Legal Basis and Material Scope “(10) The AI Act is supported by two legal bases: Article 114 of the Treaty on the Functioning of the European Union (‘TFEU’) (the internal market legal basis) and Article 16 TFEU (the data protection legal basis).” (Section (10) of the Guidelines) “(11) The practices prohibited by Article 5 AI Act relate to the placing on the market, the putting into service, or the use of specific AI systems.” (Section (11) of the Guidelines) According to section (10) of the Guidelines, some prohibitions (notably the ban on real-time remote biometric identification for law enforcement, biometric categorisation, and individual risk assessments in law enforcement) derive from Article 16 TFEU, ensuring data protection. Others rely on Article 114 TFEU for the internal market. Sections (12) through (14) clarify: “Placing on the market” means the first supply of an AI system in the EU (section (12)). “Putting into service” refers to the first use in the EU for its intended purpose (section (13)). “Use” is interpreted “in a broad manner” (section (14)) to include any operation or deployment of an AI system after it is placed on the market/put into service. 
Personal Scope: Responsible Actors

“(15) The AI Act distinguishes between different categories of operators in relation to AI systems: providers, deployers, importers, distributors, and product manufacturers.” (Section (15) of the Guidelines)

“(16) According to Article 3(3) AI Act, providers are natural or legal persons … that develop AI systems or have them developed and place them on the Union market, or put them into service under their own name or trademark.” (Section (16) of the Guidelines)

“(17) Deployers are natural or legal persons, public authorities, agencies or other bodies using AI systems under their authority, unless the use is for a personal non-professional activity.” (Section (17) of the Guidelines)

Sections (15)–(20) of the Guidelines explain how these roles “may overlap”, but each actor faces specific obligations for compliance with the prohibitions. In particular:

Providers must ensure their AI system is not prohibited upon placing it on the market or putting it into service.

Deployers must avoid usage scenarios that fall within a prohibited practice, even if the provider excludes it in the terms of use (section (14) of the Guidelines).

Exclusion from the Scope of the AI Act

“(21) Article 2 AI Act provides for a number of general exclusions from scope which are relevant for a complete understanding of the practical application of the prohibitions listed in Article 5 AI Act.” (Section (21) of the Guidelines)

Sections (22) to (36) of the Guidelines specify exclusions such as national security, military or defence uses (Article 2(3)), judicial or law enforcement cooperation with third countries under certain agreements (Article 2(4)), R&D activities not placed on the market (Article 2(8)), and personal non-professional activities (Article 2(10)).
Interplay with Other Provisions and Union Law

“(37) The AI practices prohibited by Article 5 AI Act should be considered in relation to the AI systems classified as high-risk … In some cases, a high-risk AI may also qualify as a prohibited practice … if all conditions under one or more of the prohibitions … are fulfilled.” (Section (37) of the Guidelines)

“(42) The AI Act is a regulation that applies horizontally across all sectors without prejudice to other Union legislation, in particular on the protection of fundamental rights, consumer protection, employment, the protection of workers, and product safety.” (Section (42) of the Guidelines)

Sections (37)–(52) clarify:

Some systems not meeting the threshold for prohibition might still be “high-risk” under Article 6 AI Act or subject to other EU laws (section (37)).

The AI Act does not override data protection, consumer law, or non-discrimination statutes; these still apply (sections (42)–(45)).

The highest fines apply to breaches of Article 5 (section (55) of the Guidelines).

Enforcement Timeline

“(430) According to Article 113 AI Act, Article 5 AI Act applies as from 2 February 2025. … all providers and deployers engaging in prohibited AI practices may be subject to penalties, including fines up to 7 % of annual worldwide turnover for undertakings.” (Sections (430) and (55) of the Guidelines)

Even though full market surveillance mechanisms launch on 2 August 2025, the prohibitions (Article 5) are in force as of 2 February 2025. Affected individuals and authorities can invoke Article 5 bans immediately after that date (sections (430)–(432) of the Guidelines).

HARMFUL MANIPULATION AND EXPLOITATION (ARTICLE 5(1)(A) AND (B))

Rationale and Objectives

“(58) The first two prohibitions in Article 5(1)(a) and (b) AI Act aim to safeguard individuals and vulnerable persons from the significantly harmful effects of AI-enabled manipulation and exploitation.
Those prohibitions target AI systems that deploy subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behaviour of natural persons or group(s) of persons (Article 5(1)(a) AI Act) or exploit vulnerabilities due to age, disability, or a specific socio-economic situation (Article 5(1)(b) AI Act).” (Section (58) of the Guidelines)

According to section (59) of the Guidelines:

“(59) The underlying rationale of these prohibitions is to protect individual autonomy and well-being from manipulative, deceptive, and exploitative AI practices that can subvert and impair an individual’s autonomy, decision-making, and free choices. … The prohibitions aim to protect the right to human dignity (Article 1 of the Charter), which also constitutes the basis of all fundamental rights and includes individual autonomy as an essential aspect.”

In section (59), the Guidelines also stress that Articles 5(1)(a) and (b) AI Act “fully align with the broader objectives of the AI Act to promote trustworthy and human-centric AI systems that are safe, transparent, fair and serve humanity and align with human agency and EU values.”

Article 5(1)(a) AI Act – Harmful Manipulation and Deception

“(60) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(a) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service’, or the ‘use’ of an AI system. (ii) The AI system must deploy subliminal (beyond a person’s consciousness), purposefully manipulative or deceptive techniques. (iii) The techniques deployed by the AI system should have the objective or the effect of materially distorting the behaviour of a person or a group of persons.
… (iv) The distorted behaviour must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.” (Section (60) of the Guidelines) According to section (63), the prohibition covers three broad technique types: Subliminal techniques “beyond a person’s consciousness.” Purposefully manipulative techniques “designed or objectively aim to influence … in a manner that undermines individual autonomy.” Deceptive techniques “involving presenting false or misleading information with the objective or the effect of deceiving individuals.” As section (70) of the Guidelines notes, the deception arises from “presenting false or misleading information in ways that aim to or have the effect of deceiving individuals and influencing their behaviour in a manner that undermines their autonomy, decision-making and free choices.” Significant Harm and Material Distortion “(77) The concept of ‘material distortion of the behaviour’ of a person or a group of persons is central to Article 5(1)(a) AI Act. 
It involves the deployment of subliminal, purposefully manipulative or deceptive techniques that are capable of influencing people’s behaviour in a manner that appreciably impairs their ability to make an informed decision … leading them to behave in a way that they would otherwise not have.” (Section (77) of the Guidelines)

“(86) The AI Act addresses various types of harmful effects associated with manipulative and deceptive AI systems … The main types of harms relevant for Article 5(1)(a) AI Act include physical, psychological, financial, and economic harms.” (Section (86) of the Guidelines)

Section (85) summarizes that for the prohibition to apply, the harm must be “significant”, and “there must be a plausible/reasonably likely causal link between the manipulative or deceptive technique … and the potential significant harm.”

Article 5(1)(b) AI Act – Exploitation of Vulnerabilities

“(98) Article 5(1)(b) AI Act prohibits the placing on the market, the putting into service, or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.” (Section (98) of the Guidelines)

As explained in section (101):

“(101) To fall within the scope of the prohibition in Article 5(1)(b) AI Act, the AI system must exploit vulnerabilities inherent to certain individuals or groups of persons due to their age, disability or socio-economic situations, making them particularly susceptible to manipulative and exploitative practices.”

Sections (104)–(112) detail the specific vulnerabilities tied to: Age (children, older persons), Disability (cognitive, physical, mental impairments), Specific socio-economic situation (e.g., extreme poverty,
socio-economically disadvantaged, migrants). Section (114) clarifies that the harm must again be “significant,” and (115) states:

“(115) For vulnerable groups — children, older persons, persons with disabilities, and socio-economically disadvantaged populations — these harms may be particularly severe and multifaceted due to their heightened susceptibility to exploitation.”

Interplay Between Article 5(1)(a) and (b)

“(122) The interplay between the prohibitions in Article 5(1)(a) and (b) AI Act requires the delineation of the specific contexts that each provision covers to ensure that they are applied in a complementary manner.” (Section (122) of the Guidelines)

Section (123) of the Guidelines describes that:

Article 5(1)(a) “focuses on the techniques” (subliminal, manipulative, deceptive).

Article 5(1)(b) “focuses on the exploitation of specific vulnerable individuals or groups,” requiring vulnerabilities related to age, disability, or socio-economic situations.

The Guidelines highlight that “manipulative or deceptive techniques that specifically target the vulnerabilities of persons due to age, disability, or socio-economic situation” may overlap but fall more directly under Article 5(1)(b) if aimed at those recognized vulnerable groups (section (125)).

Out of Scope

“(127) Distinguishing manipulation from persuasion is crucial to delineate the scope of the prohibition in Article 5(1)(a) AI Act, which does not apply to lawful persuasion practices.” (Section (127) of the Guidelines)

Sections (128)–(133) detail “lawful persuasion,” standard advertising practices, and “medical treatment under certain conditions” that do not amount to harmful manipulation or exploitation.
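Taken together, the four cumulative conditions quoted from section (60) amount to an all-of test: the Article 5(1)(a) prohibition applies only if every condition holds. A minimal sketch of that logic (the class and field names are illustrative assumptions, not terminology from the Guidelines):

```python
from dataclasses import dataclass

@dataclass
class Article51aAssessment:
    """Illustrative checklist of the cumulative conditions in section (60) of
    the Guidelines; the prohibition applies only if all four are fulfilled."""
    marketed_put_into_service_or_used: bool        # condition (i)
    subliminal_manipulative_or_deceptive: bool     # condition (ii)
    materially_distorts_behaviour: bool            # condition (iii)
    significant_harm_caused_or_likely: bool        # condition (iv)

    def prohibition_applies(self) -> bool:
        return all((
            self.marketed_put_into_service_or_used,
            self.subliminal_manipulative_or_deceptive,
            self.materially_distorts_behaviour,
            self.significant_harm_caused_or_likely,
        ))

# Lawful persuasion (section (127)): no manipulative technique and no material
# distortion, so the cumulative test fails and the practice is not prohibited.
persuasion = Article51aAssessment(True, False, False, False)
print(persuasion.prohibition_applies())  # False
```

Because the conditions are cumulative, failing any single one (for instance, harm that is not “significant”) takes the practice outside Article 5(1)(a), although it may still be high-risk under Article 6 AI Act or regulated by other Union law.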
For Article 5(1)(b), section (134) clarifies that “exploitative AI applications that are not reasonably likely to cause significant harms are outside the scope, even if they use manipulative or exploitative elements.”

SOCIAL SCORING (ARTICLE 5(1)(c))

Rationale and Objectives

“(146) While AI-enabled scoring can bring benefits to steer good behaviour, improve safety, efficiency or quality of services, there are certain ‘social scoring’ practices that treat or harm people unfairly and amount to social control and surveillance. The prohibition in Article 5(1)(c) AI Act targets such unacceptable AI-enabled ‘social scoring’ practices that assess or classify individuals or groups based on their social behaviour or personal characteristics and lead to detrimental or unfavourable treatment, in particular where the data comes from multiple unrelated social contexts or the treatment is disproportionate to the gravity of the social behaviour. The ‘social scoring’ prohibition has a broad scope of application in both public and private contexts and is not limited to a specific sector or field.” (Section (146) of the Guidelines)

According to section (147) of the Guidelines, social scoring systems often “lead to discriminatory and unfair outcomes for certain individuals and groups, including their exclusion from society, as well as social control and surveillance practices that are incompatible with Union values.”

Main Concepts and Components of the ‘Social Scoring’ Prohibition

“(149) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(c) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, the ‘putting into service’ or the ‘use’ of an AI system; (ii) The AI system must be intended or used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics; (iii) The social score
created with the assistance of the AI system must lead or be capable of leading to the detrimental or unfavourable treatment of persons or groups in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) treatment that is unjustified or disproportionate to the gravity of the social behaviour.” (Section (149) of the Guidelines) ‘Social Scoring’: Evaluation or Classification Over Time “(151) The second condition for the prohibition in Article 5(1)(c) AI Act to apply is that the AI system is intended or used for the evaluation or classification of natural persons or groups of persons and assigns them scores based on their social behaviour or their known, inferred or predicted personal and personality characteristics. The score produced by the system may take various forms, such as a mathematical number or ranking.” (Sections (151)–(152) of the Guidelines) Furthermore, section (155) clarifies that this must happen “over a certain period of time.” If data or behaviour from multiple contexts are aggregated without a clear, valid link to the legitimate purpose of the scoring, “the AI system is likely to fall under the prohibition.” Detrimental or Unfavourable Treatment in Unrelated Social Contexts or Disproportionate Treatment “(160) For the prohibition in Article 5(1)(c) AI Act to apply, the social score created by or with the assistance of an AI system must lead to a detrimental or unfavourable treatment for the evaluated person or group of persons in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) unjustified or disproportionate to the gravity of the social behaviour.” (Section (160) of the Guidelines) Section (164) further explains “detrimental or unfavourable treatment” can mean denial of services, blacklisting, withdrawal of benefits, or other negative outcomes. 
It also covers cases where the social score “leads to broader exclusion or indirect harm.”

Out of Scope

“(173) The prohibition in Article 5(1)(c) AI Act only applies to the scoring of natural persons or groups of persons, thus excluding in principle legal entities where the evaluation is not based on personal or personality characteristics or social behaviour of individuals. … If the AI system evaluates or classifies a group of natural persons with direct impact on those persons, the practice may still fall within Article 5(1)(c) if all other conditions are fulfilled.” (Section (173) of the Guidelines)

Moreover, sections (175)–(176) clarify that lawful scoring practices for “specific legitimate evaluation purposes”, such as credit-scoring or fraud prevention, generally do not fall under the prohibition when done in compliance with Union and national law “ensuring that the detrimental or unfavourable treatment is justified and proportionate.”

Interplay with Other Union Legal Acts

“(178) Providers and deployers should carefully assess whether other applicable Union and national legislation applies to any particular AI scoring system used in their activities, in particular if there is more specific legislation that strictly regulates the types of data that can be used as relevant and necessary for specific evaluation purposes and if there are more specific rules and procedures to ensure justified and fair treatment.” (Section (178) of the Guidelines)

Section (180) highlights that social scoring must also comply with EU data protection law, consumer protection rules, and “Union non-discrimination law” where relevant.
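The structure of the social scoring test in sections (149) and (160) differs from Article 5(1)(a): conditions (i) and (ii) must both hold, while condition (iii) is satisfied by either of two alternative scenarios (“and/or”). A sketch of that and/or structure, using hypothetical parameter names of my own:

```python
def social_scoring_prohibited(
    ai_system_marketed_or_used: bool,        # condition (i), section (149)
    evaluates_persons_over_time: bool,       # condition (ii), section (149)
    detriment_in_unrelated_context: bool,    # condition (iii)(a), section (160)
    detriment_disproportionate: bool,        # condition (iii)(b), section (160)
) -> bool:
    """Illustrative restatement of the cumulative test in section (149):
    (i) and (ii) must both hold, plus at least one scenario under (iii)."""
    return (
        ai_system_marketed_or_used
        and evaluates_persons_over_time
        and (detriment_in_unrelated_context or detriment_disproportionate)
    )

# Lawful credit scoring per sections (175)-(176): treatment justified,
# proportionate, and confined to its original context -> not prohibited.
print(social_scoring_prohibited(True, True, False, False))  # False
```

The key design point mirrored here is the “and/or” in condition (iii): unrelated-context detriment and disproportionate treatment are independent routes into the prohibition, and either suffices once (i) and (ii) are met.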
INDIVIDUAL CRIME RISK PREDICTION (ARTICLE 5(1)(d)) Rationale and Objectives “(184) Article 5(1)(d) AI Act prohibits AI systems assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling or assessing personality traits and characteristics.” (Section (184) of the Guidelines) According to section (185), the provision “indicates, in its last phrase, that the prohibition does not apply if the AI system is used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to that activity.” As clarified in section (186), the intention is to ensure “natural persons should be judged on their actual behaviour and not on AI-predicted behaviour based solely on their profiling, personality traits or characteristics.” Main Concepts and Components of the Prohibition “(187) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(d) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’ or the ‘use’ of an AI system; (ii) The AI system must make risk assessments that assess or predict the risk of a natural person committing a criminal offence; (iii) The risk assessment or the prediction must be based solely on either, or both, of the following: (a) the profiling of a natural person, (b) assessing a natural person’s personality traits and characteristics.” (Section (187) of the Guidelines) Assessing or Predicting the Risk of a Person Committing a Crime “(189) Crime prediction AI systems identify patterns within historical data, associating indicators with the likelihood of a crime occurring, and then generate risk scores as predictive outputs. 
… However, such use of historical data may perpetuate or reinforce biases and may result in crucial individual circumstances being overlooked.” (Section (189) of the Guidelines) Section (191) notes that although “crime prediction AI systems bring opportunities … any forward-looking risk assessment or crime forecasting is caught by Article 5(1)(d) if it meets the other conditions, particularly if it is based solely on profiling or personality traits.” ‘Solely’ Based on Profiling or Personality Traits “(193) The third condition for the prohibition in Article 5(1)(d) AI Act to apply is that the risk assessment to assess or predict the risk of a natural person committing a crime must be based solely on (a) the profiling of the person, or (b) assessing their personality traits and characteristics.” (Section (193) of the Guidelines) As explained in section (200), “Where the system is based on additional, objective and verifiable facts directly linked to criminal activity, the prohibition does not apply (Article 5(1)(d) last phrase).” Out of Scope Exception for Supporting Human Assessment “(203) Article 5(1)(d) AI Act provides, in its last phrase, that the prohibition does not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.” (Section (203) of the Guidelines) In section (205), the Guidelines recall the principle that “no adverse legal decision can be based solely on such AI output,” ensuring human oversight remains central. 
Location-Based or Geospatial Predictive Policing “(212) Location-based or geospatial predictive or place-based crime predictions … fall outside the scope of the prohibition, provided the AI system does not also profile an individual.” (Section (212) of the Guidelines) If the AI system eventually singles out specific natural persons as potential offenders “solely based on profiling or personality traits,” it can fall under Article 5(1)(d). Private Sector or Administrative Context “(210) Where a private entity profiles customers for its ordinary business operations and safety, with the aim of protecting its own private interests, the use of AI systems to assess criminal risks is not deemed to be covered by the prohibition of Article 5(1)(d) AI Act unless the private operator is entrusted by law enforcement or subject to specific legal obligations for anti-money laundering or terrorism financing.” (Section (210) of the Guidelines) Similarly, administrative offences (section (217)) do not fall within the prohibition if they are not classified as criminal under Union or national law. 
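The pivot of Article 5(1)(d), as sections (193), (200), and (203) describe it, is the word “solely”, qualified by the last-phrase exception for systems that support a human assessment already grounded in objective and verifiable facts. A rough sketch of that decision logic (function and parameter names are my own illustrative assumptions):

```python
def crime_prediction_prohibited(
    based_solely_on_profiling_or_personality: bool,
    supports_human_assessment_on_objective_facts: bool,
) -> bool:
    """Illustrative reading of Article 5(1)(d) per sections (193), (200), (203):
    the prohibition bites only where the risk assessment rests *solely* on
    profiling or personality traits, and the last-phrase exception (supporting
    a human assessment already based on objective, verifiable facts directly
    linked to the criminal activity) lifts it."""
    if supports_human_assessment_on_objective_facts:
        return False  # exception in the last phrase of Article 5(1)(d)
    return based_solely_on_profiling_or_personality

# Per section (200), a system that also weighs objective and verifiable facts
# directly linked to the criminal activity is not "solely" profiling-based.
print(crime_prediction_prohibited(False, False))  # False
print(crime_prediction_prohibited(True, False))   # True
```

This also reflects section (205): even where the exception applies, the AI output may only support a human assessment, and no adverse legal decision can be based solely on it.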
Interplay with Other Union Legal Acts “(219) The interplay of the prohibition in Article 5(1)(d) AI Act with the LED and GDPR is relevant when assessing the lawfulness of personal data processing … Article 11(3) LED prohibits profiling that results in direct or indirect discrimination.” (Section (219) of the Guidelines) Section (220) notes the connection to Directive (EU) 2016/343 on the presumption of innocence, emphasizing that “the AI Act must not undermine procedural safeguards or the fundamental right to a fair trial.” UNTARGETED SCRAPING OF FACIAL IMAGES (ARTICLE 5(1)(e)) Rationale and Objectives “(222) Article 5(1)(e) AI Act prohibits the placing on the market, putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage.” (Section (222) of the Guidelines) According to section (223) of the Guidelines: “(223) The untargeted scraping of facial images from the internet and from CCTV footage seriously interferes with individuals’ rights to privacy and data protection and deny those individuals the right to remain anonymous. … Such scraping can evoke a feeling of mass surveillance and lead to gross violations of fundamental rights, including the right to privacy.” As clarified in section (224), this prohibition applies specifically to AI systems whose purpose is to “create or expand facial recognition databases” through the indiscriminate or “vacuum cleaner” approach of harvesting facial images. 
Main Concepts and Components of the Prohibition “(225) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(e) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’ or the ‘use’ of an AI system; (ii) for the purpose of creating or expanding facial recognition databases; (iii) the means to populate the database are through AI tools for untargeted scraping; and (iv) the sources of the images are either from the internet or CCTV footage.” (Section (225) of the Guidelines) Facial Recognition Databases “(226) The prohibition in Article 5(1)(e) AI Act covers AI systems used to create or expand facial recognition databases. ‘Database’ … is any collection of data or information specially organized for search and retrieval by a computer. A facial recognition database is capable of matching a human face from a digital image or video frame against a database of faces … .” (Section (226) of the Guidelines) Untargeted Scraping of Facial Images “(227) ‘Scraping’ typically refers to using web crawlers, bots, or other means to extract data or content from different sources, including CCTV, websites or social media, automatically. 
… ‘Untargeted’ means that the scraping operates without a specific focus on a given individual or group of individuals, effectively indiscriminately harvesting data or content.” (Section (227) of the Guidelines) “(230) If a scraping tool is instructed to collect images or video containing human faces only of specific individuals or a pre-defined group of persons, then the scraping becomes targeted … the scraping of the Internet or CCTV footage for the creation of a database step-by-step … should fall within the prohibition if the end-result is functionally the same as pursuing untargeted scraping from the outset.” (Section (230) of the Guidelines) From the Internet and CCTV Footage “(231) For the prohibition in Article 5(1)(e) AI Act to apply, the source of the facial images may either be the Internet or CCTV footage. Regarding the internet, the fact that a person has published facial images of themselves on a social media platform does not mean that that person has given his or her consent for those images to be included in a facial recognition database.” (Section (231) of the Guidelines) In section (232), the Guidelines exemplify real-life scenarios, including the use of automated crawlers to gather online photos containing human faces, or the use of software to systematically extract faces from CCTV feeds for a large database. Out of Scope “(234) The prohibition in Article 5(1)(e) AI Act does not apply to the untargeted scraping of biometric data other than facial images (such as voice samples). The prohibition does also not apply where no AI systems are involved in the scraping. 
Facial image databases that are not used for the recognition of persons are also out of scope, such as facial image databases used for AI model training or testing purposes, where the persons are not identified.” (Section (234) of the Guidelines) As clarified in sections (235)–(236), the mere fact of collecting large amounts of images for other legitimate purposes does not automatically trigger the ban, provided the system “is not intended for, nor used to create or expand a facial recognition database.” Interplay with Other Union Legal Acts “(238) In relation to Union data protection law, the untargeted scraping of the internet or CCTV material to build-up or expand face recognition databases, i.e. the processing of personal data (collection of data and use of databases) would be unlawful and no legal basis under the GDPR, EUDPR and the LED could be relied upon.” (Section (238) of the Guidelines) Section (238) further explains that the AI Act complements these data protection rules by banning such scraping at the level of placing on the market, putting into service, or use of the AI systems themselves. 
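The four cumulative conditions of section (225) form a conjunctive test: the prohibition bites only if all of them hold. As an illustration only (not a legal analysis), the test can be sketched as a boolean check; every field name below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ScrapingPractice:
    """Hypothetical facts about an AI practice (illustrative field names only)."""
    is_placing_putting_into_service_or_use: bool  # condition (i)
    purpose_is_facial_recognition_database: bool  # condition (ii)
    uses_untargeted_ai_scraping: bool             # condition (iii)
    source_is_internet_or_cctv: bool              # condition (iv)

def article_5_1_e_applies(p: ScrapingPractice) -> bool:
    # All four conditions of section (225) must hold cumulatively.
    return (p.is_placing_putting_into_service_or_use
            and p.purpose_is_facial_recognition_database
            and p.uses_untargeted_ai_scraping
            and p.source_is_internet_or_cctv)

# Example from section (234): untargeted scraping of voice samples
# (not facial images) fails condition (ii), so it is out of scope.
voice_scraper = ScrapingPractice(True, False, True, True)
print(article_5_1_e_applies(voice_scraper))  # prints False
```

The sketch only mirrors the cumulative structure of the Guidelines; in practice each condition is itself a fact-sensitive legal question.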
EMOTION RECOGNITION (ARTICLE 5(1)(f)) Rationale and Objectives “(239) Article 5(1)(f) AI Act prohibits AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the system is intended for medical or safety reasons.” (Section (239) of the Guidelines) According to section (240), the ban reflects concerns regarding the “intrusive nature of emotion recognition technology, the uncertainty over its scientific basis, and its potential to undermine privacy, dignity, and individual autonomy.” As stated in section (241) of the Guidelines, “(241) Emotion recognition can be used in multiple areas and domains … but it is also quickly evolving and comprehends different technologies, raising serious concerns about reliability, bias, and potential harm to human dignity and fundamental rights.” Main Concepts and Components of the Prohibition “(242) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(f) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’, or the ‘use’ of an AI system; (ii) AI system to infer emotions; (iii) in the area of the workplace or education and training institutions; and (iv) excluded from the prohibition are AI systems intended for medical or safety reasons.” (Section (242) of the Guidelines) AI Systems to Infer Emotions “(244) Inferring generally encompasses identifying as a prerequisite, so that the prohibition should be understood as including both AI systems identifying or inferring emotions or intentions … based on their biometric data.” (Section (244) of the Guidelines) Sections (246)–(247) confirm that “emotion recognition” means “identifying or inferring emotional states from biometric data such as facial expressions, voice, or behavioural signals.” Limitation to Workplace and Education “(253) The prohibition in Article 5(1)(f) AI Act is limited to emotion recognition 
systems in the ‘areas of workplace and educational institutions’. … This aims to address the power imbalance in those contexts.” (Section (253) of the Guidelines) According to section (254), “workplace” includes all settings where professional or self-employment activities occur (offices, factories, remote or mobile sites). As stated in section (255), “education institutions” include all levels of formal education, vocational training, and educational activities generally sanctioned by national authorities. Exception for Medical or Safety Reasons “(256) The prohibition in Article 5(1)(f) AI Act contains an explicit exception for emotion recognition systems used in the area of the workplace and education institutions for medical or safety reasons, such as systems for therapeutic use.” (Section (256) of the Guidelines) Section (258) clarifies the narrow scope of that exception, stating that it only covers use “strictly necessary” to achieve a medical or safety objective. Further, in section (261), the Guidelines note that “detecting a person’s fatigue or pain in contexts like preventing accidents is considered distinct from ‘inferring emotions’ and may be allowed.” More Favourable Member State Law “(264) Article 2(11) AI Act provides that the Union or Member States may keep or introduce ‘laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers’.” (Section (264) of the Guidelines) Such stricter national laws or collective agreements could forbid emotion recognition entirely, even for medical or safety reasons in the workplace. Out of Scope “(266) Emotion recognition systems used in all other domains other than in the areas of the workplace and education institutions do not fall under the prohibition in Article 5(1)(f) AI Act. 
Such systems are, however, considered high-risk AI systems according to Annex III (1)(c).” (Section (266) of the Guidelines) Additionally, per section (265), uses that do not involve biometric data (e.g. text-based sentiment analysis) or do not seek to infer emotions are not caught by the prohibition. The Guidelines note these systems may still be subject to other AI Act requirements or other legislation if potential manipulative or exploitative effects arise. BIOMETRIC CATEGORISATION FOR SENSITIVE ATTRIBUTES (ARTICLE 5(1)(g)) Rationale and Objectives “(272) A wide variety of information, including ‘sensitive’ information, may be extracted, deduced or inferred from biometric information, even without the knowledge of the persons concerned, to categorise those persons. This may lead to unfair and discriminatory treatment … and amounts to social control and surveillance that are incompatible with Union values. The prohibition of ‘biometric categorisation’ in Article 5(1)(g) AI Act aims to protect these fundamental rights.” (Section (272) of the Guidelines) According to section (271), “Article 5(1)(g) AI Act prohibits biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation.” The aim is to prevent “unfair, discriminatory and privacy-intrusive AI uses that rely on highly sensitive characteristics.” Main Concepts and Components of the Prohibition “(273) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(g) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’, or the ‘use’ of an AI system; (ii) The system must be a biometric categorisation system; (iii) individual persons must be categorised; (iv) based on their biometric data; (v) to deduce or infer their race, political 
opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.” (Section (273) of the Guidelines) Biometric Categorisation System “(276) ‘Biometric categorisation’ is typically the process of establishing whether the biometric data of an individual belongs to a group with some predefined characteristic. It is not about identifying an individual or verifying their identity, but about assigning an individual to a certain category.” (Section (276) of the Guidelines) As section (277) notes, this includes the automated assignment of individuals to categories such as “race or ethnicity,” “religious beliefs,” or “political stance,” purely on the basis of features derived from biometric data. Sensitive Characteristics: Race, Political Opinions, Religious Beliefs, etc. “(283) Article 5(1)(g) AI Act prohibits only biometric categorisation systems which have as their objective to deduce or infer a limited number of sensitive characteristics: race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.” (Section (283) of the Guidelines) The Guidelines underscore (section (283)) that “the use of any ‘proxy’ or correlation-based approach that aims to deduce or infer these protected attributes from biometric data is likewise covered.” Out of Scope “(284) The prohibition in Article 5(1)(g) AI Act does not cover AI systems engaged in the labelling or filtering of lawfully acquired biometric datasets … if they do not entail the categorisation of actual persons to deduce or infer their sensitive attributes, but merely aim at ensuring balanced and representative data sets for training or testing.” (Section (284) of the Guidelines) Section (285) clarifies that labeling or filtering biometric data to reduce bias or ensure representativeness is specifically exempted: “(285) The labelling or filtering of biometric datasets may be done by biometric categorisation systems precisely to guarantee 
that the data equally represent all demographic groups, and not over-represent one specific group.” Thus, mere dataset management or quality-control uses of biometric categorisation remain lawful if they do not aim to classify real individuals by their sensitive traits. Interplay with Other Union Law “(287) AI systems intended to be used for biometric categorisation according to sensitive attributes or characteristics protected under Article 9(1) GDPR on the basis of biometric data, in so far as these are not prohibited under this Regulation, are classified as high-risk under the AI Act (Recital 54 and Annex III, point (1)(b) AI Act).” (Section (287) of the Guidelines) Section (289) notes that the AI Act’s ban under Article 5(1)(g) “further restricts the possibilities for a lawful personal data processing under Union data protection law, such as the GDPR … by excluding such practices at the earlier stage of placing on the market and use.” REAL-TIME REMOTE BIOMETRIC IDENTIFICATION (RBI) FOR LAW ENFORCEMENT (ARTICLE 5(1)(h)) Rationale and Objectives “(289) Article 5(1)(h) AI Act prohibits the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes, subject to limited exceptions exhaustively set out in the AI Act.” (Section (289) of the Guidelines) According to section (293): “(293) Recital 32 AI Act acknowledges the intrusive nature of real-time RBI systems in publicly accessible spaces for law enforcement purposes … that can affect the private life of a large part of the population, evoke a feeling of constant surveillance, and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.” The Guidelines (section (295)) note that, unlike other prohibitions in Article 5(1) AI Act, the ban here concerns “the use” of real-time RBI (rather than its placing on the market or putting into service). 
Main Concepts and Components of the Prohibition “(295) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(h) AI Act to apply: (i) The AI system must be a RBI system; (ii) The activity consists of the ‘use’ of that system; (iii) in ‘real-time’; (iv) in publicly accessible spaces, and (v) for law enforcement purposes.” (Section (295) of the Guidelines) Remote Biometric Identification (RBI) “(298) According to Article 3(41) AI Act, a RBI system is ‘an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.’” (Section (298) of the Guidelines) Sections (299)–(303) clarify that “biometric identification” differs from verification (where a person’s identity claim is checked), focusing on “comparing captured biometric data with data in a reference database.” Real-time “(310) Real-time means that the system captures and further processes biometric data ‘instantaneously, near-instantaneously or in any event without any significant delay’.” (Section (310) of the Guidelines) Section (311) points out that “real-time” also covers a short buffer of processing, ensuring no circumvention by artificially adding minimal delays. Publicly Accessible Spaces “(313) Article 3(44) AI Act defines publicly accessible spaces as ‘any publicly or privately owned physical space accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.’” (Section (313) of the Guidelines) Sections (315)–(316) explain that “spaces such as stadiums, train stations, malls, or streets” are included, while purely private or restricted-access areas are excluded. 
For Law Enforcement Purposes “(320) Law enforcement is defined in Article 3(46) AI Act as the ‘activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.’” (Section (320) of the Guidelines) Exceptions to the Prohibition “(326) The AI Act provides three exceptions to the general prohibition on the use of real-time RBI in publicly accessible spaces for law enforcement purposes. Article 5(1)(h)(i) to (iii) AI Act exhaustively lists three objectives for which real-time RBI may be authorised … subject to strict conditions.” (Section (326) of the Guidelines) Those objectives, detailed in sections (329)–(356), are:
1. Targeted search for victims of abduction, trafficking, or sexual exploitation, or missing persons.
2. Prevention of a specific, substantial, and imminent threat to life or safety, or a genuine and present or foreseeable threat of a terrorist attack.
3. Localisation or identification of suspects of the serious crimes listed in Annex II AI Act, punishable by at least four years of imprisonment.
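The scheme above is a two-stage gate: a use must pursue one of the three exhaustively listed objectives, and even then it remains subject to the authorisation and safeguard conditions of Article 5(2)–(7). A minimal sketch, with illustrative names and a deliberately simplified safeguard check (the real conditions are far richer):

```python
from enum import Enum, auto
from typing import Optional

class RBIObjective(Enum):
    """The three exhaustively listed objectives, Article 5(1)(h)(i)-(iii)."""
    TARGETED_SEARCH_VICTIMS_OR_MISSING = auto()   # point (i)
    SPECIFIC_SUBSTANTIAL_IMMINENT_THREAT = auto() # point (ii)
    ANNEX_II_SERIOUS_CRIME_SUSPECT = auto()       # point (iii)

def use_may_be_authorised(objective: Optional[RBIObjective],
                          prior_authorisation_obtained: bool,
                          fria_conducted: bool) -> bool:
    """Illustrative gate only: without a listed objective the prohibition
    applies outright; with one, the safeguards (prior authorisation of each
    use, a Fundamental Rights Impact Assessment per Article 27, among
    others) must still be satisfied."""
    if objective is None:
        return False
    return prior_authorisation_obtained and fria_conducted

# A use pursuing none of the three objectives cannot be authorised.
print(use_may_be_authorised(None, True, True))  # prints False
```

The boolean parameters stand in for what is, in reality, a documented showing of necessity and proportionality before a judicial or independent administrative authority.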
As clarified in section (360), “any such use must be proportionate, strictly necessary, and limited in time, geography, and the specific targeted individual.” Authorisation, Safeguards, and Conditions (Article 5(2)–(7)) “(379) Article 5(3) AI Act requires prior authorisation of each individual use of a real-time RBI system and prohibits automated decision-making based solely on its output … The deployer must also conduct a Fundamental Rights Impact Assessment (FRIA) in accordance with Article 27 AI Act.” (Section (379) of the Guidelines) Section (381) underscores that the request for authorisation must show “objective evidence or clear indications” of necessity and proportionality, and that “no less intrusive measure is equally effective” for achieving the legitimate objective. Out of Scope “(426) All other uses of RBI systems that are not covered by the prohibition of Article 5(1)(h) AI Act fall within the category of high-risk AI systems … provided they fall within the scope of the AI Act.” (Section (426) of the Guidelines) Sections (427)–(428) note that “retrospective (post) RBI systems” do not fall under the real-time ban but are still classified as high-risk and subject to additional obligations (Article 26(10) AI Act). Private sector uses in non-law enforcement contexts (e.g., stadium access control) likewise do not trigger this specific prohibition, though they must still comply with other AI Act requirements and Union data protection law. Link: Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act . * * * Prokopiev Law Group stands ready to meet your AI and web3 compliance needs worldwide—whether you are exploring AI Act compliance, crypto licensing, web3 regulatory frameworks, NFT regulation, or DeFi and AML/KYC requirements. Our broad network spans the EU, US, UK, Switzerland, Singapore, Malta, Hong Kong, Australia, and Dubai, ensuring every local standard is met promptly and precisely. 
Write to us now for further details and let our proven legal strategies keep your projects fully compliant. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Compliance Challenges in DeFi: AML/KYC & Securities Law Complexities
Decentralized Finance (DeFi) promises financial services without traditional intermediaries – but this very decentralization creates thorny compliance challenges. Regulators worldwide are grappling with how anti-money laundering (AML) and know-your-customer (KYC) rules, as well as securities laws, apply in a permissionless ecosystem. Regulatory Landscape Across Jurisdictions European Union: Framework with Decentralization Carve-Outs The European Union has moved toward comprehensive crypto regulation, though it distinguishes truly decentralized arrangements from those with intermediaries. Two pillars of EU policy affect DeFi compliance: the new Markets in Crypto-Assets (MiCA) regulation for crypto markets and existing/upcoming AML directives and regulations for financial crime prevention. MiCA’s Scope and Application Coverage of Centralized Crypto Services The EU’s Markets in Crypto-Assets Regulation (MiCA) establishes a regulatory framework for crypto-asset issuers and service providers. It applies to any natural or legal person (or similar undertaking) engaged in crypto-asset activities – for example, operating trading platforms, facilitating exchanges, providing custody services, and so forth. In effect, centralized crypto services (such as exchanges, brokers, custodians, and other intermediaries) fall squarely under MiCA. These Crypto-Asset Service Providers (CASPs) must obtain authorization and comply with operational and prudential requirements, similar to traditional financial institutions. MiCA enumerates various types of regulated crypto-asset services, including: custody and administration of crypto-assets for clients, operating a crypto-asset trading platform, exchanging crypto-assets for fiat or other crypto, execution of client orders, placing of crypto-assets, providing advice on crypto-assets, and related functions.
Any intermediary performing these activities in the EU is in scope and will need a MiCA license (with associated governance, capital, and consumer-protection obligations). Stablecoin Issuers MiCA devotes special provisions to stablecoins, referred to as Asset-Referenced Tokens (ARTs) (stablecoins referencing multiple assets or non-fiat values) and E-Money Tokens (EMTs) (stablecoins referencing a single fiat currency). Issuers of such tokens must be legally incorporated in the EU and obtain authorization from a national regulator to issue them. Key obligations for stablecoin issuers include maintaining sufficient reserve assets, publishing a detailed white paper, offering redemption rights at par for holders, and adhering to prudential safeguards to protect monetary stability. Significant stablecoins (with large user bases or transaction volumes) face even tighter supervision, potentially including limits on daily transaction volume. Crypto Trading Platforms MiCA explicitly covers crypto trading venues. Any entity operating a platform that brings together buyers and sellers of crypto-assets (whether for crypto-to-crypto or crypto-to-fiat trades) is considered a CASP and must be authorized. Such platforms must meet ongoing compliance duties: maintaining minimum capital, ensuring managers are fit and proper, implementing cybersecurity controls, segregating client assets, and providing transparent operations. Carve-Out for Decentralized Services (Recital 22) While MiCA casts a wide net over centralized actors, it pointedly exempts fully decentralized activities. Recital 22 clarifies that if crypto-asset services are provided in a “fully decentralised manner” without any intermediary, they should not fall within the scope of the regulation. Thus, if a service truly runs autonomously on a decentralized network with no controlling party, EU lawmakers did not intend to capture it. 
However, this exemption is noted in a recital rather than in the operative articles, leaving room for interpretation. Regulators emphasize that partial or hybrid decentralization (where an identifiable party retains some control) likely does not qualify for the carve-out. In essence, MiCA’s coverage extends to any service where a person or entity performs an intermediary function. DeFi Projects with Some Central Control A key question is how MiCA classifies DeFi projects that are not fully decentralized. MiCA’s wording suggests that any form of central “intermediation” brings a project within scope. If a DeFi arrangement involves a team operating a front-end, collecting protocol fees, or otherwise controlling upgrades, that team could be deemed a CASP. Officials have signaled that true decentralization must mean the absence of a controlling entity. If a DeFi project is partially decentralized (“HyFi,” or hybrid finance), MiCA will likely apply. Many current DeFi protocols have facets of centralization—admin keys, core dev teams, or small governance groups—which, from a regulatory standpoint, may trigger full compliance obligations under MiCA. Impact of MiCA on DeFi Defining “Fully Decentralized” – EU Perspective MiCA does not define “fully decentralized,” leaving a significant gray area. European regulators generally consider whether a project has no entity exercising control, no governance token concentration, no fee collection by a specific party, and no centralized front-end with gatekeeping powers. Only if every aspect is automated and dispersed, with no single group in charge, would it likely be exempt. Because that threshold is high, most DeFi projects risk classification as CASPs if they retain any managerial or economic control.
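The factors regulators reportedly weigh (no controlling entity, no governance-token concentration, no fee collection by a specific party, no gatekeeping front-end) amount to an all-or-nothing triage: a single indicator of central control defeats the exemption. A hedged sketch, with hypothetical field names and no claim to reflect an actual legal test:

```python
def likely_exempt_under_recital_22(factors: dict) -> bool:
    """Illustrative only: the Recital 22 carve-out is plausible solely
    when NO factor indicates central control."""
    return not any(factors.values())

# Hypothetical project profile (keys are invented labels, not legal terms).
example_project = {
    "entity_exercising_control": True,        # e.g. admin keys, upgrade rights
    "governance_token_concentration": False,
    "fee_collection_by_specific_party": True,
    "gatekeeping_front_end": False,
}

# Any True factor suggests an identifiable intermediary, so MiCA likely applies.
print(likely_exempt_under_recital_22(example_project))  # prints False
```

The all-or-nothing shape mirrors the text's point that the threshold is high: retaining any managerial or economic control risks CASP classification.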
Which DeFi Aspects Might Still Fall Under MiCA Even if the protocol itself is autonomous, various aspects can bring it under MiCA:
- Governance Token Issuance: A team offering governance tokens to EU users may need to comply with token issuance rules under MiCA, including drafting a compliant white paper.
- Liquidity Pools & Protocol Operations: If a DeFi developer or entity retains an admin key or collects fees, regulators could treat them as a crypto-asset service provider.
- Treasury Management: Fees that accrue to a foundation or multisig group could be viewed as service revenue, suggesting there is an identifiable operator or beneficiary.
- Protocol Governance: If governance token holders can upgrade or change the protocol, the system may not be fully autonomous, exposing key holders to regulatory obligations.
Obligation for DeFi Protocols to Comply If a project maintains operations in the EU but does not meet the “fully decentralized” standard, it may be forced to obtain a CASP license under MiCA. That entails duties akin to those imposed on centralized exchanges, such as governance, disclosures, capital, and consumer protection. Projects may evolve into hybrid models—where the underlying smart contract is open, but the front-end or development team is regulated. Others may prefer to geofence EU users or further decentralize operations to avoid regulation. EU AMLD6, DORA, and AML Regulations Application of AMLD6 to Crypto Services: The EU’s Sixth Anti-Money Laundering Directive (AMLD6), part of a broader AML legislative package, expands the scope of “obliged entities” to include crypto-asset service providers (CASPs) across the EU. This means centralized crypto businesses (exchanges, custodial wallet providers, brokers, etc.) are explicitly subject to AML/CFT requirements akin to those for traditional financial institutions.
Under AMLD6 and the accompanying EU AML Regulation, CASPs must implement customer due diligence, monitor transactions, keep records, and report suspicious activity. Newly adopted EU rules also require CASPs to collect and store information on the originator and beneficiary of each crypto transfer, effectively implementing the “travel rule.” Application of DORA to Crypto Businesses: The Digital Operational Resilience Act (DORA) is a separate EU regulation focusing on cybersecurity and operational continuity for financial entities, including CASPs authorized under MiCA. As of January 2025, centralized crypto businesses must maintain robust security controls, incident reporting mechanisms, and business continuity plans, and undergo operational resilience testing. In essence, bank-grade resilience standards will apply to crypto firms, aiming to reduce hacks and service outages in the digital asset space. DeFi Projects as VASPs – Classification Challenges: Under global standards (FATF) and EU definitions, simply labeling a platform as “DeFi” does not exempt it from regulation. If individuals or entities exercise control or significant influence over a DeFi arrangement, they may be treated as virtual asset service providers (VASPs) with AML obligations. A DeFi project featuring an identifiable company collecting fees or operating a front-end will likely be deemed a crypto-asset service provider. On the other hand, a fully decentralized protocol with no controlling party is in a gray area; however, authorities are inclined to apply a “substance over form” test, meaning a DeFi platform with centralized elements can be compelled to comply with AML/KYC requirements. Obligations for DeFi Lending, DEXs, and Custodial Services: If a DeFi platform is deemed an obliged entity, it faces obligations similar to centralized providers. For lending protocols, this could mean enforcing KYC on users supplying or borrowing assets if there is an identifiable operator.
Decentralized exchanges (DEXs) that match trades or collect fees may be treated as VASPs, thus required to identify users and report suspicious transactions. Custodial services are straightforwardly in scope—anyone holding crypto on behalf of others has been subject to AML laws since AMLD5 and continues to be under AMLD6. Even non-custodial DeFi projects that interact with EU customers could face indirect obligations if there is an identifiable entity offering the service. Implementation Status & Timeline of Key EU Measures AMLD6 (Sixth Anti-Money Laundering Directive) AMLD6 was adopted as part of the 2024 AML package and entered into force in July 2024. As a Directive, it must be transposed into national law by EU Member States, with a final deadline of mid-2027 for full implementation. While many AML obligations already apply to crypto service providers under prior directives, AMLD6 introduces more stringent mechanisms and penalties. EU AML Regulation (AMLR) – the “Single Rulebook” Alongside AMLD6, the EU is enacting an AML/CFT Regulation to unify rules across Member States. Being directly applicable, the regulation also takes effect by mid-2027. It includes detailed requirements for customer due diligence and suspicious activity reporting, and broadens the definition of obliged entities to include crypto-asset service providers. Regulatory Technical Standards and guidance from the new EU AML Authority (AMLA) will shape practical implementation, with significant milestones from 2025 to 2026. EU Anti-Money Laundering Authority (AMLA) The AML package creates a new supra-national regulator. AMLA will supervise high-risk or cross-border financial institutions, including major crypto providers, from 2026–2027 onward. Until then, national regulators handle primary enforcement. AMLA will issue technical standards and guidelines, resulting in more centralized oversight of AML in crypto.
Digital Operational Resilience Act (DORA)
DORA was published in late 2022 and becomes fully applicable in January 2025. It imposes robust ICT risk management, incident reporting, and business continuity requirements on all in-scope financial entities, including CASPs. By late 2024, crypto firms should finalize compliance measures, such as incident response protocols and third-party risk assessments, in preparation for enforcement beginning in January 2025.

Transfer of Funds Regulation (Crypto Travel Rule)
Adopted in 2023, the recast Transfer of Funds Regulation applies the travel rule to crypto-asset transfers, starting from December 2024. Any CASP transferring crypto must include originator and beneficiary details with the transaction, similar to wire transfers in traditional finance. This rule also covers interactions with unhosted wallets, requiring CASPs to collect and verify identifying information on transfers above certain thresholds. Firms must refuse or halt transfers lacking the necessary data, making travel rule compliance a major priority for crypto service providers.

Pending Proposals and 2024+ Outlook
With AMLD6, AMLR, AMLA, TFR, DORA, and MiCA all in play, the EU’s regulatory framework for crypto is rapidly harmonizing. By 2027, AMLD6 and AMLR will fully bind all Member States, cementing Europe’s single AML rulebook.

Comparative Analysis

EU vs. U.S.
The EU is moving toward a unified AML framework enforced by AMLA, whereas the U.S. relies on FinCEN regulations under the Bank Secrecy Act (BSA) and multiple enforcement agencies. While both treat crypto exchanges as obliged entities, the U.S. often communicates compliance expectations via enforcement actions and has taken high-profile measures against mixers and exchanges. The EU’s single rulebook aims for more preventive supervision, though it can also impose large fines and coordinate criminal prosecutions via Member State authorities.

EU vs. UK
After Brexit, the UK has its own AML regime under the Money Laundering Regulations, requiring crypto exchanges and custodians to register with the FCA. The UK applies the travel rule with no de minimis threshold, making it stricter than the EU for smaller transfers. The UK’s approach is less centralized than the EU’s future AMLA model, with existing agencies overseeing compliance.

EU vs. Singapore
Singapore has a strong licensing regime under the Payment Services Act, requiring AML compliance for digital payment token providers. Like the EU, Singapore has adopted the FATF travel rule, imposing a ~$1,000 threshold for enhanced due diligence on unhosted wallets. Both jurisdictions share a risk-based approach but proactively supervise crypto businesses.

EU vs. Switzerland
Switzerland integrates crypto into its existing AML laws, demanding that exchanges and wallet providers comply with due diligence, often imposing a strict ~$1,000 threshold for unhosted wallet verification. Enforcement is overseen by FINMA, which has sanctioned firms for AML breaches.

Regulatory Obligations & Risks for DeFi Projects

Smart Contract-Based Services Under AML Rules: If a DeFi service qualifies as exchange, transfer, or custody under EU law, and there is an entity or persons behind it, AMLD6 and related regulations can apply. Authorities will hold operators or developers accountable if they exercise control, even if transactions occur via automated smart contracts. The law targets the persons or entities benefiting from or running the platform, rather than the code itself.

Enforcement on Decentralized Protocols: Truly decentralized protocols pose challenges for regulators, but enforcement can focus on:
Individuals/Entities: Developers, founders, or DAOs who maintain or profit from the system.
On/Off-Ramps: EU-regulated exchanges can reject or scrutinize funds from non-compliant DeFi sources.
Technical Measures: Authorities may require front-ends to implement geolocation blocks, address screening, or zero-knowledge-based KYC solutions.

Legal Risks for DeFi Founders and DAOs: Those found knowingly facilitating money laundering or ignoring AML obligations could face fines or criminal liability. AMLD6 broadens the definition of offenses and strengthens information sharing among Member States, boosting cross-border investigations. DAOs might be treated as unincorporated associations whose active participants can be held liable.

Will DeFi Have to Implement KYC/AML? If DeFi wants mainstream adoption and integration with regulated finance, it may need optional or mandatory compliance layers—e.g., whitelisted pools or KYC gates. Over time, regulatory pressure from authorities and off-ramps will likely push more DeFi protocols to adopt or at least accommodate AML measures.

Interaction with the Travel Rule & Unhosted Wallet Rules

EU Travel Rule Extension to Crypto: Starting December 2024, all crypto transfers involving a CASP must include identifying information on the originator and beneficiary, mirroring traditional wire transfer rules. CASPs must refuse or halt transfers lacking complete data. No de minimis threshold exists—any transfer, regardless of amount, requires this information.

Treatment of Unhosted Wallets: Unhosted (self-custody) wallets complicate the travel rule because there is no second institution to receive the data. EU CASPs must still record and verify user information for transfers involving unhosted wallets above certain thresholds. In practice, this may mean proving ownership of the receiving address. Smart contract addresses used in DeFi (e.g., liquidity pools, DAOs) also count as “unhosted,” prompting some CASPs to demand enhanced due diligence or refuse direct transfers.
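In engineering terms, the travel rule works like a pre-flight check on each transfer: the required data must be present before the transaction proceeds. Below is a minimal sketch under simplified assumptions; the field names and the four-field payload are hypothetical, since the regulation prescribes a fuller data set defined in technical standards.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified schema for illustration only; the real EU TFR
# data set is broader and defined by regulatory technical standards.
@dataclass
class TransferPayload:
    originator_name: Optional[str]
    originator_address: Optional[str]   # sender's wallet address
    beneficiary_name: Optional[str]
    beneficiary_address: Optional[str]  # recipient's wallet address

def travel_rule_complete(p: TransferPayload) -> bool:
    """True only if every required field is present and non-empty.

    Because the recast Transfer of Funds Regulation has no de minimis
    threshold, the check does not depend on the transfer amount.
    """
    required = (
        p.originator_name,
        p.originator_address,
        p.beneficiary_name,
        p.beneficiary_address,
    )
    return all(field is not None and field.strip() for field in required)

def process_transfer(p: TransferPayload) -> str:
    # A CASP must refuse or halt transfers lacking the necessary data.
    if not travel_rule_complete(p):
        return "refused: incomplete travel rule data"
    return "accepted"
```

A production system would validate against the full required data set, record refusals for its compliance files, and apply the separate ownership-verification step for unhosted-wallet transfers above the relevant threshold.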
Impact on DeFi Liquidity Providers and DAOs: Liquidity providers withdrawing from an EU exchange to a DeFi pool might need to route funds to their own verified wallet first, or prove ownership if the transfer exceeds €1,000. Deposits from privacy-enhancing protocols can be flagged as high risk. DAOs operating treasuries could face challenges off-ramping to fiat if no single verified individual claims the wallet. Exchanges, under pressure to comply, may reject or closely scrutinize funds from unknown DeFi addresses.

Privacy and GDPR Considerations: An interesting twist in the EU is the General Data Protection Regulation (GDPR), which imposes rules on handling personal data. KYC data (names, IDs, etc.) is obviously personal data that must be stored securely and minimized. A conflict arises if one tries to record compliance information on an immutable blockchain: once on-chain, data cannot be erased, clashing with GDPR’s “right to be forgotten.” Most DeFi projects avoid putting any personal data on-chain (favoring off-chain storage or zero-knowledge proofs), but as regulators push for on-chain identity attestation or verified credentials, projects must design systems that reconcile transparency with privacy law. Moreover, any DeFi company handling EU resident data needs GDPR compliance (privacy notices, breach protocols, etc.), adding another layer of regulatory complexity beyond financial laws.

Asia

Singapore
Singapore has sought to be a crypto-friendly hub while enforcing strict AML standards. Under the Payment Services Act 2019 (PSA), any business providing digital payment token services (e.g., crypto trading, transfer, or custody services) must be licensed by the Monetary Authority of Singapore (MAS). This licensing comes with AML/CFT requirements: Singapore mandates full KYC for regulated crypto services, transaction monitoring, and compliance with FATF travel rule requirements.
DeFi protocols per se are not explicitly carved out under the PSA; however, MAS has generally taken a “same activity, same regulation” stance. If a Singapore-based team runs a crypto lending or trading platform (even if using DeFi tech), regulators would likely view it as a financial service that needs either a license or an exemption via a sandbox. In practice, Singapore has encouraged experimentation through initiatives like Project Guardian, where regulated financial institutions explore DeFi for tokenized assets in a controlled environment. MAS officials have acknowledged the reality of DeFi and discussed potential new frameworks – for instance, MAS’s chief fintech officer has suggested that entirely avoiding identification in DeFi is “not realistic” in the long run. Startups in Singapore thus often create two layers: an open-source protocol (which MAS might not regulate directly if truly decentralized) and a front-end company that interfaces with users (which would need a license and KYC). Notably, Singapore has also restricted marketing of crypto to the public and discouraged risky retail speculation, which means DeFi projects targeting Singaporean users should be cautious in how they advertise and ensure they are not offering prohibited products (like derivatives) without proper authorization.

Hong Kong
Hong Kong pivoted to embrace crypto under a new regulatory regime, allowing retail trading of approved cryptocurrencies on licensed exchanges as of 2023. The Securities and Futures Commission (SFC) in Hong Kong has made it clear that DeFi projects are not above the law. If a DeFi activity falls under existing definitions of regulated activity – e.g., operating an exchange, offering securities, or managing assets – it will require the appropriate SFC license. Providing automated trading services, even via a decentralized platform, triggers licensing (a Type 7 ATS license) if the assets traded are “securities” or futures under Hong Kong’s definitions.
Likewise, offering what amounts to a collective investment scheme (like a yield farming pool inviting Hong Kong public investment) would require authorization. Hong Kong regulators see DeFi through the lens of risk: concerns include financial stability, lack of transparency, market manipulation (e.g., oracle attacks, front-running), and investor protection. They have indicated that operators and developers can be held accountable if they are in Hong Kong or target Hong Kong investors. Thus a DeFi startup in Hong Kong might need to either geo-fence Hong Kong users or ensure full compliance (including KYC and investor eligibility checks) for any product that might be deemed a security or trading facility.

Japan
Japan was one of the first major jurisdictions with a clear regulatory regime for cryptocurrency, and it continues to enforce some of the strictest AML/KYC standards. All crypto exchanges in Japan must register with the Financial Services Agency (FSA) and implement KYC for all customers. While Japan has not issued specific DeFi regulations, any service that custodies assets or intermediates trades would likely fall under existing laws (the Payment Services Act for crypto exchange or funds transfer, and the Financial Instruments and Exchange Act if it involves securities or derivatives). For example, a DeFi protocol enabling margin trading or synthetic stocks would likely be seen as offering derivatives to Japanese residents, which is unlawful without a license. Japan implements the FATF Travel Rule through the Japan Virtual Currency Exchange Association, meaning exchanges must collect counterparty information for transactions. If DeFi usage makes it hard to trace such information, Japanese regulators may respond by limiting exchange interactions with DeFi platforms that don’t meet compliance standards. Culturally, Japan emphasizes consumer protection (it famously maintains a whitelist of tokens that exchanges can list).
A completely permissionless DeFi application sits uneasily with that ethos. Thus, while one won’t find an “FSA DeFi rulebook,” a Japan-based founder should assume that if their protocol becomes popular in Japan, authorities might demand a compliance interface or even pressure the team to block Japanese IPs if the product can’t be monitored for AML.

Other financial centers in Asia are also shaping DeFi oversight. South Korea treats crypto exchanges strictly (real-name verified accounts only, strict AML). After incidents like the Terra-Luna collapse, Korean regulators grew even more vigilant about crypto schemes. A DeFi project involving Korean users could be seen as an unregistered securities offering (if promising yields) or simply an illegal investment program if not approved. China remains effectively closed to cryptocurrency trading (an outright ban), focusing on its central bank digital currency and permissioned blockchain tech. India has taken a tough stance with heavy taxation on crypto transactions, sometimes discussing an outright ban – a hostile environment for DeFi compliance. Meanwhile, Thailand and Malaysia have licensing regimes for digital asset businesses that might ensnare certain DeFi activities. Overall, Asia presents a mix of innovation sandboxes and strict rules; the common theme is that if a DeFi project has an identifiable presence or target market in a jurisdiction, local regulators will apply existing financial laws.

Other Jurisdictions

Switzerland: Long seen as a crypto-friendly jurisdiction, Switzerland (through its regulator, FINMA) applies a technologically neutral approach to DeFi. FINMA has explicitly stated that it applies existing rules to DeFi applications under the principles of technology neutrality and “same risks, same rules,” looking past form to substance. If a DeFi application in Switzerland offers a service equivalent to banking, trading, or asset management, FINMA will require the appropriate license just as it would for a traditional provider.
For example, running a decentralized exchange in Switzerland could trigger the need for authorization as a securities dealer or exchange if there is a central coordinating entity. Switzerland is notable for its AML rules regarding crypto: FINMA regulations (as of 2021) lowered the threshold for anonymous crypto transactions to CHF 1,000, meaning Swiss VASPs must KYC customers even for relatively small amounts. This was done to close a loophole and prevent the structuring of transactions to avoid AML checks. In a DeFi context, while Swiss law can’t force an on-chain DEX to conduct KYC, any Swiss-regulated intermediary (like a crypto bank or broker) interacting with DeFi liquidity must ensure no anonymous large transfers occur. Swiss authorities have also pioneered solutions like OpenVASP (a protocol for Travel Rule data exchange) to facilitate compliance even in decentralized transfers. Moreover, many DeFi projects have launched through the Swiss nonprofit foundation model, issuing tokens under guidance from FINMA’s ICO framework. While this can be effective for token classification (utility vs. asset tokens), the foundation must still implement AML controls if it engages in any custodial or exchange-like activities.

United Arab Emirates: The UAE, particularly Dubai and Abu Dhabi, has set up regulatory regimes to attract crypto businesses. Dubai’s Virtual Assets Regulatory Authority (VARA) issues licenses for various crypto activities in the emirate (and some free zones), with an emphasis on meeting FATF standards – meaning KYC/AML programs are mandatory for licensees. Abu Dhabi’s financial free zone (ADGM) has a framework treating crypto exchanges and custodians on par with financial institutions, requiring customer due diligence and monitoring. Even as the UAE markets itself as a crypto hub, it demands compliance measures from those who set up shop.
A DeFi exchange or yield platform based in Dubai would need to register with VARA under the appropriate category and implement KYC for users, transaction monitoring, and sanctions screening. The UAE is interesting because it explicitly allows what some other jurisdictions don’t (like crypto token fundraising), but under oversight. DeFi founders often incorporate entities in the UAE to benefit from clear rules and 0% tax, but they should expect close interaction with regulators and ongoing audits to ensure no illicit finance is flowing. On the flip side, purely decentralized operations with no UAE entity fall outside these regimes – but then they cannot easily use the UAE’s traditional banking or legal system.

Latin America: In Latin America, regulation ranges from nascent to non-existent, though the trend is toward more oversight. Brazil passed a law (effective 2023) requiring crypto service providers to register with the central bank and comply with AML/CFT measures. Mexico’s Fintech Law and subsequent rules require exchanges to register with the central bank and impose KYC – again focusing on centralized players. Many Latin American countries are still developing regulatory approaches; in places with capital controls or inflation, DeFi usage is high as an alternative, which raises political and AML concerns. Authorities might see DeFi as a channel to bypass currency rules or launder narcotics money, increasing the likelihood of future clampdowns. Enforcement can be uneven, but as global standards trickle down, regulators in the region are expected to tighten controls on DeFi. El Salvador adopted Bitcoin as legal tender and has been encouraging crypto businesses – but it must also follow FATF standards, meaning AML obligations still apply.

Global Standards (FATF): Overarching all these jurisdictions is the influence of the Financial Action Task Force (FATF), which sets AML/CFT standards followed by 200+ countries.
FATF extended its standards to “virtual asset service providers” in 2019, which countries are implementing in various ways. FATF has explicitly highlighted DeFi as a potential gap: in a 2023 update, it noted that conducting comprehensive DeFi risk assessments is challenging for most jurisdictions due to data and enforcement difficulties. FATF recommends that if a DeFi arrangement has “owners or operators,” countries should hold those parties accountable as VASPs even if the system brands itself as decentralized. Conversely, if truly no person exercises control, some activity might fall outside conventional regulation. The Travel Rule applies to crypto transfers over a threshold, meaning that as countries enforce this, DeFi protocols that interface with regulated entities will feel indirect pressure to facilitate the required information sharing or risk being geofenced.

Decentralization vs. Regulatory Requirements
The core paradox is that DeFi is designed to eliminate centralized control, yet laws are enforced by finding someone – a person or entity – to hold responsible. Traditional compliance frameworks assume a regulated entity (a bank, exchange, or broker) can perform KYC checks, maintain records, and be examined or sanctioned for failures. DeFi breaks this model by enabling peer-to-peer interactions governed by smart contracts. If no one controls a protocol, who is responsible for ensuring compliance? Regulators are increasingly taking the view that most DeFi projects aren’t as decentralized as they claim. Truly decentralized projects present a dilemma: regulators either have to regulate the users themselves or impose rules at the periphery (e.g., on interfaces or on/off-ramps). This conflict can put founders in an untenable position. If they fully decentralize (renounce control, launch the code, and step away), they might avoid being a regulated entity – but then they also relinquish the ability to adapt the protocol to comply with future rules.
If they retain control to enforce compliance (like adding KYC gating), they defeat the core premise of open, permissionless access. Walking this tightrope is perhaps the fundamental challenge of DeFi compliance.

Enforcing AML/KYC in Permissionless Systems
AML and KYC rules require identifying customers, monitoring transactions, and reporting suspicious activity – tasks that assume a gatekeeper is present. In DeFi protocols, users connect wallets and transact with no onboarding process collecting names or IDs. Smart contracts do not discern good from bad actors; anyone with a wallet and assets can participate. This permissionless design is superb for accessibility and innovation, but it’s a nightmare for AML enforcement. Authorities worry that criminals, sanctioned states, or terrorists can freely move funds through DeFi protocols to obscure their origin. Enforcing KYC in this environment has proven difficult. A DeFi platform can’t easily compel every user globally to upload an ID – there is no customer account creation step in most DApps. Some projects have tried to implement opt-in KYC or whitelisting: for instance, creating “permissioned pools” where only verified addresses can participate. Others have introduced blocklists on front-ends, preventing known illicit addresses from interacting through the official website. Regulators increasingly rely on ex post enforcement and chain analytics to trace illicit funds and identify suspects. DeFi founders can mitigate risk by integrating blockchain monitoring tools that flag suspicious flows, cooperating with law enforcement when required, and avoiding explicit facilitation of money laundering.

Smart Contracts, Anonymity, and Illicit Finance
DeFi’s infrastructure offers both transparency (every transaction is public) and anonymity (user addresses are pseudonymous). This combination creates both opportunity and risk. It enables open financial innovation but also invites exploitation by criminals.
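The permissioned-pool and front-end blocklist patterns mentioned above reduce to a simple gating check on addresses. The sketch below is an illustrative, in-memory model only; real permissioned pools enforce the allowlist inside the smart contract itself, and `AccessGate` and its method names are hypothetical, not drawn from any specific protocol.

```python
# Illustrative sketch of address gating, assuming an in-memory registry.
class AccessGate:
    def __init__(self) -> None:
        self.verified: set[str] = set()  # KYC-verified addresses (allowlist)
        self.blocked: set[str] = set()   # known illicit addresses (blocklist)

    def add_verified(self, address: str) -> None:
        self.verified.add(address.lower())

    def block(self, address: str) -> None:
        self.blocked.add(address.lower())

    def may_use_permissioned_pool(self, address: str) -> bool:
        # Opt-in KYC pool: only verified, non-blocked addresses participate.
        a = address.lower()
        return a in self.verified and a not in self.blocked

    def may_use_front_end(self, address: str) -> bool:
        # Open front-end with a blocklist: everyone except flagged addresses.
        return address.lower() not in self.blocked
```

In practice the blocklist would be fed by chain-analytics screening rather than maintained by hand, and the allowlist by an off-chain verification provider.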
Bad actors can chain-hop across multiple DeFi protocols, use privacy mixers, and rapidly launder stolen funds. Sanctions evasion becomes easier if no on-ramp checks identity. Fraud and market manipulation (rug pulls, pump-and-dumps, oracle exploits) are common in a space lacking centralized oversight. DeFi founders thus face mounting pressure to proactively mitigate illicit finance risks. Without a compliance framework, entire protocols can be blacklisted or sanctioned, as the Tornado Cash saga illustrated. Self-regulation through code audits, risk monitoring, KYC gating, or blocklists may become the norm if DeFi wants to operate within the confines of global law. Projects that remain staunchly permissionless risk isolation from regulated finance, as banks and centralized exchanges refuse to interact with them due to compliance concerns.

Securities Regulations in DeFi
If a token is deemed a security in a given jurisdiction (like the U.S.), offering it or facilitating trades in it could require registration, disclosures, and licenses. DeFi blurs the lines because tokens serve multiple roles (governance, utility, investment instrument). Regulators have grown skeptical of superficial decentralization arguments, emphasizing that governance tokens conferring profits or yields are likely securities. Yield farming and liquidity mining may likewise be treated as investment contracts if participants expect profit from a team’s efforts. Some DeFi projects try to exclude U.S. IPs, use disclaimers, or adopt progressive decentralization to reduce securities risk. Yet enforcement actions indicate that disclaimers alone won’t suffice. Projects must carefully structure tokenomics and marketing to avoid crossing into regulated territory.

Extraterritorial Impact of Major Regulations
A daunting aspect for global DeFi startups is that the laws of the U.S. (and, to a lesser extent, the EU) can reach far beyond their borders. U.S. regulators have not hesitated to prosecute foreign projects that serve American users, claiming jurisdiction whenever U.S. investors or markets are involved. Likewise, the EU’s regulations can affect anyone offering services to EU citizens. This extraterritorial reach means that even if a DeFi project is based in a crypto-friendly jurisdiction, it could still face scrutiny from major regulators. The risk of enforcement may lead projects to geo-block certain regions, incorporate offshore, or become genuinely decentralized so no entity can be targeted. Nonetheless, as DeFi grows, regulators are forging cross-border coalitions to prevent regulatory arbitrage. Founders cannot ignore the biggest markets if they want mainstream adoption. Jurisdictional choices thus require careful planning, balancing legal exposure with business strategy.

Guidance for DeFi Founders & Startups

Balance Decentralization with Compliance from Day One
Decide early which aspects of your project will be decentralized vs. controlled, and plan compliance measures accordingly. Incorporate a legal entity for the interface or development company if you anticipate regulatory scrutiny. Document a roadmap to progressive decentralization, but maintain compliance for any centralized functions until fully decentralized.

Implement “Smart” AML/KYC Measures That Align with the DeFi Ethos
Use tiered access or feature gating: allow basic permissionless usage for small amounts but require verification for large-volume activities. Leverage decentralized identity solutions or zero-knowledge proofs to balance privacy with compliance. Integrate real-time risk monitoring tools (e.g., blockchain analytics) to flag suspicious addresses. Maintain off-chain documentation or audit trails to demonstrate good faith if investigated.

Navigate Securities Law Proactively
Avoid marketing tokens with explicit profit-sharing or investment-like features. Focus on utility and governance.
Use regulation-compliant fundraising (e.g., Reg D or Reg S offerings) if you sell tokens to raise capital. Conduct ongoing legal reviews as token features evolve, documenting how you minimize securities risk.

Engage Regulators, Auditors, and Advisors Early
Join regulatory sandboxes or innovation hubs where possible to get feedback on compliant DeFi models. Undergo third-party audits (technical security and legal compliance) and keep the reports to show regulators. Stay in dialogue with compliance experts who can update you on shifting regulations.

Smart Jurisdiction Choices and Legal Arbitrage
Incorporate in crypto-friendly jurisdictions (e.g., Switzerland, Singapore, UAE) that offer clear frameworks. Create separate entities for different functions (protocol foundation vs. operating company) to compartmentalize risk. Remain flexible: regulations can shift, so be ready to relocate or restructure if your chosen haven becomes hostile.

The regulatory environment is moving fast, and every crypto or DeFi project deserves a clear strategy for staying ahead. Whether you’re determining if your platform qualifies for carve-outs or planning a compliant token sale, the right legal guidance can make all the difference. At Prokopiev Law, we blend practical crypto experience with deep legal insight to help you:
Pinpoint where your project stands under applicable laws – and whether you can leverage available DeFi exemptions
Structure your operations, from entity setup to licensing routes, to safeguard your vision
Create and review essential documentation for token offerings, stablecoin issuance, and more
Stay informed and compliant as the EU’s regulatory framework continues to evolve
Your innovation deserves the strongest legal foundation. Reach out to Prokopiev Law today to learn how we can protect your ambitions and pave the way for long-term success.

The information provided is not legal, tax, investment, or accounting advice and should not be used as such.
It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- The EU AI Act: Overview and Key Legal Insights
The European Union's AI Act (Regulation (EU) 2024/1689) introduces a legal framework to regulate artificial intelligence systems across Europe. The AI Act establishes harmonized rules for developing, deploying, and using AI systems to ensure that these technologies are safe, transparent, and respectful of fundamental rights. The regulation takes a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited systems), high-risk, limited-risk, and minimal-risk systems. Each classification comes with its own set of obligations, with the most stringent requirements applied to high-risk systems that can significantly affect people's lives, such as those used in critical infrastructure, law enforcement, or education.

Implementation Timeline

2024
12 July 2024: The AI Act is officially published in the Official Journal of the European Union.
1 August 2024: The AI Act enters into force, but its requirements do not apply immediately; they are phased in gradually over time.
2 November 2024: Member States are required to identify and publicly list the authorities responsible for fundamental rights protection.

2025
2 February 2025: Prohibitions on certain AI systems begin to apply, as outlined in Chapters I and II of the Act. These prohibitions concern AI systems deemed to pose unacceptable risks to fundamental rights and safety (the prohibited AI practices are described below).
2 May 2025: By this date, the European Commission must ensure that codes of practice are ready. These codes are expected to provide guidance on complying with various parts of the AI Act, specifically ensuring that AI providers and other stakeholders adhere to the required standards.
2 August 2025: Several critical provisions take effect: the rules on notified bodies, general-purpose AI (GPAI) models, and governance (Chapter III, Section 4; Chapter V; Chapter VII) begin to apply.
Provisions on confidentiality (Article 78) and penalties (Articles 99 and 100) also start to apply. Providers of GPAI models placed on the market before this date must comply with the AI Act by 2 August 2027. Member States are required to submit their first reports on the financial and human resources of their national competent authorities by this date, and every two years thereafter. Member States must designate national competent authorities responsible for the oversight of AI systems, such as notifying and market surveillance authorities, and make their contact details publicly available. Member States are also expected to establish and implement rules on penalties and fines related to violations of the AI Act. If the codes of practice have not been finalized or are found inadequate by the AI Office, the European Commission can establish common rules for the implementation of obligations for providers of GPAI models via implementing acts. The European Commission will also begin its annual review of the list of prohibitions and may amend it if necessary.

2026
2 February 2026: The European Commission is required to provide guidelines specifying the practical implementation of Article 6, which sets out the rules for classifying AI systems as high-risk.
2 August 2026: The majority of the EU AI Act's provisions begin to apply to AI systems across the EU, with the exception of Article 6(1), which classifies an AI system as high-risk if it serves a critical safety function within a product that must itself undergo third-party assessment of its compliance with safety regulations. Operators of high-risk AI systems (other than those covered under Article 111(1)) must comply with the AI Act if their systems were placed on the market or put into service before this date. Member States are required to ensure that they have at least one AI regulatory sandbox operational at the national level by this date.
2027
2 August 2027: The obligations outlined in Article 6(1) of the AI Act start to apply. Providers of general-purpose AI (GPAI) models placed on the market or put into service before 2 August 2025 are required to fully comply with the obligations laid out in the EU AI Act by this date. AI systems that are components of the large-scale IT systems listed in Annex X and were placed on the market or put into service before 2 August 2027 must also be brought into compliance, with a final deadline of 31 December 2030.

2028
2 August 2028: The European Commission is tasked with evaluating the functioning of the AI Office.
2 August 2028 (and every three years thereafter): The Commission will evaluate the impact and effectiveness of voluntary codes of conduct related to AI systems. These evaluations will help determine whether further regulatory action is needed for voluntary codes to align with the goals of the Act.
2 August 2028 (and every four years thereafter): The Commission must evaluate and report on the need for amendments in several critical areas:
Annex III: the list of high-risk AI systems, which may need to be expanded or adjusted.
Article 50: the transparency requirements for certain AI systems.
Supervision and governance: the governance and supervision mechanisms are reviewed for potential adjustments or improvements.
2 August 2028 (and every four years thereafter): A report will be submitted to the European Parliament and the Council on the energy-efficient development of general-purpose AI models. This report aims to ensure that AI models are designed in a sustainable manner.
1 December 2028 (nine months prior to August 2029): By this date, the Commission must produce a report on the delegation of powers, as specified in Article 97 of the Act.
2029

1 August 2029: The Commission's powers to adopt delegated acts (as defined in various Articles, such as 6, 7, 11, 43, 47, 51, 52, and 53) will expire unless extended by the European Parliament or the Council. The default is for these powers to be extended for recurring five-year periods unless opposed.

2 August 2029 (and every four years thereafter): The Commission is required to submit a report on the evaluation and review of the AI Act to the European Parliament and the Council.

Beyond 2029

2 August 2030: Providers and deployers of high-risk AI systems that are intended for use by public authorities must comply with the obligations and requirements of the Act by this date.

31 December 2030: AI systems that are components of large-scale IT systems (as listed in Annex X) and were placed on the market before 2 August 2027 must be brought into compliance with the Act by this date.

2 August 2031: The Commission will assess the enforcement of the AI Act and submit a report to the European Parliament, the Council, and the European Economic and Social Committee.

Purpose of the Regulation

Harmonized Rules for AI Systems: The regulation establishes uniform rules for the marketing, operation, and use of AI systems across the EU.
Prohibited AI Practices: The Act outright bans certain AI practices deemed unacceptable, such as manipulative or harmful AI.
High-Risk AI Systems: Special provisions apply to AI systems classified as "high-risk," imposing stricter requirements on their development and deployment to mitigate potential harm.
Transparency Requirements: The regulation mandates clear transparency rules for AI systems, particularly those that could significantly affect individuals, such as AI that interacts with humans or collects sensitive data.
General-Purpose AI Models: The Act also covers general-purpose AI models, ensuring their safe placement on the market.
Market Monitoring and Enforcement: The regulation sets out how AI systems will be monitored and regulated.
Innovation Support: The Act specifically includes measures to foster innovation, with a focus on small and medium-sized enterprises (SMEs) and start-ups.

Entities Covered

Providers of AI systems: This regulation applies to any individual or company that sells or puts AI systems into service within the EU, regardless of whether it is based in the EU or outside it (third countries).
Deployers of AI systems: These are entities established or located within the EU that use AI systems under their authority.
Third-country providers and deployers: The regulation applies even if AI systems are deployed or provided from outside the EU, provided their output is used within the EU.
Importers and distributors: Entities involved in importing or distributing AI systems within the EU.
Manufacturers: Companies that integrate AI systems into their products and market them under their own name or trademark.
Authorized representatives: If a provider is not established in the Union, the authorized representatives who act on its behalf within the EU must comply with the regulation.
Affected persons: The regulation includes protections for individuals in the EU affected by AI systems.

An AI system is any machine-based system that operates with varying levels of autonomy. It is capable of receiving inputs, processing data, and producing outputs, which could be predictions, decisions, recommendations, or content. The key feature of AI systems is their ability to influence physical or virtual environments and to adapt after deployment.

A general-purpose AI model is an AI model that is capable of performing a wide range of tasks and is typically trained on a large amount of data. These models exhibit a high degree of adaptability and can be integrated into various downstream applications or systems.
A general-purpose AI system is based on a general-purpose AI model and serves multiple purposes. It can be used directly by end-users or integrated into other AI systems for diverse applications. This term captures AI systems with broader utility beyond a single, specialized function.

A provider refers to any individual or entity (such as a public authority, agency, or legal body) that develops, or has developed, an AI system or general-purpose AI model. The provider places this AI system or model on the market under its own name or trademark. Importantly, this applies whether the AI system is offered for payment or free of charge.

A deployer is any person or organization that uses an AI system under their control. Deployers are different from providers, as they are responsible for using the AI system rather than creating or placing it on the market.

Exemptions

AI systems related to national security, defense, or military purposes are exempt from the regulation, regardless of the type of entity (public or private) developing or using the AI system.

A separate exemption covers public authorities in third countries and international organizations using AI systems within frameworks of law enforcement or judicial cooperation with the EU or its Member States. However, such entities must ensure they offer adequate safeguards for the protection of individual rights and freedoms.

The regulation does not affect the liability provisions related to intermediary service providers as outlined in Chapter II of Regulation (EU) 2022/2065 (the Digital Services Act). These providers typically act as platforms or hosts for third-party content or services.

AI systems developed and used solely for scientific research and development are not subject to the regulation. However, this exclusion applies only if these systems are not marketed or used for commercial purposes within the Union.
Real-world testing, though, is not covered by this exclusion, meaning such systems would still need to comply with the relevant rules if tested in live environments.

AI systems or models in the research, testing, or development stages are not covered by the regulation unless they are placed on the market or put into service. However, testing AI in real-world conditions does require compliance with the Act.

Natural persons using AI systems purely for personal and non-professional purposes (e.g., using AI tools at home) are exempt from the regulation.

AI systems released under free and open-source licenses are generally excluded unless they are placed on the market or put into service as high-risk AI systems, or as systems falling under Article 5 or Article 50 (which cover prohibited AI practices and transparency requirements).

Prohibited AI Practices

Subliminal or Manipulative Techniques: AI systems cannot be used if they deploy subliminal techniques (i.e., beyond an individual's conscious awareness) to distort behavior in a way that significantly impairs a person's ability to make an informed decision. This could involve psychological manipulation or deception, leading individuals or groups to make decisions they would not otherwise have made, causing harm. The prohibition covers both the intent and the effect of such techniques, especially if they cause significant harm, whether physical, emotional, or financial.

Exploitation of Vulnerabilities: AI systems are banned if they exploit the vulnerabilities of certain people or groups based on factors like age, disability, or social and economic circumstances. For instance, using AI to take advantage of an elderly person's possible cognitive decline to push them into harmful decisions would fall under this prohibition. This applies to any situation where the AI distorts behavior and leads to significant harm.
Social Scoring: AI systems used to evaluate or classify individuals based on their social behavior or personal characteristics over time, similar to a "social credit" system, are prohibited. Such systems cannot lead to unfair or detrimental treatment of people based on their social behavior, especially in either of these cases:
Unrelated detrimental treatment: Someone is treated unfairly based on social behavior from one context being applied to another, unrelated context (e.g., being denied a service based on past behavior in a different setting).
Disproportionate treatment: The treatment is excessively negative or unfair given the nature of the behavior being evaluated.

Predicting Criminal Behavior: AI systems cannot be used solely to predict the risk of someone committing a crime based on profiling or personality traits. However, AI can be used to support human decision-making in criminal investigations as long as it is based on objective facts tied directly to the crime rather than assumptions drawn from personality traits.

Facial Recognition Database Creation: AI systems that build or expand facial recognition databases by scraping images from the Internet or CCTV footage without consent are prohibited. This also applies to the collection of facial images without clear, targeted authorization, especially for law enforcement or surveillance purposes.

Emotion Inference in Sensitive Contexts: AI systems that infer emotions in workplaces or educational settings are generally prohibited unless they serve a medical or safety purpose.

Biometric Categorization for Sensitive Attributes: AI systems are prohibited from categorizing people based on biometric data (like facial features) to infer sensitive personal attributes such as race, political beliefs, or sexual orientation.
However, this prohibition doesn't apply to law enforcement uses where biometric data has been lawfully acquired, such as for filtering or categorizing within law enforcement datasets.

Real-Time Biometric Identification in Public Spaces: The use of AI for real-time remote biometric identification (such as facial recognition) in public spaces by law enforcement is generally prohibited, except in highly specific cases such as: (i) searching for specific victims of crimes like abduction or trafficking; (ii) preventing imminent, severe threats like a terrorist attack; (iii) identifying or locating individuals involved in serious criminal offenses punishable by at least four years in prison.

The deployment of real-time biometric identification by law enforcement in public spaces must be strictly necessary and proportional to the harm that would occur without its use. It also requires an assessment of the consequences for individual rights and freedoms.

High-Risk AI Systems Criteria

An AI system is classified as high-risk if both of the following conditions are met:
AI system as a safety component or product: The AI system is either a critical safety component of a product, or is itself a product, that is subject to Union harmonization legislation (laws that govern product safety in various sectors, such as medical devices or machinery).
Third-party conformity assessment: The product or AI system must undergo a third-party assessment to ensure it complies with safety standards. This process is mandatory before the product or system can be sold or used, as required by the harmonization legislation.

The AI Act also lists specific areas where AI systems are automatically considered high-risk due to their direct impact on people's lives and fundamental rights:

Biometrics: AI systems used for remote biometric identification (e.g., facial recognition), AI categorizing individuals based on sensitive attributes like race, gender, or political beliefs, and AI for emotion recognition.
Critical Infrastructure: AI systems involved in managing essential services (e.g., electricity, water, road traffic).
Education and Vocational Training: AI systems used in student admissions, evaluating learning outcomes, or monitoring student behavior.
Employment: AI systems for recruitment, performance evaluation, promotion, or monitoring employee behavior.
Access to Essential Services: AI determining eligibility for public services (e.g., healthcare, social benefits), AI assessing creditworthiness or evaluating life and health insurance risks, and AI prioritizing emergency service responses or triaging patients in healthcare.
Law Enforcement: AI systems used in criminal investigations for risk assessments (e.g., predicting reoffending), and AI systems assessing evidence reliability or criminal profiling.
Migration and Border Control: AI assessing security or health risks for individuals entering a country, and AI assisting in asylum or visa applications and evaluating eligibility for immigration services.
Judiciary and Elections: AI used by courts to assist in legal decision-making or evidence interpretation, and AI systems influencing election outcomes or voting behavior.

Exceptions to High-Risk Classification

Certain AI systems listed above may not be classified as high-risk if they do not pose a significant risk of harm to health, safety, or fundamental rights. These exceptions apply if the AI system:
Performs narrow procedural tasks (specific, limited functions).
Improves the result of a previously completed human activity (assists but does not replace human decisions).
Detects decision-making patterns but does not influence or replace human judgments without proper review.
Performs preparatory tasks for assessments but does not directly influence decision-making.

Despite these exceptions, any AI system used for profiling individuals (assessing characteristics like personality traits or behavior) is always classified as high-risk.
If a provider believes an AI system listed in Annex III of the AI Act does not meet the high-risk criteria, they must document their assessment and be prepared to present this justification to the relevant authorities.

Compliance Obligations for High-Risk AI Systems

Compliance should consider:
Intended Purpose: How the AI system is intended to be used.
State of the Art: Current technological standards and best practices in AI and related technologies.
Risk Management System: A risk management system must be in place, as described in detail below.

The risk management system is defined as a continuous, iterative process that spans the entire lifecycle of the AI system. It involves regular reviews and updates to adapt to evolving risks and technological changes, and it ensures that all potential risks associated with the AI system are identified, assessed, and mitigated throughout its lifecycle.

Steps Involved:

Identification and Analysis of Risks: Covers known risks (risks that are already identified and understood) and reasonably foreseeable risks (potential risks that can be anticipated based on current knowledge and usage scenarios), focusing on health, safety, and fundamental rights.

Estimation and Evaluation of Risks: Assess the likelihood and impact of identified risks, considering both intended use and conditions of reasonably foreseeable misuse.

Evaluation of Other Risks: Analyze additional risks using data from post-market monitoring systems (Article 72 of the regulation), ensuring comprehensive risk assessment beyond initial identification.

Adoption of Risk Management Measures: Implement targeted measures to address identified risks, and ensure that these measures are appropriate and effective.

Risk management measures should:
Account for Interactions: How different requirements and measures interact with each other.
Minimize Risks Effectively: Achieve a balance that maximizes risk reduction while maintaining functionality.
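The continuous, iterative process described above can be sketched as a loop over the four steps. The data model below (a `Risk` record with likelihood/impact scores, a halving effect per mitigation, and an acceptability threshold) is entirely an assumption for illustration; the Act prescribes the process, not any particular scoring scheme.

```python
# Illustrative sketch of one iteration of the risk management process:
# identify, estimate, evaluate, adopt measures, then judge residual risk.
# The Risk data model and scoring are assumptions made for this example.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: float      # 0.0 - 1.0, estimated
    impact: float          # 0.0 - 1.0, on health, safety, or fundamental rights
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> float:
        # Estimation step: likelihood x impact, reduced per adopted measure.
        return self.likelihood * self.impact * (0.5 ** len(self.mitigations))

def risk_management_iteration(known: list[Risk],
                              post_market_findings: list[Risk],
                              acceptable: float = 0.1) -> list[Risk]:
    """One pass of the iterative process; returns the risks whose residual
    score is still above the acceptable limit."""
    # Identification, including risks surfaced by post-market monitoring.
    all_risks = known + post_market_findings
    # Estimate each risk and adopt (placeholder) measures until acceptable.
    for risk in all_risks:
        while risk.score > acceptable:
            risk.mitigations.append(f"measure #{len(risk.mitigations) + 1}")
    # Residual risks must be judged acceptable before the iteration ends.
    return [r for r in all_risks if r.score > acceptable]

residual = risk_management_iteration(
    [Risk("misclassification of job applicants", likelihood=0.4, impact=0.8)], [])
assert residual == []  # all residual risks brought within the acceptable limit
```

Because the process is iterative, a real system would run this loop again whenever post-market monitoring surfaces new findings, rather than treating it as a one-off assessment.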
After implementing risk management measures, the remaining (residual) risks must be judged acceptable based on established criteria and standards, and the overall residual risk (the cumulative risk from all residual hazards) must remain within acceptable limits.

High-risk AI systems must undergo testing to identify appropriate risk management measures, ensure consistent performance for their intended purpose, and confirm compliance with the specified requirements. Testing procedures may include real-world conditions, providing a practical assessment of AI systems in environments that mimic actual usage scenarios and allowing for the identification of unforeseen risks that may not surface in controlled testing environments. Testing should be conducted throughout the development process and before the system is placed on the market or deployed.

When implementing the risk management system, providers must consider:
Impact on Minors: Potential adverse effects on individuals under 18.
Other Vulnerable Groups: Groups that may be more susceptible to harm due to specific characteristics or circumstances.

Data Governance Requirements for High-Risk AI Systems

High-risk AI systems that involve training AI models must be developed using training, validation, and testing data sets that meet specified quality standards. These data sets must follow strict guidelines to ensure the AI system functions accurately and safely. Data governance refers to the set of policies and procedures governing how data is collected, processed, and managed. High-risk AI systems must have data management practices tailored to their specific purposes, including:

(a) Design Choices: The AI system's design choices must reflect best practices in data handling and governance.
(b) Data Collection and Origin: Clear documentation of how data is collected, its origin, and the original purpose of collection (especially for personal data).
(c) Data Preparation: Data processing steps like annotation, labelling, cleaning, and aggregation must be documented.
(d) Assumptions: The assumptions made about the data, especially regarding what it represents or measures, must be articulated.
(e) Data Suitability: The quantity, availability, and relevance of the data must be assessed to ensure it is appropriate for the AI system's purpose.
(f) Bias Assessment: Potential biases in the data that could impact safety or lead to discrimination must be thoroughly examined.
(g) Bias Mitigation: Measures must be taken to detect, prevent, and correct biases.
(h) Data Gaps: Any shortcomings or gaps in the data that could hinder regulatory compliance must be identified and addressed.

Data Quality Standards for High-Risk AI Systems

The data sets used for training, validation, and testing must meet several key criteria:
Relevance: The data must be applicable to the AI system's intended purpose.
Representation: The data should be representative of the population or environment where the system will be deployed.
Accuracy: Data must be as free of errors as possible and complete.
Statistical Properties: Data must have suitable statistical characteristics, especially for systems intended to affect particular groups of people.

Data sets must reflect the geographical, contextual, behavioral, or functional settings in which the AI system is intended to operate. This ensures that the AI system performs as expected in its real-world context.

Handling Special Categories of Personal Data by High-Risk AI Systems

In cases where it is necessary to detect and correct biases, special categories of personal data (e.g., racial or ethnic origin, political opinions, health data) may be processed, but only under strict conditions, such as:
Necessity: No other data (e.g., anonymized or synthetic data) can achieve the same result.
Security Measures: The data must be protected with state-of-the-art security measures (e.g., pseudonymization, strict access controls).
Deletion: Special data must be deleted once biases are corrected or once the data retention period ends.
Documented Justification: A clear record must explain why processing special data was necessary, including reasons why alternative data couldn't be used.

If an AI system doesn't rely on training data (e.g., rule-based systems), the governance and management practices described above apply only to testing data sets.

Technical Documentation for High-Risk AI Systems

Before placing a high-risk AI system on the market, providers must create technical documentation. This documentation must be continuously updated to demonstrate the system's compliance with the regulatory requirements. Small and medium-sized enterprises (SMEs) and start-ups may provide simplified versions of the required technical documentation.

Record-Keeping (Logging)

High-risk AI systems must allow for automatic event logging over the entire lifetime of the system. Logs provide an audit trail, helping to monitor the system's performance and trace any malfunctions or issues. The logging capabilities should provide sufficient traceability to help in:
Risk Identification: Detecting situations where the system may pose a risk or where substantial modifications have been made.
Post-Market Monitoring: Supporting ongoing monitoring of the system once it is deployed.
Operational Monitoring: Ensuring the system operates as intended during its use.

For AI systems related to biometric identification (e.g., facial recognition), additional logging requirements apply:
Usage Period: Record the start and end times of each use of the system.
Reference Database: Document the database against which input data is checked.
Input Data: Log the data that resulted in a match during the system's use.
Human Verification: Record the identity of any persons involved in verifying the system's output, to ensure accountability and transparency.

Transparency and Provision of Information to Deployers

High-risk AI systems must be designed to be transparent enough for deployers (those using the AI system) to understand and interpret the system's outputs. The degree of transparency should align with the requirements and obligations of both the AI provider (the one who developed the system) and the deployer. AI systems must come with clear, concise, and accurate instructions for use. These should be provided in a digital or otherwise suitable format and must be easy for deployers to understand. The instructions must cover the following key areas:

(a) Provider's Information: The name and contact details of the provider and, if applicable, their authorized representative.
(b) Performance Characteristics:
Purpose: The intended use of the AI system.
Accuracy, Robustness, and Cybersecurity: The levels of accuracy and cybersecurity tested and validated, as well as any circumstances that might affect performance.
Risks: Any known risks to health, safety, or fundamental rights when the system is used as intended or under foreseeable misuse.
Output Explanation: Technical capabilities that allow deployers to understand and explain the system's outputs.
Performance with Specific Groups: How well the AI system performs for the specific groups or individuals it is intended to serve.
Input Data: Information about the data used in training, validation, and testing, particularly if it is relevant to the system's performance.
Output Interpretation: Guidance on interpreting the system's output appropriately.
(c) Predetermined Changes: Any changes in the system's performance or design anticipated by the provider.
(d) Human Oversight: Measures that help deployers interpret and monitor the system's outputs effectively.
(e) Resources and Maintenance: Information on the necessary hardware, computational resources, and maintenance schedules, including software updates.
(f) Log Collection: Instructions on how to collect, store, and interpret logs.

Human Oversight of High-Risk AI Systems

High-risk AI systems must be designed with tools that allow natural persons (human operators) to oversee and intervene in the system's operation effectively. The goal of human oversight is to minimize risks to health, safety, and fundamental rights that might arise during the AI system's use, including risks that could occur despite the application of other regulatory safeguards. Human oversight measures should match the risk level, the system's autonomy, and its context of use. Oversight can be achieved through:
(a) Built-in Measures: Measures integrated directly into the AI system by the provider to facilitate human oversight.
(b) Deployable Measures: Measures that the provider specifies for the deployer to implement.

The AI system must be designed so that human operators can:
(a) Understand System Capabilities and Limitations: Operators should have a clear understanding of the system's functions and be able to monitor its operation for anomalies or malfunctions.
(b) Be Aware of Automation Bias: Operators should avoid over-relying on AI outputs, particularly for high-stakes decisions (e.g., medical diagnoses or legal judgments).
(c) Interpret Outputs Correctly: The system should provide interpretation tools to help operators understand and assess the AI's output.
(d) Override the System: Operators must be able to disregard or reverse the AI's output if necessary, ensuring that the system doesn't make irreversible decisions without human intervention.
(e) Interrupt the System: Operators must have access to a 'stop' function that allows them to safely halt the system's operation if required.
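The additional logging fields required of biometric identification systems (usage period, reference database, matched input data, and the persons who verified the output) map naturally onto a structured audit-log record. The record layout below is an assumption made for illustration; the Act mandates the fields, not any particular format.

```python
# Illustrative structured log record for a biometric identification system,
# covering the additional fields summarized above. The JSON-lines schema
# itself is an assumed example, not a format prescribed by the Act.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class BiometricUseRecord:
    session_start: str            # start of each use of the system
    session_end: str              # end of each use
    reference_database: str       # database the input data was checked against
    matched_input_ref: str        # pointer to the input that produced a match
    verifying_persons: list[str]  # identities of the humans who verified the output

def log_biometric_use(start: datetime, end: datetime, database: str,
                      input_ref: str, verifiers: list[str]) -> str:
    """Serialize one use of the system as a JSON audit-log line."""
    record = BiometricUseRecord(start.isoformat(), end.isoformat(),
                                database, input_ref, verifiers)
    return json.dumps(asdict(record))

line = log_biometric_use(
    datetime(2026, 8, 2, 9, 0, tzinfo=timezone.utc),
    datetime(2026, 8, 2, 9, 5, tzinfo=timezone.utc),
    database="watchlist-v3", input_ref="frame-000142",
    verifiers=["operator-a", "operator-b"],  # two separate human verifications
)
```

Storing the verifiers as a list also makes it easy to check the two-person verification rule for biometric systems described in the next section.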
Special Oversight for Biometric Systems

For biometric AI systems, such as facial recognition, the Regulation requires additional verification steps: before any action or decision is made based on the AI system's identification, at least two qualified humans must verify the identification separately. Exceptions to this requirement apply in cases of law enforcement, migration, or border control where such a procedure is deemed disproportionate under national or EU law.

Accuracy, Robustness, and Cybersecurity of High-Risk AI Systems

High-risk AI systems must be designed to achieve and maintain high levels of accuracy, robustness, and cybersecurity throughout their lifecycle. The systems should be able to perform reliably under varying conditions and resist errors or faults. The accuracy and robustness of the system should be measurable, and the system's level of accuracy, along with relevant accuracy metrics, must be declared in the instructions for use.

AI systems must be resilient to errors, faults, or inconsistencies in their environment or interactions with humans or other systems. This can be achieved through measures like technical redundancy, where the system includes backup plans or failsafe mechanisms to ensure continuous operation. For AI systems that continue to learn after deployment, safeguards must be in place to prevent feedback loops, where biased outputs could affect future inputs. Proper mitigation measures are required to avoid such biases.

High-risk AI systems must be resilient against attacks or attempts by unauthorized third parties to exploit vulnerabilities, alter outputs, or manipulate system performance. These attacks might include:
Data Poisoning: Attempts to corrupt the training data to alter the AI's behavior.
Model Poisoning: Manipulating pre-trained models used by the AI.
Adversarial Examples: Feeding the system deceptive input designed to make it fail.
Confidentiality Attacks: Attempts to exploit weaknesses in the system's data handling to access sensitive information.

Providers must implement measures to prevent, detect, and respond to these security risks, ensuring that the system remains secure and performs reliably throughout its lifecycle.

Obligations of Providers of High-Risk AI Systems

Compliance: Providers must ensure that their high-risk AI systems meet all regulatory requirements for safety, reliability, and ethical use.
Provider Information: Providers must clearly display their name, trade name or trademark, and contact details either on the AI system, its packaging, or in accompanying documentation.
Quality Management System (QMS), which should include:
A clear strategy for following regulations and handling assessments to prove compliance.
Methods for system design, control, and verification.
Procedures to test and validate the AI system throughout its development.
Clear technical specifications and standards to ensure the system functions as required.
Comprehensive data management processes for collecting, storing, and analyzing data used in the AI system.
A risk management system to identify and mitigate potential risks.
Systems to monitor the AI system after it is released to the market, including reporting any serious incidents.
Systems for managing communication with authorities, clients, and other stakeholders.
Efficient documentation retention and resource management, including strategies to ensure continuity in the supply chain.
A responsibility framework, clearly defining who is accountable within the organization.
Documentation Keeping: Providers must retain essential documentation for 10 years after the AI system is made available. This includes technical details, quality management records, and any changes approved by regulatory bodies. If the provider goes out of business, arrangements must be made to keep this documentation accessible to authorities.
Log Keeping: Providers must keep automatically generated logs from their AI systems for at least six months, or longer if required.
Conformity Assessment: Before the AI system is placed on the market or put into use, it must undergo a conformity assessment to ensure it meets the necessary legal and regulatory standards.
EU Declaration of Conformity: Providers must create a declaration of conformity, confirming that the AI system complies with the relevant EU rules and standards.
CE Marking: Providers must affix the CE marking on the AI system or its packaging. The CE mark shows that the system conforms to EU safety and performance regulations.
Registration of AI System: Providers must register the AI system in the EU database before offering it on the market.
Corrective Actions: If the AI system is found to be non-compliant or poses any risks, providers must take corrective actions immediately. This could involve fixing, recalling, or disabling the system. They must also notify distributors, clients, and relevant parties about the issue and the actions taken.
Cooperation with Authorities: Providers must fully cooperate with national authorities by providing any necessary documentation or access to logs to prove the AI system's compliance.
Accessibility Compliance: High-risk AI systems must be designed to ensure accessibility, meaning they must be usable by people with disabilities in accordance with the relevant EU directives.
Incident Reporting and Post-Market Monitoring: Providers must monitor the AI system after it is released to the market. If serious incidents occur, they must report them immediately and investigate any risks.

Authorized Representatives of Providers of High-Risk AI Systems

Providers of high-risk AI systems that are established in third countries (i.e., outside the EU) must appoint an authorized representative within the EU before making their high-risk AI systems available on the Union market.
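The retention obligations above (documentation for 10 years after the system is made available, automatically generated logs for at least six months) could be enforced with a simple retention-policy check. The sketch below is an assumed example: the function name, the day-based approximation of years and months, and the record kinds are all illustrative choices, not anything the Act specifies.

```python
# Illustrative retention-policy check for the record-keeping duties above:
# technical/QMS documentation for 10 years, logs for at least six months.
# The helper names and day-based approximations are assumptions.
from datetime import date, timedelta

# Minimum retention periods, approximated in days for this sketch.
RETENTION = {
    "documentation": timedelta(days=10 * 365),  # 10 years after availability
    "logs": timedelta(days=183),                # at least six months
}

def may_delete(kind: str, created: date, today: date) -> bool:
    """True once the mandatory retention period for this record kind has passed."""
    return today - created >= RETENTION[kind]

# Logs from just over six months ago may be rotated out; two-year-old
# technical documentation must still be kept.
assert may_delete("logs", date(2026, 1, 1), date(2026, 8, 1))
assert not may_delete("documentation", date(2025, 1, 1), date(2027, 1, 1))
```

Note that "at least six months, or longer if required" means the logs entry is a floor: sector rules or the provider's own QMS may mandate a longer period.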
This appointment must be formalized through a written mandate, a legal document that defines the role and tasks of the representative. The representative acts on behalf of the provider and must fulfill the following key responsibilities:

(a) Verify Conformity Documentation and Procedures. The representative must ensure that:
The EU declaration of conformity has been drawn up, which certifies that the high-risk AI system meets the requirements set out in the relevant EU regulations.
The technical documentation has been prepared, which provides detailed information about the design, development, and functionality of the AI system.
The appropriate conformity assessment procedures have been carried out, ensuring the AI system complies with the legal standards before it is made available on the EU market.

(b) Retain Documents for 10 Years. The representative is responsible for keeping important documents, including:
Contact details of the provider.
A copy of the EU declaration of conformity.
Technical documentation.
If applicable, the certificate issued by a notified body (an organization designated to assess conformity).

(c) Provide Information to Authorities. Upon a reasoned request by a competent authority, the representative must provide all necessary information and documentation to demonstrate that the high-risk AI system complies with the relevant requirements.

(d) Cooperate with Authorities to Reduce Risks. The representative is required to cooperate with authorities if they take any actions to reduce or mitigate risks posed by the high-risk AI system.

(e) Ensure Compliance with Registration Obligations. The representative must ensure that the high-risk AI system is registered in accordance with Article 49(1), which requires registration in the EU database for high-risk AI systems. If the provider carries out the registration, the representative must ensure that the information is accurate.
The authorized representative has the right to terminate the mandate if they believe the provider is acting in violation of the regulation. If this happens, the representative must immediately notify:
- The market surveillance authority (the body responsible for enforcing compliance with the regulation).
- The notified body, if applicable, which would be involved if a certificate of conformity was issued.

Obligations of Importers of High-Risk AI Systems

Conformity Before Market Placement: Importers must verify that a high-risk AI system meets the requirements of the EU AI Act before placing it on the market. This includes ensuring that the provider has followed the appropriate conformity assessment procedure outlined in Article 43, which involves checking that the system complies with the standards set in the regulation. If the provider applies harmonized standards (Article 40) or common specifications (Article 41), the system may undergo internal checks or a third-party evaluation (via a notified body).
Technical Documentation: The importer must confirm that the provider has prepared the necessary technical documentation as required by Article 11 and Annex IV.
Marking and Declaration: Importers must ensure the AI system bears the CE marking, which indicates that it complies with EU safety standards, and is accompanied by the EU declaration of conformity required by Article 47.
Authorized Representative: If the provider is based outside the EU, importers must verify that it has appointed an authorized representative within the EU to handle regulatory matters.
Non-Conformity and Falsified Documentation: If an importer suspects that a system is non-compliant or that its documentation is falsified, they must not place it on the market until it has been brought into conformity. In cases where the system poses a risk (as outlined in Article 79), the importer must notify the provider, the authorized representative, and the market surveillance authorities.
Importer Identification: Importers must ensure that their name and contact details appear on the AI system, its packaging, or its accompanying documents. This is crucial for traceability.
Storage and Transport: Importers are responsible for ensuring that the system's storage or transport conditions do not compromise its compliance with the regulation.
Retention of Documentation: Importers must retain a copy of the EU declaration of conformity, the technical documentation, and the certificate from the notified body (if applicable) for at least 10 years after the product enters the market.
Cooperation with Authorities: Upon request from regulatory authorities, importers must provide all relevant information and documentation demonstrating compliance, including technical details.

Obligations of Distributors of High-Risk AI Systems

Before a distributor makes a high-risk AI system available on the market, they must ensure that:
- The system bears the CE marking, a sign that it complies with EU safety and legal standards.
- The system comes with a copy of the EU declaration of conformity (Article 47), which confirms that it meets the requirements set by EU regulations.
- The system comes with appropriate instructions for use.
- Both the provider and the importer have complied with their responsibilities under the Regulation.

If a distributor considers, based on the information available to them, that a high-risk AI system does not comply with the core technical requirements, they must not make it available until the system has been brought into conformity. If the system poses risks to health, safety, or fundamental rights, the distributor must notify the provider or importer.
Distributors must also ensure that, while the AI system is under their control (e.g., during storage or transport), its compliance with the safety and legal requirements is not compromised.
If a distributor finds, based on information available to them, that a high-risk AI system already placed on the market does not conform to the requirements, they must take the necessary corrective actions. These may include:
- Bringing the system into compliance,
- Withdrawing it from the market,
- Recalling it from consumers.

If the system poses a risk, the distributor must immediately inform the provider or importer and the relevant authorities, detailing the issue and any corrective measures taken.
Upon request from a competent authority, distributors must supply all relevant information and documentation proving their actions regarding the conformity of the high-risk AI system. Distributors must also cooperate with the relevant authorities in any action they take concerning high-risk AI systems the distributor has made available on the market.

Assumption of Provider Responsibilities

Any distributor, importer, deployer, or other third party is considered a "provider" of a high-risk AI system, and thus subject to the obligations of a provider under Article 16, in the following situations:
(a) If they put their name or trademark on a high-risk AI system already placed on the market, they take on the role of the provider. Even if a contract assigns responsibilities differently, for regulatory purposes they become the provider.
(b) If they make a substantial modification to a high-risk AI system already on the market, such that it remains a high-risk AI system as defined by Article 6, they are considered the new provider.
(c) If they modify the intended purpose of an AI system (including a general-purpose AI system) that was not classified as high-risk, so that the modified system becomes high-risk under Article 6, they assume the provider role.

When one of these circumstances occurs, the original provider who first placed the AI system on the market is no longer considered the provider for that specific system.
The original provider must cooperate with the new provider by supplying the necessary technical information and access so that the new provider can meet their obligations under the regulation, especially for compliance assessments. However, if the original provider has explicitly stated that their AI system must not be modified into a high-risk system, they are not obligated to provide this documentation.

If a high-risk AI system forms part of a product covered by the Union harmonization legislation listed in Annex I, Section A, the product manufacturer is considered the provider in these circumstances:
(a) The system is marketed together with the product under the manufacturer's name or trademark.
(b) The system is put into service under the manufacturer's name or trademark after the product has been placed on the market.
In both cases, the product manufacturer must assume all obligations of the provider of the high-risk AI system.

The provider of a high-risk AI system and any third party supplying AI tools, services, components, or processes used in the system must conclude an agreement specifying the information, technical capabilities, and assistance necessary to comply with the regulation. This rule does not apply to third parties who provide tools or services under a free and open-source license, unless the component is itself a general-purpose AI model.

Obligations of Deployers of High-Risk AI Systems

Compliance with Instructions for Use
Deployers of high-risk AI systems must take the necessary technical and organizational measures to ensure that the system is used in accordance with the instructions provided by the system's creator or supplier.

Human Oversight and Competence
Deployers must assign natural persons (humans) to oversee the operation of these systems. These overseers must have the appropriate competence, training, authority, and support to handle the AI system responsibly.
Control Over Input Data
Where deployers control the input data used by the AI system, they must ensure that the data is relevant and sufficiently representative for the AI system's intended purpose.

Monitoring, Incident Reporting, and Risk Mitigation
Deployers must monitor the operation of the high-risk AI system in line with the instructions provided. If there is any indication that the system may pose a risk, they must inform the provider or distributor and the relevant authorities without delay. If a serious incident occurs, the deployer must immediately report it to the provider, importer, distributor, and market surveillance authorities.

Log Keeping
Deployers must retain the automatically generated logs from the AI system for an appropriate period, with a minimum of six months, unless otherwise specified by national or Union law (particularly data protection legislation).

Workplace Information
When a high-risk AI system is introduced into the workplace, deployers who are also employers must inform the workers and their representatives that such a system is in use. This transparency should align with the relevant labor laws and practices.

Registration for Public Authorities
Deployers that are public authorities or Union entities must ensure that their high-risk AI systems are registered in the EU database referred to in Article 71. If a system is not registered, they may not use it and must inform the provider or distributor.

Data Protection Compliance
Deployers must use the information provided under Article 13 of this regulation to comply with their data protection impact assessment obligations under the GDPR (Article 35 of Regulation (EU) 2016/679) or the law enforcement directives.

Biometric Identification in Law Enforcement
When law enforcement deploys a high-risk AI system for post-remote biometric identification (such as facial recognition), it must obtain judicial or administrative authorization before, or shortly after, its use.
Authorization must be obtained within 48 hours, except where the system is used for the initial identification of a potential suspect. If authorization is rejected, use of the biometric system must stop immediately and any related personal data must be deleted.
The system cannot be used indiscriminately for law enforcement purposes without a specific link to a criminal case, investigation, or genuine threat, and law enforcement decisions cannot be based solely on the AI system's output. Each use of such a system must be documented in the relevant police files and made available to market surveillance or data protection authorities on request, excluding sensitive operational law enforcement data. Deployers must also submit annual reports on their use of these systems, although aggregated reports can cover more than one deployment.

Informing Affected Persons
When a high-risk AI system makes decisions affecting individuals, deployers must inform the affected persons that they are subject to such AI decisions. For law enforcement use, this must comply with Article 13 of Directive (EU) 2016/680, ensuring transparency and protecting individuals' rights.

Cooperation with Authorities
Deployers are required to cooperate with the competent authorities in any action related to the AI system's operation, helping them implement the regulation and investigate compliance.

Testing High-Risk AI Systems in Real-World Environments, Outside of Regulatory Sandboxes

Scope of Testing in Real-World Conditions
High-risk AI systems can be tested in real-world conditions, outside of regulatory sandboxes. Providers or prospective providers of these systems can conduct such testing, including by submitting a real-world testing plan. However, these tests must comply with Article 5, which prohibits certain uses of AI (for example, potentially harmful applications).
The European Commission will further define what the real-world testing plan must include through implementing acts (detailed legal measures that implement legislation). National or Union law on product testing (e.g., for products covered by EU harmonization legislation) continues to apply to these systems.

Timing and Conduct of Testing
Providers can conduct real-world tests before the AI system is placed on the market or put into service. Testing can be done either by the provider alone or in partnership with deployers (entities or individuals who implement or use the system). The testing must respect any ethical review requirements laid down by national or Union law, ensuring ethical standards are maintained.

Conditions for Testing in Real-World Conditions
Testing can only proceed if the following conditions are met:
(a) Testing Plan Submission: A real-world testing plan must be drawn up and submitted to the market surveillance authority in the Member State where the testing will occur.
(b) Approval by Authorities: The market surveillance authority must approve both the testing and the plan. If it does not respond within 30 days, the plan is considered automatically approved. Some national laws do not permit such "tacit approval," in which case explicit authorization is required.
(c) Registration Requirements: Testing must be registered with a Union-wide unique identification number. Specific systems, such as those related to law enforcement, migration, and border control (Annex III, points 1, 6, and 7), must be registered in a secure non-public section of the EU database for privacy and security reasons.
(d) Union-Based Legal Representation: Providers must be established in the EU or appoint a legal representative within the EU.
(e) Data Transfers: Data collected during testing may only be transferred to non-EU countries if appropriate safeguards under Union law are in place.
(f) Duration of Testing: Testing can last up to six months, with a possible extension of another six months, but only if justified and notified in advance to the market surveillance authority.
(g) Protection of Vulnerable Groups: Extra care must be taken to protect individuals belonging to vulnerable groups, such as those with disabilities or age-related vulnerabilities.
(h) Deployers' Awareness and Agreement: If testing involves deployers, they must be informed of all relevant details. A formal agreement between the provider and the deployer must specify roles and responsibilities, ensuring compliance with applicable law.
(i) Informed Consent: Subjects involved in the testing must give informed consent (unless the testing is law-enforcement-related and consent would interfere with the test). In such cases, the test must not negatively affect the individuals, and any personal data must be deleted afterwards.
(j) Oversight: Testing must be overseen by qualified personnel from the provider and the deployer, ensuring compliance with the testing rules.
(k) Reversibility of AI Predictions: The outputs of the AI system (predictions, recommendations, decisions) must be capable of being reversed or disregarded.

Rights of Subjects in Testing
Testing requires obtaining informed consent from the individuals participating:
- Consent must be freely given and informed. Participants must receive clear, concise information about the testing's nature, objectives, and any inconveniences.
- Participants must be informed of their rights, such as the ability to refuse participation or withdraw without facing any detriment, and must be told how to request that the AI system's outputs be reversed or disregarded.
- Consent must be documented and dated, and a copy provided to the participant or their legal representative.
Participants in the testing, or their representatives, have the right to withdraw consent at any time without facing any consequences.
They can also request the deletion of their personal data, although withdrawal does not affect activities already carried out.

Incident Reporting
Any serious incident occurring during testing must be reported to the market surveillance authority. Providers must take immediate mitigation measures or, if necessary, suspend or terminate the testing. Providers must also have a procedure for recalling the AI system in the event of such a termination.

Notifying Authorities
Providers must notify the national market surveillance authority of any suspension or termination of the testing and provide the final outcomes.

Fundamental Rights Impact Assessment (FRIA) for High-Risk AI Systems

Who is required to perform the FRIA?
Deployers of high-risk AI systems referred to in Article 6(2), such as public bodies or private entities providing public services, are obligated to perform an FRIA. High-risk AI systems in areas such as biometrics, education, law enforcement, and the administration of justice are specifically targeted. However, certain AI systems, such as those used in critical infrastructure (e.g., energy, water, traffic), are exempt.

What does the FRIA involve?
- A description of how the AI system will be used and in what context.
- Identification of the individuals or groups likely to be affected.
- An evaluation of the risks, particularly of harm to fundamental rights, and the measures for human oversight.
- A plan for mitigating risks and handling complaints.

When must the FRIA be updated?
The FRIA applies to the first deployment of a high-risk AI system. However, if circumstances change, such as updates to the system or changes in its use, the FRIA must be revised to reflect the new situation.

Data Protection Impact Assessments (DPIA)
If a Data Protection Impact Assessment (DPIA) has already been conducted under the GDPR (which covers data protection rights), the FRIA complements it, covering a broader set of fundamental rights beyond data protection alone.
Notification and template use
Once the FRIA is completed, the deployer must notify the market surveillance authority. The European AI Office will provide a template questionnaire to streamline this process for deployers.

Conformity Assessment for High-Risk AI Systems

Options for Conformity Assessment (Annex III, Point 1)
If providers apply harmonized standards (Article 40) or common specifications (Article 41), they must choose between two options:
- Internal Control: This procedure is described in Annex VI and allows the provider to assess compliance internally through predefined procedures.
- Quality Management System (QMS): Under this route, a notified body evaluates the system's quality management and technical documentation, as detailed in Annex VII.
Exceptions: If harmonized standards are unavailable or only partially applied, the provider must follow the procedure in Annex VII, which mandates a third-party notified body to ensure compliance.

Conformity for Other High-Risk Systems (Annex III, Points 2 to 8)
For AI systems in the sectors covered by points 2 to 8, such as education and law enforcement, the internal control procedure (outlined in Annex VI) applies, without the involvement of external notified bodies.

Substantial Modifications and Learning Systems
A new conformity assessment is required if the AI system undergoes significant changes. However, if the system continues learning within predefined limits, no additional assessment is necessary.

Exceptional Authorization for Public Security or Health Reasons
Market surveillance authorities can authorize high-risk AI systems to be placed on the market within a Member State for exceptional reasons (e.g., public security, the protection of life, environmental protection, or the safeguarding of critical infrastructure). This is a temporary measure while the full conformity assessment is completed.
The derogation is allowed only for a limited time, and the assessment process must proceed without undue delay. In urgent situations, such as an imminent threat to public safety, law enforcement or civil protection authorities can use a high-risk AI system without prior authorization. They must, however, apply for the required authorization during or immediately after the system's use. If the authorization is denied, use of the system must cease immediately, and all data and results from its use must be discarded.
Market surveillance authorities may only issue the authorization if they conclude that the high-risk AI system complies with the fundamental requirements of Section 2 of the AI Act, which covers safety and fundamental rights. After granting an authorization, the market surveillance authority must notify the European Commission and the other Member States. Sensitive operational data, particularly from law enforcement, is excluded from this reporting.
If no Member State or the Commission objects to the authorization within 15 days, the authorization is considered justified. If a Member State or the Commission does object within 15 days, consultations between the Commission and the Member State that issued the authorization are initiated, and the relevant stakeholders, including the AI system's provider, are allowed to present their views. The Commission then decides whether the authorization is justified on the basis of the consultations and informs the relevant parties of its decision. If the Commission finds that the authorization was unjustified, the market surveillance authority of the Member State concerned must withdraw it.

EU Declaration of Conformity of High-Risk AI Systems

The provider of a high-risk AI system is required to draw up a written EU declaration of conformity. This document must be machine-readable, physically or electronically signed, and retained for 10 years after the system is placed on the market.
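As a sketch of what a machine-readable declaration could look like in practice, here is a minimal JSON serialization. The class and field names are hypothetical assumptions for illustration; Annex V lists the authoritative content of the declaration.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EUDeclarationOfConformity:
    # Illustrative fields only; Annex V defines the required content.
    system_name: str
    provider: str
    statement: str = ("This high-risk AI system complies with the "
                      "requirements of Section 2 of the AI Act.")
    retention_years: int = 10  # keep for 10 years after market placement

def to_machine_readable(decl: EUDeclarationOfConformity) -> str:
    """Serialize the declaration as machine-readable JSON."""
    return json.dumps(asdict(decl), sort_keys=True)
```

A single consolidated declaration covering several harmonisation acts could extend the same record rather than multiplying documents.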
The declaration must state that the AI system complies with the requirements of Section 2 of the Act. It must contain the specific information set out in Annex V and be translated into a language that can be easily understood by the authorities of the Member States in which the system is marketed.
If the AI system is also subject to other Union harmonisation legislation, the provider may prepare a single EU declaration of conformity covering all applicable legal frameworks. This streamlines compliance by consolidating all relevant regulatory requirements into one document.
By issuing the EU declaration, the provider assumes full responsibility for ensuring that the AI system meets the compliance standards set out in Section 2. The declaration must be kept up to date, reflecting any changes in the system's status or updates to its compliance.

Registration of AI Systems

Before placing a high-risk AI system on the market, the provider (or their authorized representative) must register themselves and their system in the EU database (as specified in Article 71). This applies to the high-risk AI systems listed in Annex III, except for those in point 2 of Annex III (critical infrastructure). Providers who conclude that their AI system is not high-risk (under Article 6(3)) must also register the system and themselves in the EU database.
For specific high-risk AI systems related to law enforcement, migration, asylum, and border control (Annex III, points 1, 6, and 7), registration must take place in a non-public section of the EU database. High-risk AI systems falling under point 2 of Annex III (critical infrastructure) must be registered at the national level rather than in the EU database.

Post-Market Monitoring of High-Risk AI Systems

Providers of high-risk AI systems must create and document a post-market monitoring system.
The monitoring system must be proportionate to the nature of the AI system and its specific risks: the complexity of the monitoring should match the complexity and risk of the AI system. For example, an AI system used in medical diagnostics may need more detailed and continuous monitoring than one used for a less critical task such as customer service automation.

Key points:
- The monitoring system must actively and systematically collect, document, and analyze relevant data. The data can come from deployers (those who use the AI system in real-world applications) or from other sources, and its collection spans the entire lifetime of the AI system.
- Providers must ensure the AI system's continuous compliance with the legal requirements laid out in Chapter III, Section 2 (specific safety, transparency, and ethical standards).
- Monitoring should also include analysis of the interactions between the AI system and other AI systems, where relevant.
- There is an exemption for law enforcement authorities: providers do not need to monitor sensitive operational data from these bodies.

The idea is that providers should not simply launch their AI system and forget about it. They need to continuously gather information on how well the system is performing, whether it continues to meet safety and compliance standards, and whether it interacts with other AI systems in a way that could affect safety or performance.
The monitoring system must be backed by a post-market monitoring plan, included in the technical documentation (described in Annex IV). The European Commission will adopt an implementing act (a type of regulatory document) by 2 February 2026 providing a template for the monitoring plan and listing the elements it must include.
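The collect-document-analyze cycle described above can be pictured as a minimal event collector. `MonitoringRecord`, `PostMarketMonitor`, and their fields are illustrative assumptions, not a design prescribed by the regulation.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringRecord:
    source: str        # e.g. "deployer" or another data source
    description: str
    serious: bool = False

@dataclass
class PostMarketMonitor:
    """Minimal sketch: collect records over the system's lifetime and
    surface the serious ones for follow-up and incident reporting."""
    records: list = field(default_factory=list)

    def collect(self, record: MonitoringRecord) -> None:
        self.records.append(record)

    def serious_incidents(self) -> list:
        return [r for r in self.records if r.serious]
```

A real implementation would persist these records and feed them into the Annex IV monitoring plan; the point here is only the systematic collection and triage step.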
For high-risk AI systems that are already covered by Union harmonization legislation (other EU laws that require monitoring), providers have the option of integrating their AI monitoring system into the existing monitoring frameworks, where possible. Providers can incorporate the requirements described above into their existing monitoring systems, provided they achieve an equivalent level of protection. This flexibility also applies to high-risk AI systems used by financial institutions, which are already subject to specific governance and compliance rules under EU financial services law.

Transparency in AI Systems

AI systems interacting with natural persons
Providers of AI systems designed to interact directly with people must ensure that users know they are interacting with an AI system, unless this is obvious to a reasonably well-informed, observant, and cautious person in the circumstances. AI systems used for law enforcement purposes (e.g., detecting, preventing, investigating, or prosecuting criminal offenses) are exempt from this transparency rule, as long as safeguards for individual rights and freedoms are in place. However, if these systems are used to allow the public to report crimes, the obligation to inform users applies.

AI-generated or manipulated content
Providers of AI systems that generate synthetic audio, image, video, or text content must:
- Mark the outputs of these systems as artificially generated or manipulated in a machine-readable format, ensuring they are detectable as such.
- Ensure the marking technology is effective, interoperable, robust, and reliable, taking into account the nature of the content, the cost, and the current technological standards.
This obligation does not apply where the AI system performs minor edits or enhancements (e.g., AI-assisted photo filters) that do not significantly alter the content, nor to AI systems used for law enforcement purposes such as criminal investigations.
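One way to picture the machine-readable marking duty is a provenance wrapper attached to generated content. The schema below is purely illustrative, every key name is an assumption of mine, and real deployments would rely on watermarking or an interoperable provenance standard such as C2PA rather than a JSON sidecar.

```python
def mark_as_synthetic(payload: bytes, generator: str) -> dict:
    """Wrap AI-generated content with an illustrative machine-readable
    provenance marker (hypothetical schema, not a standard)."""
    return {
        "content": payload.decode("utf-8"),
        "provenance": {
            "ai_generated": True,   # detectable as artificially generated
            "generator": generator,
        },
    }

def is_marked_synthetic(wrapped: dict) -> bool:
    """A downstream consumer's check that the marker is present."""
    return bool(wrapped.get("provenance", {}).get("ai_generated"))
```

The design point is that the marker travels with the content and can be read by software without human inspection, which is what "machine-readable" demands.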
Transparency in emotion recognition and biometric categorization systems
Deployers of AI systems that recognize emotions or categorize individuals based on biometric data must inform those individuals that such systems are in use. They must also comply with the relevant privacy laws:
- Regulation (EU) 2016/679 (GDPR)
- Regulation (EU) 2018/1725
- Directive (EU) 2016/680
AI systems used for law enforcement purposes (e.g., criminal investigations) are exempt from this transparency obligation, provided safeguards for rights and freedoms are maintained.

Disclosure of deep fakes and AI-generated text
For AI systems generating or manipulating deep fakes (synthetic or manipulated image, audio, or video content), deployers must disclose that the content has been artificially created. Exceptions:
- The obligation does not apply to law enforcement purposes.
- For artistic, creative, satirical, or fictional works (e.g., a film using AI-generated special effects), the transparency obligations are relaxed: some disclosure of the AI-generated content is required, but it must not interfere with the artistic experience.
For AI systems generating text intended to inform the public on important matters, deployers must disclose that the text was generated or manipulated by AI, unless:
- The text has undergone human review or editorial control and someone takes legal responsibility for the publication, or
- The system is used for law enforcement purposes.

Timing and clarity of information disclosure
The required information must be communicated clearly and in a manner that is easy to distinguish for the individuals concerned, at the time of their first interaction with or exposure to the AI system or its content. Any applicable accessibility requirements (e.g., for individuals with disabilities) must also be taken into account.
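The text-disclosure rule above reduces to a small predicate. The function and parameter names are mine, and the logic is a deliberate simplification of the legal test, not a complete restatement of it.

```python
def disclosure_required(ai_generated: bool,
                        human_editorial_control: bool,
                        law_enforcement_use: bool) -> bool:
    """Simplified sketch of the rule for AI-generated text informing
    the public: disclosure is required unless the text is under human
    editorial control (with someone taking legal responsibility) or
    the system is used for law enforcement purposes."""
    if not ai_generated:
        return False
    return not (human_editorial_control or law_enforcement_use)
```

So a fully automated news summary would need a disclosure label, while the same summary passed through an editor who takes legal responsibility would not.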
General-Purpose AI Models

A general-purpose AI model is defined as an AI model that:
- Is trained on a large amount of data, often using self-supervision (an approach in which the AI learns from unlabeled data at scale).
- Displays significant generality, meaning it can competently perform a wide range of distinct tasks, regardless of how it is distributed or marketed.
- Can be integrated into a variety of downstream systems or applications, making it highly flexible and adaptable to different uses.
This definition excludes AI models used only for research, development, or prototyping purposes before being placed on the market. General-purpose AI models, such as those behind language models (like GPT), image generators, and recommendation engines, are versatile and can be adapted for numerous applications across industries.

A general-purpose AI system is based on a general-purpose AI model and can serve a variety of direct purposes or be integrated into other AI systems. Such a system could, for example, power customer service chatbots, image recognition systems, or predictive analytics. The key distinction is that a general-purpose AI system is an actual functional system built on a general-purpose AI model, making the model more specific or tailored to particular tasks or industries.

Obligations for Providers of General-Purpose AI Models

(a) Technical Documentation
Providers must maintain up-to-date technical documentation for the AI model, including details of the training and testing processes and the results of its evaluations. The documentation must meet the requirements laid out in Annex XI and be made available on request to the AI Office or the national authorities. All general-purpose AI model providers must include the following:
A general description of the model, covering:
- The tasks the model is designed for and the types of AI systems it can be integrated into.
- Acceptable use policies.
- The release date and distribution methods.
- The architecture and number of parameters.
- Input/output modalities (e.g., text, image).
- Licensing information.
Detailed model development information, including:
- The technical means required for integration into other systems.
- Model design specifications, training methodologies, key design choices, and objectives.
- The data used for training, testing, and validation, including its source, its characteristics, and the measures taken to mitigate biases.
- The computational resources used (e.g., floating-point operations), training time, and other relevant details.
- Known or estimated energy consumption during training.

(b) Information for AI System Providers
Providers must prepare and keep up to date documentation for other providers who intend to integrate the general-purpose AI model into their own AI systems. This documentation should enable those AI system providers to understand the capabilities and limitations of the model, help them comply with their own regulatory obligations, and contain at least the information required by Annex XII:
- The tasks the model is designed to perform and the types of AI systems it can be integrated into.
- Acceptable use policies.
- The release date and distribution methods.
- Interaction with external hardware or software (if applicable).
- Relevant software versions (if applicable).
- The model architecture and number of parameters.
- Input/output modalities (e.g., text, image) and formats.
- Licensing information for the model.
- The technical means (instructions, tools, infrastructure) required for integration.
- Input and output modalities, formats, and maximum size (e.g., context window length).
- Information on the data used for training, testing, and validation, including the data type, provenance, and curation methods.
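The Annex XI/XII items read naturally as a structured record. The dataclass below is a sketch under the assumption of illustrative field names covering a subset of the listed items; it is not the regulation's own schema.

```python
from dataclasses import dataclass

@dataclass
class GeneralPurposeModelDoc:
    """Illustrative subset of the documentation items listed above;
    field names are assumptions, not the regulation's terminology."""
    intended_tasks: list
    acceptable_use_policy: str
    release_date: str
    architecture: str
    parameter_count: int
    modalities: dict            # e.g. {"input": ["text"], "output": ["text"]}
    licence: str
    training_data_summary: str
    training_compute_flops: float = 0.0   # known or estimated
    energy_consumption_kwh: float = 0.0   # known or estimated

    def is_complete(self) -> bool:
        # Crude completeness check for the mandatory textual fields.
        return all([self.intended_tasks, self.acceptable_use_policy,
                    self.architecture, self.training_data_summary])
```

Keeping the documentation as structured data rather than free text makes it straightforward to hand the Annex XII subset to downstream AI system providers on request.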
(c) Copyright Policy Providers must have a policy in place to comply with EU copyright laws, especially ensuring that their AI systems respect any reservation of rights expressed under Article 4(3) of Directive (EU) 2019/790, which deals with copyright and related rights in the digital single market. (d) Public Summary of Training Data Providers must publish a summary of the content used to train the AI model, using a template provided by the AI Office. This summary should give sufficient details about the training data, providing transparency while protecting proprietary data where necessary. Exemptions for Open-Source AI Models The obligations above do not apply to AI models released under a free and open-source license, provided: The AI model’s parameters, architecture, and usage information are publicly available. Note: This exemption does not apply to general-purpose AI models that pose systemic risks (as defined under Article 51). Use of Codes of Practice and Harmonized Standards Providers may rely on codes of practice (Article 56) or European harmonized standards to demonstrate compliance with the obligations under this regulation. Until a harmonized standard is published, providers can use these codes to show conformity. Providers who do not adhere to an approved code or standard must demonstrate compliance through alternative means, subject to Commission review. Authorized Representatives of Providers of General-Purpose AI Models Providers of general-purpose AI models established outside the EU must appoint an authorized representative based in the EU before placing their models on the Union market. The provider must empower the authorized representative to perform tasks outlined in the mandate, such as ensuring the model complies with relevant regulations. 
The authorized representative must carry out specific tasks as per the mandate, including: The authorized representative must ensure that the technical documentation (Annex XI) is properly drawn up and that all regulatory obligations under Article 53 are fulfilled. The representative must keep a copy of the technical documentation for 10 years after the AI model has been placed on the market. They must also keep the provider’s contact details on file. The representative must provide documentation to the AI Office or national authorities upon reasoned request to demonstrate compliance with the regulation. The representative must cooperate with the AI Office and other competent authorities in any actions related to the AI model, including its integration into downstream AI systems. The authorized representative can be addressed by the AI Office or authorities, in addition to or instead of the provider , on all issues related to ensuring compliance with this regulation. If the authorized representative believes the provider is acting contrary to their obligations, they must terminate the mandate and immediately inform the AI Office with reasons for the termination. The obligation to appoint an authorized representative does not apply to providers of general-purpose AI models released under a free and open-source license , unless the model presents systemic risks . General-Purpose AI Models with Systemic Risk An AI model will be classified as having systemic risk if it meets one of the following conditions: (a) High Impact Capabilities: The model is evaluated using technical tools and methodologies, including benchmarks and indicators, to determine if it has a high impact. A high impact model can affect significant societal areas like privacy, safety, democracy, or economic systems. 
(b) Commission Decision: The European Commission, either on its own or after receiving an alert from a scientific panel, can designate an AI model as having systemic risk if it deems the model to have capabilities or impacts similar to those described in point (a). The criteria for this assessment are laid out in Annex XIII of the regulation. Criteria for Designating General-Purpose AI Models with Systemic Risk (a) The Number of Parameters of the Model Parameters are the variables within an AI model that are learned during the training process. The number of parameters is a key indicator of the model's complexity and capacity to learn and process information. Large models, like modern large language models (e.g., GPT-4), can have billions or even trillions of parameters, making them highly powerful and capable of handling a wide variety of tasks. More parameters often mean the model can have broader impacts, as it is capable of understanding and generating more nuanced or complex outputs. (b) The Quality or Size of the Dataset This refers to the dataset used to train the AI model, specifically its size and quality. Size can be measured in terms of the number of tokens (e.g., words or data points) used in training. Quality refers to the relevance, accuracy, and diversity of the data. High-quality data can make a model more effective and versatile. A large, high-quality dataset generally enables the model to generalize better across different tasks, potentially increasing its impact and risk due to broader applicability. (c) The Amount of Computation Used for Training This criterion looks at the computational resources required to train the AI model, which are measured in floating point operations (FLOPs), a standard metric for computational intensity. Other indicators of computational effort include: Estimated cost of training: Training large AI models often requires significant financial resources. 
Training time: Long training periods imply that the model is processing vast amounts of data and computations. Energy consumption: Training large models can consume enormous amounts of energy, raising concerns about environmental impact. An AI model is presumed to have high-impact capabilities if the amount of computation used for training exceeds 10^25 floating-point operations (FLOPs), which indicates large-scale computational resources and complexity. (d) Input and Output Modalities This criterion focuses on the modalities (types of inputs and outputs) the AI model can handle. Modalities include: Text to text: Large language models that take text input and generate text output (e.g., GPT models). Text to image: Models that generate images based on text prompts (e.g., DALL·E). Multi-modality: Models capable of handling different types of inputs and outputs simultaneously, such as combining text, image, audio, and video processing. Biological sequences: Specialized AI models that process biological data, such as genetic sequences. (e) Benchmarks and Evaluations of the Model's Capabilities This criterion refers to the performance benchmarks used to evaluate the AI model's capabilities, including: The number of tasks it can perform without needing further training (showing versatility and generality). Adaptability to new tasks: How easily the model can be fine-tuned or retrained to handle new, distinct tasks. Autonomy: The model's ability to operate independently without continuous human oversight. Scalability: How well the model's capabilities scale as it is deployed in different environments or across different industries. Tool access: If the model has access to external tools (e.g., APIs), it may enhance its capabilities further, increasing its potential impact. (f) High Impact on the Internal Market This criterion assesses the model's reach within the European Union, particularly its availability to businesses. 
A model will be presumed to have a high impact on the EU's internal market if it has been made available to at least 10,000 registered business users. (g) The Number of Registered End-Users The number of end-users is another important factor in assessing the model's overall impact. A large user base indicates that the model has extensive reach, which means it could affect a broad range of people, businesses, or industries. This widespread adoption heightens the model's potential to create societal or economic risks. The European Commission is empowered to adopt delegated acts (legally binding acts) to: Amend the thresholds (such as the 10^25 FLOP requirement) based on advances in technology, like algorithmic improvements or hardware efficiency. Update benchmarks and indicators to ensure that risk assessments keep up with the evolving capabilities of AI systems. Procedures for Managing Systemic Risk AI Models If a provider develops a general-purpose AI model that meets the high-impact criteria, they must notify the European Commission within two weeks of becoming aware that the criteria have been met. This notification must include evidence that the model meets the high-impact requirement. Additionally, if the Commission learns about a high-risk AI model that hasn't been reported, it has the authority to classify it as an AI model with systemic risk on its own. The provider can present arguments to show that, despite meeting the technical threshold for systemic risk (e.g., the FLOP threshold), the model does not pose systemic risks due to its specific characteristics. This could happen, for example, if the model is tightly controlled or used in a manner that mitigates the risk. However, these arguments must be well-substantiated. If the Commission finds that the provider's arguments are not sufficiently convincing, the model will remain classified as having systemic risk. 
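To make the two quantitative presumptions above concrete, here is a minimal Python sketch. The thresholds come from the text (10^25 training FLOPs; 10,000 registered business users); the 6 × parameters × tokens compute estimate is a widely used rule of thumb for dense transformer training, not anything the regulation prescribes:

```python
FLOP_THRESHOLD = 1e25              # presumption of high-impact capabilities
BUSINESS_USER_THRESHOLD = 10_000   # presumption of high internal-market impact


def estimated_training_flops(num_parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb training compute: roughly 6 FLOPs per parameter per token."""
    return 6.0 * num_parameters * training_tokens


def presumed_high_impact(training_flops: float) -> bool:
    """High-impact capabilities are presumed above 10^25 training FLOPs."""
    return training_flops >= FLOP_THRESHOLD


def presumed_internal_market_impact(registered_business_users: int) -> bool:
    """High impact on the EU internal market is presumed at or above
    10,000 registered business users."""
    return registered_business_users >= BUSINESS_USER_THRESHOLD
```

Under this estimate, a hypothetical 1-trillion-parameter model trained on 2 trillion tokens would sit around 1.2 × 10^25 FLOPs, above the presumption threshold; as described above, a provider can still rebut the presumption with well-substantiated arguments.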
The decision to classify the model will be based on the failure to demonstrate that the model's unique characteristics mitigate the potential risks. Providers can request a reassessment of the systemic risk designation, but this can only happen six months after the initial designation. The provider must present new and objective reasons for the reassessment. The European Commission will maintain and publish a list of general-purpose AI models that are classified as having systemic risk. This list will be kept up to date, but the publication must respect intellectual property rights, business confidentiality, and trade secrets , as required by EU and national laws. Obligations for Providers of AI Models with Systemic Risk Beyond the general duties mentioned above, the specific obligations are: Model Evaluation with Adversarial Testing AI providers must assess their models using standardized protocols and state-of-the-art tools. This involves testing the AI system against potential attacks or attempts to manipulate it. For example, an adversarial test might simulate a scenario where someone tries to trick an AI system into making incorrect decisions. The goal is to identify vulnerabilities and mitigate the risks associated with them. Risk Assessment at Union Level Providers must assess systemic risks at the EU level, considering the impact the AI model might have within the European Union. These could range from disruptions in critical infrastructure (e.g., energy grids) to widespread misinformation or privacy violations. Incident Reporting Providers must keep track of serious incidents and report them without delay to relevant authorities, including the AI Office and, if necessary, national authorities. Serious incidents could involve unexpected failures, malicious use, or significant impacts on public safety. The provider must also document and implement corrective measures to address these incidents. 
Cybersecurity Protection Providers are responsible for ensuring that the AI model and its physical infrastructure (e.g., servers and databases) are adequately protected from cyber threats. This could include encryption, access controls, regular security audits, and intrusion detection systems. Codes of Practice and Harmonised Standards Providers can rely on codes of practice (voluntary guidelines or industry standards) to meet their obligations until formal EU-wide harmonised standards are published. Harmonised standards: These are official, EU-endorsed technical specifications that provide a benchmark for compliance. Once these standards are published, providers who follow them are assumed to be compliant with the law. If a provider doesn't follow a code of practice or a harmonized standard, they must demonstrate an alternative way of complying with the requirements, subject to approval by the European Commission. Additional Information for Models with Systemic Risk For general-purpose AI models classified as having systemic risk , additional details are required in the technical documentation of the AI model : Evaluation Strategies : Description of evaluation criteria, results, and limitations, using public or internal evaluation methods. Adversarial Testing : Details on internal or external testing (e.g., red teaming), model adaptations, alignment, and fine-tuning processes. System Architecture : Explanation of how software components work together within the model. AI Regulatory Sandboxes Establishment of AI Regulatory Sandboxes at the National Level Each Member State is required to set up at least one AI regulatory sandbox by 2 August 2026 . These sandboxes provide controlled environments for AI developers to experiment with new AI systems under regulatory supervision before entering the market. A sandbox can be established jointly with other Member States' competent authorities. 
This collaboration helps smaller states or regions pool resources and knowledge to support AI development. Participation in an existing sandbox is acceptable if it provides national coverage comparable to a standalone sandbox. Regional and Cross-Border Sandboxes Beyond the national level, additional sandboxes may be established at regional or local levels or in cooperation with other Member States. AI Regulatory Sandboxes for EU Institutions The European Data Protection Supervisor (EDPS) has the authority to create AI regulatory sandboxes specifically for EU institutions, bodies, offices, and agencies . The EDPS can fulfill the roles and tasks of national competent authorities in these cases, ensuring compliance with the AI Act for Union-level entities. Structure and Purpose of AI Regulatory Sandboxes AI regulatory sandboxes provide a controlled environment where AI systems can be developed, tested, and validated under supervision. These tests can involve real-world conditions and aim to foster innovation while identifying and mitigating risks, particularly regarding fundamental rights, health, and safety. The sandbox operates for a limited time under a pre-agreed plan between the AI provider and the supervising authority. Documentation and Exit Reports Upon completing their participation in the sandbox, AI providers will receive written proof of the activities carried out. Authorities will issue an exit report , detailing the results and lessons learned during the sandbox process. These documents can then be used by providers to demonstrate their compliance with the AI Act in the conformity assessment process or other market surveillance activities. The reports may also accelerate regulatory approvals. Access to Exit Reports and Confidentiality While exit reports are generally confidential, the European Commission and the AI Board may access them to aid in regulatory oversight. 
If both the AI provider and the national authority agree, exit reports may also be made public to promote transparency and share knowledge within the AI ecosystem. Objectives of the AI Regulatory Sandboxes The sandboxes aim to: Improve legal certainty by helping AI developers understand and comply with the AI Act. Foster best practice sharing among authorities. Encourage innovation and strengthen the AI ecosystem within the EU. Contribute to evidence-based regulatory learning to improve future AI regulations. Facilitate faster market access for AI systems, especially for SMEs and start-ups. Coordination with Data Protection Authorities If AI systems in the sandbox involve the processing of personal data or require supervision from other national authorities, data protection agencies and other relevant bodies must be involved in the sandbox to ensure compliance with applicable data protection laws. Risk Mitigation and Authority Supervision Competent authorities have the power to suspend sandbox activities if significant risks to health, safety, or fundamental rights are detected, especially if no effective mitigation measures can be implemented. Liability and Protection for AI Providers AI providers participating in sandboxes remain liable under applicable Union and national laws for any damages caused during sandbox testing. However, if providers follow the agreed sandbox plan and comply with guidance from the supervising authority, they will not face administrative fines for infringements related to the AI Act or other laws overseen within the sandbox. Centralized Platform and Stakeholder Interaction The European Commission will create a centralized interface for AI regulatory sandboxes. This platform will provide relevant information and allow stakeholders to interact with authorities, seek regulatory guidance, and monitor sandbox activities. It will help streamline communication and foster a collaborative environment for AI innovation across the EU. 
Uniformity Across the Union The European Commission will adopt implementing acts to ensure that the setup, operation, and supervision of AI regulatory sandboxes are consistent across all Member States, to prevent fragmentation and confusion. Eligibility and selection criteria will be transparent and fair. This means any provider or prospective provider of an AI system who meets the set criteria can apply for participation in a sandbox. National authorities will inform applicants of their decision within three months, ensuring a predictable timeline. Broad Access AI sandboxes will be open to partnerships between providers, deployers, and other relevant third parties. This broadens opportunities to collaborate with other stakeholders in the AI ecosystem, such as SMEs, researchers, and testing labs. Importantly, participation in one Member State's sandbox will be mutually recognized across the EU. SMEs and start-ups can participate in the sandbox free of charge, except for any exceptional costs that authorities may recover fairly. Focus on Testing Tools and Risk Mitigation The sandboxes will facilitate the development of tools to assess key aspects of AI systems, such as accuracy, robustness, cybersecurity, and other dimensions important for regulatory learning. Authorities will assist in developing measures to mitigate risks to fundamental rights and societal impacts, helping ensure that a system aligns with EU values and safety standards. If an AI system needs testing in real-world conditions, this can be arranged within the sandbox. However, such testing will require specific safeguards agreed upon with national authorities to protect fundamental rights, health, and safety. Cross-border cooperation may also be required to ensure consistent practices in real-world testing. Supervision of Testing in Real World Conditions Surveillance authorities are responsible for ensuring that real-world testing of AI systems is conducted in compliance with this regulation. 
When AI systems are tested in regulatory sandboxes (controlled environments for testing new technologies), surveillance authorities ensure compliance with specific rules and may allow certain exceptions during testing. Authorities can suspend, terminate, or modify real-world testing if serious issues are detected or if the testing does not comply with Articles 60 and 61 (concerning testing conditions and risk management). These decisions must be justified and can be challenged by the provider. Support Measures for Small and Medium-sized Enterprises (SMEs), including Start-Ups Member State Actions Priority Access to AI Regulatory Sandboxes : SMEs, including start-ups, with a registered office or branch in the EU, are given priority access to AI regulatory sandboxes, assuming they meet the eligibility conditions and selection criteria. This priority, however, does not exclude other SMEs or start-ups from access, provided they also meet the criteria. Awareness Raising and Training : Member States are tasked with organizing specific awareness campaigns and training programs on how this regulation applies to SMEs and start-ups. Communication Channels : Existing channels or newly created ones should be used to facilitate communication between SMEs, start-ups, deployers, and local authorities. Standardisation Process Participation : Member States should help SMEs participate in standardisation processes. Standardisation refers to the creation of uniform technical specifications, which helps ensure that products and services are consistent across the EU, fostering innovation and safety. Fee Adjustments for SMEs When it comes to conformity assessments (referred to in Article 43 of the regulation), the fees are adjusted to account for the specific needs and characteristics of SMEs and start-ups. Factors such as the size of the company, market presence, and other relevant indicators are used to proportionally reduce the fees. 
AI Office Actions Standardised Templates: The AI Office should provide standardised templates that help SMEs and others meet their regulatory obligations. Information Platform: A single, user-friendly information platform should be developed for all operators in the EU. Communication Campaigns: The AI Office is tasked with raising awareness through campaigns to inform companies about their obligations under the AI regulation. Public Procurement Best Practices: The office is also responsible for promoting best practices in public procurement processes when it comes to acquiring AI systems. Simplified Compliance for Microenterprises A microenterprise, as defined in Commission Recommendation 2003/361/EC, is an enterprise that employs fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million. These companies are granted a simplified approach to complying with certain elements of the quality management system required under Article 17 of the regulation. These microenterprises can adopt a more straightforward version of the quality management system to meet the regulation's requirements. The European Commission will issue guidelines outlining which elements of the system can be simplified, making it easier for smaller businesses to comply without reducing the required protection standards, especially for high-risk AI systems. While microenterprises are allowed to follow a simplified process for certain parts of the quality management system, they are not exempt from other regulatory obligations. Specifically, they must still comply with the following key Articles from the regulation: Article 9: Relates to the risk management system, requiring companies to identify and mitigate risks associated with AI systems. Article 10: Concerns the data and data governance requirements that ensure the quality, relevance, and accuracy of the data used to train and test AI systems. 
Article 11: Covers the technical documentation that must be drawn up before a high-risk AI system is placed on the market and kept up to date. Article 12: Involves record-keeping requirements, which obligate businesses to maintain logs related to the operation of high-risk AI systems. Article 13: Discusses transparency and the obligation to provide adequate information to users and deployers of high-risk AI systems. Article 14: Requires human oversight to ensure that AI systems are used appropriately, especially in high-risk environments. Article 15: Establishes requirements for accuracy, robustness, and cybersecurity of AI systems. Articles 72 and 73: Relate to post-market monitoring and the necessary surveillance activities that ensure ongoing compliance of AI systems after they are placed on the market. European Artificial Intelligence Board The EAI Board is a key governance mechanism outlined in European Union regulations to ensure consistent and effective oversight of AI technologies across Member States. The EAI Board is created to facilitate coordination and consistency in applying the EU's regulations on artificial intelligence (AI). The Board consists of one representative from each Member State of the EU. Additionally, the European Data Protection Supervisor participates as an observer, and the AI Office attends but does not participate in voting. Other authorities, bodies, or experts from national or EU levels may be invited to meetings on relevant issues, but they do not have voting rights. Each representative is appointed by their Member State for a term of three years, renewable once. These representatives are responsible for ensuring that their country's AI regulations align with the broader EU framework and for coordinating AI activities across national authorities: Representatives must have the skills and authority to contribute to the Board's work. 
Each representative is the primary contact for the Board and possibly for national stakeholders, depending on the Member State's needs. They are responsible for ensuring consistent application of AI regulations within their country and for gathering necessary data to inform the Board's activities. The Board operates based on rules adopted by a two-thirds majority vote among the representatives. These rules define the procedures for electing the Chair, setting mandates, voting protocols, and organizing the Board's activities. The Board establishes two standing sub-groups: Market Surveillance: Acts as an Administrative Cooperation Group (ADCO), overseeing AI systems' compliance and market regulations. Notifying Authorities: Facilitates coordination among authorities responsible for notifying and certifying AI systems. The Board can create additional standing or temporary sub-groups for specific issues, and representatives from the advisory forum can be invited as observers. The Board's primary function is to assist and advise the European Commission and Member States to ensure the consistent and effective application of AI regulations. Key tasks include: Coordination of National Authorities: Promoting cooperation among national bodies responsible for AI regulation. Expertise Sharing: Gathering and distributing technical and regulatory knowledge across Member States, especially in emerging AI areas. Advice on Enforcement: Offering guidance on enforcing AI-related rules, particularly for general-purpose AI models. Harmonization of Practices: Supporting the alignment of administrative procedures, such as the functioning of AI regulatory sandboxes and real-world testing environments. Recommendations and Opinions: The Board can issue recommendations on any aspect of AI regulation, including codes of conduct, standards, and updates to the regulation itself. 
Promotion of AI Literacy: The Board helps raise awareness of AI risks, benefits, and safeguards among the public and stakeholders. Cooperation with Other Bodies: Working with other EU institutions, agencies, and international organizations to ensure a unified approach to AI regulation. An advisory forum is established to provide technical expertise and advice to the Board and the Commission. Composition: The forum includes a balanced group of stakeholders representing industry (including startups and SMEs), civil society, and academia. It also includes permanent members, among them the EU Agency for Cybersecurity (ENISA) and the European standardisation organisations CEN, CENELEC, and ETSI. Tasks: The forum advises the Board and the Commission on AI matters and can prepare opinions and recommendations. It meets at least twice a year and may invite experts to its meetings for specific issues. Governance: The forum elects two co-chairs for a two-year term (renewable), and it can create sub-groups to focus on specific topics. The forum also prepares an annual report on its activities, which is publicly available. National Competent Authorities Each Member State is required to designate at least two types of authorities for the purposes of the regulation: A notifying authority: Responsible for assessing, designating, and monitoring the conformity assessment bodies (notified bodies) under the regulation. A market surveillance authority: In charge of monitoring and ensuring AI systems in the market comply with regulations, particularly in relation to safety, health, and standards. These authorities must act independently and impartially, meaning they cannot be influenced by external factors, and they should focus solely on the proper implementation of the regulation. Member States have flexibility in how they organize these authorities. 
They can appoint multiple authorities to perform these tasks, or consolidate the responsibilities within one or more authorities, depending on their internal organizational needs, as long as they adhere to the principles of independence and objectivity. By August 2, 2025 , Member States are required to make information about these competent authorities and single points of contact publicly available, especially through electronic means . Each Member State must designate a market surveillance authority as the single point of contact for the regulation. This authority will be the central entity responsible for liaising with both the Commission and other stakeholders on AI regulatory matters. The Commission will make this list public , allowing easy access to the designated contact points in each country. By August 2, 2025 , and every two years thereafter, Member States must report to the Commission on the financial and human resources available to their national competent authorities. This reporting includes an assessment of whether those resources are adequate. The Commission will then pass this information to the European Artificial Intelligence Board (EAI Board) for review and possible recommendations on how to address any deficiencies. Market Surveillance and Control of AI Systems in the Union Market Market surveillance authorities (MSAs) must annually report to the European Commission and national competition authorities about AI market activity that may affect competition law. They also report annually on the prohibited practices they encountered and actions taken. For high-risk AI systems linked to products covered by existing EU harmonization laws, the same authorities designated under those laws will act as surveillance authorities. Member States can assign other authorities to manage AI system surveillance, provided they ensure coordination with sectoral authorities. 
If existing sectoral laws already provide adequate safety and surveillance procedures for certain products (such as medical devices), these procedures will apply, rather than the new AI-specific regulations. Market surveillance authorities are empowered to carry out remote inspections and enforcement actions to ensure compliance with AI regulations, such as accessing data from manufacturers. Surveillance of high-risk AI used by financial institutions falls under the authority of national financial regulators. Other relevant authorities may also be involved, provided coordination is ensured. For banks involved in the Single Supervisory Mechanism, surveillance findings relevant to financial supervision must be reported to the European Central Bank. High-risk AI systems used in sensitive areas like law enforcement or border management must be supervised by data protection authorities or other relevant authorities. The European Data Protection Supervisor is the surveillance authority for EU institutions, except the European Court of Justice when acting in a judicial capacity. Market surveillance authorities and the European Commission can propose joint investigations or activities to promote AI compliance and identify non-compliance across multiple Member States. The AI Office helps coordinate these efforts. Surveillance authorities must have access to the documentation, training data, and validation datasets used to develop high-risk AI systems, possibly through APIs or other technical means, subject to security measures. In specific cases, where necessary for assessing compliance, authorities can request access to the source code of AI systems after other verification methods have been exhausted. Procedure for AI Systems that Present a Risk AI systems presenting a risk are treated as "products presenting a risk" under Article 3, point 19 of Regulation (EU) 2019/1020. These systems are flagged if they endanger health, safety, or fundamental rights. 
When a national market surveillance authority (MSA) identifies an AI system as risky, it evaluates whether the system complies with EU AI regulations. Special attention is given to systems affecting vulnerable groups, and the authority must cooperate with other relevant national bodies, particularly where risks to fundamental rights are involved. If the AI system does not comply, the authority demands corrective actions (e.g., withdrawal, recall, or compliance adjustments), which must happen within 15 working days or sooner if required by harmonized laws. If non-compliance is not limited to one country, the national MSA must notify the European Commission and other EU Member States about the risk and actions being taken. The operator (entity deploying the AI system) is responsible for taking necessary corrective actions across all markets in the EU if an issue is identified. If the operator fails to do so within the prescribed time, the MSA can implement provisional measures such as prohibiting the sale or use of the AI system within the country. When corrective measures are imposed, the MSA must share detailed information about the AI system's risks with the Commission and other Member States, including data about non-compliance, origin, and the supply chain. The notification must specify whether the non-compliance stems from: Prohibited AI practices (e.g., AI systems manipulating behavior). High-risk AI systems failing to meet obligations (covered in Chapter III, Section 2). Failures in meeting standards for presumed compliance. Breaches of transparency requirements. Other national MSAs will share any additional information they have on the AI system and notify the Commission of their own measures. If they disagree with the initial MSA's actions, they must raise objections. If no objections are raised within three months, the corrective measures are considered justified and enforced across the EU.
Special Considerations for AI Systems Misclassified as Non-high-risk If a market surveillance authority believes a system classified as non-high-risk should be considered high-risk, it will evaluate the system based on Annex III (which lists criteria for high-risk AI). If reclassification to high-risk is necessary, the provider is required to take corrective actions to bring the system into compliance with regulations. The market surveillance authority must inform the Commission and other EU Member States of the results if the reclassification impacts AI systems deployed across borders. Providers that intentionally misclassify AI systems to evade high-risk requirements face fines as outlined below. Enforcement of General-Purpose AI Model Obligations The European Commission is the main authority responsible for supervising and enforcing rules related to general-purpose AI models. To handle these tasks, the Commission will delegate responsibilities to a specialized body called the AI Office. This does not interfere with how tasks are divided between the EU and its Member States. If a national market surveillance authority (like a country's consumer safety body) needs help enforcing the AI rules, it can request that the Commission step in. This is only done if it is necessary and proportionate to the task. The AI Office is responsible for monitoring whether providers of general-purpose AI models are complying with the AI Act. This includes checking if they follow approved codes of practice, which are guidelines they voluntarily agree to follow. Any business or individual that uses a general-purpose AI model (referred to as a "downstream provider") can file a complaint if they believe the AI model provider has violated the regulations. A valid complaint must: Include the contact details of the AI provider, Provide a clear description of the violation, Offer any additional relevant information to support the claim.
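The validity conditions above amount to a simple completeness check on the complaint. The following is an illustrative sketch only: the field names are hypothetical assumptions for the example, and the AI Act does not prescribe any complaint schema or validation logic.

```python
# Hypothetical field names for illustration; the AI Act does not define
# a complaint data structure -- it only lists what a complaint must contain.
REQUIRED_FIELDS = (
    "provider_contact",        # contact details of the AI model provider
    "violation_description",   # clear description of the alleged violation
    "supporting_information",  # additional relevant information backing the claim
)

def is_valid_complaint(complaint: dict) -> bool:
    """True only if every required element is present and non-empty."""
    return all(str(complaint.get(field, "")).strip() for field in REQUIRED_FIELDS)

complaint = {
    "provider_contact": "compliance@gp-model-provider.example",
    "violation_description": "No copyright policy published for the model.",
    "supporting_information": "Archived copy of the provider's documentation.",
}
print(is_valid_complaint(complaint))                          # True
print(is_valid_complaint({"violation_description": "text"}))  # False
```

The check is deliberately minimal; in practice, whether a description is "clear" or information is "relevant" is an assessment for the AI Office, not a mechanical test.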
The Commission can ask AI providers to provide documentation and information, such as details about how their models are tested for safety and how they comply with regulations. Before formally requesting information, the AI Office may first engage the provider in a structured dialogue to clarify any concerns or gather preliminary information. When the Commission requests information, they must explain: The legal basis for the request, The purpose of the request, What specific information is needed, The deadline for providing the information, and The penalties for providing incorrect or incomplete information. The AI provider must supply the requested information. If the provider is a legal entity (like a corporation), its authorized representative or lawyer can handle the submission, but the provider remains responsible for the accuracy. If the information provided by the AI provider is insufficient, or if the AI model is believed to pose a systemic risk, the AI Office can conduct its own evaluation of the AI model to check compliance with the rules. The Commission can hire independent experts (including those from the scientific panel) to conduct the evaluation on its behalf. The Commission can request technical access to an AI model, such as through its APIs (application programming interfaces) or even access to its source code, in order to perform the evaluation. The request for access must include: The legal basis and reasons for the request, The deadline for providing access, and The penalties for non-compliance. As with information requests, the AI provider (or its legal representative) must comply with the access request. The Commission will issue further detailed rules on how these evaluations should take place and how independent experts are involved.
If necessary, the Commission can ask AI providers to take specific corrective actions, such as: Ensuring compliance with legal obligations, Implementing risk mitigation measures if a serious risk is identified, Removing the AI model from the market if it poses significant risks. If the AI provider offers to implement appropriate measures to mitigate identified risks, the Commission can make these commitments legally binding, and no further action would be necessary. Penalties Member States are responsible for setting penalties and enforcement measures for violations of the regulation by AI operators. These measures can include both monetary and non-monetary penalties (such as warnings). Penalties must be effective, proportionate, and dissuasive. This means they should effectively discourage non-compliance without being unnecessarily harsh. Member States should also consider the impact on SMEs, including startups, ensuring that penalties do not disproportionately harm their economic viability. Member States must notify the European Commission about their penalty rules by the time the regulation comes into effect. Any future changes to these rules must also be promptly communicated to the Commission. Non-compliance with the prohibition of AI practices in Article 5 (covering the prohibited AI practices, explained above) can result in administrative fines of up to EUR 35 million or 7% of the violator's total worldwide annual turnover, whichever is higher. Non-compliance with other obligations (e.g., transparency, obligations of providers or distributors, etc.) can lead to fines up to EUR 15 million or 3% of total worldwide turnover, whichever is higher. Supplying incorrect, incomplete, or misleading information to authorities or notified bodies may result in fines up to EUR 7.5 million or 1% of global annual turnover. For SMEs (including startups), the maximum fines are either a percentage of their turnover or a set amount (whichever is lower).
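The cap arithmetic above can be sketched in a few lines. This is an illustrative calculation only, not legal advice: the figures are those quoted in the text, the category labels are our own shorthand, and real fines are set case by case by the competent authorities.

```python
# Illustrative sketch of the maximum-fine arithmetic described above.
# Figures are the caps quoted in the text; category names are our own
# labels, not terms defined in the AI Act.
CAPS = {
    # category: (fixed cap in EUR, percent of total worldwide annual turnover)
    "prohibited_practice": (35_000_000, 7),  # Article 5 violations
    "other_obligation": (15_000_000, 3),     # e.g. transparency duties
    "misleading_info": (7_500_000, 1),       # incorrect info to authorities
}

def max_fine(category: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of the two caps in general,
    but the lower of the two for SMEs and startups."""
    fixed_cap, percent = CAPS[category]
    turnover_cap = turnover_eur * percent / 100
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A firm with EUR 1 bn worldwide turnover breaching an Article 5 prohibition:
print(max_fine("prohibited_practice", 1_000_000_000))        # 70000000.0
# An SME with EUR 10 m turnover that supplied misleading information:
print(max_fine("misleading_info", 10_000_000, is_sme=True))  # 100000.0
```

Note how the SME rule inverts the comparison: for a large operator the turnover-based cap usually dominates, while for an SME the lower of the two figures applies.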
When deciding on fines, authorities will consider factors such as: The nature, gravity, and duration of the infringement. Whether the infringement affected people and to what extent. Whether the operator cooperated with authorities to remedy the issue. The economic benefit gained from non-compliance. The intentional or negligent character of the violation. This ensures that penalties are tailored to the specific context of the violation. Depending on the legal system, Member States may allow fines to be imposed by national courts or other bodies. The mechanism used must have the same effect as the fines imposed under this regulation. * * * Prokopiev Law Group offers expert guidance on the AI regulatory landscape, ensuring full compliance with complex frameworks like the EU AI Act. With our experience and a global network of partners, we help clients meet AI compliance requirements in every jurisdiction, avoiding costly fines and operational setbacks. Whether you are developing or deploying AI systems across borders, our firm has the expertise to advise on regulatory obligations, from data protection to AI risk management. Contact us today to ensure your business is fully compliant and prepared for the future of AI regulation. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Regulation on Markets in Crypto Assets (MiCAR) Implementation
The Regulation on Markets in Crypto Assets (MiCAR) is set to reach a significant milestone on 30 June 2024, with the provisions concerning stablecoins coming into effect. This brief explores the recent updates regarding Level 2 and Level 3 measures under MiCAR. MiCAR Overview MiCAR, an EU Level 1 legislative measure, establishes and harmonizes the regulatory framework for issuers and offerors of crypto-assets and crypto-asset service providers (CASPs). This regulation is directly effective across the European Union and fills the regulatory gaps not covered by existing EU financial services regimes. Implementation of MiCAR The implementation of MiCAR involves multiple EU Level 2 and Level 3 legislative measures, including Regulatory Technical Standards (RTS), Implementing Technical Standards (ITS), and Guidelines. Level 2 Measures: MiCAR authorizes the European Commission to issue delegated acts autonomously. Additionally, it mandates the European Banking Authority (EBA) and the European Securities and Markets Authority (ESMA), sometimes in collaboration with the European Central Bank (ECB), to develop RTS and ITS for subsequent adoption by the European Commission. These standards provide detailed requirements for the effective application of MiCAR. Level 3 Measures: Level 3 measures encompass the development of Guidelines by EBA and ESMA. These Guidelines add clarity and direction to specific aspects of MiCAR. Key Regulatory Bodies and Their Roles European Commission: Empowered to create delegated acts and adopt RTS and ITS developed by EBA and ESMA. European Banking Authority (EBA): Responsible for drafting RTS and ITS, particularly in areas requiring technical expertise and financial oversight. European Securities and Markets Authority (ESMA): Shares responsibility with EBA in developing technical standards and producing Guidelines.
European Central Bank (ECB): Collaborates with EBA and ESMA where necessary, particularly on matters impacting financial stability and the broader economic environment. European Commission Delegated Regulations Supplementing MiCAR On 30 May 2024, the European Commission published several Delegated Regulations in the Official Journal, which supplement MiCAR. These regulations specify various operational and procedural aspects for the oversight and regulation of crypto-assets, particularly significant asset-referenced tokens (ARTs) and e-money tokens (EMTs). The regulations will become effective on 19 June 2024. Specific Delegated Regulations Commission Delegated Regulation (EU) 2024/1503: it outlines the fees charged by the European Banking Authority (EBA) to issuers of significant ARTs and EMTs. Commission Delegated Regulation (EU) 2024/1504: it details the procedural rules for the EBA's authority to impose fines or periodic penalty payments on issuers of significant ARTs and EMTs. Commission Delegated Regulation (EU) 2024/1506: it specifies criteria for classifying ARTs and EMTs as significant. The criteria include factors such as market size, transaction volume, and systemic importance. Commission Delegated Regulation (EU) 2024/1507: it outlines the criteria and factors to be considered by the European Securities and Markets Authority (ESMA), the EBA, and competent national authorities (e.g., the Central Bank of Ireland) in relation to their intervention powers. EBA Final Reports On 7 May 2024, the European Banking Authority (EBA) published four final reports detailing regulations for market access by issuers of asset-referenced tokens (ARTs) and those seeking significant influence through qualifying holdings. Final Reports on Market Access 1. 
RTS on information required for applicants seeking authorisation to offer and trade ARTs: The Regulatory Technical Standards (RTS) specify the information required from applicants seeking authorization to offer and trade ARTs. Notably, the RTS clarify that: Applicants must be legal entities or undertakings established within the EU. The authorization pertains only to public offerings or admissions to trading, not to issuance itself. Only issuers can apply for and be granted authorization. 2. ITS on information required for authorisation application: The Implementing Technical Standards (ITS) provide further details on the information requirements for authorization applications, including standardized forms, templates, and procedural guidelines. 3. RTS on information for assessment of a proposed acquisition of qualifying holdings in issuers of ARTs: These RTS outline the information for assessing proposed acquisitions of qualifying holdings in ART issuers. Required information includes: Identity and background of the acquirer. Financial soundness and past convictions of the acquirer. The acquirer's management body must have good repute, knowledge, skill, and experience. 4. RTS on the approval process for white papers of ARTs issued by credit institutions: These RTS harmonize the approval process for white papers issued by credit institutions. Governance, Conflicts of Interest, and Remuneration Reports On 6 June 2024, the EBA released three final reports addressing governance, conflicts of interest, and remuneration policies for issuers under MiCAR. 1. Guidelines on the minimum content of the governance arrangements for issuers of ARTs: The guidelines specify the minimum content for governance arrangements, emphasizing proportionality and sound risk management, including risks related to money laundering, fraud, cyber threats, and compliance. 2.
RTS on Remuneration Policies: These RTS define the main governance processes and policy elements for the remuneration of significant ART issuers and electronic money institutions. 3. RTS on Conflicts of Interest: The RTS provide detailed policies and procedures for identifying, preventing, managing, and disclosing conflicts of interest, particularly those related to asset reserves. They align with frameworks under Directive 2014/65/EU (MiFID) and Directive 2013/36/EU (CRD), tailored for ART issuers. Prudential Requirements Reports On 13 June 2024, the EBA published six further reports covering own funds, liquidity, and recovery plans. 1. Guidelines on Recovery Plans: These guidelines specify the format and content of recovery plans, including governance, recovery options, and communication strategies. 2. RTS on Liquidity Management: These RTS outline the content and procedures for liquidity management policies, drawing from Basel Standards and adapting them to the crypto-asset context. 3. RTS on Highly Liquid Instruments: These RTS identify financial instruments with minimal market, credit, and concentration risks, incorporating standards from the UCITS Directive and LCR Delegated Regulation. 4. RTS on Liquidity Requirements for Reserve Assets: These standards specify the liquidity requirements for reserve assets, considering international regulatory frameworks and reports on crypto activities. 5. RTS on Own Funds Adjustment Procedure: These RTS detail the procedures and timeframes for adjusting own funds to 3% of the average reserve assets for significant ART issuers, as outlined in MiCAR Articles 43 and 44. 6. RTS on Stress Testing and Own Funds Requirements: These RTS provide criteria for competent authorities to assess the need for issuers to increase own funds, applying to both ART and EMT issuers. Next Steps The EBA’s draft RTS will come into force 20 days after publication in the Official Journal of the European Union. 
The Guidelines will apply two months after the publication of all translations on the EBA website. ESMA Final Reports on MiCAR Implementation First Final Report on MiCAR (25 March 2024) On 25 March 2024, ESMA released its first final report on MiCAR, focusing on several key areas to ensure comprehensive regulatory oversight and investor protection. The report includes proposals on: 1. CASP Authorisation: The report outlines the information requirements for CASPs seeking authorization to operate within the EU. This includes criteria that CASPs must meet to obtain and maintain their licenses, ensuring they comply with the necessary regulatory standards. 2. Notification by Financial Entities: Financial entities intending to provide crypto-asset services must notify their intent. The report specifies the notification process, ensuring that these entities provide all necessary information to the relevant authorities before commencing operations. 3. Acquisition of Qualifying Holdings: The report details the assessment criteria for the intended acquisition of qualifying holdings in a CASP. This includes evaluating the financial soundness, reputation, and suitability of the acquirer to maintain the integrity and stability of the crypto-asset market. 4. Complaint Handling by CASPs: The report proposes requirements for CASPs to effectively address and resolve complaints from investors and consumers. Second Final Report on MiCAR (31 May 2024) On 31 May 2024, ESMA published its second final report on MiCAR, focusing on rules concerning conflicts of interest for CASPs. This report includes Regulatory Technical Standards (RTS) to provide a clear framework for identifying, managing, and disclosing conflicts of interest. 1. Conflicts of Interest Policies and Procedures: The RTS set forth requirements for the policies and procedures CASPs must implement to identify, prevent, manage, and disclose conflicts of interest. 
These requirements take into account the scale, nature, and range of crypto-asset services provided by CASPs, ensuring that all potential conflicts are adequately addressed. 2. Disclosure Methodology: The report outlines the methodology for the content of conflict of interest disclosures. This includes specific details on how CASPs should disclose conflicts to ensure transparency and inform investors and stakeholders about potential issues. Prokopiev Law Group provides extensive legal support to ensure your compliance with MiCAR and other global regulations. Our expertise spans key crypto jurisdictions, including the EU, the US, Singapore, and Hong Kong. We are well-versed in navigating complex regulatory landscapes, covering areas such as CASP authorization, conflict of interest management, and liquidity requirements. With our global network of partners, we ensure your project is compliant worldwide. Contact us for tailored advice on developing a legal strategy for your Web3 project. For more information, write to us today.
- Regulation on Artificial Intelligence in the European Union
The European Union has enacted a regulation on artificial intelligence (AI) designed to stimulate innovation, ensure the trustworthiness of AI systems, and safeguard fundamental rights (the Regulation or the AI Act). This Regulation provides standardized rules and responsibilities for providers, deployers, and users of AI systems within the EU. It also extends to third-country entities whose AI systems impact the EU market or individuals within the EU. Additionally, the Regulation establishes governance structures, enforcement mechanisms, and penalties for non-compliance at both EU and national levels. Legal Basis and Scope The AI Act is established on the foundation of Articles 16 and 114 of the Treaty on the Functioning of the European Union (TFEU). It aims to improve the internal market by creating a legal framework specifically for the development, market placement, and usage of artificial intelligence (AI) systems within the Union. Uniform Legal Framework AI systems can be deployed across various sectors and regions and easily circulate throughout the Union. Diverging national rules can fragment the internal market and reduce legal certainty for operators. Therefore, the AI Act ensures a consistently high level of protection across the Union, promoting trustworthy AI while preventing obstacles to free circulation, innovation, deployment, and uptake of AI systems. Complementarity with Existing Laws The Regulation complements Union laws on data protection, consumer protection, fundamental rights, employment, and product safety. It does not affect the rights and remedies such acts provide, including compensation for damages and social policy laws related to employment and working conditions. Exclusions AI systems developed solely for scientific research and development are excluded from the Regulation's scope until market placement or service provision. Additionally, AI systems for military defense or national security purposes are excluded. 
However, if these systems are used for civilian purposes, they must comply with the AI Act. Data Protection Compliance The Regulation complements existing data protection laws, ensuring AI systems processing personal data adhere to the General Data Protection Regulation (GDPR) and other relevant regulations. It does not seek to alter the application of existing Union laws governing personal data processing but rather facilitates the effective implementation and exercise of data subjects' rights and remedies. Third-Country Entities The Regulation applies to AI systems that are not placed on the market within the European Union but whose outputs are utilized within the Union. This includes scenarios where: Contractual Agreements: An operator based in the EU contracts services involving AI systems from an operator established in a third country. The AI system processes data lawfully collected within the EU and transfers the output back to the EU operator for utilization within the Union. Impact on Individuals: The AI Act applies to AI systems used in a third country that produce outputs affecting individuals within the EU, regardless of the system's physical location or the operator's establishment. The Regulation does not apply to public authorities of third countries or international organizations when acting within the framework of cooperation or international agreements concluded at the Union or national level for law enforcement and judicial cooperation. These entities are exempted provided they offer adequate safeguards for the protection of fundamental rights and freedoms. This includes: Bilateral Agreements: Agreements established between Member States and third countries or between the EU, its agencies, and international organizations. Adequate Safeguards: The relevant authorities assess whether these agreements include sufficient safeguards for the protection of fundamental rights and freedoms. Prohibited AI Practices 1. 
Manipulative Techniques AI systems that employ subliminal components or other manipulative techniques designed to distort human behavior in a manner that causes or is likely to cause significant harm are strictly prohibited. These manipulative techniques include, but are not limited to, the use of stimuli beyond human perception to nudge individuals towards specific behaviors, significantly impairing their autonomy, decision-making, and free choice. 2. Exploitation of Vulnerabilities AI systems that exploit the vulnerabilities of specific groups due to their age, disability, or social and economic conditions, resulting in behaviors that materially distort their actions and cause significant harm, are banned. This includes AI systems that exploit individuals' lack of understanding or capacity to resist specific influences, leading to detrimental outcomes. 3. Social Scoring by Public Authorities AI systems utilized by public authorities for social scoring, which leads to discriminatory outcomes or unjustly limits individuals' access to essential services, are prohibited. For example, systems that evaluate or classify individuals based on their social behavior, personal characteristics, or predicted behavior across various contexts, resulting in detrimental treatment unrelated to the original data context. 4. Remote Biometric Identification in Public Spaces for Law Enforcement Using real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes is generally prohibited. Exceptions are strictly limited to narrowly defined situations where such use is necessary to achieve a substantial public interest that outweighs the risks. These situations include: Locating or identifying missing persons, including victims of crime. Preventing imminent threats to life or physical safety, such as terrorist attacks.
Identifying perpetrators or suspects of serious criminal offenses listed in an annex to the AI Act, where the offense is punishable by a custodial sentence of at least four years. The use of such systems must be subject to prior judicial or independent administrative authorization, except in cases of urgency where obtaining prior authorization is impractical. In such urgent cases, the use must be limited to the minimum necessary duration, and the reasons for not obtaining prior authorization must be documented and submitted for approval as soon as possible. 5. Biometric Categorization and Emotion Recognition AI systems used for biometric categorization, which assign individuals to specific categories based on biometric data, are prohibited if they result in discrimination or harm fundamental rights. Additionally, AI systems intended for emotion recognition in sensitive contexts such as workplaces or educational settings are banned due to their potential for misuse and the significant privacy risks involved. Risk Assessment and Mitigation Providers and deployers of AI systems must conduct risk assessments to ensure their systems do not fall into the prohibited categories. This includes evaluating the potential impact on individuals' autonomy, decision-making, and fundamental rights. Transparency and accountability measures must be in place to ensure compliance with these prohibitions, including maintaining documentation of AI system design, development, and deployment processes, allowing for effective monitoring and enforcement by relevant authorities. High-Risk AI Systems 1. General Criteria for Classification of High-Risk AI Systems An AI system is classified as high-risk if it meets specific conditions relating to safety components and conformity assessments. These conditions are detailed with reference to the Union harmonization legislation listed in Annex I of the Regulation. 
The legislation includes: Regulation (EC) No 300/2008: Concerning the safety and security of civil aviation. Regulation (EU) No 167/2013: Regarding the approval and market surveillance of agricultural and forestry vehicles. Regulation (EU) No 168/2013: Relating to the approval and market surveillance of two- or three-wheel vehicles and quadricycles. Directive 2014/90/EU: On marine equipment, ensuring the compliance of equipment used on EU ships. Directive (EU) 2016/797: On the interoperability of the rail system within the European Union. Regulation (EU) 2018/858: On the approval and market surveillance of motor vehicles and their trailers, and systems, components, and separate technical units intended for such vehicles. Regulation (EU) 2018/1139: Establishing common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency. Regulation (EU) 2019/2144: On type-approval requirements for motor vehicles and their trailers, and systems, components, and separate technical units intended for such vehicles, with a focus on general safety and the protection of vehicle occupants and vulnerable road users. 2. Additional Criteria In addition to the criteria mentioned above, AI systems listed in Annex III are also classified as high-risk. These systems include those used in: Biometrics: Remote biometric identification systems, biometric categorization, and emotion recognition systems. Critical Infrastructure: AI systems used in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity. Education and Vocational Training: Systems determining access or admission to educational institutions, evaluating learning outcomes, and monitoring prohibited behavior during tests. Employment and Workforce Management: AI systems used for recruitment, selection, monitoring, and performance evaluation of employees. 
Essential Services and Benefits: Systems used by public authorities for evaluating eligibility for public assistance, creditworthiness, risk assessment in life and health insurance, and emergency response services. 3. Exemptions An AI system will not be considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including not materially influencing the outcome of decision-making. This applies under specific conditions: The AI system is intended to perform a narrow procedural task. It is designed to improve the result of a previously completed human activity. It detects decision-making patterns or deviations without replacing or influencing the human assessment. It performs a preparatory task relevant to the assessment purposes listed in Annex III. However, AI systems referred to in Annex III that perform profiling of natural persons are always considered high-risk. Providers who consider their AI systems, listed in Annex III, as not high-risk must document their assessment before placing the system on the market. These providers are subject to the registration obligation set out in Article 49(2). Upon request, they must provide the assessment documentation to national competent authorities. Compliance and Enforcement 1. General Obligations Providers of high-risk AI systems must ensure that their systems comply with the requirements set out in the AI Act before they are placed on the market or put into service. These obligations include: Risk Management System: Providers must establish and implement a risk management system that identifies, analyzes, and mitigates risks associated with the AI system throughout its lifecycle. This includes both pre-market and post-market activities. Quality Management System: Providers must establish a quality management system that ensures the AI system consistently meets the requirements of the Regulation. 
This system must include documented policies and procedures for design, development, testing, and monitoring. Technical Documentation: Providers must prepare and maintain detailed technical documentation for each AI system. This documentation must include information on the system's design, development, testing, and risk management measures. Conformity Assessment: Providers must ensure that the AI system undergoes the appropriate conformity assessment procedure before it is placed on the market or put into service. This includes ensuring that the system meets all applicable requirements and standards. Post-Market Monitoring: Providers must establish and maintain a post-market monitoring system to continuously assess the AI system's performance and safety. This includes collecting and analyzing data on the system's operation and any incidents or malfunctions. 2. Specific Requirements Providers must also ensure compliance with the following specific requirements for high-risk AI systems: Human Oversight: Providers must design AI systems to enable effective human oversight, ensuring that individuals can intervene in the system's operation and prevent or mitigate potential harm. Accuracy, Robustness, and Cybersecurity: Providers must ensure that the AI system is accurate, robust, and secure. This includes implementing measures to protect the system from cybersecurity threats and ensuring that it can withstand foreseeable operating conditions. Transparency and Traceability: Providers must ensure that the AI system operates transparently, providing clear information on its capabilities, limitations, and decision-making processes. This includes maintaining detailed records to ensure traceability and accountability. Data Governance: Providers must implement data governance measures to ensure the quality and integrity of data used by the AI system. 
This includes procedures for data collection, storage, and processing, as well as measures to protect data privacy and security. 3. Obligations of Importers Importers must ensure that AI systems they place on the market comply with the requirements of the AI Act. This includes: Verification of Conformity: Importers must verify that the provider has conducted the appropriate conformity assessment procedure and that the AI system meets all applicable requirements. Technical Documentation and Information: Importers must ensure that the provider has prepared the necessary technical documentation and made it available upon request by national authorities. Post-Market Monitoring and Reporting: Importers must monitor the performance of AI systems they place on the market and report any incidents or non-compliance to the relevant national authorities. Contact Information: Importers must include their name, registered trade name or trademark, and contact address on the AI system or its packaging, ensuring that end-users and authorities can easily identify and contact them. Storage and Transport: Importers must ensure that the AI system is stored and transported under conditions that do not affect its compliance with the requirements of the AI Act. 4. Obligations of Distributors Distributors must verify that the AI systems they make available on the market comply with the requirements of the AI Act. This includes: Verification of Compliance: Distributors must verify that the provider and importer have fulfilled their obligations under the Regulation, including the completion of the conformity assessment procedure and the availability of technical documentation. Information to Authorities: Distributors must provide relevant information to national authorities upon request and cooperate with them to ensure compliance with the AI Act. 
Storage and Transport: Distributors must ensure that AI systems are stored and transported under conditions that do not affect their compliance with the requirements of the Regulation. Post-Market Monitoring: Distributors must participate in post-market monitoring activities and report any incidents or non-compliance to the relevant national authorities. Penalties The Regulation mandates that Member States establish penalties for non-compliance that are effective, proportionate, and dissuasive. Specific measures include: 1. Fines Non-compliance with the prohibition of AI practices referred to in Article 5 shall result in administrative fines of up to 35,000,000 EUR or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliance with other provisions related to operators or notified bodies (excluding those laid down in Article 5) shall be subject to administrative fines of up to 15,000,000 EUR or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher. This includes obligations under: Article 16 (obligations of providers), Article 22 (obligations of authorised representatives), Article 23 (obligations of importers), Article 24 (obligations of distributors), Article 26 (obligations of deployers), Articles 31, 33(1), 33(3), 33(4), or 34 (requirements and obligations of notified bodies), Article 50 (transparency obligations for providers and deployers). Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities in response to a request shall result in administrative fines of up to 7,500,000 EUR or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher. 2. 
Suspension or Withdrawal In cases of serious non-compliance, Member States may suspend or withdraw AI systems from the market to prevent further infractions and mitigate any ongoing risks. 3. Corrective Actions Providers of non-compliant AI systems may be required to undertake mandatory corrective actions to ensure conformity with the AI Act. This may involve updating system functionalities, revising operational processes, or enhancing data protection measures. Formal Non-Compliance Measures: The market surveillance authority of a Member State may require providers to correct instances of formal non-compliance, such as improper CE marking, incorrect EU declaration of conformity, failure to register in the EU database, lack of an authorized representative, and unavailability of technical documentation. Persistent non-compliance can lead to further restrictions, prohibition, recall, or withdrawal of the high-risk AI system from the market. 4. Union AI Testing Support Structures The Commission designates Union AI testing support structures to provide independent technical or scientific advice to market surveillance authorities. Remedies The Regulation ensures that individuals and entities affected by non-compliant AI systems have access to appropriate remedies, which include: 1. Complaints Any natural or legal person who believes there has been an infringement of the Regulation can submit reasoned complaints to the relevant market surveillance authority. These complaints must be considered in the course of market surveillance activities and handled according to established procedures. 2. Judicial Redress Affected individuals have the right to seek judicial redress for damages caused by non-compliant AI systems. This includes the right to obtain clear and meaningful explanations from the deployer of high-risk AI systems when a decision significantly affects their health, safety, or fundamental rights. 3. 
Right to Explanation Individuals significantly affected by decisions based on high-risk AI systems listed in Annex III, with certain exceptions, are entitled to an explanation of the role of the AI system in the decision-making process and the main elements of the decision taken. Protection of Whistleblowers Persons reporting infringements of the Regulation are protected under Directive (EU) 2019/1937, ensuring they are safeguarded when reporting such violations. European Artificial Intelligence Board The European Artificial Intelligence Board (the Board) is established to support the consistent application of the AI Regulation across the Union. The Board comprises representatives from: National supervisory authorities responsible for the implementation of the Regulation. The European Data Protection Supervisor. The European Commission, which chairs the Board. The Board's primary responsibilities include: Advising and Assisting the Commission: The Board advises and assists the European Commission in matters related to AI regulation, including providing opinions and recommendations. Promoting Cooperation: The Board promotes cooperation between national supervisory authorities to ensure consistent application and enforcement of the AI Act across Member States. Issuing Guidelines and Recommendations: The Board issues guidelines, recommendations, and best practices to facilitate the implementation of the Regulation, ensuring a harmonized approach to AI governance. Facilitating Exchange of Information: The Board facilitates the exchange of information among national authorities, enhancing the effectiveness of supervision and enforcement actions. The Board operates based on internal rules of procedure, which detail its functioning, including decision-making processes and meeting schedules. The rules of procedure are adopted by a simple majority vote of the Board members. The Board may establish subgroups to address specific issues or tasks. 
These subgroups are composed of Board members or external experts as needed. The establishment of subgroups must be approved by the Board. National Supervisory Authorities Each Member State must designate one or more national supervisory authorities responsible for monitoring the application of the AI Act. The responsibilities of national supervisory authorities include: Monitoring and Enforcement: Ensuring that AI systems placed on the market or put into service in their jurisdiction comply with the Regulation. Investigations and Inspections: Conducting investigations and inspections to verify compliance, including the power to access premises and documents. Handling Complaints: Receiving and handling complaints from individuals and entities regarding potential non-compliance with the AI Act. Imposing Penalties: Imposing administrative penalties and corrective measures for non-compliance, as outlined in the Regulation. National supervisory authorities must operate independently and be free from external influence. Member States must ensure that these authorities have adequate resources, including financial, technical, and human resources, to effectively perform their duties. * * * For more information on how the AI Regulation can ensure compliance and foster innovation within the web3 landscape, please reach out to us. Prokopiev Law Group, with its broad global network of partners, ensures your compliance worldwide. Popular legal inquiries in the web3 sector include regulatory compliance for decentralized finance (DeFi), NFT marketplaces, and blockchain gaming platforms. Our team is well-equipped to address these complexities and provide tailored legal solutions to navigate the evolving regulatory environment of web3 technologies. Contact us to ensure your web3 projects align with current legal standards and maximize their potential within the global market. 
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Generative AI and EUDPR Compliance
The EDPS has issued Orientations on generative AI and personal data protection to provide guidance to EU institutions, bodies, offices, and agencies (EUIs) on processing personal data using generative AI systems. These guidelines aim to ensure compliance with Regulation (EU) 2018/1725 (EUDPR). Although the Regulation does not explicitly mention AI, it is essential to interpret and apply data protection principles to safeguard individuals' fundamental rights and freedoms. Definition of Generative AI Generative AI, a subset of artificial intelligence, uses machine learning models to produce various outputs such as text, images, and audio. These models, known as foundation models, serve as the core architecture for more specialized models fine-tuned for specific tasks. Foundation models are trained on extensive datasets, including publicly available information, and can handle complex structures like language, images, and audio. Large language models (LLMs) are specific foundation models trained on vast amounts of text data to generate natural language responses. Applications of generative AI include code generation, virtual assistants, content creation, language translation, speech recognition, medical diagnosis, and scientific research tools. Use of Generative AI by EUIs EUIs can develop, deploy, and use generative AI systems for public services, provided they comply with applicable legal requirements and ensure respect for fundamental rights and freedoms. The Regulation applies fully to personal data processing activities involving generative AI, irrespective of the technologies used. EUIs may use generative AI solutions developed internally or procured from external providers. In such cases, they must determine the specific roles (controller, processor, joint controllership) for processing operations and their implications under the Regulation. Transparency, ethical development, and adherence to a risk-based approach are essential to ensure trustworthy AI. 
Identifying Personal Data Processing in Generative AI Systems Personal data processing can occur at various stages in the lifecycle of a generative AI system, including dataset creation, training, inference, and user interactions. Developers and providers must verify whether personal data is actually being processed at each of these stages; the use of anonymized or synthetic data does not by itself exclude the processing of personal data. The EDPS cautions against web scraping for data collection, as it may violate data protection principles. Role of Data Protection Officers (DPOs) Article 45 of the Regulation outlines the tasks of DPOs, including advising on data protection obligations, monitoring internal compliance, and acting as a contact point for data subjects and the EDPS. In the context of generative AI, DPOs must understand the system's lifecycle, including data processing mechanisms, decision-making processes, and the impact on individuals' rights. They should also advise on Data Protection Impact Assessments (DPIAs) and ensure transparency and documentation of processing activities. Conducting DPIAs for Generative AI Systems A DPIA is required before processing operations that are likely to involve high risks to individuals' rights and freedoms, particularly when using new technologies like generative AI. The DPIA should assess risks, document mitigation actions, and ensure compliance with the data protection by design and by default principles. Controllers must consult the EDPS if reasonable measures cannot mitigate risks. Lawfulness of Personal Data Processing The processing of personal data in generative AI systems must be based on one of the lawful grounds listed in the Regulation. For special categories of data, an exception under the Regulation must apply. Legal grounds include performing tasks in the public interest or complying with legal obligations. Consent may be used but must meet specific legal requirements. EUIs must ensure that providers comply with data protection principles, especially when using legitimate interest as a legal basis. 
Principle of Data Minimization Data minimization requires that personal data processing is limited to what is necessary for the purposes. This principle applies throughout the lifecycle of the AI system. EUIs must use high-quality, well-curated datasets and implement technical procedures to minimize data use. Data Accuracy Data controllers must implement measures to ensure data accuracy, including verifying dataset content, regular monitoring, and human oversight. Contractual assurances from third-party providers on data accuracy procedures are necessary. Despite efforts, generative AI systems may still produce inaccurate results, necessitating careful data accuracy assessment. Informing Individuals about Data Processing EUIs must provide clear and comprehensive information to individuals about personal data processing in generative AI systems. This includes details about data sources, processing activities, and the logic of automated decisions. Transparency policies help mitigate risks and ensure compliance. Data protection notices should be regularly updated to reflect changes in data processing activities. Automated Decision-Making Generative AI systems may involve automated decision-making, requiring compliance with Article 24 of the Regulation. EUIs must ensure safeguards for individuals, including the right to human intervention, to express their views, and to contest decisions. The use of AI in decision-making must be carefully considered to avoid unfair, unethical, or discriminatory outcomes. Ensuring Fair Processing and Avoiding Bias Bias in generative AI systems can arise from training data, algorithms, or developers. Biases can lead to unfair processing and discrimination, affecting individuals' rights and freedoms. EUIs must ensure datasets are representative and implement accountability mechanisms to monitor and correct biases. Regular testing and validation help identify and mitigate bias. 
Exercising Individual Rights Generative AI systems present challenges for exercising individual rights, such as access, rectification, erasure, and objection. Proper dataset management and traceability support the exercise of these rights. Data minimization techniques can mitigate risks associated with managing individual rights. EUIs must implement measures to ensure the effective exercise of individual rights throughout the AI system lifecycle. Data Security Generative AI systems may pose unique security risks, requiring specific controls and continuous monitoring. EUIs must implement technical and organizational measures to ensure data security, including regular risk assessments and updates. Security measures should address known vulnerabilities and evolving threats. Conclusion The EDPS Orientations provide a framework for EUIs to develop, deploy, and use generative AI systems while ensuring compliance with data protection principles under the Regulation. Adherence to data protection by design and by default, transparency, accountability, and continuous monitoring are essential to safeguard individuals' rights and freedoms. Prokopiev Law Group is well-equipped to ensure your compliance with evolving Web3 regulations, leveraging our extensive global network of partners. We offer expert guidance on issues such as decentralized finance (DeFi) regulations, NFT legal frameworks, smart contract governance, and cross-border crypto-asset reporting standards. Please contact us for comprehensive advice on navigating the complex regulatory landscape of Web3, including matters like the FATF Travel Rule, MiCA in the EU, and on-chain dispute resolution mechanisms. Our expertise spans worldwide jurisdictions, ensuring compliance wherever your operations are based. Please write to us for tailored solutions to your Web3 legal needs. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. 
Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.
- Marketing Guidelines for Crypto Entrepreneurs
Running a web3 venture means complying with a complex set of crypto-marketing rules across jurisdictions, and founders are often not fully prepared for the regulatory challenges. These guidelines will help mitigate common issues and avoid major risks. This document is not legal advice and does not cover all aspects, but it clearly explains the main rules to follow. Our previous article covered one side of crypto marketing. This document offers more detailed material to guide your marketing efforts further. General Recommendations Ensure that all information provided is honest and easy to understand. Avoid using complex language; instead, present information in a straightforward manner. Tailor your messages to suit the knowledge level of your audience. Provide enough information for informed decision-making, and never hide critical details. Explain Risks Clearly Always present a balanced view of risks and potential returns. When discussing returns, include the associated risks. Do not downplay the risks of dealing with crypto-assets, as transparency is crucial. Avoid Exaggeration Refrain from making unrealistic claims. All assertions must be backed by verifiable evidence to maintain credibility and trustworthiness. Include All Fee Information Clearly state all costs, fees, and charges. If there are complex fee structures, provide detailed information to ensure complete transparency. Use Accurate and Current Information Ensure that all facts, figures, and statements are up-to-date and correct. Avoid using misleading graphics or images, and always include a publication date with any piece of information. Use of Terms like "Guaranteed" or "Secure" These terms should only be used if they are accurate and verified. Provide all necessary information to explain these terms clearly to avoid any misunderstandings. Highlight Informational Nature Always clarify that the information provided is for informational purposes. 
Make it clear that users should perform their own research or consult with a financial advisor before making any decisions. Avoiding Financial Terminology When a project involves financial activities, investment implications, or similar elements, and the project is uncertain about the appropriate jurisdiction for licensing or registration, it is crucial to avoid language that could trigger the application of financial or securities laws. To ensure compliance, avoid the following terms and phrases: Investment Advice Avoid terms like "investment advice," "investment strategy," or "investment recommendations." Instead, use "informational service" or "educational content." This ensures the information is understood to be for informational purposes only. Financial Planning Refrain from using terms like "financial planning," "wealth management," or "financial strategy." Use phrases like "general financial education" or "financial literacy content" to avoid the implications of personalized financial planning. Securities Implications Do not use language that suggests the offering of securities, such as "equity," "shares," or "dividends." Instead, describe your offerings as "digital assets" or "utility tokens," if applicable, making it clear that they do not confer ownership or profit-sharing rights. Guaranteed Returns Avoid statements that imply guaranteed returns or risk-free investments, such as "guaranteed profit," "risk-free investment," or "secure returns." Use disclaimers and emphasize risks. Personalized Recommendations Do not provide specific actions for individual users, such as "you should invest in this" or "this is the best option for you." Offer general information applicable to a broad audience, like "explore various options based on general criteria" or "consider different strategies based on risk tolerance." Financial Terminology in Marketing Ensure that marketing materials do not include terms that could be interpreted as financial promises or advice. 
Avoid phrases like "maximize your investment" or "secure your financial future." Stick to neutral language focusing on education and information. Inducements Avoid creating inducements when communicating with users. Inducements are steps that persuade or encourage someone to engage in specific activities. Refrain from using high-pressure sales tactics that force users into making quick decisions, such as, "Hurry, invest now, or miss out forever!" Always consider whether your message significantly pressures or induces clients to act. When in doubt, avoid language that directly invites or strongly persuades someone without appropriate disclaimers. Promoted Materials Clearly label all promotional content with "Sponsored" or "Advertisement" to inform viewers of the endorsement's nature. Influencers Influencers should always disclose paid relationships or conflicts of interest. They must ensure that endorsements are truthful and not misleading. They should disclose that the endorsement is part of a paid partnership. Profit and Value Statements Do not state that a native token will share any profits or upside. Avoid discussing how a native token could increase in value. Do not promise profits. Discourage Speculative Behavior Do not encourage "buy low, sell high" behavior. Do not promise any value growth, even indirectly. Stick to Facts Only share objective, factual information when a user inquires. Use neutral, informative language. Avoid speculative statements or any language that could be interpreted as promoting an investment. Conclusion Adhering to these marketing guidelines will help crypto entrepreneurs navigate the complex regulatory environment and foster trust with their audience. Of course, these guidelines are not exhaustive and cover only a few core aspects, but they can still be helpful as basic rules to adhere to. 
Always remember to consult with legal and financial professionals for comprehensive compliance and to stay updated with the evolving regulations in the crypto space. Prokopiev Law Group has a broad global network of partners, ensuring your compliance worldwide. For more information, write to us, and we'll assist you in staying ahead in the dynamic world of Web3 and crypto regulations. The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.