The Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) (“the Guidelines”) were published on 4 February 2025. They set out the Commission’s interpretation of the practices banned by Article 5 AI Act. The Guidelines are non-binding but form a crucial reference for providers, deployers, and authorities tasked with implementing the AI Act’s rules.
SCOPE, RATIONALE, AND ENFORCEMENT
Scope of the Guidelines
“(1) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (‘the AI Act’) entered into force on 1 August 2024. The AI Act lays down harmonised rules for the placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the Union.” (Section (1) of the Guidelines)
“(5) These Guidelines are non-binding. Any authoritative interpretation of the AI Act may ultimately only be given by the Court of Justice of the European Union (‘CJEU’).” (Section (5) of the Guidelines)
According to section (1) of the Guidelines, the AI Act follows a risk-based approach, classifying AI systems into four risk categories: unacceptable risk, high risk, transparency risk, and minimal/no risk.
Article 5 AI Act deals exclusively with “AI systems posing unacceptable risks to fundamental rights and Union values” (section (2) of the Guidelines).
Additionally, the Guidelines clarify in section (6) that they are “regularly reviewed in light of the experience gained from the practical implementation of Article 5 AI Act and technological and market developments.” Their material scope and addressees are set out in sections (11)–(14) and (15)–(20) respectively.
Rationale for Prohibiting Certain AI Practices
"(8) Article 5 AI Act prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values."(Section (8) of the Guidelines)
In section (9), the Guidelines enumerate eight distinct prohibitions in Article 5(1), grounded in the AI Act’s premise that certain technologies or uses “contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter” (section (8) of the Guidelines).
Echoing Recital 28 AI Act (cited in section (8) of the Guidelines), the rationale is that unlawful AI-based surveillance, manipulative or exploitative systems, and unfair scoring or profiling schemes “are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law.”
These prohibitions also respond to rapid AI developments that can facilitate large-scale data processing, possibly leading to heightened surveillance, discrimination, and erosion of autonomy. Section (4) states that the Guidelines “should serve as practical guidance to assist competent authorities under the AI Act in their enforcement activities, as well as providers and deployers of AI systems in ensuring compliance.”
Enforcement of Article 5 AI Act
"(53) Market surveillance authorities designated by the Member States as well as the European Data Protection Supervisor (as the market surveillance authority for the EU institutions, agencies and bodies) are responsible for the enforcement of the rules in the AI Act for AI systems, including the prohibitions." (Section (53) of the Guidelines)
"(54) …Those authorities can take enforcement actions in relation to the prohibitions on their own initiative or following a complaint, which every affected person or any other natural or legal person having grounds to consider such violations has the right to lodge. …Member States must designate their competent market surveillance authorities by 2 August 2025." (Section (54) of the Guidelines)
As explained in section (53) of the Guidelines, enforcement occurs under the structure laid down by Regulation (EU) 2019/1020, adapted for AI. National market surveillance authorities will supervise compliance and “can take enforcement actions … or following a complaint” (section (54) of the Guidelines).
Where cross-border implications arise, “the authority of the Member State concerned must inform the Commission and the market surveillance authorities of other Member States,” triggering a possible Union safeguard procedure (sections (54)–(55) of the Guidelines).
Penalties for Violations
"(55) Since violations of the prohibitions in Article 5 AI Act interfere the most with the freedoms of others and give rise to the highest fines, their scope should be interpreted narrowly." (Section (57) of the Guidelines, referencing discussion on fines)
"(55) …Providers and deployers engaging in prohibited AI practices may be fined up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the preceding financial year, whichever is higher." (Section (55) of the Guidelines)
Section (55) of the Guidelines notes that Article 99 AI Act sets out “a tiered approach … with the highest fines” reserved for breaches of Article 5. This penalty regime underscores the importance the legislator attaches to compliance with the prohibitions.
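To illustrate the “whichever is higher” rule with purely hypothetical turnover figures: for an undertaking with a total worldwide annual turnover of EUR 600 million in the preceding financial year, 7 % amounts to EUR 42 million, which exceeds EUR 35 000 000, so EUR 42 million is the applicable maximum; for an undertaking with a turnover of EUR 200 million, 7 % is only EUR 14 million, so the fixed ceiling of EUR 35 000 000 remains the maximum.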
Furthermore, according to section (56) of the Guidelines, the “principle of ne bis in idem should be respected” if the same prohibited conduct infringes multiple AI Act provisions.
Applicability Timeline and Legal Effect
"(430) According to Article 113 AI Act, Article 5 AI Act applies as from 2 February 2025. The prohibitions in that provision will apply in principle to all AI systems regardless of whether they were placed on the market or put into service before or after that date."(Section (430) of the Guidelines)
As stated in section (431) of the Guidelines, the enforcement and penalty provisions become fully applicable on 2 August 2025, six months after the prohibitions themselves started to apply.
Section (432) of the Guidelines clarifies that even though certain aspects of the enforcement framework only take effect on 2 August 2025, “the prohibitions themselves have direct effect” as from 2 February 2025. Affected persons may seek relief in national courts against prohibited AI practices even in the interim period.
Interplay with Other Union Legislation
According to sections (42)–(52) of the Guidelines, the prohibitions interact with other EU measures, such as consumer law, data protection, and non-discrimination instruments. In particular, data protection authorities may issue guidance or take enforcement actions for personal data infringements “alongside or in addition to” AI Act breaches.
In short, enforcement is a multi-level process:
Providers must ensure compliance prior to placing AI systems on the market.
Deployers must ensure compliance during use, refraining from prohibited practices.
Market surveillance authorities coordinate oversight and may impose fines and other measures for infringements.
PROHIBITED AI PRACTICES (ARTICLE 5 AI ACT)
Article 5 AI Act: General Prohibition and Rationale
“(8) Article 5 AI Act prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values.” (Section (8) of the Guidelines)
“Recital 28 AI Act clarifies that such practices are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter of Fundamental Rights of the European Union.” (Section (8) of the Guidelines)
According to section (8) of the Guidelines, the legislator identified certain “unacceptable risks” posed by specific AI uses — practices deemed inherently incompatible with fundamental rights, including the rights to privacy, autonomy, non-discrimination, and human dignity.
Prohibitions Listed in Article 5 AI Act
According to section (9) of the Guidelines, the AI Act enumerates eight prohibitions in Article 5(1). The Guidelines emphasize that these prohibitions “apply to the placing on the market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values” (section (8)). Unless a specific exception applies, these AI systems cannot be provided or deployed in the Union. Below is the full text of each prohibition as presented in the Guidelines:
Article 5(1)(a) – Harmful manipulation and deception
“AI systems that deploy subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or with the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.”
Article 5(1)(b) – Harmful exploitation of vulnerabilities
“AI systems that exploit vulnerabilities due to age, disability or a specific social or economic situation, with the objective or with the effect of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”
Article 5(1)(c) – Social scoring
“AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to detrimental or unfavourable treatment in unrelated social contexts and/or unjustified or disproportionate treatment to the gravity of the social behaviour, regardless of whether provided or used by public or private persons.”
Article 5(1)(d) – Individual criminal offence risk assessment and prediction
“AI systems for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics; except to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to that criminal activity.”
Article 5(1)(e) – Untargeted scraping of facial images
“AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.”
Article 5(1)(f) – Emotion recognition
“AI systems that infer emotions of a natural person in the areas of workplace and education institutions, except where the use is intended to be put in place for medical or safety reasons.”
Article 5(1)(g) – Biometric categorisation
“AI systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation; except any labelling or filtering of lawfully acquired biometric datasets, including in the area of law enforcement.”
Article 5(1)(h) – Real-time Remote Biometric Identification (RBI) Systems for Law Enforcement Purposes
“The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in so far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human beings, as well as the search for missing persons; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack; (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years. …”
These eight prohibitions, as clarified in section (9) of the Guidelines, constitute “unacceptable risks” under Article 5 AI Act. Providers and deployers must refrain from making available or using AI systems that meet any of these descriptions, unless the AI Act itself provides for a narrowly interpreted exception (e.g., certain uses of real-time RBI for law enforcement).
Legal Basis and Material Scope
“(10) The AI Act is supported by two legal bases: Article 114 of the Treaty on the Functioning of the European Union (‘TFEU’) (the internal market legal basis) and Article 16 TFEU (the data protection legal basis).” (Section (10) of the Guidelines)
“(11) The practices prohibited by Article 5 AI Act relate to the placing on the market, the putting into service, or the use of specific AI systems.” (Section (11) of the Guidelines)
According to section (10) of the Guidelines, some prohibitions (notably the ban on real-time remote biometric identification for law enforcement, biometric categorisation, and individual risk assessments in law enforcement) derive from Article 16 TFEU, ensuring data protection. Others rely on Article 114 TFEU for the internal market.
Sections (12) through (14) clarify:
“Placing on the market” means the first supply of an AI system in the EU (section (12)).
“Putting into service” refers to the first use in the EU for its intended purpose (section (13)).
“Use” is interpreted “in a broad manner” (section (14)) to include any operation or deployment of an AI system after it is placed on the market/put into service.
Personal Scope: Responsible Actors
“(15) The AI Act distinguishes between different categories of operators in relation to AI systems: providers, deployers, importers, distributors, and product manufacturers.” (Section (15) of the Guidelines)
“(16) According to Article 3(3) AI Act, providers are natural or legal persons … that develop AI systems or have them developed and place them on the Union market, or put them into service under their own name or trademark.” (Section (16) of the Guidelines)
“(17) Deployers are natural or legal persons, public authorities, agencies or other bodies using AI systems under their authority, unless the use is for a personal non-professional activity.” (Section (17) of the Guidelines)
Sections (15)–(20) of the Guidelines explain how these roles “may overlap”, but each actor faces specific obligations for compliance with the prohibitions. In particular:
Providers must ensure their AI system is not prohibited upon placing it on the market or putting it into service.
Deployers must avoid uses that amount to a prohibited practice, even where the provider has excluded such use in the terms of use (section (14) of the Guidelines).
Exclusion from the Scope of the AI Act
“(21) Article 2 AI Act provides for a number of general exclusions from scope which are relevant for a complete understanding of the practical application of the prohibitions listed in Article 5 AI Act.” (Section (21) of the Guidelines)
Sections (22) to (36) of the Guidelines specify exclusions such as national security, military or defence uses (Article 2(3)), judicial or law enforcement cooperation with third countries under certain agreements (Article 2(4)), R&D activities not placed on the market (Article 2(8)), and personal non-professional activities (Article 2(10)).
Interplay with Other Provisions and Union Law
“(37) The AI practices prohibited by Article 5 AI Act should be considered in relation to the AI systems classified as high-risk … In some cases, a high-risk AI may also qualify as a prohibited practice … if all conditions under one or more of the prohibitions … are fulfilled.” (Section (37) of the Guidelines)
“(42) The AI Act is a regulation that applies horizontally across all sectors without prejudice to other Union legislation, in particular on the protection of fundamental rights, consumer protection, employment, the protection of workers, and product safety.” (Section (42) of the Guidelines)
Sections (37)–(52) clarify:
Some systems not meeting the threshold for prohibition might still be “high-risk” under Article 6 AI Act or subject to other EU laws (section (37)).
The AI Act does not override data protection, consumer protection, or non-discrimination law; those rules continue to apply (sections (42)–(45)).
The highest fines apply to breaches of Article 5 (section (55) of the Guidelines).
Enforcement Timeline
“(430) According to Article 113 AI Act, Article 5 AI Act applies as from 2 February 2025. … all providers and deployers engaging in prohibited AI practices may be subject to penalties, including fines up to 7 % of annual worldwide turnover for undertakings.” (Sections (430) and (55) of the Guidelines)
Even though the full market surveillance machinery only becomes applicable on 2 August 2025, the prohibitions in Article 5 apply as of 2 February 2025. Affected individuals and authorities can invoke the Article 5 bans immediately after that date (sections (430)–(432) of the Guidelines).
HARMFUL MANIPULATION AND EXPLOITATION (ARTICLE 5(1)(a) AND (b))
Rationale and Objectives
“(58) The first two prohibitions in Article 5(1)(a) and (b) AI Act aim to safeguard individuals and vulnerable persons from the significantly harmful effects of AI-enabled manipulation and exploitation. Those prohibitions target AI systems that deploy subliminal, purposefully manipulative or deceptive techniques that are significantly harmful and materially influence the behaviour of natural persons or group(s) of persons (Article 5(1)(a) AI Act) or exploit vulnerabilities due to age, disability, or a specific socio-economic situation (Article 5(1)(b) AI Act).”(Section (58) of the Guidelines)
According to section (59) of the Guidelines:
"(59) The underlying rationale of these prohibitions is to protect individual autonomy and well-being from manipulative, deceptive, and exploitative AI practices that can subvert and impair an individual’s autonomy, decision-making, and free choices. … The prohibitions aim to protect the right to human dignity (Article 1 of the Charter), which also constitutes the basis of all fundamental rights and includes individual autonomy as an essential aspect."
In section (59), the Guidelines also stress that Articles 5(1)(a) and (b) AI Act “fully align with the broader objectives of the AI Act to promote trustworthy and human-centric AI systems that are safe, transparent, fair and serve humanity and align with human agency and EU values.”
Article 5(1)(a) AI Act – Harmful Manipulation and Deception
“(60) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(a) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service’, or the ‘use’ of an AI system. (ii) The AI system must deploy subliminal (beyond a person's consciousness), purposefully manipulative or deceptive techniques. (iii) The techniques deployed by the AI system should have the objective or the effect of materially distorting the behaviour of a person or a group of persons. … (iv) The distorted behaviour must cause or be reasonably likely to cause significant harm to that person, another person, or a group of persons.” (Section (60) of the Guidelines)
According to section (63), the prohibition covers three broad technique types:
Subliminal techniques “beyond a person’s consciousness.”
Purposefully manipulative techniques “designed or objectively aim to influence … in a manner that undermines individual autonomy.”
Deceptive techniques “involving presenting false or misleading information with the objective or the effect of deceiving individuals.”
As section (70) of the Guidelines notes, the deception arises from “presenting false or misleading information in ways that aim to or have the effect of deceiving individuals and influencing their behaviour in a manner that undermines their autonomy, decision-making and free choices.”
Significant Harm and Material Distortion
“(77) The concept of ‘material distortion of the behaviour’ of a person or a group of persons is central to Article 5(1)(a) AI Act. It involves the deployment of subliminal, purposefully manipulative or deceptive techniques that are capable of influencing people’s behaviour in a manner that appreciably impairs their ability to make an informed decision … leading them to behave in a way that they would otherwise not have.”(Section (77) of the Guidelines)
“(86) The AI Act addresses various types of harmful effects associated with manipulative and deceptive AI systems … The main types of harms relevant for Article 5(1)(a) AI Act include physical, psychological, financial, and economic harms.”(Section (86) of the Guidelines)
Section (85) summarizes that for the prohibition to apply, the harm must be “significant”, and “there must be a plausible/reasonably likely causal link between the manipulative or deceptive technique … and the potential significant harm.”
Article 5(1)(b) AI Act – Exploitation of Vulnerabilities
“(98) Article 5(1)(b) AI Act prohibits the placing on the market, the putting into service, or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm.”(Section (98) of the Guidelines)
As explained in section (101):
“(101) To fall within the scope of the prohibition in Article 5(1)(b) AI Act, the AI system must exploit vulnerabilities inherent to certain individuals or groups of persons due to their age, disability or socio-economic situations, making them particularly susceptible to manipulative and exploitative practices.”
Sections (104)–(112) detail the specific vulnerabilities tied to:
Age (children, older persons),
Disability (cognitive, physical, mental impairments),
Specific socio-economic situation (e.g., extreme poverty, socio-economically disadvantaged, migrants).
Section (114) clarifies that the harm must again be “significant,” and section (115) states:
"(115) For vulnerable groups — children, older persons, persons with disabilities, and socio-economically disadvantaged populations — these harms may be particularly severe and multifaceted due to their heightened susceptibility to exploitation."
Interplay Between Article 5(1)(a) and (b)
“(122) The interplay between the prohibitions in Article 5(1)(a) and (b) AI Act requires the delineation of the specific contexts that each provision covers to ensure that they are applied in a complementary manner.”(Section (122) of the Guidelines)
Section (123) of the Guidelines describes that:
Article 5(1)(a) “focuses on the techniques” (subliminal, manipulative, deceptive).
Article 5(1)(b) “focuses on the exploitation of specific vulnerable individuals or groups,” requiring vulnerabilities related to age, disability, or socio-economic situations.
The Guidelines highlight that “manipulative or deceptive techniques that specifically target the vulnerabilities of persons due to age, disability, or socio-economic situation” may overlap but fall more directly under Article 5(1)(b) if aimed at those recognized vulnerable groups (section (125)).
Out of Scope
“(127) Distinguishing manipulation from persuasion is crucial to delineate the scope of the prohibition in Article 5(1)(a) AI Act, which does not apply to lawful persuasion practices.”(Section (127) of the Guidelines)
Sections (128)–(133) detail “lawful persuasion,” standard advertising practices, and “medical treatment under certain conditions” that do not amount to harmful manipulation or exploitation.
For Article 5(1)(b), section (134) clarifies that “exploitative AI applications that are not reasonably likely to cause significant harms are outside the scope, even if they use manipulative or exploitative elements.”
SOCIAL SCORING (ARTICLE 5(1)(c))
Rationale and Objectives
“(146) While AI-enabled scoring can bring benefits to steer good behaviour, improve safety, efficiency or quality of services, there are certain ‘social scoring’ practices that treat or harm people unfairly and amount to social control and surveillance. The prohibition in Article 5(1)(c) AI Act targets such unacceptable AI-enabled ‘social scoring’ practices that assess or classify individuals or groups based on their social behaviour or personal characteristics and lead to detrimental or unfavourable treatment, in particular where the data comes from multiple unrelated social contexts or the treatment is disproportionate to the gravity of the social behaviour. The ‘social scoring’ prohibition has a broad scope of application in both public and private contexts and is not limited to a specific sector or field.”(Section (146) of the Guidelines)
According to section (147) of the Guidelines, social scoring systems often “lead to discriminatory and unfair outcomes for certain individuals and groups, including their exclusion from society, as well as social control and surveillance practices that are incompatible with Union values.”
Main Concepts and Components of the ‘Social Scoring’ Prohibition
“(149) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(c) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, the ‘putting into service’ or the ‘use’ of an AI system; (ii) The AI system must be intended or used for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics; (iii) The social score created with the assistance of the AI system must lead or be capable of leading to the detrimental or unfavourable treatment of persons or groups in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) treatment that is unjustified or disproportionate to the gravity of the social behaviour.” (Section (149) of the Guidelines)
‘Social Scoring’: Evaluation or Classification Over Time
“(151) The second condition for the prohibition in Article 5(1)(c) AI Act to apply is that the AI system is intended or used for the evaluation or classification of natural persons or groups of persons and assigns them scores based on their social behaviour or their known, inferred or predicted personal and personality characteristics. The score produced by the system may take various forms, such as a mathematical number or ranking.”(Sections (151)–(152) of the Guidelines)
Furthermore, section (155) clarifies that this must happen “over a certain period of time.” If data or behaviour from multiple contexts are aggregated without a clear, valid link to the legitimate purpose of the scoring, “the AI system is likely to fall under the prohibition.”
Detrimental or Unfavourable Treatment in Unrelated Social Contexts or Disproportionate Treatment
“(160) For the prohibition in Article 5(1)(c) AI Act to apply, the social score created by or with the assistance of an AI system must lead to a detrimental or unfavourable treatment for the evaluated person or group of persons in one or more of the following scenarios: (a) in social contexts unrelated to those in which the data was originally generated or collected; and/or (b) unjustified or disproportionate to the gravity of the social behaviour.” (Section (160) of the Guidelines)
Section (164) further explains “detrimental or unfavourable treatment” can mean denial of services, blacklisting, withdrawal of benefits, or other negative outcomes. It also covers cases where the social score “leads to broader exclusion or indirect harm.”
Out of Scope
“(173) The prohibition in Article 5(1)(c) AI Act only applies to the scoring of natural persons or groups of persons, thus excluding in principle legal entities where the evaluation is not based on personal or personality characteristics or social behaviour of individuals. … If the AI system evaluates or classifies a group of natural persons with direct impact on those persons, the practice may still fall within Article 5(1)(c) if all other conditions are fulfilled.”(Section (173) of the Guidelines)
Moreover, sections (175)–(176) clarify that lawful scoring practices for “specific legitimate evaluation purposes”, such as credit-scoring or fraud prevention, generally do not fall under the prohibition when done in compliance with Union and national law “ensuring that the detrimental or unfavourable treatment is justified and proportionate.”
Interplay with Other Union Legal Acts
“(178) Providers and deployers should carefully assess whether other applicable Union and national legislation applies to any particular AI scoring system used in their activities, in particular if there is more specific legislation that strictly regulates the types of data that can be used as relevant and necessary for specific evaluation purposes and if there are more specific rules and procedures to ensure justified and fair treatment.”(Section (178) of the Guidelines)
Section (180) highlights that social scoring must also comply with EU data protection law, consumer protection rules, and “Union non-discrimination law” where relevant.
INDIVIDUAL CRIME RISK PREDICTION (ARTICLE 5(1)(d))
Rationale and Objectives
“(184) Article 5(1)(d) AI Act prohibits AI systems assessing or predicting the risk of a natural person committing a criminal offence based solely on profiling or assessing personality traits and characteristics.”(Section (184) of the Guidelines)
According to section (185), the provision “indicates, in its last phrase, that the prohibition does not apply if the AI system is used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to that activity.”
As clarified in section (186), the intention is to ensure “natural persons should be judged on their actual behaviour and not on AI-predicted behaviour based solely on their profiling, personality traits or characteristics.”
Main Concepts and Components of the Prohibition
“(187) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(d) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’ or the ‘use’ of an AI system; (ii) The AI system must make risk assessments that assess or predict the risk of a natural person committing a criminal offence; (iii) The risk assessment or the prediction must be based solely on either, or both, of the following: (a) the profiling of a natural person, (b) assessing a natural person’s personality traits and characteristics.”(Section (187) of the Guidelines)
Assessing or Predicting the Risk of a Person Committing a Crime
“(189) Crime prediction AI systems identify patterns within historical data, associating indicators with the likelihood of a crime occurring, and then generate risk scores as predictive outputs. … However, such use of historical data may perpetuate or reinforce biases and may result in crucial individual circumstances being overlooked.”(Section (189) of the Guidelines)
Section (191) notes that although “crime prediction AI systems bring opportunities … any forward-looking risk assessment or crime forecasting is caught by Article 5(1)(d) if it meets the other conditions, particularly if it is based solely on profiling or personality traits.”
‘Solely’ Based on Profiling or Personality Traits
“(193) The third condition for the prohibition in Article 5(1)(d) AI Act to apply is that the risk assessment to assess or predict the risk of a natural person committing a crime must be based solely on (a) the profiling of the person, or (b) assessing their personality traits and characteristics.”(Section (193) of the Guidelines)
As explained in section (200), “Where the system is based on additional, objective and verifiable facts directly linked to criminal activity, the prohibition does not apply (Article 5(1)(d) last phrase).”
Out of Scope
Exception for Supporting Human Assessment
“(203) Article 5(1)(d) AI Act provides, in its last phrase, that the prohibition does not apply to AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity.”(Section (203) of the Guidelines)
In section (205), the Guidelines recall the principle that “no adverse legal decision can be based solely on such AI output,” ensuring human oversight remains central.
Location-Based or Geospatial Predictive Policing
“(212) Location-based or geospatial predictive or place-based crime predictions … fall outside the scope of the prohibition, provided the AI system does not also profile an individual.”(Section (212) of the Guidelines)
If the AI system eventually singles out specific natural persons as potential offenders “solely based on profiling or personality traits,” it can fall under Article 5(1)(d).
Private Sector or Administrative Context
“(210) Where a private entity profiles customers for its ordinary business operations and safety, with the aim of protecting its own private interests, the use of AI systems to assess criminal risks is not deemed to be covered by the prohibition of Article 5(1)(d) AI Act unless the private operator is entrusted by law enforcement or subject to specific legal obligations for anti-money laundering or terrorism financing.”(Section (210) of the Guidelines)
Similarly, administrative offences (section (217)) do not fall within the prohibition if they are not classified as criminal under Union or national law.
Interplay with Other Union Legal Acts
“(219) The interplay of the prohibition in Article 5(1)(d) AI Act with the LED and GDPR is relevant when assessing the lawfulness of personal data processing … Article 11(3) LED prohibits profiling that results in direct or indirect discrimination.”(Section (219) of the Guidelines)
Section (220) notes the connection to Directive (EU) 2016/343 on the presumption of innocence, emphasizing that “the AI Act must not undermine procedural safeguards or the fundamental right to a fair trial.”
UNTARGETED SCRAPING OF FACIAL IMAGES (ARTICLE 5(1)(e))
Rationale and Objectives
“(222) Article 5(1)(e) AI Act prohibits the placing on the market, putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the Internet or CCTV footage.”(Section (222) of the Guidelines)
According to section (223) of the Guidelines:
“(223) The untargeted scraping of facial images from the internet and from CCTV footage seriously interferes with individuals’ rights to privacy and data protection and denies those individuals the right to remain anonymous. … Such scraping can evoke a feeling of mass surveillance and lead to gross violations of fundamental rights, including the right to privacy.”
As clarified in section (224), this prohibition applies specifically to AI systems whose purpose is to “create or expand facial recognition databases” through the indiscriminate or “vacuum cleaner” approach of harvesting facial images.
Main Concepts and Components of the Prohibition
“(225) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(e) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’ or the ‘use’ of an AI system; (ii) for the purpose of creating or expanding facial recognition databases; (iii) the means to populate the database are through AI tools for untargeted scraping; and (iv) the sources of the images are either from the internet or CCTV footage.”(Section (225) of the Guidelines)
Facial Recognition Databases
“(226) The prohibition in Article 5(1)(e) AI Act covers AI systems used to create or expand facial recognition databases. ‘Database’ … is any collection of data or information specially organized for search and retrieval by a computer. A facial recognition database is capable of matching a human face from a digital image or video frame against a database of faces … .”(Section (226) of the Guidelines)
Untargeted Scraping of Facial Images
“(227) ‘Scraping’ typically refers to using web crawlers, bots, or other means to extract data or content from different sources, including CCTV, websites or social media, automatically. … ‘Untargeted’ means that the scraping operates without a specific focus on a given individual or group of individuals, effectively indiscriminately harvesting data or content.”(Section (227) of the Guidelines)
“(230) If a scraping tool is instructed to collect images or video containing human faces only of specific individuals or a pre-defined group of persons, then the scraping becomes targeted … the scraping of the Internet or CCTV footage for the creation of a database step-by-step … should fall within the prohibition if the end-result is functionally the same as pursuing untargeted scraping from the outset.”(Section (230) of the Guidelines)
From the Internet and CCTV Footage
“(231) For the prohibition in Article 5(1)(e) AI Act to apply, the source of the facial images may either be the Internet or CCTV footage. Regarding the internet, the fact that a person has published facial images of themselves on a social media platform does not mean that that person has given his or her consent for those images to be included in a facial recognition database.”(Section (231) of the Guidelines)
In section (232), the Guidelines exemplify real-life scenarios, including the use of automated crawlers to gather online photos containing human faces, or the use of software to systematically extract faces from CCTV feeds for a large database.
Out of Scope
“(234) The prohibition in Article 5(1)(e) AI Act does not apply to the untargeted scraping of biometric data other than facial images (such as voice samples). The prohibition does also not apply where no AI systems are involved in the scraping. Facial image databases that are not used for the recognition of persons are also out of scope, such as facial image databases used for AI model training or testing purposes, where the persons are not identified.”(Section (234) of the Guidelines)
As clarified in sections (235)–(236), the mere fact of collecting large amounts of images for other legitimate purposes does not automatically trigger the ban, provided the system “is not intended for, nor used to create or expand a facial recognition database.”
Interplay with Other Union Legal Acts
“(238) In relation to Union data protection law, the untargeted scraping of the internet or CCTV material to build-up or expand face recognition databases, i.e. the processing of personal data (collection of data and use of databases) would be unlawful and no legal basis under the GDPR, EUDPR and the LED could be relied upon.”(Section (238) of the Guidelines)
Section (238) further explains that the AI Act complements these data protection rules by banning such scraping at the level of placing on the market, putting into service, or use of the AI systems themselves.
EMOTION RECOGNITION (ARTICLE 5(1)(f))
Rationale and Objectives
“(239) Article 5(1)(f) AI Act prohibits AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the system is intended for medical or safety reasons.”(Section (239) of the Guidelines)
According to section (240), the ban reflects concerns regarding the “intrusive nature of emotion recognition technology, the uncertainty over its scientific basis, and its potential to undermine privacy, dignity, and individual autonomy.” As stated in section (241) of the Guidelines,
“(241) Emotion recognition can be used in multiple areas and domains … but it is also quickly evolving and comprehends different technologies, raising serious concerns about reliability, bias, and potential harm to human dignity and fundamental rights.”
Main Concepts and Components of the Prohibition
“(242) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(f) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’, or the ‘use’ of an AI system; (ii) AI system to infer emotions; (iii) in the area of the workplace or education and training institutions; and (iv) excluded from the prohibition are AI systems intended for medical or safety reasons.”(Section (242) of the Guidelines)
AI Systems to Infer Emotions
“(244) Inferring generally encompasses identifying as a prerequisite, so that the prohibition should be understood as including both AI systems identifying or inferring emotions or intentions … based on their biometric data.”(Section (244) of the Guidelines)
Sections (246)–(247) confirm that “emotion recognition” means “identifying or inferring emotional states from biometric data such as facial expressions, voice, or behavioural signals.”
Limitation to Workplace and Education
“(253) The prohibition in Article 5(1)(f) AI Act is limited to emotion recognition systems in the ‘areas of workplace and educational institutions’. … This aims to address the power imbalance in those contexts.”(Section (253) of the Guidelines)
According to section (254), “workplace” includes all settings where professional or self-employment activities occur (offices, factories, remote or mobile sites). As stated in section (255), “education institutions” include all levels of formal education, vocational training, and educational activities generally sanctioned by national authorities.
Exception for Medical or Safety Reasons
“(256) The prohibition in Article 5(1)(f) AI Act contains an explicit exception for emotion recognition systems used in the area of the workplace and education institutions for medical or safety reasons, such as systems for therapeutic use.”(Section (256) of the Guidelines)
Section (258) clarifies the narrow scope of that exception, stating that it only covers use “strictly necessary” to achieve a medical or safety objective. Further, in section (261), the Guidelines note that “detecting a person’s fatigue or pain in contexts like preventing accidents is considered distinct from ‘inferring emotions’ and may be allowed.”
More Favourable Member State Law
“(264) Article 2(11) AI Act provides that the Union or Member States may keep or introduce ‘laws, regulations or administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of AI systems by employers’.”(Section (264) of the Guidelines)
Such stricter national laws or collective agreements could forbid emotion recognition entirely, even for medical or safety reasons in the workplace.
Out of Scope
“(266) Emotion recognition systems used in all other domains other than in the areas of the workplace and education institutions do not fall under the prohibition in Article 5(1)(f) AI Act. Such systems are, however, considered high-risk AI systems according to Annex III (1)(c).”(Section (266) of the Guidelines)
Additionally, per section (265), uses that do not involve biometric data (e.g. text-based sentiment analysis) or do not seek to infer emotions are not caught by the prohibition. The Guidelines note these systems may still be subject to other AI Act requirements or other legislation if potential manipulative or exploitative effects arise.
BIOMETRIC CATEGORISATION FOR SENSITIVE ATTRIBUTES (ARTICLE 5(1)(g))
Rationale and Objectives
“(272) A wide variety of information, including ‘sensitive’ information, may be extracted, deduced or inferred from biometric information, even without the knowledge of the persons concerned, to categorise those persons. This may lead to unfair and discriminatory treatment … and amounts to social control and surveillance that are incompatible with Union values. The prohibition of ‘biometric categorisation’ in Article 5(1)(g) AI Act aims to protect these fundamental rights.”(Section (272) of the Guidelines)
According to section (271), “Article 5(1)(g) AI Act prohibits biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex-life or sexual orientation.” The aim is to prevent “unfair, discriminatory and privacy-intrusive AI uses that rely on highly sensitive characteristics.”
Main Concepts and Components of the Prohibition
“(273) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(g) AI Act to apply: (i) The practice must constitute the ‘placing on the market’, ‘the putting into service for this specific purpose’, or the ‘use’ of an AI system; (ii) The system must be a biometric categorisation system; (iii) individual persons must be categorised; (iv) based on their biometric data; (v) to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.”(Section (273) of the Guidelines)
Biometric Categorisation System
“(276) ‘Biometric categorisation’ is typically the process of establishing whether the biometric data of an individual belongs to a group with some predefined characteristic. It is not about identifying an individual or verifying their identity, but about assigning an individual to a certain category.”(Section (276) of the Guidelines)
As section (277) notes, this includes the automated assignment of individuals to categories such as “race or ethnicity,” “religious beliefs,” or “political stance,” purely on the basis of features derived from biometric data.
Sensitive Characteristics: Race, Political Opinions, Religious Beliefs, etc.
“(283) Article 5(1)(g) AI Act prohibits only biometric categorisation systems which have as their objective to deduce or infer a limited number of sensitive characteristics: race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.”(Section (283) of the Guidelines)
The Guidelines underscore (section (283)) that “the use of any ‘proxy’ or correlation-based approach that aims to deduce or infer these protected attributes from biometric data is likewise covered.”
Out of Scope
“(284) The prohibition in Article 5(1)(g) AI Act does not cover AI systems engaged in the labelling or filtering of lawfully acquired biometric datasets … if they do not entail the categorisation of actual persons to deduce or infer their sensitive attributes, but merely aim at ensuring balanced and representative data sets for training or testing.”(Section (284) of the Guidelines)
Section (285) clarifies that labelling or filtering biometric data to reduce bias or ensure representativeness is specifically exempted:
“(285) The labelling or filtering of biometric datasets may be done by biometric categorisation systems precisely to guarantee that the data equally represent all demographic groups, and not over-represent one specific group.”
Thus, mere dataset management or quality-control uses of biometric categorisation fall outside the prohibition, provided they do not aim to classify real individuals by their sensitive traits.
Interplay with Other Union Law
“(287) AI systems intended to be used for biometric categorisation according to sensitive attributes or characteristics protected under Article 9(1) GDPR on the basis of biometric data, in so far as these are not prohibited under this Regulation, are classified as high-risk under the AI Act (Recital 54 and Annex III, point (1)(b) AI Act).”(Section (287) of the Guidelines)
Section (289) notes that the AI Act’s ban under Article 5(1)(g) “further restricts the possibilities for a lawful personal data processing under Union data protection law, such as the GDPR … by excluding such practices at the earlier stage of placing on the market and use.”
REAL-TIME REMOTE BIOMETRIC IDENTIFICATION (RBI) FOR LAW ENFORCEMENT (ARTICLE 5(1)(h))
Rationale and Objectives
“(289) Article 5(1)(h) AI Act prohibits the use of real-time RBI systems in publicly accessible spaces for law enforcement purposes, subject to limited exceptions exhaustively set out in the AI Act.”(Section (289) of the Guidelines)
According to section (293):
“(293) Recital 32 AI Act acknowledges the intrusive nature of real-time RBI systems in publicly accessible spaces for law enforcement purposes … that can affect the private life of a large part of the population, evoke a feeling of constant surveillance, and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights.”
The Guidelines (section (295)) note that, unlike other prohibitions in Article 5(1) AI Act, the ban here concerns “the use” of real-time RBI (rather than its placing on the market or putting into service).
Main Concepts and Components of the Prohibition
“(295) Several cumulative conditions must be fulfilled for the prohibition in Article 5(1)(h) AI Act to apply: (i) The AI system must be a RBI system; (ii) The activity consists of the ‘use’ of that system; (iii) in ‘real-time’; (iv) in publicly accessible spaces, and (v) for law enforcement purposes.”(Section (295) of the Guidelines)
Remote Biometric Identification (RBI)
“(298) According to Article 3(41) AI Act, a RBI system is ‘an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database.’”(Section (298) of the Guidelines)
Sections (299)–(303) clarify that “biometric identification” differs from verification (where a person’s identity claim is checked), focusing on “comparing captured biometric data with data in a reference database.”
Real-time
“(310) Real-time means that the system captures and further processes biometric data ‘instantaneously, near-instantaneously or in any event without any significant delay’.”(Section (310) of the Guidelines)
Section (311) points out that “real-time” also covers a short buffer of processing, ensuring no circumvention by artificially adding minimal delays.
Publicly Accessible Spaces
“(313) Article 3(44) AI Act defines publicly accessible spaces as ‘any publicly or privately owned physical space accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions.’”(Section (313) of the Guidelines)
Sections (315)–(316) explain that “spaces such as stadiums, train stations, malls, or streets” are included, while purely private or restricted-access areas are excluded.
For Law Enforcement Purposes
“(320) Law enforcement is defined in Article 3(46) AI Act as the ‘activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.’”(Section (320) of the Guidelines)
Exceptions to the Prohibition
“(326) The AI Act provides three exceptions to the general prohibition on the use of real-time RBI in publicly accessible spaces for law enforcement purposes. Article 5(1)(h)(i) to (iii) AI Act exhaustively lists three objectives for which real-time RBI may be authorised … subject to strict conditions.”(Section (326) of the Guidelines)
Those objectives, detailed in sections (329)–(356), are:
Targeted search for victims of abduction, trafficking, or sexual exploitation, or missing persons.
Prevention of a specific, substantial, and imminent threat to life or safety, or a genuine and present or foreseeable threat of a terrorist attack.
Localisation or identification of persons suspected of offences listed in Annex II AI Act that are punishable in the Member State concerned by a custodial sentence or detention order with a maximum of at least four years.
As clarified in section (360), “any such use must be proportionate, strictly necessary, and limited in time, geography, and the specific targeted individual.”
Authorisation, Safeguards, and Conditions (Article 5(2)–(7))
“(379) Article 5(3) AI Act requires prior authorisation of each individual use of a real-time RBI system and prohibits automated decision-making based solely on its output … The deployer must also conduct a Fundamental Rights Impact Assessment (FRIA) in accordance with Article 27 AI Act.”(Section (379) of the Guidelines)
Section (381) underscores that the request for authorisation must show “objective evidence or clear indications” of necessity and proportionality, and that “no less intrusive measure is equally effective” for achieving the legitimate objective.
Out of Scope
“(426) All other uses of RBI systems that are not covered by the prohibition of Article 5(1)(h) AI Act fall within the category of high-risk AI systems … provided they fall within the scope of the AI Act.”(Section (426) of the Guidelines)
Sections (427)–(428) note that “retrospective (post) RBI systems” do not fall under the real-time ban but are still classified as high-risk and subject to additional obligations (Article 26(10) AI Act). Private sector uses in non-law enforcement contexts (e.g., stadium access control) likewise do not trigger this specific prohibition, though they must still comply with other AI Act requirements and Union data protection law.
* * *
Prokopiev Law Group stands ready to meet your AI and web3 compliance needs worldwide—whether you are exploring AI Act compliance, crypto licensing, web3 regulatory frameworks, NFT regulation, or DeFi and AML/KYC requirements. Our broad network spans the EU, US, UK, Switzerland, Singapore, Malta, Hong Kong, Australia, and Dubai, ensuring every local standard is met promptly and precisely. Write to us now for further details and let our proven legal strategies keep your projects fully compliant.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.