
The New AI Landscape: A Comprehensive Guide to the European AI Act

In an era where artificial intelligence (AI) increasingly interweaves with our daily lives and the global economy, adequate legislative oversight has become paramount. As we venture deeper into the AI age, there is a growing call for a regulatory structure that not only fosters technological advancement but also protects individuals and society at large. Enter the AI Act, a new legal framework proposed by the European Commission to chart the course for AI development and usage in the European Union.


Definition and Scope


One of the Act's crowning achievements is the codification of a uniform definition of AI systems. This consensus was the product of thoughtful deliberations among the various political factions within the European Parliament. The AI Act characterizes an AI system as a "machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments."


This definition, though narrower than the one in the preliminary draft, is essentially in line with the definition provided by the Organisation for Economic Co-operation and Development (OECD). This harmonization forms the bedrock of the Act's material scope and, consequently, determines which entities are affected by its regulatory mechanisms.


The Four-Tier Risk-Based Model


The AI Act embarks on an innovative path by adopting a risk-based regulatory model with four tiers: low-risk, limited-risk, high-risk, and prohibited AI systems. Classification turns on the risk an AI system poses to its users and to potential third parties. Simply put, the greater the risk an AI system presents, the stricter the regulations it will be subject to.
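
As a rough illustration of how this tiering works in practice, the sketch below models the four tiers as a simple Python enumeration mapped to the broad regulatory consequence each tier carries. This is shorthand for discussion, not the Act's text; the tier names and consequence descriptions are our own paraphrase.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from least to most regulated."""
    LOW = "low-risk"
    LIMITED = "limited-risk"
    HIGH = "high-risk"
    PROHIBITED = "prohibited"

# Paraphrased regulatory consequence of each tier (illustrative only).
TIER_CONSEQUENCE = {
    RiskTier.LOW: "no new obligations; voluntary codes of conduct",
    RiskTier.LIMITED: "transparency duties, e.g. disclosing AI interaction",
    RiskTier.HIGH: "conformity assessment, risk management, documentation",
    RiskTier.PROHIBITED: "deployment banned outright",
}

for tier in RiskTier:
    print(f"{tier.value}: {TIER_CONSEQUENCE[tier]}")
```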

Prohibited AI systems

The AI Act delineates clear boundaries on deploying certain AI systems deemed potentially harmful or abusive (an illustrative screening sketch follows the list). The prohibited practices are as follows:

  1. Subliminal Techniques: The Act prohibits AI systems that use subliminal or manipulative techniques which could materially distort a person's behavior, compelling them to make uninformed decisions to their detriment. However, it allows an exception for AI systems used for approved therapeutic purposes, provided there is explicit informed consent from the individuals or, if necessary, their legal guardians.

  2. Exploitation of Vulnerabilities: The Act also bans AI systems designed to exploit vulnerabilities related to an individual's personality traits, social or economic situation, age, or physical or mental abilities. This prohibition targets AI systems that aim to materially distort behavior that could cause significant harm to the individual or group.

  3. Biometric Categorisation Systems: These systems categorize individuals on the basis of sensitive or protected attributes, whether directly observed, inferred, or predicted. Such AI systems are banned, except for approved therapeutic use, and only with the specific informed consent of the individuals involved or their legal guardians.

  4. Social Scoring: The Act disallows the deployment of AI systems by public authorities for social scoring purposes, that is, evaluating individuals' trustworthiness based on their social behavior or predicted personality traits. The prohibition extends to any application leading to detrimental or unfavorable treatment of individuals in contexts unrelated to the data's original context or if the treatment is unjustified or disproportionate to their social behavior.

  5. Risk Assessment for Offending: AI systems used for assessing the risk of a person committing or recommitting a crime, or predicting the occurrence of a potential criminal or administrative offense, are prohibited. These systems might make assessments based on profiling or evaluating personality traits, including the person's location or past criminal behavior.

  6. Facial Recognition Databases: The Act also disallows AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

  7. Inferring Emotions: Lastly, using AI systems to infer an individual's emotions in law enforcement, border management, workplaces, and educational institutions is prohibited.
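
To show how a compliance team might operationalize this list for internal screening, here is a minimal sketch that checks a proposed use case against the seven prohibited categories. The flag names and the UseCase structure are hypothetical shorthand invented for this example; they are not terms from the Act.

```python
from dataclasses import dataclass

# Hypothetical internal flags mirroring the seven prohibited practices above.
PROHIBITED_FLAGS = {
    "subliminal_manipulation",
    "exploits_vulnerabilities",
    "biometric_categorisation_sensitive",
    "social_scoring_by_public_authority",
    "offence_risk_prediction",
    "untargeted_facial_image_scraping",
    "emotion_inference_restricted_context",
}

@dataclass
class UseCase:
    name: str
    flags: set[str]  # screening flags assigned during intake review

def screen_prohibited(use_case: UseCase) -> list[str]:
    """Return the prohibited-practice flags a use case triggers (empty = pass)."""
    return sorted(use_case.flags & PROHIBITED_FLAGS)

# Example: a workplace tool inferring employee emotions would be a hard stop.
hits = screen_prohibited(UseCase("mood-tracking dashboard",
                                 {"emotion_inference_restricted_context"}))
if hits:
    print("Hard stop - prohibited practices:", hits)
```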

High-risk AI systems

Under the AI Act, a number of AI systems are considered high-risk and subject to special regulation (a triage sketch follows the list). These are:

  1. Biometric Systems: Systems used for biometric identification and inference of personal characteristics based on biometric data are regulated. However, systems used solely to confirm a person's identity are exempted.

  2. Critical Infrastructure Management: AI systems designed as safety components for the management and operation of critical infrastructure, such as traffic control or utilities, fall under this category.

  3. Education and Vocational Training: Systems that influence access to education, assess students' performance, determine educational levels, or monitor students' behavior during tests are regulated.

  4. Employment and Workers Management: AI systems used in recruitment or selection processes are covered, as are those influencing decisions relating to work contracts, allocating tasks based on personal traits, or monitoring performance and behavior.

  5. Access to Public and Essential Private Services: Systems used by authorities to assess eligibility for public assistance benefits and services or creditworthiness, systems influencing health and life insurance decisions, and systems used to evaluate and classify emergency calls or dispatch emergency services fall into this category.

  6. Law Enforcement: Certain AI systems used by law enforcement, including those used for evaluating the reliability of evidence, profiling, and crime analytics, are subject to these regulations.

  7. Migration, Asylum, and Border Control: AI systems used for assessing risks, verifying document authenticity, assessing the veracity of evidence related to asylum applications, and monitoring or predicting migration and border crossing trends are included.

  8. Administration of Justice and Democratic Processes: AI systems used to assist judicial authorities in researching and interpreting the law, systems that may influence the outcome of elections, and systems used by social media platforms for content recommendation are also regulated.
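
Extending the screening sketch above (and reusing its PROHIBITED_FLAGS set), a simple triage step could map a use case onto these eight high-risk categories and return the resulting tier. The category keys are again our own hypothetical shorthand, not the Act's wording.

```python
# Hypothetical shorthand keys for the eight high-risk categories above.
HIGH_RISK_CATEGORIES = {
    "biometric_identification",
    "critical_infrastructure_safety",
    "education_access_or_assessment",
    "employment_and_worker_management",
    "essential_services_eligibility",
    "law_enforcement",
    "migration_asylum_border",
    "justice_and_democratic_processes",
}

def triage_tier(flags: set[str]) -> str:
    """Illustrative tiering: prohibited trumps high-risk trumps the rest."""
    if flags & PROHIBITED_FLAGS:  # defined in the earlier screening sketch
        return "prohibited"
    if flags & HIGH_RISK_CATEGORIES:
        return "high-risk"
    return "limited- or low-risk (assess transparency duties)"

print(triage_tier({"employment_and_worker_management"}))  # -> high-risk
```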

Generative AI Systems and the Emergence of "Foundation Models"


With the rise of generative AI systems such as Midjourney, ChatGPT, and Bard, the AI Act has taken on the task of effectively bringing such systems within the regulatory framework. The models underlying these systems are captured under the new category of "foundation models". The mere classification of a model as a foundation model entails certain restrictions in addition to the risk-based classifications.


For providers of such models in particular, significant obligations are in the offing, including transparency and disclosure requirements. This undoubtedly signifies a turning point in the trajectory of generative AI governance.


The Road to AI Act Enforcement


AI Act Approval Process


The AI Act, proposed by the European Commission, must traverse the EU's ordinary legislative procedure, which requires approval from both the Council of the European Union (the Council of Ministers) and the European Parliament. Once each institution has adopted its position, the Act undergoes final negotiations - the "trilogue" - between the Commission, the Council, and the Parliament, after which both co-legislators formally adopt the agreed text.


Expected Timeframe of Enforcement


Should the trilogue conclude with an agreement within the year, the Act could formally enter into force by mid-2024. It is important to note that the AI Act, being an EU regulation, will be directly applicable in all member states without national implementing legislation; however, most of its obligations will only begin to apply after a 24-month transitional period.

This transitional period presents an opportunity for all parties involved to gear up for the AI Act's imminent impact, laying the groundwork for the shift in how we regulate AI systems.


Preparing Businesses for the AI Act


What Can Businesses Do Today?


In anticipation of the AI Act's enactment, proactive steps can be taken to mitigate potential pitfalls. Companies where AI systems form a crucial component of operations, and those planning substantial investment in AI systems, should start adjusting their strategies now. A thorough assessment of AI systems currently in use or planned for future deployment is essential to ensure that no high-risk or prohibited AI system is unknowingly employed; a minimal inventory sketch follows below.
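
As one way to structure that assessment, the sketch below, under the same illustrative assumptions as the earlier snippets, keeps a register of AI systems in use or planned and filters it for anything landing in the high-risk or prohibited tiers. All field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of an internal AI-system register (illustrative fields only)."""
    name: str
    vendor: str
    purpose: str
    tier: str          # e.g., the result of a triage like the one sketched above
    in_production: bool = False
    notes: str = ""

def needs_attention(register: list[AISystemRecord]) -> list[AISystemRecord]:
    """Filter the register for systems requiring immediate compliance work."""
    return [r for r in register if r.tier in {"high-risk", "prohibited"}]

register = [
    AISystemRecord("CV screener", "Acme HR", "candidate ranking",
                   tier="high-risk", in_production=True),
    AISystemRecord("support chatbot", "in-house", "FAQ answers",
                   tier="limited-risk", in_production=True),
]
for record in needs_attention(register):
    print(f"Review: {record.name} ({record.tier})")
```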


Understanding Legal Implications: Intellectual Property, Confidentiality, and Data Protection


Three key legal facets to consider in the context of AI usage are intellectual property rights, confidentiality, and data protection:

  • Intellectual Property: An AI system often requires large volumes of training data, usually sourced from public repositories. Companies must ensure the legality of using such data. Additionally, the output of generative AI systems may not qualify for intellectual property protection, a fact worth considering depending on the intended use.

  • Confidentiality: When using AI systems that require user input, the security of that data should be safeguarded through technical and organizational measures or by restricting access to confidential information and trade secrets.

  • Data Protection: When AI systems process personal data, GDPR requirements and the laws of individual member states must be observed. This involves correctly allocating the GDPR roles (controller or processor) among the parties involved, establishing an appropriate legal basis for every processing operation, implementing technical and organizational measures, and conducting data protection impact assessments where required.

Evaluating Terms of Use


Companies should scrutinize the underlying terms of use before deploying AI systems. This includes clarifying performance specifications, liability clauses, data protection documentation, and confidentiality obligations. Businesses must be particularly cautious when using public AI systems, where no negotiated contract with the AI provider may be in place, creating legal risks and potential conflicts with the company's internal guidelines.


The Need for Early Engagement with Regulation


Engagement with potential regulations should begin at the earliest stage possible, as companies may need to adjust their business models, product offerings, technical processes, and governance and compliance approaches. Such proactive engagement can facilitate a smoother transition to a regulated AI landscape.


International Considerations


Organizations operating beyond the EU must consider the AI Act in conjunction with regulations in other jurisdictions. This necessitates a versatile compliance strategy capable of accommodating divergent regulatory frameworks.


* * * Don't leave your AI compliance to chance. With evolving regulations and high stakes, you need legal experts who are up to speed with the complexities of AI legislation. Prokopiev Law Group is here to guide you through each step, ensuring your AI systems meet all necessary legal and ethical requirements. Don't wait until you face a compliance issue. Act now. Reach out to Prokopiev Law Group today and protect your business from tomorrow's legal challenges. We're ready when you are.


DISCLAIMER: The information provided is not legal, tax, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be AI-generated. The information provided is for general educational purposes only and is not investment advice. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information. A professional should review any action based on the information discussed. The author is not liable for any loss from acting on the information discussed.
