The European Commission (the ‘Commission’) issued the Guidelines on the definition of an artificial intelligence system under Regulation (EU) 2024/1689 (the ‘AI Act’). The AI Act entered into force on 1 August 2024; it lays down harmonised rules for the development, placing on the market, putting into service, and use of artificial intelligence (‘AI’) in the Union.
The Guidelines focus on clarifying Article 3(1) AI Act, which defines an ‘AI system’ and therefore determines the scope of the AI Act. They are meant to help providers and other relevant persons (including market and institutional stakeholders) decide whether a specific system meets the definition of an AI system. They note that the definition became applicable on 2 February 2025, together with Chapters I and II of the AI Act, including the prohibited AI practices under Article 5.
The Guidelines are not legally binding; the ultimate interpretation belongs to the Court of Justice of the European Union.
Key Elements of the AI System Definition
AI System
Article 3(1) of the AI Act defines an AI system as follows: ‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;’
According to the Guidelines, this definition comprises seven main elements:
A machine-based system;
Designed to operate with varying levels of autonomy;
That may exhibit adaptiveness after deployment;
For explicit or implicit objectives;
Infers, from the input it receives, how to generate outputs;
Such outputs include predictions, content, recommendations, or decisions;
Which can influence physical or virtual environments.
These elements should be read together: AI systems exhibit machine-driven functionality, a degree of autonomy, and possibly self-learning capabilities, but always in the context of producing outputs that “can influence” their surroundings.
Machine-Based System
The term ‘machine-based’ refers to the fact that AI systems are developed with and run on machines… The hardware components refer to the physical elements of the machine… The software components encompass computer code, instructions, programs, operating systems, and applications…
The Guidelines clarify that “all AI systems are machine-based”, underscoring that the term covers the computational processes underpinning AI (model training, data processing, large-scale automated decision-making). This encompasses a wide variety of computational systems, including advanced quantum computing systems.
Autonomy
The second element of the definition refers to the system being ‘designed to operate with varying levels of autonomy’. Recital 12 of the AI Act clarifies that the terms ‘varying levels of autonomy’ mean that AI systems are designed to operate with ‘some degree of independence of actions from human involvement and of capabilities to operate without human intervention’.
A system designed to operate with full manual human involvement and control is excluded from the definition. By contrast, a system that requires manual inputs to generate an output can still have “some degree of independence of action” and therefore qualify as an AI system. Autonomy and risk considerations become particularly important in high-risk use contexts (as listed in Annexes I and III of the AI Act).
Adaptiveness
“(22) The third element… is that the system ‘may exhibit adaptiveness after deployment’. … ‘adaptiveness’ refers to self-learning capabilities, allowing the behaviour of the system to change while in use.”
The word “may” means adaptiveness is not mandatory for a system to be classified as AI. Even if a system does not automatically adapt post-deployment, it may still qualify if it meets the other criteria.
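To make ‘adaptiveness’ concrete, the following minimal Python sketch (our own illustration; the class, data stream, and learning rate are invented and not taken from the Guidelines) shows a system whose behaviour changes while in use because it keeps revising its decision rule from post-deployment feedback:

```python
# Illustrative toy example of "adaptiveness after deployment".
# All names and data are hypothetical, not from the AI Act or the Guidelines.

class OnlinePerceptron:
    """Binary classifier that updates its weights on each new labelled example."""

    def __init__(self, n_features, lr=0.1):
        self.weights = [0.0] * n_features
        self.bias = 0.0
        self.lr = lr

    def predict(self, x):
        score = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1 if score >= 0 else 0

    def update(self, x, label):
        # Self-learning in use: the decision rule is revised after deployment,
        # so the same input may be classified differently over time.
        error = label - self.predict(x)
        if error:
            self.weights = [w + self.lr * error * xi
                            for w, xi in zip(self.weights, x)]
            self.bias += self.lr * error

model = OnlinePerceptron(n_features=2)
feedback_stream = [([1.0, 0.2], 1), ([0.1, 0.9], 0), ([0.8, 0.3], 1)]
for features, label in feedback_stream:
    model.update(features, label)  # behaviour changes while "in use"
```

A system without this update step can still be an AI system; the sketch only illustrates the optional self-learning element.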
AI System Objectives
“(24) The fourth element… AI systems are designed to operate according to one or more objectives. The objectives… may be different from the intended purpose of the AI system in a specific context.”
Objectives are internal to the system, such as maximizing accuracy. The intended purpose (Article 3(12) AI Act) is external, reflecting the practical use context.
Inferencing How to Generate Outputs
“(26) The fifth element of an AI system is that it must be able to infer, from the input it receives, how to generate outputs. …This capability to infer is therefore a key, indispensable condition that distinguishes AI systems from other types of systems.”
The capacity to infer, i.e. to derive models or algorithms from inputs or data and to generate outputs from them, sets AI apart from simpler software that “automatically execute[s] operations” via predefined rules alone.
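The distinction can be illustrated with a short, hypothetical Python sketch of our own (the pricing scenario and all values are invented): the first function merely executes a rule defined solely by a person, while the second derives its own parameters from data (ordinary least squares) and only then generates outputs:

```python
# Hypothetical contrast between rule execution and inference from data.
# This is an illustration, not a legal test; the scenario is invented.

def rule_based_price(area_sqm):
    """Automatically executes a rule fully specified by a natural person."""
    return 2_000.0 * area_sqm  # fixed, human-chosen formula: no inference

def fit_price_model(areas, prices):
    """Derives a pricing rule from data via one-variable least squares."""
    n = len(areas)
    mean_x, mean_y = sum(areas) / n, sum(prices) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(areas, prices))
    var = sum((x - mean_x) ** 2 for x in areas)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    # Outputs now depend on parameters inferred from the input data,
    # not on a rule fixed in advance by a person.
    return lambda area: slope * area + intercept

predict = fit_price_model([50.0, 80.0, 120.0], [95_000.0, 160_000.0, 250_000.0])
print(rule_based_price(100.0), predict(100.0))
```

Whether a concrete system crosses the legal threshold still requires the full assessment described in the Guidelines; the snippet only shows the technical sense of “inferring how to generate outputs”.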
AI Techniques that Enable Inference
“(30) Focusing specifically on the building phase… ‘machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.’ Those techniques should be understood as ‘AI techniques’.”
Machine Learning approaches:
Supervised (e.g., spam detection)
Unsupervised (e.g., drug discovery)
Self-supervised (e.g., predicting missing pixels, language models)
Reinforcement (e.g., autonomous vehicles, robotics)
Deep Learning (e.g., large neural networks)
Logic- and Knowledge-Based approaches:
Use encoded knowledge, symbolic rules, and reasoning engines.
The Guidelines cite examples such as classical natural language processing models based on grammatical logic and expert systems for medical diagnosis; a minimal sketch of this family of techniques follows below.
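As an illustration of a logic- and knowledge-based approach (our own invented example; the facts and rules are hypothetical and not medical advice), the following toy forward-chaining engine infers new conclusions from encoded knowledge rather than learning them from data:

```python
# Toy forward-chaining inference engine over human-encoded knowledge.
# Facts and rules are invented for illustration only.

facts = {"fever", "cough"}

# Each rule pairs a set of premises with a conclusion, encoded by an expert.
rules = [
    ({"fever", "cough"}, "suspected_flu"),
    ({"suspected_flu", "shortness_of_breath"}, "refer_to_doctor"),
]

derived = True
while derived:  # apply rules until no new fact can be inferred
    derived = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # inference from encoded knowledge
            derived = True

print(facts)  # {'fever', 'cough', 'suspected_flu'}
```

The inference step (deriving ‘suspected_flu’ from the encoded rules) is what the Guidelines associate with this family of AI techniques, as opposed to software that merely executes a fixed sequence of human-defined operations.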
Systems Outside the Scope
“(40) Recital 12 also explains that the AI system definition should distinguish AI systems from ‘simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.’”
Systems aimed at improving mathematical optimization (e.g., accelerating well-established linear regression methods, or tuning parameters in satellite telecommunication systems) remain outside the scope where the methods used do not transcend “basic data processing”.
Basic data processing (sorting, filtering, static descriptive analysis, or visualizations) with no learning or reasoning also does not qualify.
“Systems based on classical heuristics” (experience-based problem-solving that is not data-driven learning) are excluded.
Simple prediction systems that rely on trivial estimations or benchmarks (e.g., always predicting the mean of past observations) do not meet the threshold for an “AI system”; see the sketch below.
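For contrast, the kind of basic benchmark rule the Guidelines place outside the definition can be written in a few lines (a hypothetical sketch of ours; the data are invented):

```python
# A trivial benchmark predictor: always predict the historical mean.
# Such a basic estimation rule involves no inference from inputs.

history = [102.0, 98.5, 101.2, 99.7]  # invented past observations

def predict_next(_unused_input=None):
    return sum(history) / len(history)  # same output whatever the input

print(predict_next())  # 100.35
```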
Outputs That Can Influence Physical or Virtual Environments
“(52) The sixth element… the system infers ‘how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments’. … The capacity to generate outputs… is fundamental to what AI systems do and what distinguishes those systems from other forms of software.”
The Guidelines detail four output categories:
Predictions
Content
Recommendations
Decisions
Each type represents an increasing level of automated functionality. Systems that produce these outputs through learned or encoded inference generally satisfy this element of the definition.
Interaction with the Environment
“(60) The seventh element of the definition of an AI system is that system’s outputs ‘can influence physical or virtual environments’. That element should be understood to emphasise the fact that AI systems are not passive, but actively impact the environments in which they are deployed.”
Influence may be physical (e.g., controlling a robotic arm) or virtual (e.g., altering a user interface or data flows).
Concluding Remarks
“(61) The definition of an AI system encompasses a wide spectrum of systems. The determination of whether a software system is an AI system should be based on the specific architecture and functionality of a given system…” “(63) Only certain AI systems are subject to regulatory obligations and oversight under the AI Act. …The vast majority of systems, even if they qualify as AI systems… will not be subject to any regulatory requirements under the AI Act.”
This underscores the risk-based approach of the AI Act: most AI systems face no or minimal obligations, while the Act prohibits certain practices outright (Article 5, Chapter II), imposes classification and conformity requirements on high-risk systems (Article 6 and Chapter III), and applies transparency rules to certain other systems (Article 50). The Guidelines highlight that general-purpose AI models also fall under the AI Act (Chapter V), but the detailed distinction between such models and “AI systems” exceeds the scope of these Guidelines.
Overall, the Guidelines provide a structured framework for determining what qualifies as an AI system, serving as a reference for developers, providers, and other stakeholders assessing whether a given solution falls under Regulation (EU) 2024/1689.
If you need further guidance on AI compliance, DeFi compliance, NFT compliance, DAO governance, Metaverse regulations, MiCA regulation, stablecoin regulation, or any other web3 legal matters, write to us. Prokopiev Law Group has a broad global network of partners, ensuring your compliance worldwide, including in the EU, US, Singapore, Switzerland, Hong Kong, and Dubai.
The information provided is not legal, tax, investment, or accounting advice and should not be used as such. It is for discussion purposes only. Seek guidance from your own legal counsel and advisors on any matters. The views presented are those of the author and not any other individual or organization. Some parts of the text may be automatically generated. The author of this material makes no guarantees or warranties about the accuracy or completeness of the information.