The AI Act explained: the first rules under the AI Act (1/10)
Published on May 27, 2025
In August 2024, the European AI Regulation, commonly referred to as the AI Act, entered into force. While February 2025 may have seemed a distant prospect at the time, the first provisions of the AI Act have already come into effect. As of 2 February 2025, organisations working with AI are required to ensure so-called "AI literacy". In addition, certain AI practices are now prohibited. The use of such banned AI systems, whether knowing or unwitting, may already lead to enforcement action. In this blog series, the Holla AI team explores the rules that have applied since 2 February 2025. This first blog sets out the definition of an AI system and the obligation to ensure AI literacy, and briefly introduces the prohibited AI practices.
Applicability of the AI Act: what applies as of 2 February 2025?
The AI Act was established to regulate the development and use of AI systems and to enhance product safety. You can read our earlier blog series on AI and the AI Act here. As previously announced, the AI Act will be implemented in stages. Since 2 February 2025, the first two chapters of the Act have become applicable. These include the general provisions, such as key definitions. To determine whether the AI Act applies to your organisation, it is essential to assess whether the systems you use fall within the scope of AI systems as defined by this legislation. In addition to the definitions, the obligation to ensure AI literacy is already in force, and certain AI practices have been explicitly prohibited. We will outline these new rules in this article.
What is an AI system according to the AI Act?
In practice, an AI system is generally understood to be a system capable of performing tasks that typically require human intelligence. However, the legal definition provided in the AI Act is more complex, and reads as follows:
“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Explanation of the definition
Fortunately, on 6 February 2025, the European Commission published (draft) guidelines on this definition (see here). These guidelines elaborate on the different components:
- "Machine-based" means that the system must be developed using, and operate on, machines. This includes both the hardware and software components that enable the AI system to function — such as memory, storage devices, network units, as well as computer code, programmes, and operating systems.
- "Varying levels of autonomy" indicates that the system is designed to operate with some degree of independence from human intervention or control.
- "May exhibit adaptiveness" refers to the system’s capacity to learn and adapt its behaviour over time, even when using the same input data. The term "may" suggests that self-learning is not a mandatory feature, but the potential for learning or identifying new patterns beyond initial training is a relevant characteristic.
- "Explicit or implicit objectives" means that the system must be designed to operate either on the basis of clearly defined goals (explicit), or goals that can be inferred from its behaviour, training data, or interactions with its environment (implicit).
- "Infers how to generate output from received input" implies that the system must be capable of inference — deriving outputs from inputs — both during development and in real-time operation. This process may involve machine learning, (un)supervised learning, deep learning, or logic- and knowledge-based methods.
Finally, the system must produce outputs — such as predictions, content, recommendations, or decisions — “that can influence physical or virtual environments.” This highlights that the system must exert an active influence on its surroundings, whether physical (e.g. a robotic arm) or virtual (e.g. data flows or digital interfaces).
In short, the definition is broad and may encompass a wide range of systems. The European Commission has clarified that certain systems fall outside the scope of this definition, such as simple decision trees, manually coded software, and traditional statistical models.
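To make this boundary more tangible, consider the following minimal Python sketch. It is purely illustrative and not legal advice: the hand-coded rule, the figures, and the choice of model are our own hypothetical assumptions, and whether a real system falls within the definition always requires a case-by-case assessment.

```python
# Illustrative contrast only: hand-coded software versus a system that
# *infers* how to generate its output. All figures and labels are hypothetical.
from sklearn.neural_network import MLPClassifier

# Likely OUTSIDE the definition (per the Commission's guidelines): manually
# coded software whose behaviour is fully specified by its programmer;
# nothing is inferred from data.
def manual_credit_rule(income: float, debts: float) -> str:
    return "approve" if income - debts > 20_000 else "reject"

# Likely WITHIN the definition: a machine-learning model that learns a
# decision rule from training data and then infers outputs (decisions that
# can influence a virtual environment, e.g. a loan-application workflow).
X = [[45_000, 5_000], [30_000, 25_000], [60_000, 10_000], [20_000, 15_000]]
y = ["approve", "reject", "approve", "reject"]
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2_000, random_state=0).fit(X, y)
print(model.predict([[50_000, 8_000]]))  # an inferred decision for a new case
```

The essential difference is that the first function's behaviour is fully specified by its programmer, while the second system derives its decision rule from data, which is precisely the "inference" element of the definition.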
AI Literacy: a new obligation for organisations
Once a system is classified as an AI system, the AI Act applies. Since 2 February 2025, providers and deployers of AI systems have been required to ensure AI literacy.
The AI Act defines AI literacy as: “Skills, knowledge and understanding that enable providers, deployers and affected persons to make informed decisions regarding AI systems and to be aware of the opportunities and risks of AI, including potential harm.”
Who does this obligation apply to?
In short, AI literacy means that both providers and deployers (users) of AI systems must take measures to ensure a sufficient level of skills, knowledge, and understanding of AI. This includes raising awareness of the opportunities, risks, and ethical considerations associated with AI.
The obligation applies to employees and other individuals who operate or use AI systems on behalf of your organisation. Consideration must be given to their technical expertise, experience, education, training, and the context in which the systems are deployed, as well as to the individuals affected by these systems.
The four steps to AI literacy
The Dutch Data Protection Authority (“AP”) emphasizes in its document "Getting Started with AI Literacy" that AI literacy is essential for strengthening society’s resilience in dealing with AI and algorithms. It is seen as a foundational element for the responsible use of AI. According to the AP, AI literacy ensures that individuals can use AI with both confidence and critical awareness.
The exact criteria for a "sufficient level" of AI literacy remain unclear: the AI Office and the European Commission have not yet issued specific guidelines. The AP also notes that there is no one-size-fits-all solution, but it does provide a framework for a multi-year action plan on AI literacy.
AI literacy is a continuous process. The AP identifies four key steps:
1. Identify: Determine needs and challenges, register AI systems and assess their risk classification, and identify current levels of knowledge within the organisation (a sketch of what such a register might record follows after this list).
2. Set goals: Define specific, measurable objectives; prioritise risks associated with AI systems; and establish targeted measures to address key risks.
3. Implement: Carry out strategies and actions, offer training and awareness programmes, and ensure AI literacy is placed on the management agenda.
4. Evaluate: Include AI literacy in regular evaluations, assess the effectiveness of measures taken, and report findings to senior management.
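By way of illustration only (the AP does not prescribe any format for this), a minimal internal register from the "Identify" step could be modelled as follows. All names, fields, and the example entry are hypothetical assumptions of ours:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskClass(Enum):
    """Risk tiers loosely following the AI Act's structure."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemEntry:
    """One row in a hypothetical internal AI-system register (step 1: Identify)."""
    name: str
    purpose: str
    risk_class: RiskClass
    operators: list[str] = field(default_factory=list)      # roles using the system
    literacy_gaps: list[str] = field(default_factory=list)  # training needs found

# Hypothetical example entry: a CV-screening tool used by HR.
register = [
    AISystemEntry(
        name="CV screener",
        purpose="pre-selection of job applicants",
        risk_class=RiskClass.HIGH,  # employment-related AI is listed as high-risk
        operators=["HR staff"],
        literacy_gaps=["awareness of bias and of the system's limitations"],
    )
]
for entry in register:
    print(f"{entry.name}: {entry.risk_class.value}; gaps: {entry.literacy_gaps}")
```

Such a register then feeds directly into steps 2 to 4: prioritising risks, targeting training and awareness measures, and evaluating progress.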
The AP emphasises that AI literacy has a preventive function, supporting organisations in complying with applicable laws and regulations, including the AI Act. The key takeaway is that organisations must adopt a proactive approach to embedding AI literacy throughout their structure — making it a responsibility that extends to board level.
Prohibited AI practices
In addition to AI literacy, certain AI practices have also been prohibited since 2 February 2025. Even if organisations believe they are not using prohibited AI, this may not always be the case. The AI Act currently lists eight prohibited AI practices, and the European Commission published (draft) guidelines on 4 February 2025 (see here). Future blog posts in this series will cover these eight practices in more detail.
Each AI system must be assessed to determine whether it falls under one of the prohibited practices. The European Commission reviews this list annually, which means additional practices could be added or removed in the coming years.
What if you are using prohibited AI?
It is crucial to determine as soon as possible whether prohibited AI practices exist within your organisation. If they do, you may face hefty fines of up to €35 million or 7% of annual global turnover, whichever is higher. The fact that there is not yet a designated supervisory authority does not mean that no action can be taken: the prohibitions have been directly applicable since 2 February 2025, and a violation therefore constitutes an infringement of the law. Individuals or organisations that suffer harm as a result may already initiate legal proceedings.[1]
Take action now: do the AI Scan!
In short, compliance with the AI Act requires a structured approach within your organisation. We are here to help you identify and classify AI systems and implement the applicable obligations. Together with an external partner, we offer a three-step process: a knowledge session, an AI Scan, and implementation. These steps align with the roadmap now also recommended by the AP.
If your organisation needs assistance in carrying out these steps, the Holla AI team offers the AI Scan. Do you have questions? Please contact us; we are happy to assist you. And stay tuned for the next articles in this blog series.