News

The AI Act explained: the first rules under the AI Act (1/10)

Published on 27 May 2025

The first rules under the AI Act

In August 2024, the European AI Regulation, commonly known as the AI Act, entered into force. While February 2025 may have seemed far off at the time, the first rules of the AI Act have already come into effect. As of 2 February 2025, organizations working with AI must ensure so-called "AI literacy." Additionally, certain AI practices have already been banned. Organizations that use prohibited AI, whether knowingly or not, may already face enforcement. In this blog series, the Holla AI team will discuss the rules that have applied since 2 February 2025. This first blog explains the definition of an AI system and the obligation to ensure AI literacy.

Applicability of the AI Act: what applies as of 2 February 2025?

The AI Act was created to regulate the development and use of AI systems and to ensure greater product safety. You can read our earlier blog series on AI and the AI Act here. As previously announced, the AI Act is being phased in over time. Since 2 February 2025, the first two chapters of the AI Act have been applicable. These cover the general provisions, including the definitions. To determine whether the AI Act is relevant for your organization, it is crucial to assess whether the systems you use qualify as AI systems under this new legislation. Alongside the definitions, the obligation to ensure AI literacy is already in effect, and certain AI practices have already been banned. We explain these new rules in this article.

What is an AI system according to the AI Act?

In practice, an AI system is typically described as a system capable of performing tasks that usually require human intelligence. However, the legal definition in the AI Act is more complex and reads as follows:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Explanation of the definition

Fortunately, on 6 February 2025, the European Commission published (draft) guidelines on this definition (see here). These guidelines elaborate on the different components:

  • "Machine-based" means that the system must be developed with and run on machines, including both the hardware and software components that make the AI system function—such as memory, storage devices, network units, and also computer code, programs, and operating systems.

  • "Varying levels of autonomy" means the system is designed to operate with some degree of independence from human intervention or action.

  • "May exhibit adaptiveness" refers to the system's capacity to learn and adapt its behavior over time while still using the same input data. The word "may" indicates that self-learning is not required, but the potential for learning or uncovering new patterns beyond the initial training is a relevant feature.

  • "Explicit or implicit objectives" means the system must be designed to operate based on clearly formulated goals (explicit) or goals that can be inferred from the system’s behavior, training data, or interaction with its environment (implicit).

  • "Infers how to generate output from received input" implies that the system must be capable of inference—deriving output from input, both during operation and development. Techniques that support this include machine learning, (un)supervised learning, deep learning, and logic- or knowledge-based approaches.

  • Finally, the system must generate outputs—such as predictions, content, recommendations, or decisions—“that can influence physical or virtual environments.” This underscores that the system must have an active influence on its environment, whether physical (e.g., a robotic arm) or virtual (e.g., data streams).

In short, the definition is broad and can cover a wide range of systems. The European Commission also provides examples of systems that do not meet these criteria, such as simple decision rules, manually coded software, and traditional statistical models.
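
By way of illustration, the elements above can be read as a checklist. The following minimal Python sketch (purely indicative, not legal advice) screens a system against the definition; all names are hypothetical, and a proper legal assessment remains necessary. Note that adaptiveness is deliberately not a required criterion, mirroring the word "may" in the definition.

    from dataclasses import dataclass

    @dataclass
    class SystemProfile:
        """Hypothetical profile of a candidate system: one flag per element
        of the AI-system definition discussed above."""
        machine_based: bool            # developed with and runs on hardware/software
        some_autonomy: bool            # operates with some independence from humans
        has_objectives: bool           # explicit or implicit goals
        infers_output: bool            # derives predictions, content, etc. from input
        influences_environment: bool   # output affects a physical or virtual environment
        adaptive: bool = False         # "may exhibit adaptiveness": optional, not required

    def may_qualify_as_ai_system(profile: SystemProfile) -> bool:
        """Rough first-pass screen, not a substitute for legal analysis."""
        return all((
            profile.machine_based,
            profile.some_autonomy,
            profile.has_objectives,
            profile.infers_output,
            profile.influences_environment,
        ))  # profile.adaptive is deliberately not checked

    # Example: manually coded software with fixed decision rules and no inference
    rule_engine = SystemProfile(
        machine_based=True,
        some_autonomy=False,
        has_objectives=True,
        infers_output=False,
        influences_environment=True,
    )
    print(may_qualify_as_ai_system(rule_engine))  # False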

AI literacy: a new obligation for organizations

Once a system is classified as an AI system, the AI Act applies. Since 2 February 2025, providers and deployers of AI systems have been subject to the AI literacy requirement. AI literacy is defined in the AI Act as:

“Skills, knowledge and understanding that enable providers, deployers and affected persons to make informed decisions regarding AI systems and to be aware of the opportunities and risks of AI, including potential harm.”

Who does this obligation apply to?

In short, AI literacy means that both providers and users of AI systems must take measures to ensure a sufficient level of skills, knowledge, and understanding about AI. This involves raising awareness about the opportunities, risks, and ethical considerations of AI. The obligation covers employees and other individuals who operate or use AI systems on behalf of your organization. Consideration must be given to their technical expertise, experience, education, and training, to the context in which the systems are used, and to the individuals affected by these systems.

The four steps to AI literacy

The Dutch Data Protection Authority (“AP”) emphasizes in its document "Getting Started with AI Literacy" that AI literacy is essential for strengthening society’s resilience in dealing with AI and algorithms. It is seen as a foundational element for the responsible use of AI. According to the AP, AI literacy ensures that individuals can use AI with both confidence and critical awareness.

The exact criteria for a "sufficient level" remain unclear, as the AI Office and the European Commission have yet to publish specific guidelines. The AP also notes that there is no one-size-fits-all solution, but it does provide a framework for a multi-year action plan on AI literacy.

AI literacy is a continuous process, and the AP defines four steps:

  1. Identify: Determine needs and challenges, register AI systems and assess their risk classification, identify existing knowledge levels.

  2. Set goals: Define specific, measurable objectives; prioritize risks associated with AI systems; establish targeted measures for key risks.

  3. Implement: Execute strategies and actions, provide training and awareness programs, include AI literacy on management agendas.

  4. Evaluate: Include AI literacy in periodic evaluations, assess effectiveness of measures, report to management.
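
For organizations that maintain an internal AI register, these four steps can be supported by something as simple as the following illustrative Python sketch. The register structure and the risk labels are our own assumptions (loosely following the AI Act's risk-based approach); the AP does not prescribe any particular format.

    from dataclasses import dataclass, field
    from enum import Enum

    class Risk(Enum):  # assumed labels, loosely following the AI Act's risk categories
        PROHIBITED = 0
        HIGH = 1
        LIMITED = 2
        MINIMAL = 3

    @dataclass
    class AISystemEntry:  # step 1 "Identify": register systems and classify their risk
        name: str
        risk: Risk
        measures: list[str] = field(default_factory=list)  # step 3 "Implement"
        evaluated: bool = False                            # step 4 "Evaluate"

    def prioritise(register: list[AISystemEntry]) -> list[AISystemEntry]:
        # step 2 "Set goals": tackle the riskiest systems first
        return sorted(register, key=lambda entry: entry.risk.value)

    # Hypothetical register with two systems
    register = [
        AISystemEntry("CV screening tool", Risk.HIGH),
        AISystemEntry("spam filter", Risk.MINIMAL),
    ]
    for entry in prioritise(register):
        print(entry.name, entry.risk.name)  # the high-risk system is listed first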

The AP highlights that AI literacy has a preventive function, helping organizations comply with relevant laws and regulations, including the AI Act. The key takeaway is that organizations must take a proactive approach to embedding AI literacy within their structure, which also makes it a board-level responsibility.

Prohibited AI practices

In addition to AI literacy, certain AI practices have also been prohibited since 2 February 2025. Even if organizations believe they are not using prohibited AI, this may not always be the case. The AI Act currently lists eight prohibited AI practices, and the European Commission published (draft) guidelines on 4 February 2025 (see here). Future blog posts in this series will cover these eight practices in more detail.

Each AI system must be assessed to determine whether it falls under one of the prohibited practices. The European Commission reviews this list annually, which means additional practices could be added or removed in the coming years.

What if you are using prohibited AI?

It is crucial to determine as soon as possible whether prohibited AI practices exist within your organization. If so, you may face hefty fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. The fact that there is not yet a designated supervisory authority does not mean that no action can be taken. These prohibitions have been directly applicable since 2 February 2025, and using a prohibited AI practice has been unlawful since that date. Individuals or organizations that suffer harm as a result may already initiate legal proceedings.[1]
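
To illustrate how the two fine ceilings interact, here is a small, purely indicative Python calculation. The function name and the turnover figures are hypothetical; the ceilings themselves (€35 million or 7% of total worldwide annual turnover, whichever is higher) follow from the AI Act.

    def max_fine_eur(worldwide_annual_turnover: float) -> float:
        # Ceiling for violating the AI Act's prohibitions: EUR 35 million
        # or 7% of total worldwide annual turnover, whichever is higher.
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover)

    print(max_fine_eur(200_000_000))    # 35000000.0 -- the flat cap is higher
    print(max_fine_eur(1_000_000_000))  # 70000000.0 -- 7% exceeds the flat cap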

Take action now: do the AI scan!

In conclusion, compliance with the AI Act requires a structured approach within your organization. We are here to help you identify and classify AI systems and implement the applicable obligations. Together with an external partner, we offer a three-step process: a knowledge session, an AI scan, and implementation. These steps align with the roadmap now also recommended by the AP.

If your organization needs assistance in carrying out these steps, the Holla AI team offers the AI scan. Do you have questions about this? Please contact us. We are happy to assist you, and stay tuned for future articles in this blog series.

[1] List of Questions and Answers Regarding the Publication 'Focus on AI' by the Central Government (Kamerstuk 26643-1226).