
AI Act explained: one year after entry into force (10/10)

Published on Aug 7, 2025


In August 2024, the European AI Regulation, also known as the AI Act, entered into force. Although February 2025 may have seemed far off at the time, the first rules of the AI Act are already applicable. For example, as of 2 February 2025, organisations working with AI must ensure so-called ‘AI literacy’. In addition, certain AI practices have already been prohibited since that date. If prohibited AI is still being used, knowingly or unknowingly, this may already be subject to enforcement.

In this blog series, the Holla AI team addresses the rules that have been in force since 2 February 2025. The blog below is the tenth and final blog in this series. You can read our earlier blog on AI and the AI Act here.

AI literacy and prohibited AI

Since 2 February 2025, the first two chapters of the AI Act have been applicable. In this blog series, we have explored these first two chapters and covered the obligation of AI literacy (first blog) and the prohibited practices (blogs two through nine).

Since 2 February 2025, organisations have been required to ensure a sufficient level of AI literacy among employees who work with AI. Without adequate knowledge of AI and awareness of its risks, employees cannot use it responsibly. Training must therefore be grounded in both ethics and law, enabling employees to use AI responsibly and to take advantage of the opportunities it offers.

Furthermore, AI practices involving unacceptable risk have been prohibited since 2 February 2025. In short, these practices include the following:

  • Behavioural manipulation (blog): AI systems that use subliminal or deliberately manipulative techniques to unduly influence behaviour are prohibited. Think of systems that use nudging or deception to steer users toward choices that go against their interests.

  • Exploitation of vulnerabilities (blog): AI systems specifically designed to exploit physical or mental limitations—such as in children, the elderly, or persons with disabilities—are prohibited. Protecting these groups is central.

  • Social scoring (blog): Assigning scores to citizens based on behaviour or social characteristics is prohibited. This prohibition reflects fundamental European values of non-discrimination and human dignity.

  • Individual predictive policing (blog): AI systems that predict future criminal offences at the individual level based on profiling data are prohibited. There are a few exceptions to this rule.

  • Indiscriminate scraping of facial images (blog): The indiscriminate collection of facial images from public sources to build a database, for example to train facial recognition models, is prohibited.

  • Emotion recognition (blog): The use of AI to detect emotions in employees and students is prohibited. There are a few exceptions to this rule.

  • Biometric categorisation (blog): AI systems that categorise people based on biometric data are not permitted if this leads to discrimination or stigmatisation. There are some exceptions to this prohibition.

  • Real-time remote biometric identification in public spaces (blog): This type of facial recognition—such as in busy city centres or at airports—is only permitted under very strict conditions, such as for the investigation of serious crimes and with prior judicial authorisation.

New rules again?

Although the first rules are already in force, the implementation process is complex and has faced delays. On 2 August 2025, several more chapters of the AI Act were scheduled to apply, but this deadline has not been met in all respects. For example, no supervisory authority has been appointed yet. However, this does not mean that the rules cannot be enforced. The prohibitions have been directly applicable since 2 February 2025, and any violation of the AI Act constitutes a wrongful act. Citizens or companies who suffer damages as a result may therefore already initiate legal proceedings.

GPAI obligations and code of practice

On 10 July 2025, the European Commission published the “General Purpose AI Code of Practice” (see here), followed on 18 July 2025 by guidelines on the obligations for providers of general-purpose AI models (see here). The code of practice is a voluntary compliance instrument designed to support providers of general-purpose AI models (such as GPT-4 or Claude) in meeting the obligations of the AI Act, particularly in the areas of safety, transparency, and copyright.

The European Commission and the AI Board have confirmed that the code is an adequate instrument for demonstrating compliance with the obligations of the AI Act that have applied to providers of general-purpose AI models since 2 August 2025. By voluntarily signing and implementing the code of practice, providers can not only demonstrate that they are acting in accordance with the AI Act but also, according to the European Commission, benefit from reduced administrative burdens and increased legal certainty.

Implementation paused?

Given the delays in the implementation process, the Swedish Prime Minister recently called for a temporary pause in the rollout of the AI Act. The idea of pausing the rollout has also found some support in Brussels: there have been signals from the European Commission that a pause may be possible if the necessary guidelines are not ready in time. However, no formal decision to suspend implementation has been made (yet). On the contrary, the European Commission is continuing its preparations and keeps publishing additional guidance to support organisations.

For now, the deadline of 2 August 2026 for full compliance with the AI Act remains in effect. Organisations are therefore well advised to continue with their implementation strategy and to anticipate the (upcoming) obligations in a timely manner.

So take action now: conduct the AI scan!

We are happy to support you in identifying and classifying the AI systems within your organisation and in implementing the obligations that already apply. We do this together with an external partner in three steps: a knowledge session, an AI scan, and implementation. With this approach, we have anticipated the steps that the Dutch Data Protection Authority (AP) now also prescribes.

If your organisation needs assistance in completing these steps, the Holla AI team offers the AI scan.
If you have any questions, please contact us. We are happy to assist you. And keep an eye on our website for the other articles in our blog series.
