AI Act explained: ban on real-time remote biometric identification in public spaces (9/10)
Published on Jul 31, 2025
In August 2024, the European AI Regulation, also known as the AI Act, entered into force. Although February 2025 may have seemed far off at the time, the first rules of the AI Act are already in effect. For example, as of 2 February 2025, organisations working with AI must ensure so-called ‘AI literacy’. In addition, certain AI practices have already been prohibited since that date. If prohibited AI is still being used, either knowingly or unknowingly, it may now be subject to enforcement.
In this blog series, the Holla AI team discusses the rules that have applied since 2 February 2025. The blog below is the ninth in this series and addresses the eighth and final prohibition on AI systems: real-time remote biometric identification in public spaces. You can read our earlier blog series on AI and the AI Act here.
What is biometric identification and why is it risky?
Citizens are increasingly confronted with biometric identification. Biometric identification is the process of identifying a natural person based on biometric data. These are “personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that person.”[1] With biometric data, it is possible to uniquely and unequivocally identify a specific individual—using, for example, a fingerprint, iris or retina scan, voice or facial recognition, or the distance between the eyes.
We note that biometric identification is different from biometric verification. Biometric verification also uses biometric data, but it involves only a one-to-one comparison: a presented profile is checked against a single stored profile. Biometric identification, by contrast, compares a profile against multiple profiles in a database (one-to-many). An example of biometric verification is unlocking your smartphone with facial recognition or a fingerprint: your phone unlocks only when the presented biometric data match your registered profile.
Because biometric data are unique, they pose high risks to individuals’ privacy. A person can change their password if it is compromised in a data breach, but not their fingerprint. The processing of biometric data is therefore initially prohibited unless one of the strictly listed exceptions applies.[2]
What exactly does the AI Act prohibit?
The AI Act therefore prohibits certain forms of biometric identification. Not all biometric identification is prohibited: only the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. Quite a mouthful. These systems are referred to in the AI Act as remote biometric identification (RBI) systems. The prohibition thus concerns biometric identification that takes place remotely and in real time, in public spaces.
Think of cameras in a train station that scan the faces of passers-by and immediately compare them to a police database to see whether someone is wanted.
The prohibition applies when all of the following conditions are met:
- The AI system must be an RBI system;
- The activity performed must involve the use of that system;
- The use must occur in real-time;
- The use must take place in publicly accessible, open spaces;
- The use must be for law enforcement purposes.
We note that, unlike the other prohibitions, the AI Act prohibits only the use of such real-time RBI systems. The placing on the market or putting into service of these systems is not prohibited. However, such systems are subject to the rules for high-risk AI systems.[3]
We further note that real-time RBI systems involve biometric identification, not biometric verification. Biometric verification is less intrusive and is not prohibited under the AI Act, but it must of course still be reliable and secure.
The AI Act mentions only three situations in which the prohibition does not apply. This is the case if the use is strictly necessary for one of the following objectives:
- The targeted search for specific victims of abduction, human trafficking, or sexual exploitation of persons, as well as locating missing persons;
- The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or an actual and ongoing or foreseeable threat of a terrorist attack;
- The location or identification of a person suspected of committing a criminal offence, for the purpose of conducting a criminal investigation or prosecution, or enforcing a criminal penalty for the offences listed in Annex II, which in the relevant Member State are punishable by a custodial sentence or detention measure with a maximum duration of at least four years.
Only in those cases is the use of real-time RBI systems permitted—and even then, additional conditions apply, such as the need to obtain prior authorisation granted by a judicial authority or an independent administrative authority.
Liability and fines for prohibited AI practices
It is essential to determine as soon as possible whether any prohibited AI practices exist within your organisation. If such practices are found, this can result in substantial fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
The fact that no specific supervisory authority has yet been designated for these prohibited AI practices does not mean enforcement cannot take place. The prohibitions have been directly applicable since 2 February 2025, and a violation of the AI Act therefore constitutes a wrongful act. Citizens or businesses suffering damage as a result may already initiate legal proceedings.
Take action now: conduct the AI scan!
In short, complying with the AI Act requires a structured approach within your organisation. We are happy to help you identify, classify, and implement the obligations applicable to AI systems in your organisation.
We do this together with an external partner, in three steps: a knowledge session, an AI scan, and implementation. With this approach, we have already anticipated the steps now also being recommended by the Dutch Data Protection Authority (AP).
If your organisation needs help completing these steps, the Holla AI team offers the AI compliance scan.
Do you have any questions? Contact us. We are happy to assist—and keep an eye on our website for the remaining articles in this blog series.