The AI Act explained: ban on biometric categorisation of natural persons (8/10)
Published on Jul 21, 2025
In August 2024, the European AI Regulation, also known as the AI Act, entered into force. Although February 2025 may have seemed far away at the time, the first rules of the AI Act are already applicable. For example, as of 2 February 2025, organisations working with AI must ensure so-called ‘AI literacy’. Additionally, certain AI practices have been prohibited as of that date. If prohibited AI is still used, knowingly or unknowingly, this may already be subject to enforcement. In this blog series, the Holla AI team addresses the rules that apply as of 2 February 2025. The blog below is the eighth in the series and discusses prohibited AI practices relating to ‘biometric categorisation’ based on sensitive attributes.
What is biometric categorisation and why is it risky?
Biometric categorisation refers to assigning natural persons to specific categories based on their ‘biometric data’. ‘Biometric data’ refers to personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person, such as facial images or fingerprint data. Biometric categorisation does not involve identifying or verifying identity, but rather assigning individuals to certain categories.
A wide variety of information, including ‘sensitive’ information, can be obtained, derived, or interpreted from biometric data, even without the person’s knowledge, and used to classify individuals. This can lead to unfair and discriminatory treatment, for example when someone is denied access to a service based on perceived race.
AI systems used for biometric categorisation to assign natural persons to specific groups or categories, such as to infer their sexual orientation or political beliefs, violate human dignity and pose significant risks to other fundamental rights, such as privacy and non-discrimination.
What exactly does the AI Act prohibit?
The AI Act therefore includes a prohibition on AI-based biometric categorisation systems in Article 5(1)(g). The ban applies to both providers and deployers (users) of AI systems, and is applicable when five cumulative conditions are met:
- The system is placed on the market, put into service, or used;
- The system is a biometric categorisation system;
- Natural persons are categorised;
- Based on their biometric data;
- With the purpose of inferring or detecting their race, political opinions, trade union membership, religious or philosophical beliefs, sexual behaviour, or sexual orientation.
What is a ‘biometric categorisation system’?
Categorisation by a biometric system means assessing whether a person’s biometric data fits into a pre-defined group. For example: a digital billboard displays different advertisements depending on the viewer’s perceived age or gender.
Biometric categorisation can be based on physical features, such as facial structure, skin colour, or other external traits, on the basis of which people are then classified into specific categories. Some of these categories may be ‘sensitive’ or relate to characteristics protected under EU non-discrimination law, such as race.
Biometric categorisation can also be based on DNA or behavioural aspects, such as a person’s gait.
The AI Act defines a biometric categorisation system as an AI system that classifies natural persons into specific categories based on their biometric data, unless this is merely an ancillary function of another commercial service and is strictly necessary for objective technical reasons.
Examples of such permitted uses include:
- Filters within social networks that allow users to alter their appearance in photos or videos. These are permitted because they are merely ancillary to the platform’s core service: sharing content online.
- Filters within online marketplaces that categorise facial or body features so that consumers can visualise or “try on” products. These too are allowed, as they are merely ancillary to the main service: selling products.
The following applications, by contrast, are prohibited:
- An AI system that categorises individuals on a social media platform based on presumed political preference by analysing biometric data from uploaded photos in order to target them with political messages. While this function may be ancillary to political advertising, it is not strictly necessary for objective technical reasons, and is therefore prohibited.
- An AI system that categorises users of a social media platform based on their presumed sexual orientation by analysing biometric data from shared photos and then displaying targeted advertisements. Here too, the strict technical necessity is lacking, and the practice is therefore prohibited.
As mentioned, the AI Act only prohibits biometric categorisation systems with the purpose of inferring or assuming a limited number of sensitive attributes, namely: race, political opinions, trade union membership, religious or philosophical beliefs, sexual behaviour, or sexual orientation.
An example of such a prohibited system is one that claims to infer a person’s race from their voice or one that claims to infer a person’s religious beliefs from tattoos or facial features.
Labelling or filtering is permitted
The prohibition does not apply to the labelling or filtering of lawfully obtained biometric datasets, including when used for law enforcement.
Labelling or filtering biometric datasets may in fact be performed by biometric categorisation systems to ensure balanced representation across demographic groups and to avoid overrepresentation of one particular group.
If the data used to train an algorithm is biased against a particular group, the algorithm may adopt this bias, leading to unlawful discrimination. It may therefore be necessary to label biometric data based on protected sensitive information to prevent discrimination and ensure data quality.
This type of biometric data labelling or filtering is explicitly exempted from the prohibition. Examples include:
- Labelling biometric data to prevent members of an ethnic group from being less likely to be invited to a job interview because the training data reflects historically worse outcomes for that group.
- Categorising patients by skin or eye colour in medical images, for example for diagnosing cancer.
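For readers who want to see what such labelling can look like in practice, below is a minimal, purely illustrative sketch (in Python) of using group labels to remove the over-representation of one group from a training set. The field names and labels are hypothetical assumptions for the example, and the sketch is not a statement of what the AI Act requires.

    import random
    from collections import defaultdict

    # Illustrative sketch only: using demographic labels on a lawfully
    # obtained biometric dataset to correct the over-representation of
    # one group before training. The key "group" and the labels "A"/"B"
    # are hypothetical.

    def balance_by_group(records, label_key="group", seed=42):
        """Down-sample each group to the size of the smallest group."""
        groups = defaultdict(list)
        for record in records:
            groups[record[label_key]].append(record)

        smallest = min(len(members) for members in groups.values())
        rng = random.Random(seed)

        balanced = []
        for members in groups.values():
            balanced.extend(rng.sample(members, smallest))
        rng.shuffle(balanced)
        return balanced

    # Hypothetical usage: in practice each record would hold a biometric
    # sample (e.g. a face embedding) plus its annotated group label.
    dataset = (
        [{"group": "A", "sample": i} for i in range(900)]
        + [{"group": "B", "sample": i} for i in range(100)]
    )
    balanced = balance_by_group(dataset)
    print(len(balanced))  # 200: 100 per group, over-representation removed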
Liability and fines for prohibited AI practices
It is essential to determine as soon as possible whether prohibited AI practices exist within your organisation. If such practices are present, this may result in significant fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
The absence of a designated supervisory authority for prohibited AI practices at this time does not mean that no enforcement can take place. These prohibitions have been directly applicable since 2 February 2025, and any violation of the AI Act constitutes a wrongful act.
Citizens or companies suffering damage as a result may already initiate legal proceedings.
Take action now: conduct the AI Scan!
In short, complying with the AI Act requires a structured approach within your organisation. We are happy to assist you in identifying, classifying, and implementing the applicable obligations for your AI systems.
We do this in collaboration with an external partner in three steps: a knowledge session, an AI scan, and implementation.
If your organisation needs help completing these steps, the Holla AI team offers a tailored AI compliance scan.
Have any questions? Contact us. We are happy to assist. Also, keep an eye on our website for the next blogs in this series.