The AI Act explained: ban on individual predictive policing (5/10)
Published on Jul 15, 2025
In August 2024, the European AI Regulation, also known as the AI Act, entered into force. While February 2025 may have seemed far away at the time, the first provisions of the AI Act are already in effect. As of 2 February 2025, organisations using AI must ensure so-called ‘AI literacy’. Additionally, certain AI practices have already been prohibited. If prohibited AI is still being used, either knowingly or unknowingly, this may now be subject to enforcement. In this blog series, the Holla AI team explains the rules that have applied since 2 February 2025. The blog below is the fifth in this series and focuses on the prohibited AI practices related to individual predictive policing.
What is predictive policing?
Prevention is better than cure. This certainly applies to crimes that put public safety at risk. Law enforcement agencies such as the police are therefore highly motivated to predict who might commit a crime, so that it can be prevented before it happens. Thanks to the rise of AI, software now exists that can make such predictions. Personal data is entered into the AI system, which then calculates the likelihood that the individual will commit a crime. If that probability is high enough, the police may intervene to prevent the offence.
Sounds promising, right? Not quite. Under the AI Act, the use of such predictive policing systems at the individual level is prohibited. This prohibition is rooted in the fundamental principles of the presumption of innocence and human dignity: people must be judged on what they actually do, not on what AI-driven predictions say they might do, nor on personal traits such as nationality, race, or the car they drive. In other words, whether someone is flagged as a potential offender should be determined by a human, not a machine.
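To make the mechanism concrete, here is a minimal, purely hypothetical sketch of this kind of individual risk scoring. Every feature name, weight, and threshold below is invented for illustration; the sketch shows the pattern the prohibition targets, not any real system.

```python
# Illustration only: the kind of individual risk-scoring logic the
# prohibition targets. All feature names, weights, and the threshold
# are invented; no real system or dataset is referenced.

PERSONAL_TRAIT_WEIGHTS = {
    "nationality_flag": 0.30,  # personal trait
    "residence_flag": 0.25,    # personal trait
    "debt_level_flag": 0.25,   # personal trait
    "car_type_flag": 0.20,     # personal trait
}

INTERVENTION_THRESHOLD = 0.5  # hypothetical cut-off


def crime_risk_score(person: dict[str, float]) -> float:
    """Weighted sum of personal-trait indicators, between 0.0 and 1.0."""
    return sum(
        weight * person.get(feature, 0.0)
        for feature, weight in PERSONAL_TRAIT_WEIGHTS.items()
    )


def should_intervene(person: dict[str, float]) -> bool:
    """Flag a person for preventive intervention based solely on traits.

    It is precisely this pattern - a crime-risk prediction about a
    specific person, based solely on personal traits - that the
    AI Act prohibits.
    """
    return crime_risk_score(person) >= INTERVENTION_THRESHOLD
```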
When does the ban on individual predictive policing apply?
In short, a system falls under this prohibition if the following conditions are met:
- There is an AI system that assesses the risk of a specific person committing a criminal offence; and
- This risk assessment is based solely on one of the following elements:
  - (a) Profiling of the person: a profile is created using the individual’s personal data, which is then used to evaluate aspects such as their economic status, health, or interests; or
  - (b) An assessment of the person’s personal characteristics or traits, such as nationality, place of birth, place of residence, number of children, debt level, or type of car.
Let’s look at some examples to clarify these criteria:
Example 1: A police department uses an AI system to assess whether a person is likely to engage in criminal behaviour. This meets the first condition. The system uses data such as age, nationality, address, type of car, and relationship status. This satisfies the second condition, item (b), as the assessment is based on personal traits. This system is therefore prohibited.
Example 2: A tax authority uses an AI system to predict whether someone is committing tax fraud (first condition). The prediction is based solely on personal characteristics like dual nationality, place of birth, and number of children (second condition, item (b)). Again, the risk assessment is solely based on personal traits, and therefore the system is prohibited.
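For readers who prefer logic to legal prose, the two cumulative conditions can be expressed as a simple decision rule. The sketch below is our own simplification (the `Basis` categories are invented labels, not legal terms), but it shows why both examples are caught by the prohibition.

```python
from enum import Enum, auto


class Basis(Enum):
    """Invented labels for what a risk assessment relies on."""
    PROFILING = auto()        # profile built from personal data
    PERSONAL_TRAITS = auto()  # nationality, birthplace, car type, ...
    OBJECTIVE_FACTS = auto()  # verifiable facts linked to an offence


def is_prohibited(assesses_individual_crime_risk: bool,
                  bases: set[Basis]) -> bool:
    """Simplified reading of the two cumulative conditions.

    Prohibited if (1) the system assesses a specific person's risk of
    committing a criminal offence, and (2) that assessment rests solely
    on profiling and/or personal traits.
    """
    if not assesses_individual_crime_risk:
        return False  # first condition not met
    return bool(bases) and bases <= {Basis.PROFILING, Basis.PERSONAL_TRAITS}


# Examples 1 and 2: both rest solely on personal traits.
print(is_prohibited(True, {Basis.PERSONAL_TRAITS}))  # True -> prohibited

# Contrast: the same prediction also grounded in objective facts
# falls outside this ban (see the next section).
print(is_prohibited(True, {Basis.PERSONAL_TRAITS,
                           Basis.OBJECTIVE_FACTS}))  # False
```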
When is predictive policing allowed under the AI Act?
AI systems that assess the risk of criminal behaviour can still be valuable tools. There are certain scenarios in which such systems remain permitted; a simplified sketch of these exceptions follows the list below.
1. An AI system may be used under the AI Act as a supporting tool in a human-led assessment of whether a person is involved in a criminal offence. The key condition is that this human assessment must be based on objective and verifiable facts.
For example, there may already be a reasonable suspicion that someone has committed a crime, and the AI system is used only to substantiate that suspicion. Suppose someone has purchased a weapon or is active on a terrorist forum: these are objective indicators that can justify the use of a predictive tool.
2. Private organisations may also use AI systems to conduct risk assessments, but only for internal purposes and not on behalf of police or government authorities. If such a system incidentally detects something criminal, that is not a violation, provided the connection to criminal activity is purely coincidental.
3. It is also permitted to use location-based or area-based predictions, where the focus is not on a specific natural person but rather on the characteristics of a location, for example, to predict smuggling routes or detect gunshots in urban areas via microphones.
4. AI systems used to assess risks of administrative offences are not prohibited either. Administrative violations are minor infractions that do not lead to criminal prosecution but may result in fines or regulatory actions. Examples include minor traffic violations or small errors in tax filings.
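As announced above, here is a rough sketch of these four exceptions as a decision rule. The field names are invented for illustration and are, of course, no substitute for a legal assessment of a concrete system.

```python
from dataclasses import dataclass


@dataclass
class RiskSystem:
    """Invented fields mirroring the four scenarios above."""
    targets_specific_person: bool       # vs. location/area-based (3)
    supports_human_assessment: bool     # tool in a human-led assessment (1)
    grounded_in_objective_facts: bool   # e.g. weapon purchase, forum posts (1)
    private_internal_use_only: bool     # not for police or government (2)
    administrative_offences_only: bool  # fines, no criminal prosecution (4)


def falls_outside_ban(system: RiskSystem) -> bool:
    """True if any of the four permitted scenarios applies."""
    return (
        not system.targets_specific_person                # scenario 3
        or (system.supports_human_assessment
            and system.grounded_in_objective_facts)       # scenario 1
        or system.private_internal_use_only               # scenario 2
        or system.administrative_offences_only            # scenario 4
    )


# A gunshot-detection system focused on an area, not a person:
area_system = RiskSystem(False, False, False, False, False)
print(falls_outside_ban(area_system))  # True -> permitted
```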
Although the AI Act introduces a ban on individual predictive policing systems, there are exceptions that ensure such systems can still be used in a way that respects human rights and dignity.
What if you are using prohibited AI?
It is essential to verify as soon as possible that your organisation is not deploying prohibited AI practices. If such practices are identified, your organisation may face substantial fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
The fact that no official supervisory authority has yet been appointed for these AI prohibitions does not mean no action can be taken. These bans have been directly applicable since 2 February 2025, and any violation constitutes a wrongful act. Citizens or companies suffering damage can already initiate legal proceedings.
Take action now: conduct the AI Scan!
In short, compliance with the AI Act requires a structured approach within your organisation. We are happy to support you in mapping and classifying your AI systems and implementing the necessary obligations.
We do this in collaboration with an external partner in three steps: a knowledge session, an AI scan, and implementation.
If your organisation needs help completing these steps, the Holla AI team offers an AI scan to ensure compliance with the AI Act.
Do you have questions? Contact us, we’re happy to help. And keep an eye on our website for more articles in this blog series.