The AI Act explained: ban on the exploitation of vulnerabilities (3/10)
Published on Jul 15, 2025
In August 2024, the European AI Regulation, also known as the AI Act, entered into force. While February 2025 may have seemed far off at the time, the first provisions of the AI Act are already applicable. As of 2 February 2025, organisations working with AI must ensure so-called ‘AI literacy’. Moreover, certain AI practices have already been prohibited since that date. If prohibited AI is still used—whether knowingly or unknowingly—this may now be subject to enforcement. In this blog series, the Holla AI team explains the rules that have applied since 2 February 2025. The blog below is the third in this series and focuses on the prohibited AI practices involving harmful exploitation of vulnerabilities.
Prohibition on the harmful exploitation of vulnerabilities
In our previous blog, we explained the ban on behavioural manipulation. This blog addresses the second prohibition under Article 5(1)(b) of the AI Act: the prohibition on the exploitation of vulnerabilities. As explained in our previous blog, there is some overlap with the prohibition in subparagraph (a). While (a) focuses on the nature of the techniques used (such as flashing images to influence purchasing behaviour), (b) is aimed at providing additional protection for vulnerable persons who are more susceptible to exploitation by AI. Subparagraph (b) concerns situations in which certain characteristics of a group are actively exploited, for example where elderly individuals are offered more expensive insurance products by taking advantage of their reduced cognitive abilities.
In short, the AI Act prohibits AI systems that exploit vulnerabilities of a person or a specific group of persons due to their age, disability, or specific social or economic situation, where the aim or effect is a significant impairment of their behaviour, potentially causing substantial harm.
This prohibition applies if the following conditions are all met:
- The practice involves the placing on the market, putting into service, or use of an AI system;
- The AI system exploits vulnerabilities related to age, disability, or a specific socioeconomic situation;
- The exploitation results in, or is intended to result in, a significant impairment of the behaviour of a person or group;
- The impaired behaviour causes or is reasonably likely to cause substantial harm to that person or another person.
All four conditions must be met simultaneously, and there must be a plausible link between the exploitation, the behavioural impairment, and the resulting substantial harm. We discussed the first, third, and fourth conditions in our second blog. In this blog, we will focus in particular on the second condition.
Exploitation based on age, disability, or specific socioeconomic situation
The AI Act does not define the term ‘vulnerabilities’. However, the term should be interpreted broadly and includes cognitive, emotional, and physical susceptibility that undermines a person’s ability to make informed choices or to resist undue influence.
Although Article 5(1)(b) refers to ‘any vulnerability’, the prohibition is currently limited to vulnerabilities related to age, disability, or a specific socioeconomic situation. According to the European Commission, these are situations where individuals are generally less able to recognise or counteract AI-driven manipulation and therefore require additional protection. Other vulnerabilities currently fall outside the scope of this prohibition.
The term ‘exploitation’ should be understood as objectively taking advantage of such vulnerabilities in a way that is harmful to the affected person or group. Legitimate practices that do not fall under this prohibition are discussed further below.
Age
Age is a vulnerability category that includes both younger and older individuals. The objective is to prevent AI systems from taking advantage of limitations in children and the elderly, thereby exposing them to undue influence, manipulation, or exploitation. For example, children may form emotional attachments to AI applications, making them more vulnerable to manipulation and addiction. Extra caution is therefore necessary.
An example falling under this prohibition is an AI-powered toy robot that encourages children to undertake increasingly risky challenges, such as climbing trees, in exchange for digital rewards. This may result in physical harm, as well as reduced school performance and concentration and addictive behaviour.
Another example is an AI system that deliberately misleads elderly people with dementia into making financial decisions, potentially causing significant harm.
Disability
The second category concerns individuals with disabilities. The European Commission defines disability as a long-term physical, mental, intellectual, or sensory impairment that hinders full and equal participation in society. AI systems that exploit these limitations can be especially harmful to this group.
An example would be an AI chatbot for mental healthcare that manipulates individuals with intellectual disabilities into purchasing expensive health-related products.
Note that an AI system that is simply inaccessible to people with disabilities does not fall under this prohibition.
Specific socioeconomic situation
The third category refers to specific socioeconomic situations that make individuals more susceptible to exploitation. ‘Specific’ refers to legal or factual group characteristics, such as poverty, minority status, or migrant or refugee status. Temporary conditions such as unemployment or debt can also be included. General human emotions, such as loneliness, fall outside the scope of this prohibition.
People in socioeconomically vulnerable situations often have fewer resources and digital skills, making them more prone to abuse by AI systems. The prohibition aims to prevent the amplification of existing inequalities.
An example of a prohibited practice is a predictive AI system that targets loan advertisements at residents of low-income neighbourhoods, potentially causing significant financial harm.
AI systems with unintended discriminatory effects (e.g., due to biased training data) are not automatically prohibited. However, if the provider or user is aware of these effects and fails to take appropriate measures, it may still be considered exploitation.
Aim or effect of significant behavioural distortion
The third condition for the prohibition to apply is that the exploitation of vulnerabilities must have the aim or effect of significantly impairing the behaviour of a person or group. This refers to substantial influence that undermines a person’s autonomy and freedom of choice.
Articles 5(1)(a) and 5(1)(b) of the AI Act use the same terms and should be interpreted consistently. The only difference is that subparagraph (a) requires that the practice in question appreciably impairs the person’s ability to make an informed decision. This is not required under subparagraph (b), because children and other vulnerable persons are, by their nature, less capable of making informed decisions.
Examples of harm
For vulnerable groups, the resulting harm can be even more severe due to their heightened susceptibility to exploitation. What may be considered an acceptable risk for an adult might amount to unacceptable harm for a child.
Examples of harm include:
- AI systems used to generate child abuse material or grooming strategies, often resulting in severe and lasting physical, psychological, and social harm.
- AI systems that exploit young people’s susceptibility to addiction, leading to anxiety, depression, eating disorders, self-harm, and suicidal behaviour.
- Social AI chatbots that create emotional dependency in children by mimicking human emotions, distorting their understanding of real relationships and causing lasting social-emotional harm.
- AI systems that deceive elderly individuals into purchasing expensive or unnecessary medical treatments or insurance, resulting in financial loss and emotional stress.
This may concern individual harm, but also group harm. Effects on third parties may also be considered when assessing the severity of the harm. For example, misinformation or hate speech generated by AI chatbots targeting specific groups can lead to polarisation, violence, and even fatalities. However, only exploitation causing substantial harm falls under the prohibition. Legitimate, supportive AI applications are not prohibited.
Legitimate practices
The prohibitions in subparagraphs (a) and (b) do not apply to lawful influencing practices. The distinction between manipulation and legitimate influence is therefore crucial. Manipulation often involves covert techniques that undermine autonomy, causing individuals to make decisions they would not have made had they been fully aware of the influence. These techniques typically exploit psychological weaknesses or cognitive biases.
In contrast, legitimate influencing practices remain transparent and respectful of autonomy. The AI system clearly communicates its purpose and functioning and supports the user in making a free and informed choice. User consent can play a positive role in this context.
Moreover, manipulation is usually aimed at gaining an advantage at the expense of the individual’s autonomy or well-being. Influencing, by contrast, seeks to inform and persuade, balancing the interests and benefits of both parties. Ethically responsible use also requires that influencing respects a person’s autonomy and does not exploit vulnerabilities. Compliance with other legal frameworks is likewise relevant when assessing whether a practice amounts to manipulation or lawful influence.
Examples of legitimate practices include:
- Educational AI for children
- Assistive AI in elderly care
- AI promoting inclusion for socioeconomically disadvantaged groups
- Tools for people with visual or hearing impairments
What if you are using prohibited AI?
It is crucial to verify as soon as possible that no prohibited AI practices are taking place within your organisation. If such practices are in place, this can result in significant fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. The fact that a designated supervisory authority has not yet been appointed for these prohibitions does not mean enforcement is impossible. The prohibitions have been directly applicable since 2 February 2025, and a violation of the AI Act constitutes a wrongful act. Citizens or companies that suffer harm as a result may already initiate legal proceedings.
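By way of illustration only, and not as legal advice, the fine ceiling can be thought of as the higher of the two amounts. The short Python sketch below makes the arithmetic explicit; the function name and turnover figure are hypothetical.

```python
# Illustrative sketch only: the AI Act caps fines for prohibited practices
# at EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
# The function name and example turnover are hypothetical.

def maximum_fine_eur(global_annual_turnover_eur: float) -> float:
    """Return the upper limit of the fine for a prohibited AI practice."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Example: an undertaking with EUR 1 billion in worldwide annual turnover
# faces a ceiling of EUR 70 million (7% of turnover exceeds EUR 35 million).
print(maximum_fine_eur(1_000_000_000))  # 70000000.0
```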
Take action now: conduct the AI Scan!
In short, compliance with the AI Act requires a structured approach within your organisation. We are happy to assist you in identifying and classifying the AI systems used in your organisation and implementing the applicable obligations. We do this in collaboration with an external partner in three steps: a knowledge session, an AI scan, and implementation.
Does your organisation need assistance with these steps? The Holla AI team offers the AI scan to help you get started.
If you have any questions, please don’t hesitate to contact us. We’re happy to help, and keep an eye on our website for more articles in this blog series.