The AI Act explained: Ban on Behavioural Manipulation (2/10)
Published on Jun 4, 2025

In August 2024, the European AI Regulation, commonly known as the AI Act, entered into force. While February 2025 may have seemed far off at the time, the first rules of the AI Act have already come into effect. As of 2 February 2025, organisations working with AI must ensure so-called "AI literacy." Additionally, certain AI practices have been banned outright. Organisations that use prohibited AI, whether knowingly or unknowingly, may already be subject to enforcement.
In this blog series, the Holla AI team explains the rules in effect since 2 February 2025. This is the second article in the series and focuses on the ban on behavioural manipulation through AI.
Ban on Behavioural Manipulation
In our previous blog, we explained the definition of an AI system and the requirement for AI literacy. This blog focuses on the first prohibition under Article 5(1)(a) of the AI Act — the ban on behavioural manipulation.
In short, AI systems that use techniques to influence individuals in covert, deceptive, or manipulative ways, causing them to act in ways they otherwise would not, are prohibited where such practices are likely to cause significant harm to those affected.
This prohibition applies when all of the following conditions are met:
- The practice involves placing on the market, putting into service, or using an AI system;
- The AI system employs subliminal, deliberately manipulative, or deceptive techniques;
- The techniques are intended or likely to distort the behaviour of an individual or group of individuals;
- The distorted behaviour is likely to cause significant harm to the individual or another person.
The European Commission’s guidance elaborates on these conditions as follows:
1. Placing on the market, putting into service, or using
Both providers and users are prohibited from placing on the market, putting into service, or using prohibited AI systems.
- Placing on the market means making an AI system available on the European market for distribution or use, whether for payment or free of charge.
- Putting into service means providing an AI system to a user for the first time for its intended use.
- Using is broader and refers to the actual application of an AI system in a given context or process — for example, a European organisation using a third-party AI system in its HR processes.
2. Subliminal, deliberately manipulative, or deceptive techniques
The second condition prohibits the use of subliminal, deliberately manipulative, or deceptive techniques:
Subliminal techniques
Subliminal techniques influence behaviour by bypassing rational defences, affecting decisions without the individual’s conscious awareness. These may involve subtle audio, visual, or tactile (touch-based) stimuli. While these stimuli are not consciously perceived, they can still be processed by the brain and influence behaviour.
For example, AI systems might display images or text that are technically visible but shown too quickly for conscious recognition, while still impacting behaviour. Similarly, AI systems could play low-volume sounds or sounds masked by other noises to influence individuals without their awareness.
Deliberately manipulative techniques
Deliberately manipulative techniques are intentionally designed to influence, alter, or control behaviour. They often exploit cognitive biases, psychological vulnerabilities, or external influences.
For example, these techniques may include sensory manipulation, such as using background sounds or images to induce mood changes (e.g. heightened anxiety or stress), or personalised manipulation, where the AI system crafts highly persuasive messages tailored to an individual based on personal data.
Importantly, this prohibition also applies when AI systems develop manipulative techniques as an unintended result of training on datasets containing examples of manipulation. Therefore, compliance with this prohibition must be considered throughout all phases of AI system development.
Deceptive techniques
In deceptive techniques, an AI system presents false or misleading information to manipulate behaviour — again without the individual’s awareness or ability to resist.
This prohibition applies even if the deception was not intentional on the part of the AI system’s provider or user.
An example: an AI chatbot impersonating a friend or family member — using a cloned voice — to fraudulently solicit money. Another example is the malicious use of deepfake technology.
3. Behavioural distortion as the goal or result
The third condition requires that the technique is intended to, or results in, materially distorting the behaviour of an individual or group. This involves substantial influence that undermines autonomy and free choice — noticeably impairing the ability to make informed decisions.
Examples include:
- A chatbot using subliminal techniques (e.g. brief visual cues or inaudible sounds) to intentionally distort behaviour.
- The deliberate use of manipulative techniques to influence consumer purchasing decisions without their awareness.
A distortion can also arise as an unintended result of an AI system: for instance, a chatbot designed to promote healthy habits could exploit individual vulnerabilities and instead encourage harmful behaviours, such as unhealthy habits or risky activities, with significant harm as a likely outcome.
4. (Reasonably foreseeable) significant harm
Finally, the fourth condition requires that the behavioural distortion is reasonably likely to cause significant harm — whether physical, psychological, financial, or economic. In some cases, this could also result in broader societal harm.
Examples include:
- Physical harm: A chatbot encouraging self-harm or suicide.
- Financial or economic harm: A chatbot promoting fraudulent products, leading to substantial financial loss.
Such harms can also occur in combination — for example, an AI system causing psychological trauma, stress, or anxiety alongside physical health issues (e.g. insomnia, physical illness). AI systems could also foster addictive behaviours, fear, or depression, further contributing to harm.
Note that this prohibition only applies where significant harm is caused. Determining whether harm is "significant" depends on context and requires a case-by-case assessment. Typically, significant harm includes injury, death, serious health problems, or property destruction.
Factors for this assessment include:
- The severity of the harm;
- The specific context in which the AI system was used;
- The scale of harm and intensity of its effects;
- The vulnerability of affected individuals;
- The duration and reversibility of the harm.
For instance, children, the elderly, or individuals with disabilities may be more vulnerable to certain AI-driven harms. Similarly, harm that is long-term or irreversible is more likely to be classified as significant than short-term or reversible effects.
In practice, the threshold for "significant" harm is not excessively high. The original draft of the AI Act used the term "significant" rather than a higher threshold, ensuring that this prohibition captures a broad range of risks.
Overlap with exploitation of vulnerable persons
Article 5(1)(a) of the AI Act overlaps to some extent with Article 5(1)(b). The latter focuses on the protection of vulnerable persons, such as the elderly or persons with disabilities, who are more susceptible to exploitation by AI systems.
While Article 5(1)(a) targets subliminal manipulation (e.g. visual flashes to influence buying behaviour), Article 5(1)(b) addresses active exploitation of vulnerabilities — for example, targeting elderly individuals with insurance offers that take advantage of their reduced cognitive capacity.
Our next blog will explore this second prohibition in more detail.
What if you are using prohibited AI?
It is crucial to promptly assess whether your organisation is involved in prohibited AI practices. Non-compliance can result in substantial fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
The fact that a designated supervisory authority for these prohibitions has not yet been appointed does not mean that enforcement is impossible. These prohibitions have been directly applicable since 2 February 2025. Violations of the AI Act now constitute a breach of law, and affected individuals or organisations can already bring legal claims.
Take action now: conduct an AI scan!
In short, complying with the AI Act requires a structured approach within your organisation. We are happy to support you in mapping your AI systems, classifying them, and helping you meet your obligations.
We offer this service in partnership with an external expert, using a three-step approach: a knowledge session, an AI scan, and then implementation.
If your organisation would like assistance with this process, the Holla AI team offers an AI scan to help. If you have questions, feel free to contact us. We look forward to supporting you — and keep an eye on our website for the next articles in this series!