
The AI Act explained: ban on emotion recognition by employers and educational institutions (7/10)

Published on Jul 16, 2025


In August 2024, the European AI Regulation, also known as the AI Act, entered into force. While February 2025 may have seemed far away at the time, the first provisions of the AI Act are already in effect. From 2 February 2025, organisations working with AI must ensure so-called ‘AI literacy’. In addition, certain AI practices have already been prohibited as of that date. If prohibited AI is used, either knowingly or unknowingly, it may now be subject to enforcement. In this blog series, the Holla AI team explains the rules that apply as of 2 February 2025. The blog below is the seventh in the series and focuses on the ban on emotion recognition AI systems in the workplace and in educational settings.

What is emotion recognition?

For businesses, understanding the emotions of customers or employees can be valuable. It may help to monitor customer satisfaction or tailor offers based on the customer’s emotional state. This has led developers of AI systems to create technologies that can detect emotions based on facial expressions, vocal tones, or other non-verbal signals.

For instance, the American company Affectiva has developed a system that measures the emotional response of viewers or listeners to films or music. Filmmakers and musicians can then use this feedback to improve their work based on the emotions they want to evoke. However, experts continue to raise serious doubts about the reliability and effectiveness of such systems.

Use in the workplace

Employers may also be interested in recognising the emotions of their employees, for example to prevent burnout, measure job satisfaction, or monitor appropriate behaviour during customer interactions. Some companies are already experimenting with such systems in the workplace. Amazon, for example, uses such a system in its call centres, not only to measure customer satisfaction but also to monitor employees’ emotional tone, for instance whether they sound friendly enough on the phone.

What exactly does the AI Act prohibit?

The AI Act introduces a strict prohibition on the use of emotion recognition AI systems in workplaces and educational institutions, as set out in Article 5(1)(f). The reasoning is twofold: firstly, the reliability of such systems remains highly questionable; and secondly, they may lead to discrimination or other undesirable forms of differentiation. The power imbalance between employer and employee, or between educator and student, also plays a significant role. To protect employees and students, the AI Act prohibits these systems.

This prohibition applies only when all of the following cumulative conditions are met (a simple self-check is sketched after the list):

  • The AI system is placed on the market, put into service, or used;

  • The system is specifically intended for emotion recognition, or is operationally capable of doing so;

  • The system has the technical ability to detect or infer emotions based on biometric data;

  • The system is used in the workplace or within educational institutions.
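
Purely by way of illustration, and not as legal advice, the cumulative nature of these conditions can be expressed as a simple self-assessment sketch. All names below are hypothetical and the check is deliberately simplified:

```python
from dataclasses import dataclass


@dataclass
class AISystemAssessment:
    """Hypothetical self-assessment of an AI system against Article 5(1)(f) AI Act."""
    placed_on_market_or_used: bool          # placed on the market, put into service, or used
    intended_for_emotion_recognition: bool  # specifically intended for, or operationally capable of, emotion recognition
    uses_biometric_data: bool               # detects or infers emotions from biometric data (face, voice, ...)
    workplace_or_education: bool            # deployed in the workplace or in an educational institution


def falls_under_prohibition(a: AISystemAssessment) -> bool:
    # The prohibition applies only if ALL four conditions are met (cumulative test).
    return all([
        a.placed_on_market_or_used,
        a.intended_for_emotion_recognition,
        a.uses_biometric_data,
        a.workplace_or_education,
    ])


# Example: a tool that scores employees' vocal tone during customer calls
assessment = AISystemAssessment(True, True, True, True)
print(falls_under_prohibition(assessment))  # True -> prohibited practice
```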

Distinction between emotions and physical conditions

To fall under the prohibition, the AI system must be capable of inferring or detecting emotions or intentions using biometric data, such as a person’s face or voice. Recognised emotions and intentions may include: happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, contentment, and pleasure. Facial expressions or gestures used to identify these emotions, such as frowning, smiling, head movements or raised voice, also fall under this scope.

However, physical conditions such as pain or fatigue are not covered by the prohibition. In addition, systems used strictly for medical or safety purposes are exempt from the ban. For instance, AI tools that monitor fatigue in pilots or professional drivers to prevent accidents are not prohibited.

Broad definition of “workplace”

The term “workplace” must be interpreted broadly. It refers to any physical or virtual space where natural persons, including freelancers, perform tasks or duties assigned by their employer or organisation. In other words, it covers any environment where work is carried out: offices, factories and warehouses, but also publicly accessible spaces such as shops and museums, as well as mobile work locations.

This definition is independent of employment status. It applies to employees, contractors, interns, and volunteers. The prohibition even applies during recruitment and selection processes. However, the scope of “workplace” does not seem to extend to emotion recognition directed at customers or patients in these environments. That said, this does not mean AI systems used to recognise customer or patient emotions are automatically allowed. From 2 August 2026, such systems are likely to be subject to extensive obligations, as they will generally be considered high-risk systems.

A fine line between emotion recognition and sentiment analysis

Finally, it may be difficult to distinguish whether an AI system is performing emotion recognition or sentiment analysis. Emotion recognition detects specific, expressed emotions (e.g. anger, fear, or sadness), whereas sentiment analysis determines a general attitude or tone (e.g. positive, neutral, or negative). Emotion recognition typically relies on biometric data, while sentiment analysis is usually based on text, such as social media posts, reviews, or customer feedback.
It is therefore advisable to examine AI systems carefully to ensure they do not fall under a prohibited practice.
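
As a rough, purely illustrative sketch of that distinction (hypothetical toy functions, not a real product and not a legal test): sentiment analysis typically takes text and returns a general tone, whereas emotion recognition takes biometric input and returns a specific emotion.

```python
def sentiment_of_text(text: str) -> str:
    """Toy sentiment analysis: free text in, general tone out (positive/negative/neutral)."""
    positive = {"great", "happy", "excellent", "friendly"}
    negative = {"bad", "angry", "terrible", "rude"}
    words = set(text.lower().split())
    if words & positive and not words & negative:
        return "positive"
    if words & negative and not words & positive:
        return "negative"
    return "neutral"


def emotion_from_voice(voice_sample: bytes) -> str:
    """Placeholder for emotion recognition: biometric data in, specific emotion out.

    A real system would analyse the audio signal itself; this stub only illustrates
    the difference in input (biometric data) and output (e.g. 'anger', 'fear', 'sadness').
    """
    return "anger"  # dummy value for illustration only


print(sentiment_of_text("The service was excellent"))  # positive -> general tone
print(emotion_from_voice(b"\x00\x01"))                 # anger    -> specific emotion
```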

What if you are using prohibited AI?

It is important to assess, without delay, whether your organisation is deploying prohibited AI practices. If so, your organisation risks significant fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
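
For illustration only: because the higher of the two amounts applies, the ceiling scales with turnover. A minimal sketch of that calculation, using a hypothetical turnover figure:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices under the AI Act:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)


# Example: with EUR 1 billion in global annual turnover,
# 7% equals EUR 70 million, which exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```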

The fact that no supervisory authority has yet been appointed for these banned practices does not mean enforcement is not possible. The prohibitions have been directly applicable since 2 February 2025, and any violation constitutes a wrongful act. Individuals or companies who suffer harm may already initiate legal proceedings.

Take action now: conduct the AI Scan!

In short, complying with the AI Act requires a structured approach within your organisation. We’re here to help you map your AI systems, classify them, and implement the necessary legal obligations.
We do this together with an external partner in three steps: a knowledge session, an AI scan, and implementation.

If your organisation needs support with these steps, the Holla AI team offers a dedicated AI compliance scan.
If you have any questions, please contact us. We are happy to assist, and keep an eye on our website for the next articles in this blog series.
