The AI Act explained: ban on social scoring (4/10)
Published on Jul 15, 2025
In August 2024, the European AI Regulation, also known as the AI Act, entered into force. While February 2025 may have seemed far away at the time, the first provisions of the AI Act are already in effect. From 2 February 2025, organisations working with AI must ensure so-called ‘AI literacy’. Additionally, certain AI practices have already been banned. Whether used knowingly or unknowingly, prohibited AI practices may now be subject to enforcement. In this blog series, the Holla AI team outlines the rules that apply as of 2 February 2025. The blog below is the fourth in this series and focuses on the prohibition of AI practices involving social scoring based on behaviour or personal characteristics.
What is social scoring and why is it risky?
Social scoring refers to the algorithmic assignment of scores to individuals based on their behaviour (e.g. payment habits or social media activity) or on certain personal traits (e.g. ethnicity or education level), typically with the aim of making decisions based on those scores. For instance, social scores may be used to restrict access to essential services, such as public services, housing, education, employment, or insurance.
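To make the mechanics concrete, below is a minimal, deliberately naive Python sketch of such a scoring system. Everything in it (the field names, weights, threshold, and the housing decision) is invented for illustration; it is not based on any real system.

```python
# Purely illustrative sketch of the mechanics described above: one opaque
# score, derived from behaviour and personal traits, gates access to an
# essential service. All field names, weights, and thresholds are invented.

def social_score(person: dict) -> float:
    """Combine behavioural data and personal traits into a single score."""
    score = 50.0                                            # arbitrary baseline
    score += 10 if person["pays_bills_on_time"] else -15    # payment habits
    score -= 5 * person["negative_social_media_posts"]      # online behaviour
    score += 2 * person["years_of_education"]               # a personal trait
    return score

def may_rent_social_housing(person: dict) -> bool:
    # The problematic step: the score decides access to a service in a
    # context unrelated to where the underlying data was generated.
    return social_score(person) >= 60
```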
A well-known example of social scoring is China’s social credit system. For years, Chinese authorities have been developing a system that monitors and ranks citizens based on social scores. Individuals are awarded positive or negative points based on factors such as income, legal compliance, contractual performance, and general behaviour. These scores can influence access to loans, housing, or government employment.
Such a system is not desirable in the European Union, as it may lead to unacceptable infringements of fundamental rights, including the right to equal treatment, privacy and data protection, and the prohibition of discrimination. Social scoring can result in unfair treatment of individuals or groups, leading to discriminatory and unjust outcomes. It can also promote social control and constant government surveillance, as seen in China.
The ban on social scoring under the AI Act
The AI Act introduces a third prohibition in Article 5(1)(c), specifically targeting social scoring based on behaviour or personal characteristics. More precisely, the use of AI systems to profile, evaluate or classify (groups of) natural persons based on their social behaviour or known, inferred or predicted personal traits is prohibited if it leads to:
- A detrimental or unfavourable treatment in a social context unrelated to the context in which the data was originally generated or collected; or
- A detrimental or unfavourable treatment that is unjustified or disproportionate in relation to the behaviour or its severity.
An "unfavourable treatment" means that a person or group is treated worse than others as a result of the score, even if no concrete harm is caused. This could include, for example, individuals being subjected to additional checks based on personal fraud indicators derived from a scoring system. A "detrimental treatment" implies that the person or group actually suffers harm or disadvantage. For instance, exclusion from services or opportunities based on their score.
When is social scoring prohibited? Two scenarios explained
1. In an unrelated social context
This scenario covers situations where the score is based on data collected or generated in a context unrelated to the one in which the evaluation occurs, such as data from a social media account. The AI system uses this data to assign a score without any clear connection to the purpose of the evaluation or classification. This typically happens against the reasonable expectations of the data subject and may violate European data protection laws.
An example from the European Commission's guidelines on prohibited AI practices describes a national tax authority using a predictive AI system to screen all tax returns and identify which ones warrant further investigation. While the system may use relevant variables such as income, assets, or family status, it may also use irrelevant data, such as social habits, internet use or social media activity.
Data may also lead to indirect discrimination. One example is the Dutch Education Executive Agency (“DUO”), which used an algorithm to verify whether students who claimed housing benefits were truly living away from home. Students with a migration background were disproportionately flagged, as they tend to live closer to their parents. This led to increased scrutiny and indirect discrimination. Moreover, it is questionable whether this data point (proximity to one's parents) was even relevant in the context in which it was collected and used.
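The mechanism at work here, a facially neutral variable acting as a proxy for a protected characteristic, can be illustrated with a small simulation. The sketch below is hypothetical: the distributions and threshold are invented and do not reflect the actual DUO algorithm.

```python
# Hypothetical simulation of indirect discrimination via a proxy variable.
# All distributions and thresholds are invented; this is not the DUO model.
import random

random.seed(42)

def distance_to_parents_km(migration_background: bool) -> float:
    # Assumed correlation for illustration: one group tends to live
    # closer to their parents on average.
    return max(0.0, random.gauss(8, 4) if migration_background
               else random.gauss(25, 10))

def flagged_for_inspection(distance_km: float) -> bool:
    # A facially neutral rule: flag students who live near their parents.
    return distance_km < 12

students = [{"migration_background": mb,
             "distance": distance_to_parents_km(mb)}
            for mb in [True] * 500 + [False] * 500]

# The neutral rule produces very different flag rates per group.
for group in (True, False):
    members = [s for s in students if s["migration_background"] == group]
    rate = sum(flagged_for_inspection(s["distance"]) for s in members) / len(members)
    print(f"migration_background={group}: {rate:.0%} flagged")
```

Even though the rule never references migration background, its outcomes differ sharply between the groups; this is the kind of indirect effect that can bring a system within the prohibition.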
2. Unjustified or disproportionate treatment
The impact of a social score on a person’s fundamental rights must be weighed against the seriousness of the behaviour in question. Each case must be assessed to determine whether the treatment is proportionate to the goal pursued. Treatment is unjustified if there is no legitimate purpose underpinning the use of the social score, and disproportionate if its impact is excessive in relation to the behaviour or its severity.
An example would be a municipality using an AI system to assess the reliability of residents based on data about their social behaviour. Residents deemed "less reliable" are placed on a blacklist, resulting in the revocation of permits or increased oversight. Factors considered might include failure to volunteer, minor offences such as overdue library books, late municipal tax payments, or failure to separate household waste. In this case, the scoring practice is not proportionate to the underlying behaviour and resembles China’s social credit system.
Exceptions: permissible forms of scoring
The AI Act’s ban applies to scoring practices involving natural persons. This means that scoring systems targeting legal entities generally fall outside the scope of the ban, unless the evaluation is based on personal characteristics of, for example, directors. The system may also fall under the ban if it directly affects employees or customers of the legal entity.
Scoring systems based solely on objective values and forecasts, without evaluating or classifying individuals based on irrelevant data, are permitted. For example, a system used by a lender to assess a customer’s creditworthiness or to identify outstanding debts is generally allowed. These scores are based on data such as income, expenses, and other financial circumstances: relevant data used for a legitimate purpose.
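The distinction can again be sketched in code. In the hypothetical example below, the first function relies only on objective financial data relevant to the lending context, while the second mixes in behaviour from an unrelated social context, which is precisely what risks triggering the ban. All fields and thresholds are invented.

```python
# Hypothetical contrast between a permissible credit check and a variant
# that risks falling under Article 5(1)(c). All fields are invented.

def credit_check(applicant: dict) -> bool:
    """Permitted in principle: objective, context-relevant financial data."""
    disposable = applicant["monthly_income"] - applicant["monthly_expenses"]
    return disposable >= 0.05 * applicant["outstanding_debt"]

def risky_credit_check(applicant: dict) -> bool:
    # Mixing in behaviour from an unrelated social context is exactly the
    # kind of cross-context scoring the prohibition targets.
    penalty = 0.1 * applicant["negative_social_media_posts"]
    disposable = applicant["monthly_income"] - applicant["monthly_expenses"]
    return disposable * (1 - penalty) >= 0.05 * applicant["outstanding_debt"]
```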
There are other examples of permissible scoring practices; in short, the prohibition is not absolute. The ban only applies if the specific criteria of Article 5(1)(c) of the AI Act are met. If a scoring system uses only relevant data for a legitimate purpose (such as fraud detection) and complies with other applicable laws, it falls outside the prohibition. However, caution is warranted: indirect effects, as described above, may still bring a system within the scope of the ban.
What if you are using prohibited AI? Fines and liability
It is essential to determine as soon as possible whether any prohibited AI practices are taking place within your organisation. If so, this may result in substantial fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. The fact that no supervisory authority has yet been appointed for these AI prohibitions does not mean enforcement is impossible. The prohibitions have been directly applicable since 2 February 2025, and any violation constitutes a wrongful act. Citizens or companies suffering damage as a result may already initiate legal proceedings.
Need help? Have your AI systems scanned!
In short, compliance with the AI Act requires a structured approach within your organisation. We are happy to help you identify and classify your AI systems and implement the required obligations. We do this in collaboration with an external partner in three steps: a knowledge session, an AI scan, and implementation.
If your organisation needs assistance with these steps, the Holla AI team offers the AI scan to support you.
If you have any questions, please contact us; we’ll be happy to help. And stay tuned to our website for the next articles in this blog series.