AI Act Bans Harmful AI Systems – But What Exactly Does This Mean?
Following my previous post on prohibited AI practices under the EU Artificial Intelligence Act, Leon Loevenich and I have taken a closer look at the first of eight AI bans taking effect as of early 2025. Our series of articles is intended to provide an overview of the scope of the AI bans and their potential impacts on businesses, and will highlight the legal risks associated with this newly introduced norm.
1. Overview:
Art. 5 para. 1 lit. a AI Act prohibits the use of AI systems that employ subliminal, intentionally manipulative, or deceptive techniques where those techniques have the objective, or the effect, of materially distorting the behavior of a person or a group of persons, causing that person or group to take a decision they would not otherwise have taken. Such AI systems are banned where this distortion causes, or is reasonably likely to cause, significant harm to a person or a group of persons. While it is obviously a reasonable goal to take action against AI systems that are designed to interfere with the ability to make informed decisions and that work to the detriment of users or society as a whole, it is questionable whether the result of the long-lasting negotiations on Art. 5 para. 1 lit. a AI Act achieves this goal. The wording leaves much room for interpretation, and Art. 5 para. 1 lit. a AI Act may both fail to prohibit harmful AI systems and impose a complete ban on others that would be better regulated by other means.
2. Vagueness of Terms:
The terms used in Art. 5 para. 1 lit. a AI Act lack clarity, which may stem from the expedited drafting process of the AI Act. There is consensus that political pressure preceding the European Parliament election influenced its timing. Among others, the following core terms are ambiguous:
a. Person: The term “person” in Art. 5 para. 1 lit. a AI Act is ambiguous. It remains unclear whether “person” refers exclusively to natural persons. While it appears clear that only natural persons can be influenced by subliminal techniques, a legal person could still be subject to manipulation. Moreover, other paragraphs of Art. 5 AI Act explicitly refer to “natural persons”, so that, from a systematic point of view, the two terms would have to be interpreted differently. If, however, Art. 5 para. 1 lit. a AI Act were to cover influence exerted on legal persons, this would imply a significantly broader scope of the AI ban.
Furthermore, the term “group of persons” lacks specificity, as neither the norm nor Recital 29 defines or explains its scope. A group would normally be understood as a smaller or larger number of individuals; it cannot, however, be interpreted as society as a whole (with the entire population forming a “group”). In contrast to the above, this represents an important limitation of the scope: societal harm alone would not be sufficient to prohibit an AI system.
If, for instance, an AI system used subliminal techniques during an election campaign that caused many people to make a decision at the polls that they would not otherwise have made, this would certainly be harmful to democratic society, and such an AI system might well be worth banning. However, Art. 5 para. 1 lit. a AI Act implies that, in order to ban such an AI system, it would be necessary to prove that the manipulated decision caused harm to an individual or a specific group of persons – a high burden of proof when taking action against AI-based internet trolls disseminating fake news…
b. Significant Harm: Similarly, “significant harm” lacks a precise definition. While Recital 29 outlines adverse impacts on physical, psychological, and financial interests, it does not exhaustively list all possible scenarios. Questions remain as to whether using AI systems for activities like voter fraud constitutes “significant harm” under Art. 5 para. 1 lit. a AI Act. This becomes particularly important when taking into account that, as explained above, “harm” must be proven at an individual rather than a societal level.
It is also important to note that the English version of the AI Act uses the term “harm”, whereas the German version uses “Schaden”, which implies the need for “damage”. In other legislative acts, such as the GDPR, the English wording is usually “damage”. It will be interesting to see how the courts will interpret this difference in wording, and whether “harm” implies a lesser degree of adverse impairment than “damage”.
Based on the above, the introduction of Art. 5 para. 1 lit. a AI Act demands careful consideration by businesses. The online gambling industry, for example, may not be an obvious candidate at first glance. Nevertheless, some players in the industry may use AI systems specifically designed to induce gamblers to keep gambling. AI systems could be tailored to analyze player data and behavior patterns in a manipulative manner, resulting in prolonged player engagement and increased spending, which could cause significant financial harm. Such an AI system would most likely be prohibited under the AI Act.
c. Subliminal Techniques: Another key aspect of Art. 5 para. 1 lit. a AI Act is the understanding of “subliminal techniques”. While the term usually refers to subtle, unconscious influence exerted by undetectable stimuli in images, sounds, or texts, it does not seem to be ruled out that the undetectable impact of biased AI systems could be considered a subliminal distortion leading to decisions that the individual would not otherwise have taken. This understanding could significantly broaden the scope of Art. 5 para. 1 lit. a AI Act.
On this understanding, the use of AI systems in the hiring process, for example, needs to be carefully analyzed. While AI presents significant opportunities for streamlining recruitment, it comes with the risk of bias in the data or in the AI system itself. This could result in discrimination while undermining the transparency of the hiring process. Not being considered for a job would certainly be harmful to the rejected person.
From a systematic point of view and, in particular, in light of the other provisions of the AI Act, it is unlikely that courts would follow this argument. Still, it highlights the weaknesses of Art. 5 para. 1 lit. a AI Act with its many unclear and undefined terms.
3. Impact on Businesses:
Compliance will require businesses to introduce robust measures to ensure that their AI systems do not inadvertently breach the prohibition set forth in Art. 5 para. 1 lit. a AI Act, with potential legal consequences if they do. Until the courts have further contoured the provision, considerable uncertainty remains as regards its scope and business impact. This affects AI developers as much as it affects users of AI systems.
The core weakness of the provision is the uncertainty surrounding many of the terms that define its scope:
- It remains unclear whether legal persons are a possible subject of manipulation.
- Societal harm may not suffice to trigger the ban on certain AI systems (e.g. in the case of voter manipulation).
- Subliminal techniques may include distortion caused by biased AI systems, which could significantly broaden the scope and lead to the unintended prohibition of AI systems.
At GreenGate Partners, we are committed to providing updates on the AI Act, detailing its regulatory framework and its implications for businesses. Be prepared for more insights on the Regulation as we approach the implementation of its norms. And contact us for advice on navigating your company’s compliance with the new law.