Article 5 and the EU AI Act’s Absolute Red Lines – FireTail Blog


Apr 20, 2026 – Alan Fagan – Most conversations about the EU AI Act focus on August 2026, when obligations for high-risk AI systems become fully enforceable. But Article 5 is already live. The Act’s eight prohibited practices became enforceable in February 2025. Fines of up to €35 million or 7% of global annual turnover apply now. And the infrastructure to act on violations is in place.
For AI providers operating in or serving the EU market, understanding Article 5 is critical.
The EU AI Act takes a risk-based approach to AI governance, and Article 5 sits at the top of that hierarchy. The prohibited practices represent the EU's judgement that certain applications of AI are incompatible with fundamental rights and democratic values, and the European Commission reinforced that position in the guidelines it published on 4 February 2025, two days after the prohibitions took effect.
The guidelines break each prohibition into cumulative conditions and provide practical examples of what falls in scope and what does not. They are the clearest signal available of how regulators will interpret borderline cases.
The penalty structure reflects the seriousness with which the EU treats these provisions. At up to €35 million or 7% of global annual turnover, violations of Article 5 carry steeper fines than any other category of non-compliance in the Act.
The Eight Prohibitions
1. Subliminal and Manipulative Techniques
AI systems that deploy techniques operating below conscious awareness, or that exploit psychological vulnerabilities, biases, or weaknesses in decision-making to distort behaviour and cause significant harm, are banned.
The prohibition is targeted at systems designed to circumvent rational agency. It does not cover normal personalisation, recommendation engines, or advertising that simply presents persuasive content. The key conditions are that the technique must be subliminal or manipulative, and that it must cause or be reasonably likely to cause significant harm.
In practice, the compliance question for providers is whether their optimisation objectives could drive the system toward manipulative behaviour as a side effect. A recommender system trained purely on engagement maximisation can, over time, evolve into something that exploits psychological patterns in ways that meet the prohibition's conditions.
2. Exploiting Vulnerabilities
AI systems that exploit vulnerabilities arising from a person’s age, disability, or socioeconomic circumstances to distort behaviour in ways that cause harm are banned.
The practical example that clarifies this prohibition is an AI advertising tool that identifies users showing signs of financial hardship, through search behaviour, location data, or device signals, and targets them with offers specifically designed to exploit that vulnerability. The Commission’s guidelines explicitly name this kind of system as a violation.
This prohibition has direct implications for any AI system operating in consumer finance, healthcare, or social services, where users may be in vulnerable circumstances by definition. The question is not whether the system serves those users, but whether it is designed to exploit their circumstances rather than serve their interests.
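The targeted-advertising example above can be illustrated with a minimal pre-serve gate. This is a hypothetical sketch, not a real ad-platform API: the signal names, the blocklist, and the check itself are all illustrative assumptions about how a provider might refuse targeting keyed on vulnerability signals.

```python
# Hypothetical pre-serve check: refuse any ad campaign whose targeting
# criteria include a vulnerability signal. All names are illustrative.

VULNERABILITY_SIGNALS = {"financial_hardship", "gambling_recovery", "medical_debt"}

def is_targeting_permitted(audience_criteria: set[str]) -> bool:
    """Return False if any targeting criterion is a vulnerability signal."""
    return not (audience_criteria & VULNERABILITY_SIGNALS)

# A campaign keyed on generic interests passes; one keyed on a
# hardship signal is rejected before it reaches the ad server.
assert is_targeting_permitted({"sports", "travel"})
assert not is_targeting_permitted({"travel", "financial_hardship"})
```

A real deployment would maintain this blocklist as governed configuration and log every rejection as compliance evidence, rather than hard-coding it.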
3. Social Scoring
General-purpose social scoring of individuals or groups based on social behaviour or personal characteristics, leading to detrimental treatment in contexts unrelated to where the data was collected, is banned.
This is the provision most directly aimed at preventing the kind of surveillance infrastructure that has emerged in certain authoritarian contexts. It applies to public and private actors alike, and it catches systems that aggregate data across domains in ways that create de facto social profiles affecting access to services, employment, or civic participation.
4. Predictive Policing Based on Profiling
AI systems that assess the likelihood of an individual committing a criminal offence solely on the basis of profiling or personality traits, absent objective and verifiable facts directly linked to criminal activity, are prohibited.
A retail security system that analyses CCTV footage to detect actual suspicious behaviour, such as someone concealing merchandise, is permitted because it reacts to observable actions. A system that flags customers as high risk based on demographic profiling is not.
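The behaviour-versus-profiling distinction above can be expressed as a simple gate on which features a risk flag may rely on. This is an illustrative sketch under assumed feature names, not a description of any real policing system: the rule is that a flag is only allowed when at least one observed-behaviour feature contributed, so the decision is never based solely on profiling.

```python
# Illustrative gate: a risk flag may only be raised when observed
# behaviour contributed to the decision. Feature names are hypothetical.

PROFILE_FEATURES = {"age", "nationality", "neighbourhood", "personality_score"}
BEHAVIOUR_FEATURES = {"concealed_item_detected", "tag_removal_detected"}

def risk_flag_allowed(features_used: set[str]) -> bool:
    """Allow a flag only if at least one observable-behaviour feature
    contributed, i.e. the decision is not based solely on profiling."""
    return bool(features_used & BEHAVIOUR_FEATURES)

# Behaviour plus profile context is permitted; profile alone is not.
assert risk_flag_allowed({"concealed_item_detected", "age"})
assert not risk_flag_allowed({"age", "neighbourhood"})
```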
5. Untargeted Facial Recognition Scraping
Building or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage is banned absolutely.
This provision addresses the data acquisition practices used by a number of controversial biometric surveillance providers in recent years. Several of these companies built large-scale facial recognition datasets by scraping billions of images from social media platforms and public web sources without consent. That practice is now illegal in the EU.
6. Emotion Inference in Workplaces and Educational Settings
A range of specialist vendors such as IBM, Microsoft, and Amazon have offered emotion detection capabilities through their cloud platforms and APIs. The global emotion AI market was valued at approximately $7.5 billion in 2024. Many of these tools were being actively evaluated or deployed in employee monitoring, productivity assessment, and remote meeting analysis contexts.
Since February 2025, deploying AI systems that infer the emotional states of individuals in workplaces or educational environments is prohibited in the EU. However, context is determinative. The same AI capability can be permitted in one setting and prohibited in another. Affect recognition technology used for driver safety monitoring in an automotive context has a different regulatory status from the identical technology embedded in an employer’s video call analysis platform.
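Because the same capability can be lawful in one context and prohibited in another, one practical pattern is to gate the capability on a declared deployment context. The sketch below assumes hypothetical context labels; the classification of any real deployment would need legal review, not a lookup table.

```python
# Hypothetical context gate for an affect-recognition capability:
# the same model is served in some deployment contexts and refused
# in others. Context labels are illustrative only.

PROHIBITED_CONTEXTS = {"workplace", "education"}

def emotion_inference_allowed(context: str) -> bool:
    """Refuse emotion-inference requests from prohibited contexts."""
    return context not in PROHIBITED_CONTEXTS

# Driver-safety monitoring passes the gate; workplace analysis does not.
assert emotion_inference_allowed("driver_safety")
assert not emotion_inference_allowed("workplace")
```

The design point is that the gate lives in the serving layer, so the same model artefact cannot be silently repurposed into a prohibited context by an integrator.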
7. Biometric Categorisation by Sensitive Characteristics
AI systems that use biometric data to categorise individuals based on race, political opinions, religious or philosophical beliefs, sex life, or sexual orientation are prohibited.
The narrow exceptions cover the labelling or filtering of biometric datasets that are lawfully acquired, and law enforcement categorisation under strictly controlled conditions.
This prohibition catches systems that providers may not have characterised as biometric categorisation in their original design. Any model that takes facial, voice, or physiological inputs and produces outputs that correlate with these sensitive characteristics needs to be assessed carefully against this provision, regardless of the stated purpose.
8. Real-Time Remote Biometric Identification in Public Spaces
The real-time use of remote biometric identification systems in public spaces for law enforcement purposes is prohibited, with narrow exceptions.
Where an exception applies, deployment requires a prior fundamental rights impact assessment under Article 27, judicial or independent administrative authorisation before use, and registration in the EU database. In genuine emergencies, use can begin before registration, but registration must follow immediately and the relevant authority must be notified.
This prohibition does not apply to private actors in non-law-enforcement contexts, but it sets a clear precedent for the EU’s approach to real-time biometric surveillance in public life.
The Compliance Challenge
Understanding the prohibitions is only the first step. The challenge for providers is ensuring that their systems do not violate prohibitions through optimisation, fine-tuning, or integration with other services.
The European Commission states that deployers bear responsibility for how they use systems, regardless of what the provider’s terms of service say. But the design, training, and integration choices that providers make set the boundaries within which deployers operate. Providers who build systems capable of prohibited practices, even if they prohibit those uses, are not fully insulated from regulatory attention if those capabilities are reasonably foreseeable.
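What deployment-time behaviour monitoring can look like in practice is sketched below: every exchange is recorded to an audit log, and responses matching prohibited-practice heuristics are flagged for human review. The patterns, record shape, and in-memory log are all hypothetical simplifications, not the FireTail implementation.

```python
# Minimal sketch of deployment-time behaviour monitoring: record every
# input/output pair and flag responses matching prohibited-practice
# heuristics. Patterns and storage are hypothetical.
import re
from datetime import datetime, timezone

FLAG_PATTERNS = [re.compile(p, re.IGNORECASE)
                 for p in [r"\bsocial\s+score\b", r"\bemotional\s+state\b"]]

def record_interaction(prompt: str, response: str, log: list[dict]) -> bool:
    """Append an audit record for the exchange; return True if flagged."""
    flagged = any(p.search(response) for p in FLAG_PATTERNS)
    log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "flagged": flagged,
    })
    return flagged

audit_log: list[dict] = []
assert record_interaction("rank this user", "Assigned social score: 42", audit_log)
assert not record_interaction("hello", "Hi there!", audit_log)
assert len(audit_log) == 2
```

In production the log would go to durable, tamper-evident storage, since it is precisely the evidence a regulator would ask for.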
Developers need to monitor how systems actually behave in deployment, not just design intent.
The Enforcement Reality
Prohibited practices under Article 5 of the AI Act have applied since 2 February 2025, with the corresponding penalty regime enforceable from 2 August 2025. No formal enforcement actions have been publicly announced to date, but the architecture is in place and complaints from affected individuals or organisations can trigger investigations at any time.
The enforcement landscape varies by member state. Ireland’s proposed implementation assigns prohibited practice enforcement to the Central Bank for financial services, the Workplace Relations Commission for employment contexts, and the Data Protection Commission for others. This means a single organisation with AI systems operating across multiple domains could face scrutiny from more than one authority simultaneously.
What This Means for AI Providers
Article 5 compliance requires ongoing technical visibility into how your systems behave, what data they process, and what outputs they produce. FireTail gives AI providers continuous monitoring and visibility across their deployed systems, capturing the inputs and outputs that compliance evidence requires, detecting patterns that approach prohibited practice thresholds, and generating the audit trail. When the enforcement window closes, that evidence is what separates organisations that were prepared from those that were not.
The prohibited practices provisions are live. The enforcement infrastructure is in place. The guidelines from the Commission have clarified how regulators will interpret the boundaries. The time to build the technical controls that demonstrate compliance is now.

*** This is a Security Bloggers Network syndicated blog from FireTail - AI and API Security Blog authored by FireTail - AI and API Security Blog. Read the original post at: https://www.firetail.ai/blog/article-5-and-the-eu-ai-acts-absolute-red-lines

