EU Officially Bans AI Systems with ‘Unacceptable Risk’ Under AI Act

The European Union has officially started enforcing its AI Act, a major milestone in global AI regulation. As of February 2, 2025, AI systems deemed to pose “unacceptable risk” are banned outright across the bloc. This is the first compliance deadline under the AI Act, which was approved in March 2024 and entered into force on August 1, 2024.

With this deadline, AI developers, tech companies, and organizations must ensure compliance or face severe penalties. The bans target AI applications that threaten fundamental rights, manipulate human behavior, or exploit vulnerabilities.

How the EU AI Act Categorizes AI Risk

The AI Act classifies AI systems into four risk categories, each with a different level of regulatory scrutiny (a rough illustration in code follows the list):

  1. Minimal Risk: Systems like email spam filters require no regulatory oversight.
  2. Limited Risk: Applications such as customer service chatbots are subject to light regulatory requirements, mainly transparency obligations.
  3. High Risk: AI systems in critical sectors, like healthcare, recruitment, and law enforcement, face stringent oversight, requiring documentation, risk assessments, and compliance checks.
  4. Unacceptable Risk: AI applications in this category are now banned outright due to their potential to cause significant societal harm.
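For readers who think in code, here is a minimal sketch of the four-tier taxonomy. The `RiskTier` enum, the `EXAMPLE_SYSTEMS` mapping, and the `is_banned` helper are illustrative inventions, not anything defined by the Act itself; the example systems come from the categories above.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1       # e.g., spam filters: no obligations
    LIMITED = 2       # e.g., chatbots: transparency duties
    HIGH = 3          # e.g., recruitment tools: strict oversight
    UNACCEPTABLE = 4  # e.g., social scoring: banned outright

# Hypothetical examples drawn from the categories above.
EXAMPLE_SYSTEMS = {
    "email spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def is_banned(system: str) -> bool:
    """True only if the example system falls in the banned tier."""
    return EXAMPLE_SYSTEMS.get(system) == RiskTier.UNACCEPTABLE

print(is_banned("social scoring system"))  # True
print(is_banned("email spam filter"))      # False
```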

Which AI Applications Are Now Banned in the EU?

According to Article 5 of the AI Act, the following AI applications are now illegal across the European Union:

  • Social Scoring AI: Systems that create risk profiles based on individuals’ behavior, similar to China’s social credit system.
  • Deceptive Manipulation AI: Systems that use subliminal or deceptive techniques to materially distort users’ decisions without their awareness.
  • Exploitative AI: AI that targets vulnerable groups, including children, the elderly, or individuals with disabilities.
  • Crime Prediction AI: Systems that predict the risk of criminal behavior based solely on biometric profiling or a person’s appearance.
  • Biometric-Based Character Inference: AI that tries to determine personal characteristics like sexual orientation or political beliefs through biometrics.
  • Real-Time Biometric Surveillance: AI that collects live facial recognition data in public spaces for law enforcement purposes.
  • Emotion Recognition AI in Work & Schools: AI that analyzes emotions in professional or educational settings.
  • Unauthorized Facial Recognition Databases: AI that indiscriminately scrapes facial images from the internet or CCTV footage to build or expand facial recognition databases.

Harsh Penalties for Non-Compliance

Companies that fail to comply with these bans face significant financial penalties, even if they are headquartered outside the EU. The AI Act sets the maximum fine at the greater of:

  • €35 million (~$36 million), or
  • 7% of the company’s annual global revenue.

These fines take effect in August 2025, once member states have designated the competent authorities responsible for enforcement.
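To make the “whichever is greater” rule concrete, the sketch below expresses the fine ceiling as simple arithmetic. The function name and the €1 billion revenue figure are hypothetical, chosen purely for illustration.

```python
def max_fine_eur(annual_global_revenue_eur: float) -> float:
    """Maximum penalty: the greater of EUR 35 million or 7% of revenue."""
    return max(35_000_000.0, 0.07 * annual_global_revenue_eur)

# A firm with EUR 1 billion in worldwide revenue (hypothetical figure):
# 7% of 1e9 = EUR 70 million, which exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```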

Voluntary Compliance & Industry Reactions

While February 2 marks the first major compliance deadline, many AI firms proactively committed to EU standards months in advance.

In September 2024, over 100 companies signed the EU AI Pact, a voluntary agreement to begin implementing AI Act principles before the enforcement date. Key signatories include:

  • Amazon
  • Google
  • OpenAI

However, Meta, Apple, and French AI startup Mistral notably declined to sign the pact. Mistral, in particular, has been vocal in criticizing the AI Act, arguing that it stifles innovation and favors big tech companies.

Despite this, legal experts believe most major companies will still comply, as the banned AI applications are largely outside the scope of mainstream commercial AI products.

Are There Any Exceptions?

Despite the strict prohibitions, the AI Act allows narrow exceptions in certain circumstances:

  • Law Enforcement Use of Biometric AI: Authorities can use biometric AI in public spaces if they are conducting a “targeted search” (e.g., finding an abducted child) or preventing an “imminent” threat to life. However, this requires official approval.
  • Emotion Recognition AI for Medical/Safety Uses: AI that detects emotions may still be used in workplaces and schools if there is a legitimate medical or safety justification, such as therapeutic applications.

The European Commission plans to release further guidelines on these exceptions in early 2025, but as of now, no additional details have been published.

Future Challenges & Unanswered Questions

Legal experts warn that the AI Act does not exist in isolation and may conflict with other EU regulations, such as:

  • GDPR (General Data Protection Regulation)
  • NIS2 Directive (Cybersecurity rules)
  • DORA (Digital Operational Resilience Act for financial services)

The overlapping compliance obligations could create legal confusion, particularly regarding data privacy, security reporting, and AI governance.

Conclusion: A Global Precedent in AI Regulation

The EU AI Act is one of the strictest AI regulatory frameworks worldwide, setting a global precedent for AI governance. While companies have six more months before fines take effect, this first compliance deadline signals a new era of AI accountability.

As global discussions on AI regulation intensify, it remains to be seen whether other regions, such as the U.S., UK, and China, will adopt similar AI risk-based approaches.

For now, one thing is clear: AI developers operating in the EU must tread carefully—or face severe financial consequences.
