# AI Act (Artificial Intelligence Act)

## Overview
The EU AI Act is the world's first comprehensive horizontal regulation on artificial intelligence. Adopted in March 2024 and entering into force on August 1, 2024, it establishes a risk-based framework that categorizes AI systems based on their potential harm and imposes requirements accordingly.[1]
The regulation aims to ensure AI systems placed on the EU market are safe, respect fundamental rights, and foster innovation through legal certainty.
## Phased Application Dates
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices banned; AI literacy obligations apply |
| August 2, 2025 | GPAI model obligations and governance rules apply |
| August 2, 2026 | Full application for high-risk AI systems |
| August 2, 2027 | Obligations apply to high-risk AI systems embedded in products covered by Annex I EU product safety legislation |
## Risk Categories
### Prohibited AI Practices (Article 5)[2]
The following AI practices are banned from February 2, 2025:
- Subliminal manipulation: Deploying subliminal techniques beyond a person's awareness to materially distort their behavior
- Exploitation of vulnerabilities: Targeting age, disability, or socio-economic circumstances
- Social scoring: Classifying people based on behavior leading to detrimental treatment
- Predictive policing: Individual crime prediction based solely on profiling
- Untargeted facial scraping: Creating facial recognition databases from public sources
- Emotion recognition: In workplaces and educational institutions (with exceptions)
- Biometric categorization: Inferring sensitive attributes like race, politics, or religion
- Real-time remote biometric identification: In public spaces for law enforcement (with exceptions)
### High-Risk AI Systems (Article 6)[3]
AI systems in these areas are subject to strict requirements:
- Biometric identification and categorization
- Critical infrastructure management (energy, transport, water, gas)
- Education and vocational training (admissions, assessments)
- Employment (recruitment, performance evaluation, termination)
- Essential services access (credit scoring, emergency services)
- Law enforcement (evidence evaluation, risk assessment)
- Migration and border control
- Administration of justice and democratic processes
## General Purpose AI (GPAI) Models
Providers of GPAI models must:
- Maintain technical documentation
- Provide information for downstream provider compliance
- Establish copyright compliance policies
- Publish training content summaries
Systemic risk GPAI (models trained with >10^25 FLOPs) have additional obligations including adversarial testing and incident reporting.[4]
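The systemic-risk presumption above reduces to a single compute threshold. A minimal sketch (the helper name and the example FLOP counts are illustrative, not from the regulation):

```python
# Threshold from the AI Act: GPAI models trained with more than 10^25
# floating-point operations are presumed to pose systemic risk.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Illustrative values: a frontier-scale training run vs. a smaller model.
print(is_presumed_systemic_risk(5e25))  # True
print(is_presumed_systemic_risk(1e24))  # False
```

Note this is only a presumption trigger; the Commission can also designate models as systemic risk on other grounds.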
## Requirements for High-Risk AI
| Requirement | Description |
|---|---|
| Risk Management | Establish, document, and maintain risk management system |
| Data Governance | Ensure training data is relevant, representative, and, to the best extent possible, free of errors and complete |
| Technical Documentation | Detailed system documentation before market placement |
| Record-Keeping | Automatic logging of system operations |
| Transparency | Clear user instructions and capability information |
| Human Oversight | Enable human supervision and intervention |
| Accuracy & Robustness | Consistent performance across intended use cases |
| Cybersecurity | Protection against unauthorized access and manipulation |
## AI Literacy Obligation (Article 4)
From February 2, 2025, all providers and deployers must ensure:
"Staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy."[5]
This applies broadly across all AI use cases, not just high-risk systems.
## Penalties
- Prohibited AI practices: Up to €35 million or 7% of global annual turnover, whichever is higher
- High-risk violations: Up to €15 million or 3% of global annual turnover, whichever is higher
- Supplying incorrect information: Up to €7.5 million or 1% of global annual turnover, whichever is higher[6]
For SMEs and startups, each cap applies as the lower of the two amounts.
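The fine structure is a simple max/min rule over a fixed cap and a turnover percentage. A minimal sketch (the tier names and function are hypothetical; it assumes the large-undertaking rule is "whichever is higher" and the SME rule is "whichever is lower"):

```python
# Fine tiers per violation type: (fixed cap in EUR, share of global annual turnover).
TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),  # up to EUR 35M or 7%
    "high_risk_violation":   (15_000_000, 0.03),  # up to EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.01),   # up to EUR 7.5M or 1%
}

def max_fine(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum fine: higher of the two caps for large undertakings, lower for SMEs."""
    fixed_cap, pct = TIERS[tier]
    turnover_cap = pct * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A company with EUR 1bn turnover: 7% (EUR 70M) exceeds the EUR 35M fixed cap.
print(max_fine("prohibited_practice", 1_000_000_000))
```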
## Key Developer Actions
- Classify your AI systems by risk level
- Document AI use cases and intended purposes
- Implement AI literacy programs for all staff
- Assess high-risk obligations if applicable
- Monitor GPAI requirements for foundation model usage
- Prepare for conformity assessments (high-risk systems)
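The first action, classifying systems by risk level, can be sketched as a first-pass triage. The tier enum mirrors the Act's categories, but the use-case strings below are illustrative examples, not a legal mapping; real classification requires legal review:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"                    # Article 5 banned practices
    HIGH_RISK = "high_risk"                      # Annex III / Article 6 areas
    LIMITED_OR_MINIMAL = "limited_or_minimal"    # everything else

# Hypothetical example mappings drawn from the categories listed above.
PROHIBITED_USES = {"social_scoring", "untargeted_facial_scraping"}
HIGH_RISK_USES = {"recruitment_screening", "credit_scoring", "exam_grading"}

def classify_use_case(use_case: str) -> RiskTier:
    """First-pass triage only; not a substitute for legal assessment."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.LIMITED_OR_MINIMAL

print(classify_use_case("credit_scoring").value)  # high_risk
```

A real inventory would track each system's intended purpose and documentation alongside its tier, feeding the conformity-assessment preparation listed above.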