Artificial Intelligence Act

Overview

The EU AI Act is the world's first comprehensive horizontal regulation of artificial intelligence. Adopted in 2024 and in force since August 1, 2024, it establishes a risk-based framework that categorizes AI systems by their potential for harm and imposes requirements accordingly.[1]

The regulation aims to ensure AI systems placed on the EU market are safe, respect fundamental rights, and foster innovation through legal certainty.

Phased Application Dates

  • August 1, 2024: AI Act enters into force
  • February 2, 2025: Prohibited AI practices banned; AI literacy obligations apply
  • August 2, 2025: GPAI model obligations and governance rules apply
  • August 2, 2026: Full application for high-risk AI systems
  • August 2, 2027: Requirements apply to high-risk AI embedded in products covered by Annex I product safety legislation

Risk Categories

Prohibited AI Practices (Article 5)[2]

The following AI practices are banned from February 2, 2025:

  • Subliminal manipulation: Techniques beyond a person's conscious awareness that materially distort behavior
  • Exploitation of vulnerabilities: Targeting age, disability, or socio-economic circumstances
  • Social scoring: Classifying people based on behavior leading to detrimental treatment
  • Predictive policing: Individual crime prediction based solely on profiling
  • Untargeted facial scraping: Creating facial recognition databases from public sources
  • Emotion recognition: In workplaces and educational institutions (with exceptions)
  • Biometric categorization: Inferring sensitive attributes like race, politics, or religion
  • Real-time remote biometric identification: In public spaces for law enforcement (with exceptions)

High-Risk AI Systems (Article 6)[3]

AI systems in these areas are subject to strict requirements:

  • Biometric identification and categorization
  • Critical infrastructure management (energy, transport, water, gas)
  • Education and vocational training (admissions, assessments)
  • Employment (recruitment, performance evaluation, termination)
  • Essential services access (credit scoring, emergency services)
  • Law enforcement (evidence evaluation, risk assessment)
  • Migration and border control
  • Justice and democratic processes

General Purpose AI (GPAI) Models

Providers of GPAI models must:

  • Maintain technical documentation
  • Provide information for downstream provider compliance
  • Establish copyright compliance policies
  • Publish training content summaries

Systemic risk GPAI (models trained with >10^25 FLOPs) have additional obligations including adversarial testing and incident reporting.[4]
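The 10^25 FLOPs threshold can be checked against a rough compute estimate. The sketch below uses the common 6 × parameters × training-tokens approximation from the scaling-law literature; that heuristic, and the example figures, are illustrative assumptions, not part of the Act.

```python
# Rough check of the AI Act's systemic-risk compute threshold (10^25 FLOPs).
# The 6 * N * D approximation is a widely used heuristic, not an official method.

SYSTEMIC_RISK_FLOPS = 1e25  # threshold named in the AI Act for GPAI systemic risk


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * training tokens."""
    return 6 * n_params * n_tokens


def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return estimate_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the 1e25 threshold.
print(is_systemic_risk(7e10, 1.5e13))  # False
```

Note that the Commission can also designate models as systemic-risk on other grounds, so a compute estimate alone is not conclusive.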

Requirements for High-Risk AI

  • Risk Management: Establish, document, and maintain a risk management system
  • Data Governance: Ensure training data is relevant, representative, and, to the best extent possible, free of errors
  • Technical Documentation: Prepare detailed system documentation before placing the system on the market
  • Record-Keeping: Automatically log system operations
  • Transparency: Provide clear instructions for use and information on capabilities and limitations
  • Human Oversight: Enable effective human supervision and intervention
  • Accuracy & Robustness: Ensure consistent performance across intended use cases
  • Cybersecurity: Protect against unauthorized access and manipulation

AI Literacy Obligation (Article 4)

From February 2, 2025, all providers and deployers must ensure:

"Staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy."[5]

This applies broadly across all AI use cases, not just high-risk systems.

Penalties

  • Prohibited AI practices: Up to €35 million or 7% of global annual turnover, whichever is higher
  • High-risk violations: Up to €15 million or 3% of global annual turnover, whichever is higher
  • Supply of incorrect information: Up to €7.5 million or 1% of global annual turnover, whichever is higher[6]

For SMEs and startups, fines are capped at the lower of the two amounts in each tier.
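The fine ceilings above can be sketched as a small calculation. The tier names and helper below are illustrative; the higher-of rule (lower-of, for SMEs) reflects Article 99 as described here.

```python
# Sketch of the Article 99 fine ceilings: the applicable maximum is the
# higher of a fixed amount and a percentage of worldwide annual turnover;
# for SMEs and startups, the lower of the two applies instead.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}


def max_fine(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Return the fine ceiling in EUR for the given violation tier."""
    fixed, pct = FINE_TIERS[tier]
    turnover_based = pct * annual_turnover_eur
    return min(fixed, turnover_based) if sme else max(fixed, turnover_based)


# A large firm with €2bn turnover committing a prohibited-practice violation
# faces a ceiling of max(€35m, 7% * €2bn) = €140 million.
print(max_fine("prohibited_practice", 2e9))
```

The same call with `sme=True` yields the €35 million fixed cap instead, since it is the lower of the two amounts.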

Key Developer Actions

  1. Classify your AI systems by risk level
  2. Document AI use cases and intended purposes
  3. Implement AI literacy programs for all staff
  4. Assess high-risk obligations if applicable
  5. Monitor GPAI requirements for foundation model usage
  6. Prepare for conformity assessments (high-risk systems)
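Step 1 above, triaging systems by risk tier, can be sketched as a simple lookup. The use-case labels and sets below are simplified illustrations; a real classification must work through Articles 5 and 6 and Annex III case by case.

```python
# Illustrative triage helper for classifying AI systems by AI Act risk tier.
# The category sets are hypothetical examples, not an exhaustive legal mapping.

PROHIBITED_USES = {"social_scoring", "untargeted_facial_scraping"}
HIGH_RISK_USES = {"recruitment_screening", "credit_scoring", "exam_grading"}


def classify(use_case: str) -> str:
    """Return a first-pass AI Act risk tier for a labeled use case."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high-risk"
    # Everything else still carries transparency and AI literacy duties.
    return "minimal-or-limited-risk"


print(classify("credit_scoring"))  # high-risk
```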

Sources & References

[1] Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence. EUR-Lex: AI Act Official Text.
[2] AI Act Article 5: Prohibited AI Practices. AI Act Explorer: Article 5.
[3] AI Act Article 6: Classification rules for high-risk AI systems. AI Act Explorer: Article 6.
[4] General Purpose AI model obligations. European Commission: GPAI Obligations.
[5] AI Act Article 4: AI Literacy. AI Act Explorer: Article 4.
[6] AI Act Article 99: Penalties. AI Act Explorer: Article 99.