EU Artificial Intelligence Act (AI Act)
The world's first comprehensive AI regulation, establishing a risk-based framework for the development, deployment, and use of artificial intelligence systems within the EU.
Overview
The EU Artificial Intelligence Act (Regulation 2024/1689) was adopted on 13 June 2024 and published in the Official Journal on 12 July 2024. It entered into force on 1 August 2024, with a phased implementation timeline extending through August 2027. It is the world's first comprehensive legal framework governing artificial intelligence.
The AI Act takes a risk-based approach, categorising AI systems into four risk levels. Unacceptable risk AI systems are prohibited outright: these include social scoring by public authorities, real-time remote biometric identification in publicly accessible spaces (with narrow exceptions for law enforcement), emotion recognition in workplaces and educational institutions, and AI systems that exploit vulnerabilities of specific groups.
High-risk AI systems — used in critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice — must meet stringent requirements before being placed on the EU market. These include risk management systems, data governance, technical documentation, record-keeping, transparency and information to deployers, human oversight, accuracy, robustness, and cybersecurity.
General-purpose AI (GPAI) model providers face specific transparency obligations: maintaining technical documentation, putting in place a policy to comply with EU copyright law, and publishing a sufficiently detailed summary of the content used for training. GPAI models with systemic risk (presumed where cumulative training compute exceeds 10^25 FLOPs, or designated as such by the Commission) face additional obligations: model evaluation, adversarial testing, serious-incident tracking and reporting to the AI Office, and adequate cybersecurity protection.
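The compute threshold above is a simple numeric test, and can be sketched as a one-line check. This is an illustrative sketch only; the function name and the idea of reducing the presumption to a single comparison are assumptions for clarity, not an official classification procedure (the Commission may also designate models below the threshold).

```python
# Illustrative sketch of the systemic-risk compute presumption for GPAI
# models. The 10^25 FLOPs figure comes from the AI Act; the function name
# and interface are hypothetical.
SYSTEMIC_RISK_FLOPS = 1e25  # cumulative training compute threshold


def is_presumed_systemic_risk(training_flops: float) -> bool:
    """Return True when cumulative training compute exceeds 10^25 FLOPs,
    triggering the presumption of systemic risk. Commission designation
    can still apply below this threshold and is not modelled here."""
    return training_flops > SYSTEMIC_RISK_FLOPS


print(is_presumed_systemic_risk(3e25))  # True: above the threshold
print(is_presumed_systemic_risk(5e24))  # False: below the threshold
```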
Limited-risk AI systems (such as chatbots and deepfake generators) must meet transparency requirements, ensuring users know they are interacting with AI or viewing AI-generated content. Minimal-risk AI systems (like spam filters and AI-enabled video games) may be used freely without additional obligations.
Implementation follows a staggered timeline: prohibited practices provisions apply from February 2025; GPAI model and governance rules from August 2025; most high-risk system obligations from August 2026; and certain high-risk categories integrated into existing sectoral legislation from August 2027. The European AI Office coordinates enforcement, supported by national market surveillance authorities.
Key Articles & Provisions
Prohibited AI practices
Bans social scoring, manipulative subliminal techniques, exploitation of vulnerabilities, real-time remote biometric identification in public spaces (with limited law enforcement exceptions), and workplace/education emotion recognition.
Classification rules for high-risk AI systems
Defines high-risk AI by reference to Annex III categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) and products regulated under EU harmonised legislation.
Requirements for high-risk AI systems
Mandates risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity for high-risk systems.
Transparency obligations
Requires providers to ensure AI systems interacting with persons disclose their AI nature. Deployers of deepfakes and AI-generated text on public interest matters must label content as AI-generated.
General-purpose AI models
GPAI providers must maintain technical documentation, comply with copyright law, and publish training data summaries. Systemic risk models (10^25+ FLOPs) face additional evaluation, testing, and reporting duties.
Penalties
Tiered penalty structure: up to €35M or 7% of global turnover for prohibited practice violations; up to €15M or 3% for other AI Act violations; up to €7.5M or 1% for supplying incorrect information.
Penalties & Enforcement
Maximum penalty: up to €35,000,000 or 7% of total annual worldwide turnover, whichever is higher.
Enforcement Examples
- Violations of prohibited AI practices: up to €35M or 7% of global annual turnover
- Non-compliance with high-risk AI requirements: up to €15M or 3% of global annual turnover
- Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1% of global annual turnover
- SMEs and startups benefit from proportionate caps and regulatory sandboxes to reduce compliance burden
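For companies, each fine tier is capped at the higher of a fixed amount and a percentage of worldwide annual turnover. The arithmetic can be sketched as follows; the function name and parameters are illustrative, and the example turnover figures are hypothetical.

```python
# Sketch of the AI Act's tiered penalty caps: for undertakings, the maximum
# fine is the higher of the fixed amount and the turnover percentage.
# Function and parameter names are assumptions for illustration.
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             worldwide_turnover_eur: float) -> float:
    """Return the maximum fine for a given tier: the greater of the fixed
    cap and the percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * worldwide_turnover_eur)


# Prohibited-practice tier (€35M / 7%) for a hypothetical €1bn turnover:
print(max_fine(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 (7% wins)

# Same tier for a hypothetical €100M turnover:
print(max_fine(35_000_000, 0.07, 100_000_000))    # 35000000 (fixed cap wins)
```

The same function covers the other tiers by swapping in €15M / 3% or €7.5M / 1%.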
Check Your Compliance Status
Take our free assessment to evaluate your organisation's compliance posture. Get a personalised report with actionable recommendations in minutes — no sign-up required.
Disclaimer: The information on this page is for educational purposes and does not constitute legal advice. For specific compliance guidance, consult a qualified legal professional in your jurisdiction.
EU Artificial Intelligence Act (AI Act) by Country
Explore how the EU Artificial Intelligence Act is implemented and enforced in each EU member state.
EU Artificial Intelligence Act (AI Act) by Industry
See industry-specific requirements and guidance for the EU Artificial Intelligence Act.