The EU AI Act: What Practitioners Need to Know
Europe's landmark AI regulation introduces risk-based requirements for AI systems deployed to EU users. This overview explains the framework, the key categories, and what teams are watching as provisions enter force.
20 April 2025
TL;DR
- The EU AI Act classifies AI systems by risk level, with the strictest rules applying to 'high-risk' categories such as hiring tools, credit scoring, and critical infrastructure
- General-purpose AI models above certain capability thresholds face additional transparency and evaluation obligations
- The Act applies to any organisation deploying covered AI systems to EU users, regardless of where it is headquartered
- Compliance timelines are staggered: the Act entered into force in 2024, and obligations phase in through 2026
- Most enterprise legal teams are treating 2025 as a preparation and gap-assessment year before enforcement fully ramps up
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. Passed by the European Parliament in March 2024 and entering into force in August 2024, the Act takes a risk-based approach: the more serious an AI system's potential consequences, the more stringent its compliance requirements. For organisations deploying AI to users in the European Union, understanding this framework is no longer optional.
The Risk Classification Framework
The Act divides AI systems into four risk categories. Unacceptable-risk systems, such as real-time biometric surveillance in public spaces and social scoring, are prohibited outright. High-risk systems face the most demanding requirements, including mandatory conformity assessments, detailed technical documentation, human oversight mechanisms, and registration in an EU database before deployment. Limited-risk systems, such as chatbots, carry lighter transparency obligations, while minimal-risk systems face no new requirements.
High-risk applications include AI used in hiring and worker management, credit and insurance decisions, critical infrastructure management, educational assessment, and law enforcement. If your organisation uses AI in any of these domains, the compliance obligations are substantial, and implementing them takes real engineering and process work.
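To make the four-tier framework concrete, here is a minimal triage sketch a team might use while inventorying systems. The tier names follow the Act, but the domain list, field names, and matching logic are simplified illustrations, not a legal test:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, oversight, registration"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Illustrative, non-exhaustive mapping of deployment domains to tiers.
# Real classification requires legal analysis of the Act's annexes.
HIGH_RISK_DOMAINS = {
    "hiring", "worker management", "credit scoring", "insurance",
    "critical infrastructure", "educational assessment", "law enforcement",
}

def rough_tier(domain: str, is_prohibited_practice: bool = False) -> RiskTier:
    """First-pass triage for an AI-system inventory; not a legal determination."""
    if is_prohibited_practice:   # e.g. social scoring
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL      # limited-risk triage omitted for brevity

print(rough_tier("hiring"))  # RiskTier.HIGH
```

The value of even a toy model like this is forcing every system in the inventory to receive an explicit tier, so nothing falls through as "unclassified".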
General-Purpose AI Models
A significant addition to the final text covers general-purpose AI (GPAI) models: large foundation models such as those offered by OpenAI, Anthropic, Google, and others. All GPAI providers must meet transparency obligations, including publishing technical documentation, putting a policy in place to comply with EU copyright law, and providing summaries of the data used for training. Models deemed to pose systemic risk (broadly, those trained with more than 10²⁵ FLOPs of compute) face additional obligations including adversarial testing and incident reporting.
This means the major model providers must engage directly with the EU AI Office, even if their end users are enterprises rather than consumers.
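The systemic-risk presumption reduces, in the first instance, to a single compute comparison. The sketch below is illustrative: the 10²⁵ FLOP threshold comes from the Act, while the function and its inputs are assumptions for the example:

```python
SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold presumed by the Act

def presumed_systemic_risk(training_flops: float) -> bool:
    """Rough check against the Act's compute presumption.

    A model above the threshold is presumed to pose systemic risk and
    faces extra obligations (adversarial testing, incident reporting).
    """
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(5e25))  # True: crosses the threshold
print(presumed_systemic_risk(3e24))  # False: below the threshold
```

Note this is only a presumption: the AI Office can also designate models as systemic-risk on other grounds, so the comparison is a starting point rather than the whole test.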
What Organisations Are Doing Now
In practice, most legal and compliance teams are currently focused on three activities: inventorying AI systems in use across the organisation to identify which fall under high-risk categories; assessing gaps between current documentation and Act requirements; and engaging external counsel to interpret provisions that remain ambiguous pending the European Commission's guidance documents.
The Act's staggered timeline gives organisations runway. Prohibited practices rules applied from February 2025. GPAI obligations apply from August 2025. Full high-risk system compliance is required by August 2026. However, teams with complex deployment footprints report that the 2026 deadline is less comfortable than it appears given the documentation and process changes required.
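Because the dates are staggered, it is easy to mishandle them in planning spreadsheets. A small helper like the one below (application dates taken from the Act's timeline; everything else is an illustrative assumption) can flag which obligation sets already apply on a given day:

```python
from datetime import date

# Key application dates from the Act's staggered timeline.
MILESTONES = {
    "prohibited practices": date(2025, 2, 2),
    "GPAI obligations": date(2025, 8, 2),
    "high-risk compliance": date(2026, 8, 2),
}

def obligations_in_force(today: date) -> list[str]:
    """Return the obligation sets whose application date has passed."""
    return [name for name, start in MILESTONES.items() if today >= start]

print(obligations_in_force(date(2025, 9, 1)))
# ['prohibited practices', 'GPAI obligations']
```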
Key Takeaways
- The EU AI Act is now law — it applies to any organisation deploying AI systems to EU users, regardless of headquarters location
- High-risk AI applications in hiring, credit, infrastructure, and law enforcement face the most demanding compliance requirements
- General-purpose model providers above the systemic-risk threshold face additional obligations including adversarial testing and incident reporting
- Compliance timelines run through August 2026, but documentation and process work takes longer than the deadlines imply
- Current best practice is to inventory all AI systems in use, classify each against the risk framework, and begin gap assessments now