The EU AI Act: A Complete Guide for Lawyers
Risk categories, timelines, penalties, and what it means for legal practice
Why the AI Act Matters for Lawyers
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on August 1, 2024, and its obligations are rolling out in phases through 2027. For anyone working in EU or UK-adjacent law, understanding this regulation is no longer optional. It touches employment law, financial services, healthcare, criminal justice, and more. It also directly governs how AI tools can be used in legal work itself.
The AI Act is not a technology regulation in the abstract. It creates hard legal obligations for businesses that develop or deploy AI systems in the EU, with enforcement teeth comparable to the GDPR. Companies that get this wrong face fines up to 35 million EUR or 7% of global annual turnover, whichever is higher.
For lawyers, the Act matters on two levels. First, your clients need advice on compliance. Second, AI systems used in legal proceedings, including tools for case outcome prediction or evidence analysis, are classified as high-risk. That classification affects how law firms and courts can adopt AI internally.
The Risk-Based Framework: Four Tiers
The Act organises AI systems into four risk categories. Your obligations depend entirely on which tier your system falls into.
Unacceptable Risk (Prohibited)
These systems are banned outright. No compliant use case exists for them under EU law. The prohibition has been in force since February 2, 2025.
High Risk
These systems are permitted but subject to extensive pre-market and ongoing obligations. They must pass a conformity assessment before deployment. The high-risk obligations kick in from August 2, 2026 for most systems.
Limited Risk
These systems face lighter transparency obligations. Chatbots must disclose that users are interacting with an AI. No conformity assessment is required.
Minimal Risk
AI spam filters, recommendation engines, and similar tools face no specific obligations under the Act.
Key takeaway: The risk tier determines everything. Before advising a client, the first step is always classification. An appliedAI study found that 40% of enterprise AI systems could not be clearly classified under the current framework. Ambiguity at the classification stage is itself a legal risk.
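For teams building internal triage tooling, the ordering of the analysis can be sketched as a simple decision procedure. The snippet below is purely illustrative: the boolean inputs (is_prohibited_practice, is_annex_iii_use_case, interacts_with_humans) are hypothetical placeholders for what is, in practice, a legal assessment under Articles 5 and 6 and Annex III.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific obligations)"


def classify(is_prohibited_practice: bool,
             is_annex_iii_use_case: bool,
             interacts_with_humans: bool) -> RiskTier:
    """Toy triage mirroring the Act's ordering: check prohibitions first,
    then high-risk classification, then transparency, then minimal risk."""
    if is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Hypothetical example: a CV-screening tool is an Annex III employment use case.
print(classify(False, True, True))  # RiskTier.HIGH
```

The point of the sketch is the ordering: prohibitions are checked before anything else, and only systems that clear the high-risk test fall through to the lighter tiers.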
Prohibited AI Practices: Article 5
Article 5 lists the practices banned since February 2, 2025. These represent the EU legislature's judgment that certain uses of AI cannot be reconciled with fundamental rights.
- Subliminal manipulation - AI that uses techniques beyond a person's consciousness to distort their behaviour in harmful ways.
- Exploitation of vulnerabilities - Systems targeting people based on age, disability, or socio-economic situation.
- Social scoring - General-purpose scoring based on social behaviour or inferred personal characteristics. Unlike earlier drafts, the final prohibition is not limited to public authorities.
- Real-time remote biometric identification in public spaces - Prohibited for law enforcement purposes, subject to narrowly defined exceptions.
- Biometric categorisation by sensitive attributes - Inferring race, political opinions, sexual orientation, or religious beliefs from biometric data.
- Emotion recognition in workplaces and schools - With narrow exceptions for medical or safety purposes.
- Predictive policing based solely on profiling - Assessing the risk that an individual will commit a criminal offence based solely on profiling or personality traits.
Violations carry the highest fine tier: up to 35 million EUR or 7% of global annual turnover.
High-Risk AI Systems: The Eight Domains
Annex III lists the categories of AI systems classified as high-risk:
- Biometric identification and categorisation
- Critical infrastructure management - road traffic, water, gas, electricity, digital systems
- Education and vocational training - admissions decisions, exam scoring, learning assessments
- Employment and worker management - CV screening, recruitment, performance evaluation, termination
- Essential services - credit scoring, insurance risk assessment, social benefits eligibility
- Law enforcement - individual risk assessments, polygraphs, evidence evaluation
- Migration and border control - visa applications, asylum claims, border surveillance
- Administration of justice and democratic processes - AI used to assist judicial authorities in researching and interpreting facts and the law, and in applying the law to a concrete set of facts
Key takeaway for legal professionals: AI used in the administration of justice is classified as high-risk. Any AI product marketed to courts or law firms for substantive legal analysis needs a conformity assessment before deployment.
High-Risk Obligations
Providers of high-risk AI systems must meet these requirements before market placement (from August 2, 2026):
- Risk management system - continuous, iterative process across the system's lifecycle
- Data governance - training, validation, and testing datasets must meet quality criteria, with possible biases examined and mitigated
- Technical documentation - prepared before market placement and kept up to date
- Record-keeping and logging - automatic logging of events throughout the system's lifetime
- Transparency - clear instructions about purpose, limitations, and risks
- Human oversight - technical features enabling effective oversight by natural persons
- Accuracy, robustness, and cybersecurity
- Conformity assessment - self-assessment or third-party, depending on system type
- CE marking and registration in the EU database
General-Purpose AI Models
General-purpose AI models (GPAI) are a distinct category addressing foundation models and large language models. All GPAI providers must prepare technical documentation, provide information to downstream providers, establish copyright compliance policies, and publish training data summaries.
GPAI models with systemic risk (trained with compute exceeding 10^25 FLOPs) face additional requirements including adversarial testing, serious incident reporting, and cybersecurity safeguards.
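Because the systemic-risk presumption turns on a numeric compute threshold, the check itself is trivial arithmetic. A minimal sketch, with the training-compute figure assumed for illustration:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold cited in the Act

def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """A GPAI model trained with compute above 10^25 FLOPs is presumed
    to carry systemic risk and picks up the additional obligations."""
    return training_compute_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical figure: a model trained with roughly 3 x 10^25 FLOPs.
print(presumed_systemic_risk(3e25))  # True
```

Since the Commission can adjust this threshold, any real compliance tooling should treat the constant as configurable rather than hard-coded.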
The Implementation Timeline
- August 1, 2024 - Act enters into force. EU AI Office established.
- February 2, 2025 - Prohibited practices (Article 5) and AI literacy obligations enforceable.
- August 2, 2025 - GPAI model rules apply. National competent authorities designated.
- August 2, 2026 - High-risk system obligations (Annex III), transparency rules, and market surveillance take effect. The major commercial deadline.
- August 2, 2027 - Full applicability. Obligations for Annex I high-risk systems (AI embedded in products covered by EU harmonisation legislation) apply, and the compliance deadline arrives for GPAI models placed on the market before August 2, 2025.
Key takeaway: The February 2025 prohibitions are already in force. The August 2026 deadline is the next major pressure point for compliance programs.
The Penalty Structure
- Tier 1: Prohibited practices violations. Up to 35 million EUR or 7% of global annual turnover.
- Tier 2: High-risk and other obligations. Up to 15 million EUR or 3% of global annual turnover.
- Tier 3: Supplying misleading information to authorities. Up to 7.5 million EUR or 1% of global annual turnover.
For SMEs and startups, the lower of the absolute and percentage-based cap applies.
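The interaction between the absolute and percentage-based caps is straightforward arithmetic: the higher of the two applies to most undertakings, the lower of the two to SMEs and startups. A minimal sketch with hypothetical turnover figures:

```python
# (absolute cap in EUR, share of global annual turnover) per penalty tier
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the applicable maximum fine: the higher of the two caps for
    most undertakings, the lower of the two for SMEs and startups."""
    absolute_cap, share = FINE_TIERS[tier]
    turnover_cap = share * global_turnover_eur
    return min(absolute_cap, turnover_cap) if is_sme else max(absolute_cap, turnover_cap)

# Hypothetical turnover figures for illustration only.
print(max_fine("prohibited_practices", 2_000_000_000))            # 140000000.0
print(max_fine("prohibited_practices", 20_000_000, is_sme=True))  # 1400000.0
```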
National Implementation Status
As of early 2026, only Denmark and Italy have enacted dedicated national AI legislation. Most member states are still designating competent authorities, and the enforcement landscape is fragmented. Compliance obligations are uniform, but enforcement intensity will vary by jurisdiction.
The Digital Omnibus Complication
The Digital Omnibus proposal includes provisions that would delay the August 2026 high-risk deadline and narrow certain Annex III categories. However, as of March 2026, the proposal has not entered trilogue. Advising clients to delay compliance in anticipation carries real risk: if the proposal stalls, clients relying on a delayed timeline will be exposed.
The practical advice: plan compliance against the current August 2026 deadline while monitoring the Digital Omnibus closely.
What This Means for Legal Careers
The AI Act creates demand across multiple practice areas. Employment lawyers need to understand the high-risk rules for recruitment AI. Financial services lawyers need to advise on credit scoring and insurance obligations. Criminal practitioners need to understand the constraints on law enforcement AI.
AI tools used in the administration of justice are high-risk under Annex III. This includes tools that assist in interpreting or applying law to facts. Understanding these obligations makes you a better user of legal AI and a better adviser to clients adopting the same tools.
How to Research the AI Act
The primary text is on EUR-Lex (search for Regulation 2024/1689). artificialintelligenceact.eu provides article-by-article analysis with commentary. The EU AI Office publishes guidance documents and template documentation.
For cross-referencing the AI Act with related regulations like the GDPR, Product Liability Directive, or sector-specific frameworks, tools like Venato can pull citations across instruments without manually navigating multiple EUR-Lex searches.
Where the AI Act Is Heading
The Act is a live regulatory instrument. The Commission can update the Annex III list, adjust the GPAI systemic risk threshold, and issue delegated acts on conformity assessment. The Digital Omnibus negotiations will be the next major legislative event.
Junior lawyers who build fluency in this framework now, while it is still early, will be better positioned than those who wait for the case law to arrive. The AI Act is also a template. Canada, the UK, Brazil, and other jurisdictions are developing AI frameworks influenced by the EU's risk-based model. EU AI Act expertise translates.