EU AI Act
Regulation (EU) 2024/1689
The world's first comprehensive legal framework for artificial intelligence. Classifies AI systems by risk level and assigns obligations proportional to potential harm.
If your company builds software that makes predictions, recommendations, or decisions -- and that software touches anyone in Europe -- you should know about the EU AI Act. It is the first law in the world that regulates artificial intelligence across an entire economy. It was adopted in 2024, is already partially in effect, and has its biggest compliance deadlines in 2026 and 2027.
The basic idea is proportional regulation. Not all AI is treated the same. A spam filter or a video game recommendation engine faces almost no rules. But an AI system that helps decide who gets a mortgage, who gets hired, or who gets flagged at a border crossing is considered high-risk and must meet strict requirements: documentation, testing, human oversight, and public registration. A small number of AI uses -- like government social scoring or covert manipulation of vulnerable people -- are banned entirely.
The law also covers large foundation models (think GPT, Gemini, or Claude) separately. Companies that train these models must publish summaries of their training data, comply with copyright rules, and -- if the model is powerful enough -- run safety evaluations and report serious incidents. These rules already apply as of August 2025.
Enforcement is serious. Fines can reach EUR 35 million or 7% of a company's worldwide annual revenue, whichever is higher. That said, the EU has built in protections for startups and small businesses: for SMEs, penalties are calculated using the lower of the two figures rather than the higher. The law is not designed to stop innovation. It is designed to make sure AI that can cause real harm to people is built and used responsibly.
The EU AI Act is the first law anywhere in the world that regulates artificial intelligence across an entire economy. It applies to any company that builds, sells, or uses AI systems in the European Union -- regardless of where that company is based. If your AI touches EU residents, this law likely applies to you.
The core idea is simple: the riskier the AI, the stricter the rules. A spam filter has almost no obligations. An AI system that decides who gets a loan or who gets hired faces heavy requirements -- documentation, testing, human oversight, and registration in a public EU database. A handful of AI practices, like mass surveillance scoring or manipulating vulnerable people, are banned outright.
The law does not apply all at once. It phases in over three years, starting with bans on the most dangerous practices (in effect since February 2025) and ending with rules for AI embedded in regulated products like medical devices and cars (August 2027). Most companies building or deploying AI need to be ready by August 2026.
Penalties are severe: up to EUR 35 million or 7% of global annual turnover for the worst violations. Even providing incorrect information to regulators can cost up to EUR 7.5 million or 1% of turnover.
The AI Act's obligations phase in over three years. Each phase adds requirements for a new category of AI systems.
The AI Act's defining feature is its risk-based approach. Every AI system falls into one of four tiers. Click each tier to see obligations and examples.
Annex III defines eight categories of AI systems that are automatically classified as high-risk. If your system falls into any category below, it is subject to the full set of high-risk obligations.
The AI Act creates distinct obligations based on your role in the AI value chain -- provider, deployer, importer, or distributor. The most common point of confusion is which role applies to your organisation.
The AI Act's newest and most debated provisions target general-purpose AI (GPAI) models -- foundation models like large language models that can be adapted for many tasks. These rules took effect in August 2025.
Select your company type for tailored compliance guidance.