EU AI Act 2026: What's Enforced Now and What Global Builders Need to Know

Abhishek Gautam · 12 min read

Quick summary

The EU AI Act is in force. This guide covers which rules apply now, which are coming, and what developers and product teams outside the EU must do: high-risk systems, general-purpose AI obligations, and practical steps for global reach.

The EU AI Act is no longer a draft — it is law. Phased implementation is underway, and the first enforcement actions and guidance are appearing. Whether you are based in the EU or building for a global audience, the Act affects how you design, deploy, and document AI systems. Here is what is enforced now, what is coming, and what builders need to do.

What the EU AI Act Is (Short Version)

The AI Act is a risk-based regulation:

  • It bans certain AI uses outright (e.g. social scoring, manipulative subliminal techniques, and real-time remote biometric identification in public spaces, with narrow exceptions).
  • It imposes strict obligations on high-risk AI systems (e.g. in critical infrastructure, education, employment, essential services, and law enforcement).
  • It regulates general-purpose AI (GPAI) models (foundation models and their downstream use), with tiered rules for the most capable "high-impact" models, which the Act describes as models with "systemic risk".
  • It leaves most other AI uses subject only to transparency and basic standards.

Fines are significant: up to €35 million or 7% of global annual turnover for the most serious violations. Enforcement is shared between national authorities and the new EU AI Office.

What Is Enforced in 2026

Already in force (or entering force in 2026):

  • Prohibitions on the banned practices (applicable since 2 February 2025).
  • Rules for general-purpose AI models (applicable since 2 August 2025): transparency obligations (e.g. technical documentation, disclosure of training data characteristics) and, for the most capable "high-impact" models, stricter obligations (e.g. model evaluation, risk management, incident reporting).
  • Phased application of high-risk AI obligations: most apply from 2 August 2026, with a longer transition for high-risk systems embedded in already-regulated products. Once the relevant annexes and implementing acts are in place, providers and deployers of high-risk systems must meet requirements on data quality, transparency, human oversight, and conformity. Timelines differ by category; 2026 is when many of these start to bite.

What you see in practice: Guidance from the EU AI Office and national bodies, first conformity and documentation expectations, and the first enforcement actions where systems fall clearly under bans or high-risk rules. Companies selling or deploying AI in the EU are auditing their systems, mapping to the risk tiers, and preparing technical and organisational documentation.

High-Risk AI: Who Is Affected

High-risk systems are those used in areas such as: critical infrastructure (energy, transport, water); education and vocational training; employment and worker management; access to essential private and public services (e.g. credit, benefits); law enforcement and justice; migration and border control; and certain democratic processes. If your AI is used for hiring, scoring, admissions, or critical infrastructure control, it is likely high-risk. You need: robust data governance, transparency (e.g. instructions for use, logs), human oversight, accuracy and robustness measures, and conformity assessments. Non-EU providers placing such systems on the EU market are also in scope.

For developers: Build documentation and logging into the product from the start. Know your training data, your model limitations, and your deployment context. Map your system to the Act's annexes; if in doubt, get legal or regulatory advice. High-risk is not "everything with ML" — but it covers a lot of impactful use cases.
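As a sketch of what "logging built into the product from the start" can look like, the decorator below records every model call with its input shape, output, and model version. All names (`logged_inference`, `score_candidate`, the log fields) are illustrative assumptions, not anything the Act prescribes:

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)

def logged_inference(model_version):
    """Wrap a model call so every invocation leaves an audit-friendly log line."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(payload):
            result = fn(payload)
            logging.info(json.dumps({
                "ts": datetime.now(timezone.utc).isoformat(),
                "model_version": model_version,
                # Log input *shape*, not raw content, to avoid leaking personal data.
                "input_fields": {k: type(v).__name__ for k, v in payload.items()},
                "output": result,
            }))
            return result
        return inner
    return wrap

@logged_inference("screener-2.3.0")
def score_candidate(payload):
    # Placeholder model: CV length as a dummy score, capped at 1.0.
    return min(len(payload.get("cv_text", "")) / 1000, 1.0)

print(score_candidate({"cv_text": "x" * 500}))  # prints 0.5
```

The key design choice is that logging is a cross-cutting wrapper, so every deployed model call is captured without relying on each feature team to remember it.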

General-Purpose AI and Foundation Models

General-purpose AI (GPAI) models, including large language models and multimodal models, are regulated in two tiers. All GPAI providers must meet transparency and documentation obligations. The most capable models (designated "high-impact" on criteria such as capability benchmarks and training scale) face additional obligations: model evaluation, risk management, adversarial testing, and incident reporting. Downstream providers that integrate a GPAI model into their own products must ensure the combined system complies; if the end use is high-risk, the full chain must meet the high-risk requirements.

For developers: If you use a third-party foundation model (e.g. from OpenAI, Google, Anthropic, or Mistral), the model provider carries part of the compliance burden, but you are responsible for how you use the model. If your application is high-risk (e.g. resume screening, credit scoring), you need to document the integration, ensure human oversight, and meet the high-risk obligations. If you fine-tune or train your own model and place it on the EU market, you may yourself be a GPAI provider or a high-risk provider; the same logic applies.

Global Reach: Why It Matters Outside the EU

The AI Act has extraterritorial effect. If you offer AI systems to users in the EU — whether you are in the US, India, or anywhere else — you can be in scope. So: if your product is used for hiring in Germany, credit in France, or critical infrastructure in the Netherlands, the Act applies. Many global teams are therefore doing "EU-first" compliance: meet the EU bar, then adapt for other jurisdictions (e.g. US state laws, China's rules) as they emerge. That is a reasonable strategy: the EU is often the strictest; complying for the EU puts you ahead elsewhere.

Practical Steps for Builders

Map your systems: List your AI use cases. For each, determine: is it banned? High-risk? A GPAI integration? Something else? Use the Act's annexes and official guidance; get help if the classification is unclear.
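The mapping step can start as a simple structured inventory. A minimal sketch in Python; the tier names, fields, and example use cases are illustrative and do not replace a legal classification:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers mirroring the Act's structure; labels are illustrative.
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    GPAI_INTEGRATION = "gpai_integration"
    LIMITED = "limited"

@dataclass
class AIUseCase:
    name: str
    purpose: str
    deployed_in_eu: bool
    tier: RiskTier
    needs_review: bool = False  # flag unclear classifications for legal review

def inventory_report(use_cases):
    """Group use cases by risk tier so compliance work can be prioritised."""
    report = {}
    for uc in use_cases:
        report.setdefault(uc.tier.value, []).append(uc.name)
    return report

cases = [
    AIUseCase("resume-screener", "rank job applicants", True, RiskTier.HIGH_RISK),
    AIUseCase("support-chatbot", "answer customer questions", True, RiskTier.GPAI_INTEGRATION),
    AIUseCase("internal-log-search", "semantic search over logs", False, RiskTier.LIMITED),
]

print(inventory_report(cases))
```

Even a table this small makes the next steps concrete: everything in the high-risk bucket gets documentation, oversight, and conformity work first.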

Document everything: Training data (sources, limitations), model behaviour, known limitations, instructions for use, and logs where relevant. This is not optional for high-risk systems or GPAI integrations.
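A documentation record can live next to the model artefact and be versioned with it. The sketch below is loosely inspired by the Act's technical-documentation themes; every field name and value is illustrative, not an official schema:

```python
import json

# Hypothetical per-release documentation record, serialised alongside the model
# so the documentation is versioned with exactly the artefact it describes.
model_record = {
    "system_name": "resume-screener",
    "version": "2.3.0",
    "intended_purpose": "Pre-rank applicants for human review; never auto-reject.",
    "training_data": {
        "sources": ["internal applications 2019-2024 (anonymised)"],
        "limitations": ["under-represents career changers"],
    },
    "known_limitations": ["accuracy drops for non-English CVs"],
    "human_oversight": "A recruiter reviews every ranking before any action is taken.",
    "last_reviewed": "2026-01-15",
}

print(json.dumps(model_record, indent=2))
```

Keeping this machine-readable means it can be checked in CI: a release that bumps the model version without updating the record can fail the build.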

Human oversight and governance: High-risk systems require human oversight. Define roles, escalation paths, and review processes. Governance is not just legal — it is product and engineering.

Monitor the AI Office and national authorities: Guidance and delegated acts will clarify details. Subscribe to updates from the EU AI Office and your national competent authority. Timelines and interpretations will evolve.

Plan for audits and conformity: High-risk systems will need conformity assessments (self-assessment or third-party, depending on the case). Build auditability into your pipeline: versioning, change logs, and clear ownership.
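One way to make a change log audit-friendly is to chain entries with hashes, so any retroactive edit is detectable. A minimal sketch, not a full audit system; the function and field names are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, actor, change):
    """Append a change-log entry that hashes the previous entry, forming a
    tamper-evident chain: editing an old entry breaks every later hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "change": change,
        "prev_hash": prev_hash,
    }
    # Hash the entry contents (before the hash field itself is added).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

log = []
append_entry(log, "alice", "model v2.3.0 deployed to EU region")
append_entry(log, "bob", "decision threshold adjusted after oversight review")
print(log[1]["prev_hash"] == log[0]["hash"])  # prints True: entries are chained
```

Combined with model versioning and named owners, a log like this gives an auditor a clear, verifiable history of who changed what, and when.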

In 2026, the EU AI Act is the new baseline for selling or deploying AI in Europe, and a reference point for the rest of the world. Builders who treat it as a product requirement, not an afterthought, will ship with confidence and avoid the first wave of enforcement. Start mapping, document early, and stay tuned.


Written by

Abhishek Gautam

Full Stack Developer & Software Engineer based in Delhi, India. Building web applications and SaaS products with React, Next.js, Node.js, and TypeScript. 8+ projects deployed across 7+ countries.
