From AI Act to AI Factories: How Europe Is Building a Regulated AI Super-Infrastructure

Abhishek Gautam · 8 min read

Quick summary

The EU AI Act's enforcement timeline is active and the EU is simultaneously building AI Factories — national compute clusters for European AI development. Here is what the dual strategy means for developers, enterprises, and the global AI infrastructure landscape.

Europe is making two simultaneous bets on artificial intelligence — and they pull in opposite directions.

The first bet is regulation. The EU AI Act, which entered force in August 2024, is the world's most comprehensive legal framework for artificial intelligence. Its enforcement provisions are rolling out in phases through 2027, with the most significant obligations for AI developers and deployers starting in February 2025 and expanding through 2026.

The second bet is infrastructure. The EU's AI Factories programme — part of the broader EuroHPC Joint Undertaking — is spending billions to build national AI compute clusters across Europe, explicitly designed to ensure that European AI development does not depend entirely on US hyperscaler infrastructure.

Together, these two bets represent Europe's theory of AI sovereignty: build domestic compute capacity while setting the rules that govern how AI is developed and deployed globally. Whether this strategy can compete with the unregulated acceleration happening in the US and China is the defining question for European tech in 2026.

The EU AI Act Enforcement Timeline

The EU AI Act uses a phased enforcement approach:

August 2024: Act entered into force. Six-month countdown to first enforcement deadline began.

February 2025 (active now): Prohibited AI practices banned. These include AI systems that:

  • Manipulate users through subliminal techniques beyond their awareness
  • Exploit vulnerabilities of specific groups (age, disability, social/economic situation)
  • Implement social scoring that leads to detrimental or disproportionate treatment of individuals
  • Use real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions, such as targeted searches for victims or imminent threats)
  • Deploy emotion recognition in workplaces or educational institutions
  • Create facial recognition databases by scraping internet images

Any AI system deployed in the EU that falls into these prohibited categories is now illegal. Penalties for violations: up to €35M or 7% of global annual turnover, whichever is higher.
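The penalty cap scales with company size because it is the higher of a fixed floor and a turnover percentage. A minimal sketch of that arithmetic (the function name is illustrative, not from the Act):

```python
def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for prohibited-practice violations under the EU AI Act:
    the higher of a fixed EUR 35M or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A firm with EUR 1B turnover faces a cap around EUR 70M;
# a small firm is still exposed to the full EUR 35M floor.
big_cap = max_penalty_eur(1_000_000_000)   # ~EUR 70M
small_cap = max_penalty_eur(10_000_000)    # EUR 35M floor applies
```

In practice the fine actually levied depends on the infringement's nature, gravity, and duration; the formula above is only the statutory ceiling.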

August 2025 (active now): General-purpose AI (GPAI) model obligations began. This is where it gets directly relevant to AI developers:

  • All GPAI model providers must maintain technical documentation, comply with EU copyright law for training data, and publish a sufficiently detailed summary of the content used for training.
  • Systemic risk GPAI models (those trained with more than 10^25 FLOP, approximately the compute threshold for GPT-4-scale models) face additional obligations: adversarial testing (red-teaming), incident reporting to the EU AI Office, cybersecurity measures, and energy efficiency reporting.
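For a rough self-check against the 10^25 FLOP presumption, the common 6ND rule of thumb (training FLOPs ≈ 6 × parameters × training tokens) is useful. Note this heuristic is a community convention, not the Act's official measurement methodology:

```python
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # EU AI Act presumption for GPAI systemic risk

def estimated_training_flop(params: float, tokens: float) -> float:
    """Rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flop(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

# A 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOP -> below the threshold
print(presumed_systemic_risk(7e10, 1.5e13))   # False

# A 400B-parameter model on 15T tokens: 3.6e25 FLOP -> above
print(presumed_systemic_risk(4e11, 1.5e13))   # True
```

Models near the boundary should not rely on this heuristic; the AI Office can also designate models as systemic-risk on other grounds.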

August 2026 (upcoming): High-risk AI system obligations fully apply. This covers AI systems in:

  • Critical infrastructure (energy, water, transport)
  • Education (access to educational institutions, assessment)
  • Employment (recruitment, work performance monitoring, promotion, termination)
  • Essential services (credit scoring, insurance risk assessment)
  • Law enforcement
  • Migration and border control
  • Administration of justice

High-risk AI systems require: conformity assessment, registration in EU database, human oversight mechanisms, technical documentation, post-market monitoring, and transparency to users.
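The requirement list above lends itself to a simple gap analysis per system. A hypothetical sketch (item names paraphrase the Act, not official terminology):

```python
# Obligations that apply to high-risk AI systems from August 2026.
HIGH_RISK_OBLIGATIONS = [
    "conformity_assessment",
    "eu_database_registration",
    "human_oversight",
    "technical_documentation",
    "post_market_monitoring",
    "user_transparency",
]

def gaps(completed: set[str]) -> list[str]:
    """Return obligations not yet satisfied for a high-risk system."""
    return [o for o in HIGH_RISK_OBLIGATIONS if o not in completed]

remaining = gaps({"human_oversight", "technical_documentation"})
print(remaining)  # the four obligations still open
```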

What GPAI Obligations Mean for Developers

The GPAI provisions (active since August 2025) affect any company that develops or deploys a general-purpose AI model accessible to EU users. This includes:

OpenAI (GPT-4o, o3): Must publish training data summaries, maintain technical documentation, conduct red-teaming. GPT-4o almost certainly meets the systemic risk threshold.

Anthropic (Claude): Same obligations. Claude 3.5+ models likely meet systemic risk threshold.

Google DeepMind (Gemini): Same obligations.

Meta (Llama 3, Llama 4): Open-weight models create interesting compliance questions. Meta has argued that open-weight models should have reduced obligations since they cannot control downstream use. The EU AI Office has not fully resolved this question.

Smaller model developers: Models below the 10^25 FLOP threshold face lighter obligations — primarily documentation and copyright compliance for training data. This keeps the compliance burden manageable for European AI startups.

The EU AI Office (established under the Act) is the primary enforcement body for GPAI models. National competent authorities handle other AI categories.

The AI Factories Programme

Simultaneously with AI Act enforcement, the EU is building AI Factories — national supercomputing facilities specifically optimised for AI training and inference. This is funded under the EuroHPC Joint Undertaking and national co-funding.

Current AI Factory deployments:

LUMI (Finland): One of the world's most powerful supercomputers, located in Kajaani. AMD Instinct MI250X GPUs, with peak performance above 500 petaFLOPS. Accessible to European researchers and companies via the EuroHPC allocation process.

Leonardo (Italy): The Cineca supercomputer in Bologna. NVIDIA A100 GPUs, 250 petaFLOPS. Significant allocation for AI workloads.

MareNostrum 5 (Spain): BSC in Barcelona. Mix of Intel Xeon Max (Sapphire Rapids HBM) and NVIDIA Hopper GPUs.

JUPITER (Germany): Jülich Supercomputing Centre. Europe's first exascale system, with a 1 exaFLOP target. Its AI compute comes primarily from NVIDIA GH200 Grace Hopper superchips in the Booster module, paired with a general-purpose Cluster module built on European SiPearl Rhea processors.

Adastra (France): GENCI / CEA. AMD MI250X. 46.1 petaFLOPS.

The EU is also funding a next tier of AI Factory facilities targeting frontier-scale model training capability, equivalent to the compute used for GPT-4 or Claude 3 Opus. These are in planning and procurement phases as of 2026.

The Sovereignty Tension

The AI Factories programme exists because of a specific fear: that European AI research and development will become entirely dependent on US hyperscaler infrastructure (AWS, Azure, GCP) operating under US law and US export control regimes.

This fear has a concrete basis. EU researchers relying on US cloud services for compute are subject to:

  • US export control regulations (EAR) that could restrict access if a researcher's institution or research topic becomes subject to control
  • US CLOUD Act provisions allowing US law enforcement to access data stored on US cloud infrastructure
  • Potential disruption if US-EU relations deteriorate (not an immediate risk, but one European policymakers now plan for)

The AI Factories provide European sovereignty over AI compute at the research level. Whether they are competitive with commercial hyperscaler infrastructure for production AI applications is a separate question, and for now the answer is generally no. LUMI and Leonardo are excellent for research; they are not cost-competitive with AWS for production inference workloads.

What This Means for Developers Building for EU Markets

Compliance is now an active requirement, not a future concern. If you deploy AI to EU users and your system falls into high-risk categories (launching August 2026), or if you provide a GPAI model (obligations since August 2025), you need a compliance programme in place.

Start with classification. Map every AI system you deploy against the EU AI Act risk categories. Most AI features in consumer apps will not be high-risk. But recruitment tools, credit assessment tools, educational assessment tools, and employee monitoring tools almost certainly are.
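The classification exercise above can start as a simple inventory mapping. A hypothetical sketch (category names paraphrase Annex III; the real exercise needs legal review):

```python
# Paraphrased Annex III high-risk use categories (illustrative, not exhaustive).
ANNEX_III_HIGH_RISK = {
    "recruitment", "credit_scoring", "insurance_risk",
    "educational_assessment", "employee_monitoring",
    "critical_infrastructure", "law_enforcement",
    "migration_border", "justice_administration",
}

def classify(system_purpose: str) -> str:
    """First-pass risk tier for an AI system, keyed on its intended purpose."""
    if system_purpose in ANNEX_III_HIGH_RISK:
        return "high-risk"
    return "minimal/limited-risk"  # may still carry transparency duties

inventory = {
    "resume-screener": "recruitment",
    "support-chatbot": "customer_support",
    "loan-model": "credit_scoring",
}
for name, purpose in inventory.items():
    print(name, "->", classify(purpose))
```

A first pass like this flags the systems that need a full legal assessment; it does not replace one.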

Document your training data. The GPAI copyright compliance requirement means you need to know what data your model (or any model you fine-tune) was trained on and be able to represent that you have rights to use it. If you are using third-party models, obtain documentation from the provider.
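A per-dataset manifest is one way to keep that documentation auditable. The field names below are illustrative assumptions, not the EU AI Office's official template:

```python
import json

# Hypothetical training-data summary entry for one dataset.
dataset_record = {
    "name": "web-crawl-2024-q3",
    "source": "public web crawl",
    "license_basis": "EU DSM Art. 4 TDM exception, opt-outs honoured",
    "opt_out_mechanism": "robots.txt checked at crawl time",
    "approx_tokens": 1.2e12,
    "pii_filtering": "regex + NER pass before training",
}

# Serialise for inclusion in the model's technical documentation.
print(json.dumps(dataset_record, indent=2))
```

Keeping one such record per dataset makes it straightforward to assemble the "sufficiently detailed summary" the GPAI rules require, and to answer provider-documentation requests when fine-tuning third-party models.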

Implement human oversight mechanisms for high-risk systems. The Act requires that high-risk AI systems be designed to allow human override and that deployers maintain logs of system decisions for post-market monitoring.
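A minimal sketch of the logging-plus-override pattern this implies (the structure is illustrative, not a compliance template):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    """One logged system decision, with room for a human override."""
    subject_id: str
    model_output: str
    overridden: bool = False
    final_output: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class DecisionLog:
    def __init__(self) -> None:
        self._records: list[Decision] = []

    def record(self, subject_id: str, model_output: str) -> Decision:
        d = Decision(subject_id, model_output, final_output=model_output)
        self._records.append(d)
        return d

    def human_override(self, decision: Decision, new_output: str) -> None:
        # Preserve the original model output; only the final outcome changes.
        decision.overridden = True
        decision.final_output = new_output

    def audit_trail(self) -> list[Decision]:
        return list(self._records)

log = DecisionLog()
d = log.record("applicant-42", "reject")
log.human_override(d, "manual review")   # a reviewer takes over
print(d.overridden, d.final_output)      # True manual review
```

The two properties that matter for post-market monitoring are that every decision is logged before it takes effect, and that an override leaves the original model output intact in the audit trail.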

Consider EU AI Factories for research workloads. If you are a European researcher or startup, EuroHPC allocation provides cost-effective access to substantial GPU compute without US infrastructure dependence.

Europe's dual AI strategy — regulate and build — is ambitious and coherent. Whether it produces global AI competitiveness or just global regulatory templates is still being determined.

