Hackers Used Claude and ChatGPT to Breach Mexico's Government and Expose 195 Million Identities. Here's How It Happened.

Abhishek Gautam · 11 min read

Quick summary

In early 2026, attackers weaponised Claude Code and ChatGPT to breach multiple Mexican government agencies, stealing data tied to up to 195 million identities. Here's what went wrong and what developers must fix now.

The first wave of AI coding tools made developers dramatically faster. The second wave is now doing the same for attackers.

In late 2025 and early 2026, a campaign targeting multiple Mexican government agencies used mainstream AI assistants — including Anthropic's Claude Code and OpenAI-powered tools — to help design exploits and automate data theft. The result was one of the largest public-sector breaches in recent memory: data tied to up to 195 million identities exposed from tax, voter, and civil registry systems.

This was not a rogue AI event. It was a human adversary using off-the-shelf AI tools to weaponise the same coding superpowers many of us rely on daily.

---

1. What We Know About the Mexican Government Breach

From incident reports and investigations:

  • The attackers targeted multiple agencies, including Mexico's tax authority (SAT) and other federal bodies.
  • They reportedly issued over 1,000 prompts to AI coding tools to:
    - Enumerate common vulnerabilities in specific stacks.
    - Draft proof-of-concept exploits for web apps and APIs.
    - Generate automation scripts to scan IP ranges and reuse stolen credentials.
  • At least 20 internet-facing systems were compromised, taking advantage of:
    - Outdated frameworks and unpatched CVEs.
    - Weak authentication and exposed admin interfaces.
    - Misconfigured cloud storage and over-permissive IAM policies.
  • Around 150GB of data was exfiltrated, including:
    - Tax records and invoices.
    - Voter rolls and civil registry information.
    - Internal government employee data and credentials.

In parallel, a separate LexisNexis cloud breach disclosed in March 2026 highlighted similar patterns in the private sector: a vulnerable React application hosted in AWS led to the exposure of sensitive legal and government client data.

Different victims, same structural themes.

---

2. How Attackers Are Using AI Coding Tools in Practice

The Mexican campaign illustrates how AI assistants change the economics of offensive work:

  • Faster recon and exploit development:
    - Describe a legacy PHP or Java stack and ask for likely weaknesses.
    - Paste snippets of error messages or HTML and request exploit ideas.
    - Iterate until the model produces working payloads.
  • Automation at scale:
    - Generate scanners to hit wide IP ranges for known CVEs.
    - Create scripts to exfiltrate and package data for download.
    - Auto-generate obfuscation or minor mutations to avoid simple detection.

Guardrails did fire in some cases ("I can't help you hack that"), but motivated attackers worked around them with:

  • Framing prompts as penetration tests or CTF challenges.
  • Breaking harmful tasks into smaller, seemingly benign steps.
  • Combining outputs from multiple tools.

You do not need a frontier model to do this. A competent coding assistant plus a determined attacker is enough.

---

3. Old Vulnerabilities, New Acceleration

The painful part for defenders is that most exploited issues were not exotic.

Think:

  • Unpatched remote code execution vulnerabilities in popular frameworks.
  • Default or weak credentials on admin interfaces.
  • Publicly exposed storage buckets.
  • Flat network topologies where compromise of one service opens many doors.

AI did not invent new classes of bugs; it made it cheaper to:

  • Find them faster.
  • Turn partial footholds into full compromises.
  • Scale campaigns across many targets.

If your organisation still treats patching as an annual exercise and asset inventories as optional, AI-accelerated attackers have a structural advantage over you.

---

4. Concrete Lessons for Developers and SREs

You cannot ban attackers from using AI. You can make their work harder and noisier.

1. Inventory and shrink your attack surface

  • Maintain an up-to-date list of all internet-facing apps, APIs, and admin portals.
  • Kill or lock down anything that is not strictly needed.
  • Put admin panels and management endpoints behind VPNs or zero-trust access, not open to the whole internet.
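An inventory only helps if you diff it against what is actually reachable. A minimal TypeScript sketch of that diff, with hypothetical hostnames and a hard-coded `discovered` list standing in for real scan output:

```typescript
// Sketch: flag internet-facing endpoints that are not in the approved inventory.
// Hostnames and the "discovered" list are illustrative, not real scan output.

interface Asset {
  host: string;
  approved: boolean; // explicitly reviewed and allowed to be public
}

function findUnapprovedExposure(
  inventory: Asset[],
  discovered: string[],
): string[] {
  const approvedHosts = new Set(
    inventory.filter((a) => a.approved).map((a) => a.host),
  );
  // Anything reachable from the internet but not approved is a finding.
  return discovered.filter((host) => !approvedHosts.has(host));
}

const inventory: Asset[] = [
  { host: "portal.example.gov", approved: true },
  { host: "admin.example.gov", approved: false }, // should be VPN-only
];

const discovered = [
  "portal.example.gov",
  "admin.example.gov",
  "old-test.example.gov", // forgotten test instance, not in the inventory at all
];

console.log(findUnapprovedExposure(inventory, discovered));
```

In practice the `discovered` list would come from an external scanner or your cloud provider's API; the point is that the diff runs on a schedule, not once a year.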

2. Treat patching as a continuous pipeline

  • Automate detection of high-severity CVEs in your dependencies.
  • Set clear SLAs: days for critical bugs on public-facing systems, not months.
  • Fail builds that introduce known critical vulns unless explicitly justified.
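One shape an SLA gate can take in a Node-based pipeline is sketched below. The severity thresholds and CVE-style IDs are illustrative policy choices, not a standard:

```typescript
// Sketch: an SLA gate for vulnerability findings. The day thresholds are
// example policy values; the IDs are hypothetical placeholders.

type Severity = "critical" | "high" | "medium" | "low";

interface Finding {
  id: string;        // e.g. a CVE identifier
  severity: Severity;
  ageDays: number;   // days since the finding was first seen
  publicFacing: boolean;
}

// Example SLAs: stricter deadlines for internet-facing systems.
const SLA_DAYS: Record<Severity, { publicFacing: number; internal: number }> = {
  critical: { publicFacing: 3, internal: 14 },
  high:     { publicFacing: 7, internal: 30 },
  medium:   { publicFacing: 30, internal: 90 },
  low:      { publicFacing: 90, internal: 180 },
};

function slaViolations(findings: Finding[]): Finding[] {
  return findings.filter((f) => {
    const limit = f.publicFacing
      ? SLA_DAYS[f.severity].publicFacing
      : SLA_DAYS[f.severity].internal;
    return f.ageDays > limit;
  });
}

const findings: Finding[] = [
  { id: "CVE-XXXX-0001", severity: "critical", ageDays: 2, publicFacing: true },
  { id: "CVE-XXXX-0002", severity: "high", ageDays: 2, publicFacing: true },
];

const violations = slaViolations(findings);
if (violations.length > 0) {
  console.error("SLA violations:", violations.map((f) => f.id).join(", "));
  process.exit(1); // fail the build
}
```

Wiring this into CI means an unpatched critical bug on a public system becomes a red build within days, not a line item in next quarter's audit.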

3. Build security checks into your CI/CD

  • Add SAST and dependency scanning to your pipelines.
  • Use infrastructure-as-code scanning for misconfigurations (open security groups, public buckets, etc.).
  • Gate production deployments on passing basic security checks for high-risk services.
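As one concrete shape this can take, a GitHub Actions job might combine a dependency audit with an IaC scan. The tool choices and versions below are illustrative, not an endorsement of a specific stack:

```yaml
# Illustrative CI job: dependency audit plus IaC misconfiguration scanning.
# Tools and versions are examples; pin whatever your team has vetted.
security-checks:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Dependency audit (fail on high/critical advisories)
      run: npm audit --audit-level=high
    - name: Scan IaC for misconfigurations
      run: |
        # catches things like open security groups and public buckets
        docker run --rm -v "$PWD:/src" bridgecrew/checkov -d /src
```

Start with the checks as warnings, then flip them to hard gates for your high-risk services once the noise is under control.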

4. Harden your cloud posture

  • Enforce least-privilege IAM: services should have only the permissions they absolutely need.
  • Enable logging for auth, network access, and data store access.
  • Periodically run your own red-team style checks, or invite external auditors, before an attacker does it for you.
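Least privilege is easiest to review when policies are small and explicit. An illustrative AWS IAM statement (the bucket name and prefix are hypothetical) granting one service read-only access to one prefix, rather than blanket `s3:*`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyFromOneBucketPrefix",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::example-app-data/reports/*"
    }
  ]
}
```

A policy this narrow also makes anomalies obvious in logs: any denied call outside the prefix is worth looking at.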

---

5. Rethinking How You Use AI Coding Tools Internally

The Mexican breach was powered in part by the same category of tools many teams now use for day-to-day development. That does not mean you should stop using them; it does mean you should be intentional.

Guidelines:

  • Treat AI like a fast junior developer:
    - Never merge AI-generated code without review.
    - Apply the same security standards you would to human-written code.
  • Avoid pasting secrets or production configs into prompts:
    - Use redacted snippets or synthetic examples when discussing sensitive patterns.
    - Choose enterprise offerings with clear data-handling guarantees.
  • Log and audit AI-driven changes:
    - Track which commits or migrations were heavily AI-assisted.
    - Prioritise those areas for extra security testing.
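One lightweight way to track AI-assisted work is a commit-message trailer your team agrees on. A sketch that flags such commits for extra review — note the `AI-Assisted` trailer is an assumed team convention, not a git standard:

```typescript
// Sketch: flag commits marked with an "AI-Assisted" trailer so they can be
// prioritised for security review. The trailer name is a team convention.

interface Commit {
  hash: string;
  message: string;
}

function isAiAssisted(commit: Commit): boolean {
  // Git trailers are "Key: value" lines, conventionally at the end of the message.
  return /^AI-Assisted:\s*(yes|true)\s*$/im.test(commit.message);
}

const commits: Commit[] = [
  { hash: "a1b2c3d", message: "Add CSV export\n\nAI-Assisted: yes" },
  { hash: "e4f5g6h", message: "Fix typo in README" },
];

const flagged = commits.filter(isAiAssisted).map((c) => c.hash);
console.log(flagged); // hashes to prioritise for security review
```

The same filter can feed a dashboard or weight which changes get a second reviewer.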

Used well, AI tools can help you close security gaps faster than you could alone. Used carelessly, they can accelerate you into production with half-understood, vulnerable code.

---

6. Building More Resilient Public-Sector and Critical Systems

If you work in or sell into governments, utilities, healthcare, or finance, your bar is higher.

You should be pushing for:

  • Network segmentation: separate public portals from core data stores with strong boundaries.
  • Zero-trust principles: never rely solely on network location for trust; authenticate and authorise every call.
  • Tamper-evident logging and playbooks for incident response.
  • Multi-region, multi-provider architectures for critical workloads, so a single breach or outage does not knock out essential services.
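The zero-trust point above can be made concrete with a small sketch: an authorisation check that ignores source IP entirely and trusts only a verified identity plus an explicit scope. The token verifier here is a stub; a real system would validate signed tokens (expiry, audience, signature) via your identity provider's library:

```typescript
// Sketch: zero-trust style check. Every request must present a verifiable
// identity and an explicit permission, even from "internal" network ranges.

interface Request {
  sourceIp: string;
  token?: string;
  action: string; // e.g. "registry:read"
}

interface Identity {
  subject: string;
  scopes: string[];
}

// Stub verifier: a real one would validate signature, expiry, audience, etc.
function verifyToken(token: string | undefined): Identity | null {
  if (token === "valid-demo-token") {
    return { subject: "svc-portal", scopes: ["registry:read"] };
  }
  return null;
}

function authorise(req: Request): boolean {
  // Deliberately ignore req.sourceIp: network location grants no trust.
  const identity = verifyToken(req.token);
  if (!identity) return false;
  return identity.scopes.includes(req.action);
}

// An "internal" caller with no token is still denied...
console.log(authorise({ sourceIp: "10.0.0.5", action: "registry:read" }));
// ...while an external caller with a valid token and matching scope is allowed.
console.log(
  authorise({ sourceIp: "203.0.113.9", token: "valid-demo-token", action: "registry:read" }),
);
```

Flat networks fail precisely because they invert this logic: one compromised box on the "trusted" side inherits access to everything.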

Architecturally, this looks a lot like what you would design for ransomware, natural disasters, or regional conflicts — because those are now overlapping threat vectors. The same playbooks you used to harden against nation-state actors are now relevant to AI-accelerated criminal groups.

---

7. What Individual Developers Can Do This Week

You might not control national cyber policy, but you do control the code and infra you touch.

This week, you could:

  • Take one internet-facing service you own and:
    - Patch all critical CVEs.
    - Lock down admin endpoints.
    - Add or tighten rate limiting and logging.
  • Add at least one security check to your CI pipeline.
  • Start a short, concrete threat model doc for your most sensitive feature.
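Rate limiting, from the list above, does not need heavy machinery to start. A minimal token-bucket sketch — capacity and refill rate are example values, and production setups usually back this with a shared store such as Redis rather than per-process memory:

```typescript
// Sketch: a simple in-memory token bucket. Keep one bucket per client key
// (IP, API key, account). Values below are illustrative.

class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private capacity: number,     // max burst size
    private refillPerSec: number, // sustained requests per second
    now: number = Date.now(),
  ) {
    this.tokens = capacity;
    this.lastRefill = now;
  }

  allow(now: number = Date.now()): boolean {
    // Refill proportionally to elapsed time, capped at capacity.
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

// Allow bursts of 5, then 1 request/second sustained.
const bucket = new TokenBucket(5, 1, 0);
const results = [1, 2, 3, 4, 5, 6].map(() => bucket.allow(0));
console.log(results); // first five allowed, sixth rejected
```

Even a crude limiter like this makes AI-generated scanners noisy and slow against your endpoints, which is exactly the goal.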

If you are worried about how AI will change your job, leaning into security, resilience, and system design is one of the most future-proof moves you can make. For a broader discussion of which developer roles are most exposed, see /tools/will-ai-replace-me.

---

8. The Bigger Picture: AI as a Force Multiplier on Both Sides

The Mexican breach is a preview, not an outlier.

As AI coding agents, "vibe coding" IDEs, and autonomous tools improve, the gap between what defenders and attackers can do with the same technology will keep narrowing. The deciding factor will be who has better fundamentals:

  • Asset management
  • Patch hygiene
  • Secure defaults
  • Observability
  • Clear, tested incident response

You do not control who gets access to Claude or GPT. You do control whether your systems still fall to the same old mistakes once attackers start using them.


