Topic: AI (106 articles)

The US Government Just Blacklisted an American AI Company for Refusing to Remove Safety Guardrails

On February 27, 2026, the Pentagon formally designated Anthropic a 'supply chain risk to national security', the first time that label has ever been applied to a domestic US company. Anthropic had refused to allow its models to be used for autonomous weapons or mass domestic surveillance; hours later, OpenAI signed a Pentagon deal with those same guardrails intact. Here is what actually happened and why it matters globally.

10 min read

Citadel's Ken Griffin Just Called Out the AI Hype — and He's Not Wrong

Ken Griffin, CEO of Citadel, argued that justifying this year's $500 billion in AI data center spending requires promising to save the world. He also cited a Harvard-identified phenomenon called 'workslop': AI output that looks brilliant in the first two sentences and falls apart below them. Here is what he got right, what it means, and why the most credible AI critics are not the tech skeptics.

8 min read

Claude Code Found 500 Security Bugs That Experts Missed for Decades. Moravec's Paradox Explains Why AI Cracked Cybersecurity First.

Anthropic's Claude Code can scan an entire codebase and find security vulnerabilities the way a skilled hacker would; it has already caught 500 real bugs in open source projects that human experts had missed for years. The reason this happened before AI learned to fold laundry is Moravec's Paradox, and it tells us something important about which jobs are actually safe.

9 min read