
March 27, 2026
Defense · Cybersecurity · Workforce · AI
There you have it. A federal judge just blocked the Pentagon's blacklisting of Anthropic — and the ruling did not mince words.

"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."

Read that again. A sitting federal judge just said the United States Department of War violated the First Amendment to punish a private company for publicly disagreeing with it.

I said it back in March when this started: this was a mistake. Emil Michael's mistake. Anthropic built the most safety-focused frontier AI in the world. Claude was the most widely deployed frontier AI model across classified networks. And Emil banned them because they refused to enable autonomous weapons and domestic surveillance. Allegedly. In reality, his ego and incompetence were the issue.

Now a federal court has confirmed what the rest of us already knew: this was not a national security decision. It was retaliation.

But here is the uncomfortable truth nobody wants to say out loud: even if the ban gets reversed, the damage is done. The precedent is set. Every AI company watching this saga now knows that building in the DoW means accepting that the DoW can weaponize procurement law to punish you for your views. The chilling effect on safety-focused AI companies entering the defense market will last for years.

We need AI companies that take safety seriously working WITH the government — not being blacklisted for it. The mission was never to have the most compliant AI. It was to have the most capable AND the most safe.

We got this one wrong. The judge said so.

I wrote about exactly this dynamic in my upcoming book, REPLACEMENT.

Now, let's make sure the supply chain risk designation gets removed too.

Time for Emil to go.