
April 22, 2026
Cybersecurity AI
There you have it. Anthropic built an AI model so dangerous they refused to release it to the public.

An AI designed to run 32-step cyberattacks autonomously. No human needed.

They called it Claude Mythos Preview.

On the same day they granted limited access to Apple and Goldman Sachs, unauthorized users were already inside. Not through Anthropic directly, but through a third-party contractor.

THE JOKES WRITE THEMSELVES.

I have been warning about this for years. The supply chain IS the attack surface. Zero trust doesn't stop at your perimeter. It has to extend to every vendor, every API, every contractor holding a credential.

Anthropic built a machine designed to find and exploit security flaws, then handed a third-party vendor access before locking down their own supply chain.

Most organizations have no idea what their AI vendors' security posture looks like. They sign the contract. They assume someone else secured the pipeline.

Nobody secured this one.

You've been warned.