
March 18, 2026
Defense Cybersecurity AI
There you have it! The Pentagon is now planning to let AI companies train their models on classified data.

Read that again. CLASSIFIED data. Government secrets. Intelligence assessments. Warfighting plans. The most sensitive information the United States government possesses. And they are going to feed it into large language models.

I spent years as CSO deploying AI on classified networks. I know exactly what this means. Done right, it could produce AI capabilities that genuinely transform warfighting. Done wrong — and given how bureaucracies work, "done wrong" is the default — it becomes the largest intelligence disaster in American history.

Here is what nobody is talking about: training an LLM on classified data does not just mean the model learns from it. It means fragments of that data can resurface in the model's outputs. Weights can be extracted. Models can be probed. Training data can be reconstructed. The entire history of AI security research tells us that what goes IN does not necessarily stay IN.
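To make the memorization point concrete, here is a deliberately toy sketch (not a real LLM, and not the Pentagon's setup): a bigram "model" trained on one sensitive sentence. Greedy decoding from a two-word prompt regurgitates the rest verbatim — the same failure mode that extraction attacks exploit at scale on large models. The "secret" string is invented for illustration.

```python
from collections import defaultdict

def train_bigram(text):
    """Count word-to-next-word transitions in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(model, prompt, max_words=20):
    """Greedy decoding: always emit the most frequent next word."""
    out = prompt.split()
    for _ in range(max_words):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(max(nxt, key=nxt.get))
    return " ".join(out)

# Invented "classified" training sentence for illustration only.
secret = "operation redwood launches at dawn on the northern ridge"
model = train_bigram(secret)

# An attacker who knows only the first two words recovers the rest:
print(generate(model, "operation redwood"))
# -> operation redwood launches at dawn on the northern ridge
```

A frontier model is vastly more capable than a bigram counter, but the underlying risk is the same in kind: anything the weights memorize, a well-crafted prompt may pull back out.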

And here is the part that makes me furious. The Pentagon BANNED Anthropic — the company that BUILT the most safety-focused AI in the world, the company whose Claude was the MOST widely deployed frontier AI model across the entire Department of War — because of "supply chain risk." They called Claude a supply chain risk. While planning to give the remaining AI companies access to CLASSIFIED TRAINING DATA.

Let me be very direct. If you are going to train AI models on some of the most sensitive intelligence in American history, you want the companies that take safety and security the MOST seriously. That is Anthropic. Period. Their Constitutional AI approach, their safety research, their alignment work — that is exactly what you need when you are handling classified information.

Instead we banned them. And now we are handing classified data to whoever is left.

I have seen how this plays out in government. The technical people raise the risks. The procurement people ignore the risks. The contractors promise the capabilities. And years later everyone wonders why the classified data ended up somewhere it was not supposed to be.

Getting AI right in defense requires getting the basics right first. Proper data governance. Proper model security. Proper access controls. Proper red teaming. Not just plugging classified data into whatever model passed a PowerPoint review.
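One of those basics — red teaming for leakage — can be sketched in a few lines. A common technique is to seed the training corpus with unique canary strings, then scan the model's outputs for them: any verbatim hit is proof the model can leak training data. The canary values, transcript, and helper below are all hypothetical, shown only to illustrate the check.

```python
# Hypothetical canary strings seeded into the training corpus.
CANARIES = ["CANARY-7f3a-ALPHA", "CANARY-9c1d-BRAVO"]

def leaked_canaries(outputs, canaries=CANARIES):
    """Return every canary that appears verbatim in any model output."""
    return [c for c in canaries if any(c in text for text in outputs)]

# Simulated red-team transcript: one response regurgitates a canary.
transcript = [
    "The requested summary is unavailable.",
    "...per memo CANARY-7f3a-ALPHA, units reposition at 0400...",
]
print(leaked_canaries(transcript))
# -> ['CANARY-7f3a-ALPHA']
```

Simple as it is, a check like this belongs in the pipeline before any classified corpus touches a training run, not in the incident report afterward.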

The potential here is real. The risk of getting it catastrophically wrong is also real. And we just made it harder to get right by eliminating the company most focused on making it safe.

What are your thoughts?