March 28, 2026
There you have it. Senator Slotkin just introduced the AI Guardrails Act — banning the DoD from deploying autonomous weapons for lethal strikes without human authorization, banning AI for domestic surveillance, and banning AI from nuclear weapons launch decisions.

The instinct behind this legislation is right. The execution is naive.

And I say this as someone who has been one of the loudest critics of how DoW has handled AI — from the reckless Anthropic ban, driven by the ego and incompetence of Emil Michael, to the culture of deploying AI without proper governance. I have zero interest in defending bad decisions. But this law is also a bad decision — just in the other direction.

Here is the problem. Drone swarms travel faster than human reaction time. Hypersonic missiles close distances in seconds. In the first 60 seconds of a peer conflict against China's autonomous systems, "waiting for human authorization" is not a safety measure. It is a death sentence for the warfighters we are supposed to be protecting.

But here is what the debate always gets wrong: the choice is not "autonomous AI vs. human in the loop." That is a false binary. The real question is: how do we build AI systems that are MORE reliable than humans in high-stakes decisions? Because humans make mistakes too. Humans hesitate. Humans are slow. Humans have biases. The answer is not to replace AI judgment with human judgment — it is to build AI that earns trust through rigorous testing, measurement, and accountability.

That means:

• Define clear operational envelopes where autonomy is permitted
• Build explainability into every lethal decision chain
• Measure performance against human baselines — in simulation, in red-team exercises, in real conditions
• Create legal frameworks that reward getting it right, not frameworks that punish moving at all

The Anthropic situation showed us what happens when ego replaces doctrine. This legislation shows us what happens when fear replaces strategy. Neither is acceptable.

We need a third path. Not blanket deployment. Not blanket bans. Rigorous, measured, accountable AI — built to be better than us, verified to BE better than us, and governed by frameworks that can keep pace with the technology.

That is the hard work. That is what nobody wants to do. And it is the only answer that actually keeps Americans safe.

I wrote about exactly this in my upcoming book, REPLACEMENT.

Meanwhile, join us live, April 9th at 1PM ET, for the In the Nic of Time Rebirth: https://lnkd.in/eRY96Jvm