War with Iran + Pentagon vs Anthropic with Under Secretary of War Emil Michael — The All In Podcast
Emil Michael came into his role as Under Secretary of War last August and did what any good lawyer would do right away. He read the contracts! But what he found surprised him. And that led to a major dispute between Anthropic and the United States government that blew up in the media last weekend.
Part of the backstory is how Anthropic, one of the top AI companies in the world, got so embedded in the government in the first place. The Biden administration’s executive order on AI effectively limited compute capacity for most companies while grandfathering in a small number of selected winners. Anthropic was one of them. From there, the company executed a smart enterprise sales strategy and moved their software and engineers into the most sensitive parts of the government. This is a common strategy for tech firms selling into the largest customer on Earth.
So, by the time Michael arrived, Anthropic wasn’t just a vendor. It was woven into the daily workflows of some of the country’s most critical military commands, including Central Command, Indo-Pacific Command, and several intelligence agencies. That history matters now because, as Michael acknowledged, untangling a deeply embedded technology partner is far harder than simply switching vendors. The other AI companies haven’t built out that kind of government infrastructure yet. They’re capable on the model side, but they have to catch up on everything else, which they will likely do in short order.
But buried in Anthropic’s terms of service were restrictions that, from Michael’s perspective, made the software nearly unusable for its intended purpose — to plan, fight, and win wars.
“You can’t use them to plan a kinetic strike. You can’t use their AI model to move a satellite. You can’t do a war game scenario with it,” Michael explained on the All In Podcast. The Department of War, as he noted repeatedly, is pretty clearly stated right in the name. War.
What followed was three months of laborious negotiations, arguing over scenario after scenario involving various military operations. Anthropic would grant an exception here and another one there. But Michael needed something broader. The military cannot predict every situation it will face, now or in the future, and an AI model that requires pre-approved use cases is not a reliable operational tool. In fact, it would compromise national security and potentially endanger troops in the field. Instead, he pushed for a single standard — all lawful use — that he could apply to every AI vendor.
Then came the moment that accelerated the conflict. After the Venezuela mission, an Anthropic executive contacted Palantir, the prime contractor implementing Anthropic’s technology, and asked whether its software had been used in the raid. Since that information is classified, Palantir informed Michael. The implication was clear: if the answer were yes, Anthropic might consider it a terms of service violation and pull its software.
“What if the balloon’s going up at that moment and it’s like a decisive action we have to take,” Michael said. “I’m not going to call you to do something. It’s like not rational.” That phone call reference wasn’t hypothetical. Michael said that Anthropic CEO Dario Amodei actually told him during negotiations to just call him when issues came up. From Michael’s perspective, though, that’s obviously not a workable solution under the circumstances, especially when combat and national security are involved.
That dispute reached a breaking point. Michael went to Secretary of War Pete Hegseth, who demanded that Anthropic lift the restrictions. The company refused. So Anthropic was formally designated a supply chain risk, the first American company ever to receive that designation from the government. Generally, that label is reserved for adversaries. As a result, the $200 million contract was cancelled. Now it’s Michael’s job to unwind Anthropic from its positions throughout the government.
The broader issue here goes well beyond the Pentagon. As Chamath Palihapitiya argued on the podcast, what Anthropic demonstrated is that any sufficiently powerful AI provider can, at any moment, change its terms of service based on the internal values of its employees, which seems to be an issue for this company. That is a significant business risk for governments, corporations, and anyone else who has built critical workflows on top of a single AI model. “It’s deplatforming times a thousand,” he said.
The situation is still fresh. But for now, the ball is in Dario Amodei’s court. Michael stated plainly what he has always wanted: a reliable partner willing to support lawful use without requiring a phone call every time something comes up. That’s not an unreasonable ask. And it’s the same standard that Google, Grok, and OpenAI have all moved toward without any drama. Anthropic chose a different path. And in doing so, the company may have handed its competitors a significant opening inside massive government accounts it spent years cultivating.
Anthropic’s revenue and valuation have both been growing rapidly. But will that trend continue? It’s well known that AI engineers and advanced researchers will only stay where the work is interesting and the money is flowing freely. Future contracts will go where the terms make sense for the government. How Amodei responds now, and how quickly, will reveal whether Anthropic is a principled company or simply a difficult one with a political agenda.
