The US Department of Defense has issued a strict deadline to Anthropic in a growing dispute over how its artificial intelligence technology can be used for military purposes.
Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that the company must agree to allow the military broader access to its AI models, including use on classified systems, by Friday evening — or face serious consequences.
Anthropic declined to ease its usage restrictions, which ban certain military applications such as autonomous targeting decisions without human oversight and mass domestic surveillance. The company says its ethical guidelines are essential to ensure responsible AI deployment.
Pentagon officials have warned that if Anthropic does not comply, the US may cancel the company's existing contract, label the firm a “supply chain risk,” or invoke the Cold War‑era Defense Production Act to compel cooperation on national security grounds.
Anthropic is the maker of the AI chatbot Claude and was among four AI companies that secured Pentagon contracts last year. Others include OpenAI, Google, and Elon Musk’s xAI, which agreed to terms without the same usage restrictions.
The Pentagon has also asked major defense contractors to assess their reliance on Anthropic’s technology, a move seen as a possible early stage toward blacklisting if the company is designated a supply chain risk.
Anthropic says it continues “good‑faith discussions” with the Defense Department to support national security while upholding its safety policies. The standoff highlights a broader debate over how advanced AI should be used in national defense and where ethical limits should apply.
