Anthropic CEO Refuses Pentagon's Demands for Claude AI in Defense Contracts
Anthropic CEO Dario Amodei has rejected the Pentagon's request for unrestricted use of the company's AI model, Claude, for military purposes, including autonomous weapons and mass surveillance. The refusal has strained relations with the Department of Defense, which has threatened to designate the company a supply chain risk and cancel its contracts. While the Pentagon insists Claude is necessary for national security, Anthropic stands firm on its safety protocols, which require human oversight in AI applications. Negotiations continue, with both sides at an impasse.

Anthropic CEO Dario Amodei has publicly declined the Pentagon's request to deploy Claude without restriction across military applications, including autonomous weapons and mass domestic surveillance. The Pentagon has threatened to designate the company a supply chain risk and cancel its contracts if it does not comply.
Anthropic is insisting on written guarantees that Claude will not be used in lethal or surveillance systems, arguing that current AI capabilities are not reliable enough for such roles. The standoff reflects a broader conflict between AI safety commitments and military operational needs, and negotiations continue as deadlines loom.
