Trump Administration Designates Anthropic AI as National Security Risk, Partners with OpenAI
The Trump administration has barred Anthropic AI from federal use, labeling the company a national security risk after it refused to grant unconditional access to its models for military applications. Following stalled negotiations, the Pentagon has partnered with OpenAI, whose models will be integrated into classified networks. Anthropic plans to contest the designation in court, while a labor coalition is urging tech giants to resist military contracts that strip away safety protocols. The dispute highlights the growing divide between ethical AI practices and defense needs.

The Trump administration has issued an executive order banning Anthropic AI from federal use, classifying the company as a national security risk after it refused to grant unrestricted access to its models for military purposes. The Pentagon has since partnered with OpenAI to deploy its AI models within Department of Defense networks.
Anthropic intends to challenge the designation in court. Meanwhile, a coalition representing more than 700,000 tech workers has urged major companies to follow Anthropic's example in resisting military contracts that compromise safety protocols. The standoff underscores the tension between ethical AI development and national defense requirements.