Microsoft filed a legal brief requesting a temporary restraining order to stop the Pentagon from blacklisting AI firm Anthropic. Microsoft argues that banning Anthropic’s frontier model—currently the Pentagon’s most widely deployed AI—would disrupt military operations and undermine U.S. leadership in artificial intelligence.
The filing supports a lawsuit Anthropic brought against the U.S. government. Anthropic alleges the government retaliated after the company refused to allow its Claude AI model to be used for autonomous lethal warfare and mass domestic surveillance.
Microsoft’s brief joins filings from other AI industry insiders supporting Anthropic. The coalition maintains that advanced AI should not be used for lethal combat or mass surveillance.