On March 9, 2026, Anthropic filed a lawsuit against the U.S. Department of Defense challenging a recent Pentagon directive. The dispute centers on the company's refusal to remove safety guardrails from its AI models.
These safeguards prevent Anthropic's technology from being used for domestic mass surveillance. The company also maintains restrictions against deploying its models in autonomous weapons systems that lack human oversight.
The Pentagon had pressured Anthropic to allow its technology to be used for all lawful purposes. When the company refused, the Department of Defense formally designated it a supply-chain risk, a designation that effectively bars government contractors from using Anthropic products and exposes the company to severe financial consequences.