Microsoft has stepped into the courtroom alongside Anthropic, filing an amicus brief in a San Francisco federal court in the AI company's legal battle against the Pentagon's unprecedented decision to label it a supply-chain risk. The brief argues that without an immediate temporary restraining order, the designation could cause irreversible harm to the technology networks supporting both national defense and commercial AI applications. The filing is part of a broader wave of industry support that includes Amazon, Google, Apple, and OpenAI.
The Pentagon's designation was triggered by the collapse of a $200 million contract negotiation in which Anthropic refused to allow its technology to be used for mass surveillance or autonomous weapons. Defense Secretary Pete Hegseth applied the supply-chain risk label, historically reserved for companies linked to adversarial foreign nations, to an American company for the first time. The move immediately triggered the cancellation of Anthropic's existing government contracts, and the Pentagon's technology chief publicly stated that renegotiation was impossible.
Microsoft builds military systems using Anthropic’s AI and is a major partner in the Pentagon’s $9 billion cloud computing contract, giving it a direct stake in the outcome of this litigation. The company also holds additional agreements with defense, intelligence, and civilian agencies worth several billion dollars more. Microsoft’s public statement called for a united effort by government, the tech sector, and the public to ensure advanced AI serves national security without being weaponized for surveillance or unauthorized warfare.
Anthropic's lawsuits in California and Washington, DC argue that the supply-chain risk designation was applied in retaliation for the company's public stance on AI safety and therefore violates its First Amendment rights. The company's court filings disclosed that it lacks confidence in Claude's ability to safely support lethal autonomous decision-making, which it said was the genuine technical basis for the restrictions it sought. Anthropic emphasized that no US company had ever previously been subjected to this designation.
Congressional investigators are adding pressure from another direction, seeking answers about whether AI was used in a US military strike in Iran that reportedly killed over 175 civilians at an elementary school. Lawmakers are demanding to know what role, if any, AI targeting tools played and what human review processes were in place. Together, Anthropic's lawsuits, Microsoft's court filings, and the congressional inquiries are creating a perfect storm of accountability around the use of AI in US military operations.
