On 26 March 2026, the United States District Court for the Northern District of California granted a preliminary injunction in favour of Anthropic, blocking three government actions taken against the company following its public refusal to remove certain usage restrictions on its artificial intelligence model, Claude. These restrictions included prohibitions on uses such as mass surveillance and lethal autonomous warfare.

The measures targeted Anthropic in its capacity as an AI developer supplying services to federal agencies and defence contractors. They comprised a Presidential Directive barring federal agencies from using Anthropic’s technology, a directive from the Secretary of War excluding Anthropic from engagement with defence contractors, and a formal designation of Anthropic as a “supply chain risk to national security”.

The Court found that the measures were likely unlawful. It considered that they may constitute retaliation in violation of the First Amendment to the United States Constitution, may infringe protections under the Fifth Amendment to the United States Constitution due to the absence of prior notice or procedural safeguards, and may involve an incorrect application of the statutory framework governing supply chain risks.