On 9 March 2026, Anthropic filed a complaint for injunctive relief in the United States District Court for the Northern District of California against the Department of War and sixteen other federal agencies, challenging three government actions taken against the company after it refused to remove usage restrictions on its artificial intelligence (AI) model Claude. The complaint stated that the actions targeted Anthropic as an AI developer that had supplied services to federal defence and intelligence agencies since 2024, held a Top Secret facility security clearance, and projected public sector revenue of several hundred million dollars for 2026. Anthropic brought five claims against the government: that the supply chain risk designation was legally baseless and improperly applied; that punishing Anthropic for publicly expressing its views on AI safety violated the First Amendment; that the Presidential Directive exceeded the President's constitutional and statutory powers; that all three actions deprived Anthropic of its rights without prior notice or an opportunity to respond, in breach of the Fifth Amendment; and that the wider federal agency crackdown lacked any lawful authority. Anthropic sought permanent injunctions against all defendants, vacatur of the Secretarial Order and the Letter, a declaration that the Presidential Directive is unconstitutional, and rescission of all implementing guidance issued across the federal government.