The Trump administration argued in a court filing Tuesday that the government did not violate Anthropic's First Amendment rights by designating the AI developer as a supply chain risk, and predicted that the company's lawsuit against the federal government would fail.
“The First Amendment does not authorize the government to unilaterally impose terms and conditions on contracts, and Anthropic cites nothing to support such a radical conclusion,” Justice Department attorneys wrote.
The response was filed in federal court in San Francisco. The case is one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company by labeling it barred from defense contracts, citing concerns about potential security vulnerabilities. Anthropic claims the Trump administration overstepped its authority in applying that label and preventing its technology from being used within the sector. If the designation holds, Anthropic could lose up to billions of dollars in anticipated revenue this year.
Anthropic hopes to resume business as usual until the lawsuit is resolved. Judge Rita Lin, who is overseeing the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to honor Anthropic's request.
In a Tuesday filing on behalf of the Pentagon and other agencies, Justice Department attorneys said Anthropic's concerns about potential loss of business were “legally insufficient to constitute irreparable harm” and urged Lin to deny the company's request.
The attorneys also wrote that the action was motivated by the Trump administration's “concerns about Anthropic's potential future actions if it retained access” to government technology systems. “No one is claiming to restrict Anthropic's expressive activities,” they wrote.
The government alleges that Defense Secretary Pete Hegseth “reasonably” determined that Anthropic's efforts to limit how the Pentagon uses AI technology “could cause Anthropic staff to engage in acts of sabotage, maliciously introduce unwanted features, or subvert the design, integrity, and operation of national security systems.”
The Department of Defense and Anthropic are battling over potential limitations on the company's Claude AI model. Anthropic believes its model should not be used to facilitate widespread surveillance of Americans and is not currently reliable enough to power fully autonomous weapons.
Legal experts previously told WIRED that Anthropic has a strong case that the supply chain action amounts to unlawful retaliation. But courts have generally upheld the government's national security claims, and Pentagon officials have described Anthropic as an unscrupulous vendor and its technology as untrustworthy.
“In particular, DoW was concerned that allowing Anthropic continued access to DoW's technical and operational combat infrastructure would introduce unacceptable risks to DoW's supply chain,” Tuesday's filing states. “AI systems are highly vulnerable to manipulation, and if Anthropic determines that a company's ‘red line’ has been crossed, it could disable its technology or preemptively change the behavior of its models before or during ongoing combat operations.”
The Department of Defense and other federal agencies are working to replace Anthropic's AI tools with competing tech companies' products in the coming months. One of the military's primary uses for Claude is through Palantir's data analysis software, people familiar with the matter told WIRED.
In Tuesday's filing, attorneys argued that the Pentagon's “classified systems and high-intensity combat operations underway” “preclude us from simply flipping a switch when Anthropic is currently the only AI model authorized for use.” The department is working on introducing AI systems from Google, OpenAI, and xAI as alternatives.
A number of companies and organizations have filed court briefs in support of Anthropic, including AI researchers, Microsoft, federal employee unions, and former military leaders. Nothing has been submitted in support of the government.
Anthropic has until Friday to submit a rebuttal to the government's claims.