The Trump administration has drawn up tight rules for civilian artificial intelligence contracts that would require AI companies to allow “any lawful” use of their models amid a stand-off between the Pentagon and Anthropic.
A draft of new government guidelines, seen by the FT, mandates that AI groups that want to do business with the government grant the US an irrevocable licence to use their systems for all legal purposes.
The guidance from the US General Services Administration (GSA) would apply to civilian contracts and is part of a government-wide effort to strengthen procurement of AI services.
It is similar in principle to measures the Pentagon is considering for military contracts, said a person familiar with the matter.
The new conditions, drafted over the past few months, come as the defence department last week said it would rip up a $200mn contract with Anthropic after the AI company refused to give the Pentagon free rein in the use of its technology, citing concerns about domestic surveillance and lethal autonomous weaponry.
The White House also designated Anthropic a supply-chain risk, a measure usually reserved for Chinese or Russian companies.
The $380bn start-up had argued its powerful technology could be used for domestic mass surveillance if it was handed over for “all lawful use” and pushed for specific clauses to provide safeguards.
Defence secretary Pete Hegseth said the company’s “true objective” was “to seize veto power over the operational decisions of the United States military”.
The GSA guidance also mandates that contractors provide “a neutral, non-partisan tool that does not manipulate responses in favour of ideological dogmas such as diversity, equity, inclusion”. It follows an executive order from President Donald Trump targeting “woke” AI models.
“The contractor must not intentionally encode partisan or ideological judgments into the AI system’s data outputs,” the draft guidance reads.
Another clause includes language intended in part to challenge compliance with the EU Digital Services Act, the person familiar added.
The clause requires AI companies to disclose whether their models have been “modified or configured to comply with any non-US federal government or commercial compliance or regulatory framework”.
The GSA, led by Ed Forst, helps procure software for the entire federal government.
Its subsidiary, the Federal Acquisition Service, run by former KKR director Josh Gruenbaum, has in the past year signed deals with leading AI companies including OpenAI, Meta, xAI and Google to provide their models at low cost to US agencies.
The GSA terminated its deal with Anthropic following the Pentagon clash.
The agency will be “soliciting further comments” from the industry before the new guidelines are finalised, the person added.