OpenAI CEO Sam Altman addresses the gathering at the AI Impact Summit, in New Delhi, India, February 19, 2026.
Bhawika Chhabra | Reuters
OpenAI CEO Sam Altman said Monday that the company “shouldn’t have rushed” its recent deal with the U.S. Department of Defense and outlined revisions to the agreement.
Altman shared what he described as a repost of an internal memo on X, saying the company would amend the contract with new language stating that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
The post comes after the ChatGPT maker announced Friday that it had struck a new deal with the Defense Department, just hours after U.S. President Donald Trump directed federal agencies to stop using rival AI company Anthropic’s tools, and hours before Washington carried out strikes on Iran.
He added that the Defense Department had affirmed that OpenAI’s tools would not be used by intelligence agencies such as the NSA.
“There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” Altman said, adding that the company would work with the Pentagon on technical safeguards.
The CEO also admitted he had made a mistake and “shouldn’t have rushed” to get the deal out on Friday.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he said.
The acknowledgment comes after a public feud between Anthropic and Washington over safeguards for its Claude AI systems, which ended without an agreement. Defense Secretary Pete Hegseth said Friday the company would be designated a supply-chain threat.
Following an initial deal last year, Anthropic was the first AI lab to deploy its models across the Defense Department’s classified network.
The company had later sought guarantees that its tools would not be used for purposes such as domestic surveillance in the U.S., or to operate and develop autonomous weapons without human control.
The dispute began after it was revealed that Anthropic’s Claude had been used by the U.S. military in its raid to capture Venezuelan President Nicolás Maduro in January, though the company did not publicly object to that use case.
OpenAI’s deal with the Pentagon came right after talks between Anthropic and the Defense Department broke down, though Altman had told employees in a Thursday memo that OpenAI shared the same “red lines” as Anthropic. He said in a post Friday that the Defense Department agreed to the company’s restrictions.
It remains unclear why the Defense Department agreed to accommodate OpenAI and not Anthropic, though government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety.
The timing of OpenAI’s deal with the Defense Department prompted online backlash, with many users reportedly switching from ChatGPT to Claude, based on app store download rankings.
In his post, Altman further addressed the controversy, saying: “In my conversations over the weekend, I reiterated that Anthropic should not be designated as a [supply chain risk], and that we hope the [Department of Defense] offers them the same terms we’ve agreed to.”
Anthropic was founded in 2021 by a group of former OpenAI staff and researchers, including Dario Amodei, who left the company after disagreements over its direction. The company has marketed itself as a “safety-first” alternative.