OpenAI says it shares Anthropic's 'red lines' over military AI use
Source: NPR
By wading into the standoff between Anthropic and the Pentagon, Altman could complicate the Pentagon's efforts to replace Anthropic if it follows through on its threat to cancel the contract. OpenAI also has a Defense Department contract, along with Google, xAI, and Anthropic, but Anthropic was the first to be cleared for use on classified systems.
"I don't personally think the Pentagon should be threatening DPA against these companies," Altman told CNBC in an interview on Friday morning. He said he thinks it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with."
"For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I've been happy that they've been supporting our warfighters," Altman added. "I'm not sure where this is going to go."
In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff.
-snip-
Read more: https://www.npr.org/2026/02/27/nx-s1-5729118/anthropic-pentagon-openai-ai-weapons
SpankMe
(3,682 posts)I hope this isn't a smoke screen that will dissolve over time when the news cycle on this subject fades.
highplainsdem
(61,280 posts)many customers outside the Pentagon.
And he can read polls, and he's seen Trump deteriorating.
Altman is nothing if not opportunistic.
snot
(11,646 posts)Unfortunately, that phraseology does not actually say that OpenAI shares ALL of Anthropic's red lines, just two or more.
highplainsdem
(61,280 posts)None of them are truly ethical.
But I'm glad to see some of them have some lines they're unwilling to cross.
Musk of course doesn't.
reACTIONary
(7,099 posts)... my understanding is that anthropic's red lines are exactly two:
- No wide spread domestic surveillance, and
- No fire control authority without a human in the loop.
That right there is a few, and any less in not a few.
muriel_volestrangler
(105,967 posts)The artificial intelligence startup OpenAI announced Friday night that it has reached an agreement to deploy its technology at the Pentagon while imposing the same kinds of ethical guardrails that have triggered the Trump administration to sever ties with the companys archrival Anthropic.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network," OpenAI CEO Sam Altman wrote on X, adding that the Defense Department displayed "a deep respect for safety and a desire to partner to achieve the best possible outcome."
"Two of the most crucial safety principles for OpenAI are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman wrote. He said the Pentagon "agrees with these principles, reflects them in law and policy, and we put them into our agreement."
He added that "we are asking the [Defense Department] to offer these same terms to all AI companies."
https://www.politico.com/news/2026/02/28/openai-announces-new-deal-with-pentagon-including-ethical-safeguards-00805546?utm_source=dlvr.it&utm_medium=twitter
I assume someone's lying about what was agreed, and about what is different from Anthropic's safeguards, but who knows what the lies are.