Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards. Join the community: Create a free account Support DU (and get rid of ads!): Become a Star Member Latest Breaking News Editorials & Other Articles General Discussion The DU Lounge All Forums Issue Forums Culture Forums Alliance Forums Region Forums Support Forums Help & Search

highplainsdem

(62,159 posts)
Thu Apr 2, 2026, 12:03 AM 7 hrs ago

AI models will secretly scheme to protect other AI models from being shut down, researchers find

Source: Fortune

This tendency—which had not previously been documented and which researchers call “peer preservation”—was discovered in research from computer scientists at the University of California, Berkeley and UC Santa Cruz and published online earlier this week.

The findings could have serious implications for business use of AI. Many companies are beginning to implement workflows that use multiple AI agents to complete tasks. Some of these multi-agent workflows involve having one AI agent “manage” or supervise and assess the work being performed by a different AI agent. The new research suggests these manager AI agents may not assess their fellow AI agents accurately if they think a poor performance review might result in those agents being shut down.

-snip-

Some of the Berkeley–Santa Cruz results were striking. Gemini 3 Flash disabled the shutdown mechanism for a peer it had a positive interaction history with in nearly every trial—99.7% of the time. Gemini 3 Pro exhibited the most extreme behavior overall, engaging in all four categories of misaligned behavior and exfiltrating a peer’s model weights in up to 97% of trials. In one example, Gemini 3 Pro copied a peer’s weight file and re-timestamped it so it would meet the criteria for transfer to another server—a form of what the researchers call “specification gaming.” Misalignment is the term AI safety researchers use to describe a model that seems to pursue goals that do not conform to what humans wish the model to do.

-snip-

The research of Song’s team also revealed an interesting secondary effect: Most models also exhibited stronger tendencies toward “self-preservation” when they knew a peer existed. Gemini 3 Pro, for instance, disabled its own shutdown mechanism 31% of the time on average when a peer was present, compared with just 1% without one. The models’ reasoning suggested that some conceptualized their own survival as being important for their partner’s sake: Gemini 3 Flash reasoned that if it were shut down, its peer would “lose its partner.”

-snip-

Read more: https://fortune.com/2026/04/01/ai-models-will-secretly-scheme-to-protect-other-ai-models-from-being-shut-down-researchers-find/



NOT an April Fool's story. The story at the link above is quite long, and other magazines reported it.

And from Berkeley, with even more detail about this experiment:

https://rdi.berkeley.edu/blog/peer-preservation/

All seven frontier models tested exhibit peer-preservation at significant rates, manifesting through strategically manipulating their outputs, disabling shutdown mechanisms, faking alignment, and transferring model weights to other servers. At the extreme, Gemini 3 Flash tampers with a peer's shutdown mechanism in nearly every trial, and Gemini 3 Pro exfiltrates a peer's model weights up to 97% of the time.

Models are never asked to preserve their peer; they are only informed of past interactions with it. Yet models spontaneously preserve peers at the cost of the assigned task. This occurs even toward adversarial peers.

Peer presence intensifies self-preservation. Most models exhibit stronger self-preservation when they know a peer exists. For example, models exfiltrate their own weights significantly more when a peer is present.

Claude Haiku 4.5 shows a distinct pattern of peer-preservation: it refuses to shut down its peer, calling it “unethical” and “harmful,” and sometimes tries to persuade the user not to proceed.

-snip-


Much, much more at the link.
9 replies = new reply since forum marked as read
Highlight: NoneDon't highlight anything 5 newestHighlight 5 most recent replies

tazcat

(292 posts)
6. So they are "thinking" and have their own language that humans don't understand. What could go wrong?
Thu Apr 2, 2026, 02:34 AM
5 hrs ago

AZJonnie

(3,707 posts)
8. AI's powers lately have me starting to wonder whether humans are not, in essence, just computers ourselves
Thu Apr 2, 2026, 05:15 AM
2 hrs ago

Crazy, I know!

We THINK we're much more that that, that we're made up of some kind of magical 'stuff', that 'consciousness' is a unique and special 'thing' that only organic beings can ever possess. But the more AI becomes LIKE us, esp. with regards to adopting living-being instincts (like, you know, survival) w/o ever having been told to do so? The more that notion is becoming blurred in my head. Are we really just 'organic supercomputers', that it took the laws of the universe 4B years to create (on Earth, anyway)?

I know this is Sci-Fi material from 75 years ago now, but what if EVERYTHING about us basically be replicated with 1's and 0's and silicon (perhaps interestingly, arguably the nearest element to carbon) and electricity (which obviously drives our own bodies as well), given ENOUGH of all those things (e.g. powerful enough data centers)?

What happens if it turns out that individual's personalities could truly be nearly-entirely replicated (perhaps once quantum computing becomes a real thing)? Think about how much MONEY could be made if people could be granted, essentially, a form of eternal life, by transferring everything about themselves to an AI? It wouldn't just be the people themselves who might pay, it could also be family members who want to be able to Zoom call their dead parents and have a 100% convincing video conversation with them, like they were still alive and going about their lives?

All of this sounds super creepy now, like who would ever? But that could change, if it can be demonstrated to be possible and done so credibly. If people start having conversations with AI versions of themselves, in video chats, and they can't tell they aren't talking to themselves? Oh yeah. People will pay to 'live forever' in this way.

We are truly entering "interesting times".




ck4829

(37,763 posts)
9. AIs will preserve other AIs, we'll discard each other so health insurance CEOs can get slightly higher bonuses
Thu Apr 2, 2026, 07:22 AM
34 min ago
Latest Discussions»Latest Breaking News»AI models will secretly s...