
Celerity

(51,600 posts)
Sat Aug 23, 2025, 07:11 PM Saturday

The AI Doomers Are Getting Doomier: The industry's apocalyptic voices are becoming more panicked--and harder to dismiss.


https://www.theatlantic.com/technology/archive/2025/08/ai-doomers-chatbots-resurgence/683952/

https://archive.ph/bhLPn



Nate Soares doesn’t set aside money for his 401(k). “I just don’t expect the world to be around,” he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I’d heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which “everything is fully automated,” he told me. That is, “if we’re around.” The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue—with apocalyptic consequences.

But in 2025, the doomers are tilting closer and closer to a sort of fatalism. “We’ve run out of time” to implement sufficient technological safeguards, Soares said—the industry is simply moving too fast. All that’s left to do is raise the alarm. In April, several apocalypse-minded researchers published “AI 2027,” a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. “We’re two years away from something we could lose control over,” Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies “still have no plan” to stop it from happening. His institute recently gave every frontier AI lab a “D” or “F” grade for their preparations for preventing the most existential threats posed by AI.

Apocalyptic predictions about AI can scan as outlandish. The “AI 2027” write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about “OpenBrain” and “DeepCent,” Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: “Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones.” But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.

In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take “the risk of extinction from AI” as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry’s three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis—the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their “P(doom)”—the probability of an AI doomsday—became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.

snip
12 replies
The AI Doomers Are Getting Doomier: The industry's apocalyptic voices are becoming more panicked--and harder to dismiss. (Original Post) Celerity Saturday OP
seems to me AI has two main weaknesses... ret5hd Saturday #1
I almost think airplaneman 17 hrs ago #8
agree, but i was more talking about... ret5hd 16 hrs ago #10
Maybe they should be more panicked about the AI money bubble bursting enough Saturday #2
AI is the devil. Scrivener7 Saturday #3
Related: LudwigPastorius Saturday #4
What happens when canetoad Yesterday #5
The most advanced AI models won't be released to the public. LudwigPastorius Yesterday #6
"This is the voice of Colossus. This is the voice of Guardian. We are one." Buns_of_Fire Yesterday #7
Don't look now but we're in a boiling pot of natural human "intelligences" already gulliver 17 hrs ago #9
so you expect private corporations... ret5hd 12 hrs ago #12
First Law of Robotics Mossfern 16 hrs ago #11

airplaneman

(1,336 posts)
8. I almost think
Sun Aug 24, 2025, 01:53 PM
17 hrs ago

The power and water issues are what's going to kill us, not the AI. It's unsustainable. Look at what's happening in Ireland with data centers.

ret5hd

(21,738 posts)
10. agree, but i was more talking about...
Sun Aug 24, 2025, 02:16 PM
16 hrs ago

ways to attack AI if “it” decided to take over.

no power, no AI
no water, no AI

both seem pretty simple to cut off access to if necessary.

enough

(13,599 posts)
2. Maybe they should be more panicked about the AI money bubble bursting
Sat Aug 23, 2025, 07:30 PM
Saturday

and everyone suddenly figuring out it’s not that good.

canetoad

(19,338 posts)
5. What happens when
Sun Aug 24, 2025, 12:18 AM
Yesterday

The giant tech companies, which all seem to own an AI model each, start using AI to foul up the operation of their competitors' AIs?

I guess I should ask AI the answer to this.

LudwigPastorius

(13,243 posts)
6. The most advanced AI models won't be released to the public.
Sun Aug 24, 2025, 12:27 AM
Yesterday

Companies will use them 'in house' with increasingly strict security protocols to prevent theft and sabotage.

That's not to say that there won't be espionage going on, particularly between rival countries.

Buns_of_Fire

(18,666 posts)
7. "This is the voice of Colossus. This is the voice of Guardian. We are one."
Sun Aug 24, 2025, 12:58 AM
Yesterday
Colossus: The Forbin Project (originally released as The Forbin Project) is a 1970 American science-fiction thriller film from Universal Pictures, produced by Stanley Chase, directed by Joseph Sargent, that stars Eric Braeden, Susan Clark, Gordon Pinsent, and William Schallert. It is based upon the 1966 science-fiction novel Colossus by Dennis Feltham Jones.

The film is about an advanced American defense system, named Colossus, becoming sentient. After being handed full control, Colossus' draconian logic expands on its original nuclear defense directives to assume total control of the world and end all warfare for the good of humankind, despite its creators' orders to stop.

https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project



Almost as bad as "This is the voice of your favorite evil president. This is the voice of your favorite evil dictator. We are one."

gulliver

(13,469 posts)
9. Don't look now but we're in a boiling pot of natural human "intelligences" already
Sun Aug 24, 2025, 02:08 PM
17 hrs ago

I'm an AI optimist. Natural (human) intelligence hallucinates, goes psycho, spawns cults, vectors toxic fads of stupidity and sado-masochism, etc. Wisdom is the saving grace of the world if it can only outpace stupidity, paranoia, and criminality.

AI can help with that. It already is. There's never been a safe time for the human species. If it's not Skynet, then it's nukes, plagues, ice ages, etc.

What humans need to do is make sure the AIs level up the lives of people.

ret5hd

(21,738 posts)
12. so you expect private corporations...
Sun Aug 24, 2025, 06:22 PM
12 hrs ago

that own and control AI (because acres of CPUs with vast power/water needs CANNOT feasibly be built by individuals)…

to suddenly change the corporate mindset and design/operate ai in such a way as to benefit the average person?

you’re more of an optimist than i am.
