The AI Doomers Are Getting Doomier: The industry's apocalyptic voices are becoming more panicked--and harder to dismiss.
https://www.theatlantic.com/technology/archive/2025/08/ai-doomers-chatbots-resurgence/683952/
https://archive.ph/bhLPn

Nate Soares doesn't set aside money for his 401(k). "I just don't expect the world to be around," he told me earlier this summer from his office at the Machine Intelligence Research Institute, where he is the president. A few weeks earlier, I'd heard a similar rationale from Dan Hendrycks, the director of the Center for AI Safety. By the time he could tap into any retirement funds, Hendrycks anticipates a world in which "everything is fully automated," he told me. "That is, if we're around." The past few years have been terrifying for Soares and Hendrycks, who both lead organizations dedicated to preventing AI from wiping out humanity. Along with other AI doomers, they have repeatedly warned, with rather dramatic flourish, that bots could one day go rogue, with apocalyptic consequences.
But in 2025, the doomers are tilting closer and closer to a sort of fatalism. We've run out of time to implement sufficient technological safeguards, Soares said; the industry is simply moving too fast. All that's left to do is raise the alarm. In April, several apocalypse-minded researchers published "AI 2027," a lengthy and detailed hypothetical scenario for how AI models could become all-powerful by 2027 and, from there, extinguish humanity. "We're two years away from something we could lose control over," Max Tegmark, an MIT professor and the president of the Future of Life Institute, told me, and AI companies still have no plan to stop it from happening. His institute recently gave every frontier AI lab a D or F grade for its preparations for preventing the most existential threats posed by AI.
Apocalyptic predictions about AI can scan as outlandish. The "AI 2027" write-up, dozens of pages long, is at once fastidious and fan-fictional, containing detailed analyses of industry trends alongside extreme extrapolations about OpenBrain and DeepCent, Chinese espionage, and treacherous bots. In mid-2030, the authors imagine, a superintelligent AI will kill humans with biological weapons: "Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones." But at the same time, the underlying concerns that animate AI doomers have become harder to dismiss as chatbots seem to drive people into psychotic episodes and instruct users in self-mutilation. Even if generative-AI products are not closer to ending the world, they have already, in a sense, gone rogue.
In 2022, the doomers went mainstream practically overnight. When ChatGPT first launched, it almost immediately moved the panic that computer programs might take over the world from the movies into sober public discussions. The following spring, the Center for AI Safety published a statement calling for the world to take the risk of extinction from AI as seriously as the dangers posed by pandemics and nuclear warfare. The hundreds of signatories included Bill Gates and Grimes, along with perhaps the AI industry's three most influential people: Sam Altman, Dario Amodei, and Demis Hassabis, the heads of OpenAI, Anthropic, and Google DeepMind, respectively. Asking people for their P(doom), the probability of an AI doomsday, became almost common inside, and even outside, Silicon Valley; Lina Khan, the former head of the Federal Trade Commission, put hers at 15 percent.
snip