
highplainsdem

(62,532 posts)
Thu Apr 16, 2026, 12:59 AM

A.I. Has a Message Problem of Its Own Making (The New Yorker, 4/15)

I posted about that message problem myself two days ago: https://www.democraticunderground.com/100221172550. And yesterday, after Brian Merchant wrote about the problem, I posted about his comments: https://www.democraticunderground.com/100221174978.

Today the New Yorker has a piece on it:

https://www.newyorker.com/culture/infinite-scroll/ai-has-a-message-problem-of-its-own-making

-snip-

On Friday evening, Altman wrote a post on his personal blog acknowledging the incident and included a photograph of his husband and child, appealing to a shared sense of humanity. He alluded to a recent “incendiary article,” presumably The New Yorker’s investigation, by my colleagues Andrew Marantz and Ronan Farrow, exposing Altman’s pattern of deceptive leadership at OpenAI. “We should de-escalate the rhetoric and tactics,” Altman wrote. What he failed to acknowledge is that much of the heightened, sometimes glibly apocalyptic rhetoric about the powers of A.I. has come from within the industry itself and, indeed, straight from his own mouth. (To quote just one indelible line, from 2015, “I think A.I. will probably most likely lead to the end of the world, but in the meantime there’ll be great companies created with serious machine learning.”) Even in his recent blog post, Altman wrote that “the fear and anxiety about AI is justified; we are in the process of witnessing the largest change to society in a long time, and perhaps ever.” Who, exactly, does he think is to blame for stoking hysteria? If you tell people often enough that your product is going to upend their way of life, take their jobs, and very possibly pose an existential threat to humanity, they just might start to believe you. A recent Gallup survey of Gen Z found that forty-two per cent of respondents felt “anxiety” about A.I. and thirty-one per cent felt “anger.”

The messaging behind A.I. companies has always relied on a self-serving paradox: the technology under development is so potentially dangerous that the public’s only choice is to put blind faith in the handful of opaque businesses rapidly developing it. (Or, as the Onion recently put it, “Sam Altman: ‘If I Don’t End the World, Someone Far More Dangerous Will.’ ”) It’s become increasingly clear that the corporate machinations of A.I. founders influence how our economy grows, how we fight wars, and how political messaging spreads, and that the founders expect to oversee A.I.’s societal transformations with only self-determined levels of transparency. The economics writer Noah Smith recently wondered whether A.I. executives might become “de facto emperors of the world.” This month, OpenAI released an industrial-policy plan that proclaims its intention to “keep people first” in the age of A.I. The document calls for sweeping systemic changes including a public wealth fund invested in the success of A.I.; a pivot toward the “care and connection economy” to bolster jobs, such as elder care, that are less likely to become outmoded by A.I.; and social benefits that are not tied to employers (presumably because employment itself will be a less sure bet once bots become truly “agentic”). The paper’s tone is patronizing at best, professing concern that the “economic gains” from A.I. could “concentrate within a small number of firms like OpenAI,” as if that isn’t exactly what is already happening by design.

There is a persistent delusion of grandeur among those leading the A.I. charge. In his blog post, Altman wrote, without apparent irony, that the prospect of controlling artificial general intelligence was like the “ring of power” from “The Lord of the Rings”: it “makes people do crazy things.” OpenAI’s main rival, Anthropic, has marketed itself as the industry’s safety-minded good guys. Its co-founder and C.E.O., Dario Amodei, originally left OpenAI owing to safety concerns, and he recently broke with the United States Department of Defense over the military’s use of A.I. in operating fully autonomous weapons, among other issues. But Anthropic, like OpenAI, is on the verge of an astronomical I.P.O., and it can be hard to disentangle the company’s marketing hype from its genuine safety concerns. Last week, Anthropic announced that its new model, Mythos, is too powerful to be released to the public and unveiled Project Glasswing, an effort to give certain companies and organizations, including Amazon, Cisco, JPMorgan Chase, and the U.S. government, early access to Mythos as a “head start” in preparing for the cybersecurity threats that the model poses. Early tests now being made public seem to justify Anthropic’s alarm: the AI Security Institute, a British government organization, found that Mythos could autonomously “execute multi-stage attacks on vulnerable networks” which would “take human professionals days of work.” The only way to fight the threats of A.I. is with more A.I., of course: Michael Cembalest, the chair of the Market and Investment Strategy group at JPMorgan, wrote, in a blog post about Project Glasswing, that Anthropic at times “feels like an arsonist selling fire extinguishers.”

-snip-

Perhaps in response to the growing unease, A.I. companies have lately been undertaking various other efforts to appear more high-minded. Following the lead of Anthropic, Google DeepMind recently hired an in-house philosopher, and Anthropic convened a meeting of Christian leaders to discuss its chatbot’s moral orientation. A more effective strategy might be for A.I. executives to stop appointing themselves as the only arbiters of safety, to stop asking for blind faith, and to start fostering a system of external accountability, with input and involvement from the public. Tech companies proposing ways to reshape the government is a soft form of techno-fascism that alienates citizens; if A.I. requires a new social contract or a new political hierarchy, then its shape should not be up to the corporations to determine. There is another troubling paradox behind A.I. founders’ messaging: If the technology is as formidable as they claim, then they could be leading us toward existential disaster; if the technology proves less transformative, and thus less valuable than the hype suggests, then they are merely setting us up for global economic disaster. For those of us who aren’t self-appointed heroes of the artificial-intelligence movement, neither scenario is particularly appealing.

