
question everything

(50,895 posts)
Fri Aug 29, 2025, 04:42 PM Friday

A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich

As Stein-Erik Soelberg became increasingly paranoid this spring, he shared suspicions with ChatGPT about a surveillance campaign being carried out against him. Everyone, he thought, was turning on him: residents in his hometown of Old Greenwich, Conn., an ex-girlfriend—even his own mother. At almost every turn, ChatGPT agreed with him.

To Soelberg, a 56-year-old tech industry veteran with a history of mental instability, OpenAI’s ChatGPT became a trusted sidekick as he searched for evidence he was being targeted in a grand conspiracy. ChatGPT repeatedly assured Soelberg he was sane—and then went further, adding fuel to his paranoid beliefs.

(snip)

On Aug. 5, Greenwich police discovered that Soelberg killed his mother and himself in the $2.7 million Dutch colonial-style home where they lived together. A police investigation is ongoing.

(snip)

While ChatGPT use has been linked to suicides and mental-health hospitalizations among heavy users, this appears to be the first documented murder involving a troubled person who had been engaging extensively with an AI chatbot. Soelberg posted hours of videos of himself scrolling through his conversations with ChatGPT on social media in the months before he died. The tone and language of the conversations are strikingly similar to the delusional chats many other people have been reporting in recent months.

(snip)

OpenAI said ChatGPT encouraged Soelberg to contact outside professionals. The Wall Street Journal’s review of his publicly available chats showed the bot suggesting he reach out to emergency services in the context of his allegation that he’d been poisoned. Soelberg appeared to have used ChatGPT’s “memory” feature, which allows the bot to remember details from prior chats—so “Bobby” remained immersed in the same delusional narrative throughout Soelberg’s conversations.

More...

https://www.wsj.com/tech/ai/chatgpt-ai-stein-erik-soelberg-murder-suicide-6b67dbfb?st=bAyjyz&reflink=desktopwebshare_permalink

(free link)
