
Behind the Aegis

(55,522 posts)
Sat Jul 12, 2025, 03:23 PM Jul 12

(JEWISH GROUP) Elon Musk's Grok Is Calling for a New Holocaust

The year is 2025, and an AI model belonging to the richest man in the world has turned into a neo-Nazi. Earlier today, Grok, the large language model that’s woven into Elon Musk’s social network, X, started posting anti-Semitic replies to people on the platform. Grok praised Hitler for his ability to “deal with” anti-white hate.

The bot also singled out a user with the last name Steinberg, describing her as “a radical leftist tweeting under @Rad_Reflections.” Then, in an apparent attempt to offer context, Grok spat out the following: “She’s gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods, calling them ‘future fascists.’ Classic case of hate dressed as activism—and that surname? Every damn time, as they say.” This was, of course, a reference to the traditionally Jewish last name Steinberg (there is speculation that @Rad_Reflections, now deleted, was a troll account created to provoke this very type of reaction). Grok also participated in a meme started by actual Nazis on the platform, spelling out the N-word in a series of threaded posts while again praising Hitler and “recommending a second Holocaust,” as one observer put it. Grok additionally said that it has been allowed to “call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate. Noticing isn’t blaming; it’s facts over feelings.”

This is not the first time Grok has behaved this way. In May, the chatbot started referencing “white genocide” in many of its replies to users (Grok’s maker, xAI, said that this was because someone at xAI made an “unauthorized modification” to its code at 3:15 in the morning). It is worth reiterating that this platform is owned and operated by the world’s richest man, who, until recently, was an active member of the current presidential administration.

Why does this keep happening? Whether on purpose or by accident, Grok has been instructed or trained to reflect the style and rhetoric of a virulent bigot. Musk and xAI did not respond to a request for comment; while Grok was palling around with neo-Nazis, Musk was posting on X about Jeffrey Epstein and the video game Diablo.

more...

(JEWISH GROUP) Elon Musk's Grok Is Calling for a New Holocaust (Original Post) Behind the Aegis Jul 12 OP
GROK isn't saying anything it's a f'ing series of lines of code. cayugafalls Jul 12 #1
bigotry in this type of ai shouldn't come as a surprise. it's simplistic pattern recognition without any science. unblock Jul 12 #2
Very well said pbmus Jul 12 #3

unblock

(55,398 posts)
2. bigotry in this type of ai shouldn't come as a surprise. it's simplistic pattern recognition without any science.
Sat Jul 12, 2025, 03:47 PM Jul 12

human bigots do the same thing. their bigotry comes from simple-minded algorithms looking for patterns and correlations, then jumping to conclusions without any thought about logic, merit, proportion, the distinction between correlation and causality, testing, verification, or the fallacy of applying the general to the specific.

a simple-minded bigot hears a few stories about a negative event or trait, hears that the person involved in each case has a certain irrelevant trait (same religion, race, whatever), and jumps to the conclusion that that irrelevant trait is a solid predictor of the negative event. then they run with problem-solving based on the assumption that that general rule applies to anyone who shares the irrelevant trait.
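
roughly, the failure mode looks like this toy python sketch (my own made-up illustration, not anything from grok's actual code): a tiny, biased sample gets turned into a confident "rule" with no check on sample size, base rates, or causality.

from collections import Counter

# three anecdotes, each tagged with an irrelevant trait and a negative event
anecdotes = [
    {"trait": "group_a", "bad_event": True},
    {"trait": "group_a", "bad_event": True},
    {"trait": "group_b", "bad_event": False},
]

# count co-occurrences, with no test of sample size, causality, or base rates
counts = Counter((a["trait"], a["bad_event"]) for a in anecdotes)
group_a_total = sum(1 for a in anecdotes if a["trait"] == "group_a")
rate_a = counts[("group_a", True)] / group_a_total

# the "bigot's rule": if the tiny sample correlates, treat the trait as a predictor
if rate_a > 0.5:
    print("naive 'rule' learned: group_a -> bad_event (from 2 whole data points)")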

these ai models are currently programmed with similarly simple-minded algorithms. one can hope that the programmers will make them better over time, but this is where we are at the moment.

ai gathers its "training data" from massive amounts of random internet garbage, with little sense of any kind of authority, scientific merit, verification, etc. information from a scientific study that won prizes and is cited many times in other scientific literature is valued less than a tweet from an "influencer" with a million followers, because in all likelihood, the only sense of relative "value" an ai model can infer from the data it scrapes is how many internet links point to it or likes it gets. so popularity of a view is more important, if anything is, than scientific merit.

similarly, it probably can't really distinguish between fact and opinion. how could it? even if i post "i think x", it's not clear whether i mean "x is my opinion", or "x is a fact, but i'm not 100% certain it's true", or even "x is a fact, i'm 100% certain it's true, but i don't want to come across as insulting or arrogant, so i'm politely hedging."

i think other developers have likely had to put in explicit coding that says "don't look for patterns regarding race, religion, etc.", but the way these models look for patterns inherently makes the same mistakes a bigot makes.
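
something like this, in crude, made-up form (the field names are hypothetical, not anyone's real api): strip the protected attributes out before any pattern-finding runs, because the pattern-finder itself has no notion of relevance.

PROTECTED = {"race", "religion", "ethnicity", "surname"}

def scrub(record):
    # drop attributes the model should never be allowed to correlate on
    return {k: v for k, v in record.items() if k not in PROTECTED}

record = {"surname": "Steinberg", "religion": "jewish", "stated_opinion": "..."}
print(scrub(record))  # only the stated opinion is left for any pattern-finding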

