
unblock

(55,399 posts)
2. bigotry in this type of ai shouldn't come as a surprise. it's simplistic pattern recognition without any science.
Sat Jul 12, 2025, 03:47 PM

human bigots do the same thing. their bigotry comes from simple-minded algorithms looking for patterns and correlations, then jumping to conclusions without any thought for logic, merit, proportion, the distinction between correlation and causality, testing, verification, or the fallacy of applying the general to a specific.

a simple-minded bigot hears a few stories about a negative event or trait, hears that the person involved in each case has a certain irrelevant trait (same religion, race, whatever), and jumps to the conclusion that that irrelevant trait is a solid predictor of the negative event or trait. then they run with problem-solving based on the assumption that that general rule applies to anyone with that irrelevant trait.
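to make that concrete, here's a minimal sketch (python, with made-up data and names) of the naive correlation-counting i'm describing. the logic never asks about sample size, base rates, or causality:

    # hypothetical sketch: naive "pattern finding" from a handful of anecdotes.
    # the data is invented; the point is that the logic never checks
    # sample size, base rates, or whether the trait is even relevant.

    anecdotes = [
        {"trait": "group_a", "bad_event": True},
        {"trait": "group_a", "bad_event": True},
        {"trait": "group_b", "bad_event": False},
    ]

    def naive_rule(data, trait):
        # count co-occurrences only -- no significance test, no causality check
        with_trait = [d for d in data if d["trait"] == trait]
        hits = sum(d["bad_event"] for d in with_trait)
        return hits / len(with_trait)

    print(naive_rule(anecdotes, "group_a"))  # 1.0 -- two anecdotes become a "100% predictor"

two stories in, and the "rule" comes out with total confidence, ready to be applied to every individual with that trait.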

these ai models are currently programmed with similarly simple-minded algorithms. one can hope that the programmers will make them better over time, but this is where we are at the moment.

ai gathers its "training data" from massive amounts of random internet garbage, with little sense of any kind of authority, scientific merit, verification, etc. information from a scientific study that won prizes and is cited many times in other scientific literature is valued less than a tweet from an "influencer" with a million followers, because in all likelihood, the only sense of relative "value" an ai model can infer from the data it scrapes is how many internet links point to it or how many likes it gets. so the popularity of a view matters more, if anything does, than scientific merit.
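as a hedged illustration (invented documents and numbers, not any real model's pipeline), this is what "value = popularity" looks like when engagement is the only signal a scraper can see:

    # hypothetical sketch: ranking training documents when the only
    # available "value" signal is popularity. documents are invented.

    documents = [
        {"source": "peer-reviewed study", "citations": 400, "likes": 120},
        {"source": "influencer tweet", "citations": 0, "likes": 1_500_000},
    ]

    def popularity_weight(doc):
        # a scraper can count engagement; it can't see scientific merit
        return doc["likes"]

    for doc in sorted(documents, key=popularity_weight, reverse=True):
        print(doc["source"], popularity_weight(doc))
    # the tweet sorts first; the study's citations never enter the score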

similarly, it probably can't really distinguish between fact and opinion. how could it? even if i post "i think x", it's not clear whether i'm saying "x is my opinion", or "x is a fact, but i'm not 100% certain it's true", or even just being polite when i know damn well "x is a fact, i'm 100% certain it's true, but i don't want to come across as insulting or arrogant, so i'm politely hedging."
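a toy way to see the problem (invented labels): several different intents collapse onto the exact same surface string, so the mapping can't be inverted from the text alone:

    # hypothetical sketch: three different speaker intents, one identical sentence.
    # nothing in the text itself distinguishes them.

    intents = {
        "genuine opinion":      "i think x",
        "uncertain fact claim": "i think x",
        "polite certainty":     "i think x",
    }

    # reverse the mapping: every intent lands on the same string,
    # so a model reading only the string can't recover which was meant
    surface_to_intents = {}
    for intent, text in intents.items():
        surface_to_intents.setdefault(text, []).append(intent)

    print(surface_to_intents)
    # {'i think x': ['genuine opinion', 'uncertain fact claim', 'polite certainty']}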

i think developers have likely had to put in explicit coding to say "don't look for patterns regarding race, religion, etc.," but the way these models look for patterns inherently makes the same mistakes a bigot makes.
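in spirit, that explicit coding might look something like this hypothetical sketch (field names invented; real safeguards are far more involved). note the catch at the end: dropping the protected column doesn't drop its proxies:

    # hypothetical sketch of an explicit "don't pattern-match on this" filter

    PROTECTED = {"race", "religion", "ethnicity"}

    def scrub(record):
        # remove protected attributes before any pattern-finding
        return {k: v for k, v in record.items() if k not in PROTECTED}

    record = {"religion": "...", "zip_code": "12345", "surname": "..."}
    print(scrub(record))  # {'zip_code': '12345', 'surname': '...'}
    # but zip_code and surname can correlate with the dropped fields,
    # so the same bigot-style pattern can sneak back in through proxies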

