
Judi Lynn

(163,098 posts)
Mon Feb 17, 2025, 10:39 AM

Older AI models show signs of cognitive decline, study shows

By Drew Turney

Older chatbots, just like people, show signs of cognitive impairment, failing on several important metrics in a test normally used on human patients.

People increasingly rely on artificial intelligence (AI) for medical diagnoses because of how quickly and efficiently these tools can spot anomalies and warning signs in medical histories, X-rays and other datasets before they become obvious to the naked eye. But a new study published Dec. 20, 2024 in the BMJ raises concerns that AI technologies like large language models (LLMs) and chatbots, like people, show signs of deteriorated cognitive abilities with age.

"These findings challenge the assumption that artificial intelligence will soon replace human doctors," the study's authors wrote in the paper, "as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients' confidence."

Scientists tested publicly available LLM-driven chatbots including OpenAI's ChatGPT, Anthropic's Sonnet and Alphabet's Gemini using the Montreal Cognitive Assessment (MoCA) test — a series of tasks neurologists use to test abilities in attention, memory, language, spatial skills and executive mental function.

MoCA is most commonly used to assess or test for the onset of cognitive impairment in conditions like Alzheimer's disease or dementia. Subjects are given tasks like drawing a specific time on a clock face, starting at 100 and repeatedly subtracting seven, remembering as many words as possible from a spoken list, and so on. In humans, 26 out of 30 is considered a passing score (i.e., the subject has no cognitive impairment).

More:
https://www.livescience.com/technology/artificial-intelligence/older-ai-models-show-signs-of-cognitive-decline-study-shows
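To make the MoCA scoring described above concrete, here is a small, purely illustrative Python sketch of two of the tasks mentioned (the function names and structure are invented for this example; the real test is administered and scored by a clinician):

    # Illustrative only: two MoCA-style checks described in the excerpt above.

    def serial_sevens(start=100, steps=5):
        # The "start at 100 and repeatedly subtract seven" task.
        values = [start]
        for _ in range(steps):
            values.append(values[-1] - 7)
        return values

    def passes_moca(score, threshold=26, maximum=30):
        # In humans, 26 out of 30 or above is conventionally a pass.
        return 0 <= score <= maximum and score >= threshold

    print(serial_sevens())   # [100, 93, 86, 79, 72, 65]
    print(passes_moca(26))   # True
    print(passes_moca(25))   # False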

Older AI models show signs of cognitive decline, study shows (Original Post) Judi Lynn Feb 17 OP
If they froze the trained weights, they wouldn't decline. But they keep "training" them on new users & own output Bernardo de La Paz Feb 17 #1
AI should be one of many tools in the toolbox. Joinfortmill Feb 17 #2
Some types of AI. Not genAI trained on stolen intellectual property. Plus using genAI deskills and highplainsdem Feb 17 #3
Absofuckingloutely! SheltieLover Feb 17 #4

Bernardo de La Paz

(53,074 posts)
1. If they froze the trained weights, they wouldn't decline. But they keep "training" them on new users & own output
Mon Feb 17, 2025, 11:05 AM

Including their own output means including their "hallucinations". But they also train on the output of other LLM AIs.

The weights are the strengths of the connections between artificial neurons: each weight sets how much impact a given neuron has on the next neurons in the sequence from input to output, and the largest models have billions of them. Training is an iterative process of homing in on those weights. It takes millions of runs and a lot of time and a lot of energy (enough that big players are looking to locate data centers beside nuclear power plants).
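As a toy illustration of that "homing in" process (a single weight and a made-up target, nothing like a real LLM's training loop), gradient descent nudges a weight a little at a time until the neuron's output lands near the desired value:

    # Toy sketch: train one weight by gradient descent (illustrative only).

    def train_single_weight(x=1.0, target=0.5, lr=0.1, steps=100):
        w = 0.0                      # the weight: how strongly input x drives the output
        for _ in range(steps):
            output = w * x           # the neuron's contribution downstream
            error = output - target  # distance from the desired output
            w -= lr * error * x      # nudge w in the direction that shrinks the error
        return w

    print(train_single_weight())     # converges toward 0.5

Real training repeats this kind of update across billions of weights and vast amounts of data, which is where the enormous compute and energy bills come from.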

highplainsdem

(54,699 posts)
3. Some types of AI. Not genAI trained on stolen intellectual property. Plus using genAI deskills and
Mon Feb 17, 2025, 02:00 PM

dumbs down users, making them dependent on AI, which pleases the corporations and technofascists behind the AI, but is bad for humans.
