General Discussion
The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought
Last edited Thu Apr 16, 2026, 10:08 AM
Wired
https://www.wired.com/story/deepfake-nudify-schools-global-crisis/
I forgot to clear cookies, so I exceeded the free article limit.
https://archive.ph/ZSW1a
An analysis by WIRED and Indicator found nearly 90 schools and 600 students around the world impacted by AI-generated deepfake nude images, and the problem shows no signs of going away.
The true scale of deepfake sexual abuse taking place in schools is likely much higher.
Lonnnnng Hacker News discussion:
https://news.ycombinator.com/item?id=47779856
The deepfake crisis hitting schools started slowly a couple of years ago, but it has since grown considerably as the technology used to create the explicit imagery has become more accessible. Deepfake sexual abuse incidents have hit around 90 schools globally and have impacted more than 600 pupils, according to a review of publicly reported incidents by WIRED and Indicator, a publication focusing on digital deception and misinformation.
The findings show that since 2023, schoolchildren (most often boys in high schools) in at least 28 countries have been accused of using generative AI to target their classmates with sexualized deepfakes. The explicit imagery, containing minors, is considered to be child sexual abuse material (CSAM). This analysis is believed to be the first to review real-world cases of AI deepfake abuse taking place at schools globally.
As a whole, the analysis shows the worldwide reach of harmful AI nudification technology, which can earn its creators millions of dollars per year, and reveals that schools and law enforcement officials are often unprepared to respond to these serious sexual abuse incidents.

Ruining schoolkids' lives? WE DON'T GIVE A FLYING FU*K
And thanks for all the money, assholes.
Update.
Apple Threatened to Pull Grok From App Store Over Sexualized Images
What followed was a confusing rollout of moderation changes to Grok, some of which could be easily bypassed. Publicly, Apple did not comment on the controversy at the time, but it did respond, and was in fact the instigator of the changes. Internally, the company had found both X and Grok in violation of its App Store guidelines and demanded its developers submit a content moderation plan, the letter reveals.
According to the letter, Apple rejected an initial fix from xAI as insufficient, saying the "changes didn't go far enough," and Apple warned it that additional alterations were required or Grok would be removed. After further back-and-forth, however, Apple eventually concluded that a later submission of the app had improved enough for it to be approved.
snip
Lying sicko X (Musk) said:
"We strictly prohibit users from generating non-consensual explicit deepfakes and from using our tools to undress real people. xAI has extensive safeguards in place to prevent such misuse, such as continuous monitoring of public usage, analysis of evasion attempts in real time, frequent model updates, prompt filters, and additional safeguards."
Safeguards are easily bypassed with creative prompts.
Source: https://www.macrumors.com/2026/04/15/apple-threatened-pull-grok-from-app-store/
The original story at NBC is paywalled.
MONEY MONEY MONEY.
lapfog_1
(31,931 posts)
are, by and large, incel adolescent 13-year-old boys... which is why they don't have a problem with this.
Of course, one doesn't need AI to do this; AI just makes it trivial to accomplish.
pat_k
(13,479 posts)
Both users and creators of these "tools" MUST be held accountable.
Time to find model laws that actually work. I don't know enough about the Take It Down Act to judge whether it can actually address this shit.
Jane Doe v. AI/Robotics Venture Strategy 3 LTD, a lawsuit against ClothOff is described here:
https://law.yale.edu/yls-today/news/clinics-file-suit-against-website-generates-nonconsensual-nude-images
It appears to be running into significant obstacles on basics like service of process and international jurisdiction.
The SF City Attorney has shut down at least 10 sites:
https://sfcityattorney.org/city-attorney-shuts-down-10-websites-that-create-nonconsensual-deepfake-pornography/
So there is some progress, but it seems to be waaaayyyy behind.
Skittles
(172,089 posts)
I would deep-fake the offenders with really tiny penises; yes INDEED
flvegan
(66,347 posts)
If you get my drift.
ms liberty
(11,281 posts)
mopinko
(73,788 posts)
zucky made every fb user a digital creator and pushed them to sign up to get pennies to post. troll farms flourished.
it's a pox on humanity.
usonian
(25,807 posts)
So he could rate women on campus.
He has an army of bodyguards and a bunker. Nice life.
RoseTrellis
(181 posts)
Perhaps a requirement for age/identity verification for users would solve the problem.
Anonymity emboldens people to do shit like this; if they knew there would be accountability, I bet these apps/websites would cease to exist.
usonian
(25,807 posts)
There is no socially responsible use, and these tools were made available to millions.
They always have a "target."
Even if usage is logged, recourse comes only after permanent damage is done, and on an impossibly massive scale, with the burden on the victim.