
usonian

(22,774 posts)
Sun Nov 30, 2025, 10:43 PM 11 hrs ago

How AI Threats Have Broken Strong Authentication

It's software written by geniuses. Nothing could POSSIBLY go wrong!

https://securityboulevard.com/2025/11/how-ai-threats-have-broken-strong-authentication/

TLDR: AI trashes ALL authentication. They done moved fast and broke everything. THANKS!

Identity security has reached a tipping point. Stronger locks are no longer enough when adversaries can look, sound and even behave like authorized users. Let’s face it, traditional strong authentication methods like MFA and biometrics are just another deadbolt. The real challenge isn’t letting in users who present a valid credential; it’s proving, beyond a doubt, that the person on the other side of the door is who they claim to be.

Here’s the core issue. Modern attackers don’t just steal credentials; they attack the entire authentication process. Techniques like deepfakes, adversary-in-the-middle phishing, SIM swaps and push-notification fatigue show that MFA factors—whether “something you know,” “something you have,” or “something you are”—can be intercepted, spoofed, or socially engineered. With so many authentication factors vulnerable, what’s a reliable way to prove identity?
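One of the attacks named above, push-notification fatigue, works because approving a push takes a single blind tap. A common mitigation (not from the article, just a minimal sketch) is "number matching": the login screen shows a short code that the user must type into the authenticator, so approving an attacker-initiated push without seeing the login screen fails. The function names below are illustrative.

```python
import hmac
import secrets

def start_push_challenge() -> str:
    """Generate a short code displayed on the login screen (hypothetical flow)."""
    return f"{secrets.randbelow(100):02d}"

def verify_push_response(expected: str, typed: str) -> bool:
    """Approve only if the user typed the code shown on the login screen.

    compare_digest avoids timing side channels on the comparison.
    """
    return hmac.compare_digest(expected, typed)

code = start_push_challenge()
assert verify_push_response(code, code)       # legitimate user sees and types the code
assert not verify_push_response(code, "xx")   # blind "Approve" without the code fails
```

The point of the sketch: the approval now requires information that only flows from the genuine login screen to the genuine user, which a spammed push alone cannot provide.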
The Limits of “Something You Are”

Biometric authentication falls under the “inherence” factor; it uses unique biological traits like fingerprints, facial geometry, or iris patterns to verify identity. At first glance, biometrics seem well-suited to preventing phishing or credential theft: They can’t be guessed, forgotten, or phished. However, this is only true if the system can ensure that the biometric sample is coming from the correct person, in real time and through a secure channel.

Today’s AI-powered deepfakes make deception more challenging than ever. Presentation attacks, where a malicious actor tries to fool a sensor with a photo, video, mask, or synthetic voice, are no longer just theoretical. They are now available as a service. Injection attacks can even bypass the camera entirely by feeding a fake video stream into the device. Without advanced, certified presentation attack detection (PAD) and anti-spoofing measures, a biometric system can be compromised without the attacker ever being physically present.
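One way to reason about blocking injection attacks (a sketch of the general challenge-response idea, not the article's or any vendor's implementation) is to have the server issue a fresh nonce that trusted capture hardware signs together with the biometric sample. A replayed or injected stream cannot produce a valid signature over a nonce it has never seen. `DEVICE_KEY`, the nonce TTL, and all names here are assumptions for illustration.

```python
import hashlib
import hmac
import os
import time

DEVICE_KEY = os.urandom(32)   # hypothetically provisioned to the sensor at enrollment
NONCE_TTL = 30                # seconds a challenge stays valid (illustrative)

def issue_nonce() -> tuple[bytes, float]:
    """Server side: create a fresh challenge and record when it was issued."""
    return os.urandom(16), time.time()

def device_sign(sample: bytes, nonce: bytes) -> bytes:
    """What certified capture hardware would do with its embedded key."""
    return hmac.new(DEVICE_KEY, nonce + sample, hashlib.sha256).digest()

def server_verify(sample: bytes, nonce: bytes, issued: float, sig: bytes) -> bool:
    """Accept only fresh samples signed over the current nonce."""
    fresh = (time.time() - issued) < NONCE_TTL
    expected = hmac.new(DEVICE_KEY, nonce + sample, hashlib.sha256).digest()
    return fresh and hmac.compare_digest(expected, sig)

nonce, t0 = issue_nonce()
sample = b"face-embedding-bytes"
sig = device_sign(sample, nonce)
assert server_verify(sample, nonce, t0, sig)        # live capture passes

new_nonce, t1 = issue_nonce()
assert not server_verify(sample, new_nonce, t1, sig)  # replayed signature fails
```

This only helps if the signing key really lives in tamper-resistant hardware; an attacker who can run code on the device defeats it, which is why the article pairs this idea with certified presentation attack detection.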


More heart-warming news at the link.

And here's a screensnap of recommendations to circumvent the circumventors.



EASY AS QUANTUM COSMOLOGY!!!

I love this stuff. Thanks so much for sharing, usonian. ❤️ littlemissmartypants 5 hrs ago #1
📌 Identity security. Bookmark. littlemissmartypants 5 hrs ago #2