LLMs killed the privacy star, we can't rewind, we've gone too far
The Register
February 26, 2026
AI-Generated Deep Dive Summary
Large language models (LLMs) are reshaping the landscape of online privacy by making it easier than ever to deanonymize individuals, even those who use pseudonyms. Researchers have demonstrated that LLMs can automate the process of connecting anonymous data points across online posts, identifying users with high precision and at scale. This work builds on decades of academic research into online privacy, including Latanya Sweeney's seminal work on k-anonymity, which showed that just three data points (ZIP code, gender, and date of birth) could uniquely identify 87% of the U.S. population. LLMs now take this risk further by efficiently searching through unstructured text and linking anonymous posts to real identities.
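Sweeney's point about quasi-identifiers can be illustrated with a minimal sketch: even after names are stripped from a record, the remaining (ZIP, gender, date of birth) triple is often unique enough to match it back to a named dataset. All records below are fabricated for illustration.

```python
# Toy re-identification via quasi-identifiers: names are removed from the
# "anonymized" record, but ZIP + gender + DOB still single out one person.
records = [
    {"name": "Alice", "zip": "02139", "gender": "F", "dob": "1987-03-14"},
    {"name": "Bob",   "zip": "02139", "gender": "M", "dob": "1990-07-01"},
    {"name": "Carol", "zip": "10001", "gender": "F", "dob": "1987-03-14"},
]

def reidentify(anonymous_row, named_dataset):
    """Return the names of all records matching the quasi-identifiers."""
    keys = ("zip", "gender", "dob")
    return [r["name"] for r in named_dataset
            if all(r[k] == anonymous_row[k] for k in keys)]

# A record with the name stripped but quasi-identifiers intact:
leaked = {"zip": "02139", "gender": "F", "dob": "1987-03-14"}
print(reidentify(leaked, records))  # ['Alice'] -- a unique match
```

The k-anonymity remedy is to generalize or suppress these fields (e.g. truncate the ZIP code, bucket birth dates by year) until every quasi-identifier combination matches at least k records.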
The study highlights that while identifying individuals from a few data points has long been theoretically possible, the practical difficulty of doing so historically limited the risk. Human investigators had to spend considerable time and effort piecing together identities from scattered online information. LLMs are transforming this process by automating and accelerating it: according to Simon Lermen, an AI engineer at MATS Research, LLMs can extract identity-relevant signals from arbitrary text, search millions of candidate profiles, and judge whether two accounts belong to the same person, all with remarkable accuracy.
In a recent experiment, researchers tested their method on 338 Hacker News users, achieving a 67% success rate in identifying individuals. This demonstrates that while the technique is not foolproof, it is effective enough to pose significant risks to online privacy. The study also revealed that the cost of running such experiments is low—around $2,000 for the entire experiment, with an estimated $1-$4 per profile. This affordability raises concerns about how governments and corporations might exploit this technology, potentially targeting activists, journalists, or even creating highly personalized advertising profiles.
The implications of this research are profound. For individuals who rely on pseudonyms to protect their privacy, whether for personal safety, activism, or professional reasons, the loss of anonymity could have far-reaching consequences. The study underscores the urgent need for stronger privacy protections and ethical guidelines in an era where AI tools like LLMs continue to evolve, making it increasingly difficult to remain anonymous online.