Researchers link pseudonymous users to real identities using AI for under $10 per person
Researchers from ETH Zurich and Anthropic have demonstrated that pseudonymous internet users can be de-anonymized using commercially available AI models at a cost of just a few dollars per person. The attack works in minutes and calls fundamental assumptions about online anonymity into question.
The attack exploits the writing patterns and behavioral traces left by users across multiple pseudonymous accounts. By feeding these writing samples into large language models via commercial APIs, researchers were able to match pseudonymous identities to real people with sufficient accuracy to raise serious questions about the viability of online anonymity as currently practiced.
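The core idea, matching a pseudonymous writing sample against candidate authors by stylistic similarity, can be illustrated with a minimal sketch. Note the assumptions: the researchers used commercial LLM APIs, whereas this stand-in uses a classical stylometric baseline (character-trigram cosine similarity) so the general shape of the attack can be shown without any external service; the function names and sample data here are hypothetical, not from the paper.

```python
# Illustrative sketch of the attack class described above: linking accounts by
# writing style. This uses a simple stylometric baseline (character trigrams),
# NOT the paper's LLM-based method, which exploits far richer signals.
from collections import Counter
from math import sqrt


def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams, a cheap proxy for writing style."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram profiles."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def best_match(pseudonymous_text: str, known_authors: dict[str, str]) -> str:
    """Return the candidate author whose known writing is stylistically closest."""
    target = trigram_profile(pseudonymous_text)
    return max(known_authors,
               key=lambda name: cosine(target, trigram_profile(known_authors[name])))
```

Given a corpus of known writing per candidate identity, `best_match` picks whichever author's trigram distribution lies closest to the pseudonymous sample; an attacker with LLM API access can do the same kind of comparison with much stronger features, which is what makes the reported few-dollars-per-person cost plausible.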
Key Findings
The research demonstrates:
- Cost efficiency: De-anonymization of a single user requires only a few dollars in API costs using existing commercial LLM services
- Speed: The identification process completes in minutes, not hours or days
- Accessibility: The attack requires no specialized tools or technical expertise beyond basic API access to commercial AI models
- Effectiveness: The researchers achieved meaningful success rates in linking pseudonymous accounts to real identities
Implications for Online Privacy
The findings suggest that users who assume they are anonymous online because they use pseudonymous accounts may be significantly overestimating their actual privacy protection. Writing style, vocabulary choices, topic preferences, and other linguistic markers can now be matched across accounts with high accuracy using standard machine learning techniques applied at scale.
This has particular implications for:
- Whistleblowers and activists in restrictive environments
- Journalists protecting sources
- Users in jurisdictions with speech restrictions
- Anyone relying on pseudonymity for safety or privacy
The research was conducted jointly by ETH Zurich (the Swiss Federal Institute of Technology) and Anthropic. Because the method relies on commercially available models rather than proprietary or privileged tooling, it should be straightforward for other parties to reproduce.
What This Means
The accessibility and low cost of this attack mean that de-anonymization has moved from a theoretical threat to a practical vulnerability. Unlike earlier de-anonymization research, which required significant technical sophistication or privileged access, this method needs only API access to commercial LLM services that anyone can purchase. The implications extend beyond individual privacy to the fundamental question of whether pseudonymity can provide meaningful protection in an era of commodity AI-powered analysis. For anyone depending on online anonymity, from activists to vulnerable populations, these findings suggest that pseudonymity alone is no longer sufficient and that additional technical protections are necessary.