LLM News

Every LLM release, update, and milestone.

research

Diffusion language models memorize less training data than autoregressive models, study finds

A new arXiv study systematically characterizes memorization behavior in diffusion language models (DLMs) and finds that they leak substantially less personally identifiable information through memorization than autoregressive language models do. The research also establishes a theoretical framework showing that a model's sampling resolution directly correlates with the rate of exact training-data extraction.
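To make the "exact training-data extraction" metric concrete, here is a minimal, hedged sketch (not the study's actual code; all names are illustrative) of how memorization is commonly quantified: prompt the model with a prefix from a training example and check whether it reproduces the true continuation verbatim.

```python
# Illustrative sketch of an exact-extraction-rate metric for memorization.
# This is a toy stand-in, not the paper's methodology or code.

def extraction_rate(generate, examples):
    """Fraction of (prefix, suffix) training pairs the model completes verbatim.

    generate: callable mapping a prefix string to a generated completion.
    examples: list of (prefix, suffix) pairs drawn from the training set.
    """
    if not examples:
        return 0.0
    hits = sum(
        1
        for prefix, suffix in examples
        if generate(prefix).startswith(suffix)
    )
    return hits / len(examples)

# Toy stand-in "model" that has memorized exactly one training record.
memorized = {"Alice's SSN is": " 123-45-6789"}

def toy_generate(prefix):
    return memorized.get(prefix, " [no memorized continuation]")

pairs = [
    ("Alice's SSN is", " 123-45-6789"),   # leaked verbatim -> counted
    ("Bob's SSN is", " 987-65-4321"),     # not memorized -> not counted
]
print(extraction_rate(toy_generate, pairs))  # 0.5
```

A lower extraction rate on PII-bearing training records is the kind of signal the study reports in favor of diffusion language models.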