LLM News

Every LLM release, update, and milestone.

Filtered by: autoregressive-decoding
research

CoDAR framework closes gap between continuous and discrete diffusion language models

Researchers identify token rounding as a primary bottleneck in continuous diffusion language models (DLMs) and propose CoDAR, a two-stage framework that keeps diffusion in continuous embedding space while using an autoregressive Transformer decoder to discretize tokens in context. Experiments on LM1B and OpenWebText show CoDAR achieves performance competitive with discrete diffusion approaches.
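A minimal PyTorch sketch of the two-stage idea as described: a continuous denoiser over embeddings, followed by an autoregressive decoder that maps the denoised embeddings to tokens while conditioning on tokens already emitted. The module shapes, single-step noising, and BOS convention are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

VOCAB, DIM, LEN = 1000, 64, 16

class Denoiser(nn.Module):
    """Stage 1: continuous diffusion in embedding space (one denoising step shown)."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.net = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, noisy_emb):
        return self.net(noisy_emb)  # predict clean embeddings

class ARDecoder(nn.Module):
    """Stage 2: AR decoder that discretizes denoised embeddings in context,
    addressing the token-rounding bottleneck the item describes."""
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerDecoderLayer(DIM, nhead=4, batch_first=True)
        self.dec = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, prev_tokens, denoised_emb):
        tgt = self.tok_emb(prev_tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        h = self.dec(tgt, memory=denoised_emb, tgt_mask=mask)
        return self.head(h)  # logits over the vocabulary

# Toy pass: noise embeddings, denoise them, then decode tokens left to right.
emb = torch.randn(1, LEN, DIM)                 # stand-in "clean" embeddings
denoised = Denoiser()(emb + 0.5 * torch.randn_like(emb))
tokens = torch.zeros(1, 1, dtype=torch.long)   # BOS id 0 (assumption)
decoder = ARDecoder()
for _ in range(LEN):
    logits = decoder(tokens, denoised)
    tokens = torch.cat([tokens, logits[:, -1].argmax(-1, keepdim=True)], dim=1)
print(tokens.shape)  # torch.Size([1, 17])
```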

research

LaDiR uses latent diffusion to improve LLM reasoning beyond autoregressive limits

Researchers propose LaDiR, a framework that replaces traditional autoregressive decoding with latent diffusion models to improve LLM reasoning. The approach encodes reasoning steps into compressed latent representations and uses bidirectional attention to refine solutions iteratively, enabling parallel exploration of diverse reasoning paths.
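A toy PyTorch sketch of the refinement loop this describes: latent vectors for several candidate reasoning paths, refined in parallel by a bidirectional (unmasked) Transformer over a fixed number of steps. The dimensions, step count, and interpolation-style update are assumptions for illustration; LaDiR's actual encoder, noise schedule, and sampler are defined in the paper.

```python
import torch
import torch.nn as nn

DIM, N_LATENT, N_PATHS, STEPS = 64, 8, 4, 10

class LatentRefiner(nn.Module):
    """Bidirectional Transformer (no causal mask) that proposes a cleaner
    version of the compressed latent reasoning representation."""
    def __init__(self):
        super().__init__()
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.net = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, z):
        return self.net(z)

refiner = LatentRefiner()

# Parallel exploration: each batch row is an independent candidate reasoning
# path; all latent positions are refined jointly rather than left to right.
z = torch.randn(N_PATHS, N_LATENT, DIM)  # at inference, latents start as noise
for t in range(STEPS):
    z_pred = refiner(z)
    alpha = (t + 1) / STEPS               # toy schedule (assumption)
    z = (1 - alpha) * z + alpha * z_pred  # move latents toward the prediction

print(z.shape)  # torch.Size([4, 8, 64]): four refined latent solutions
```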

2 min read · via arxiv.org