LLM News

Every LLM release, update, and milestone.

Filtered by: diffusion-models
research

RealWonder generates physics-accurate videos in real-time from single images

Researchers introduce RealWonder, a real-time video generation system that simulates physical consequences of 3D actions by using physics simulation as an intermediate representation. The system generates 480x832 resolution videos at 13.2 FPS from a single image, handling rigid objects, deformable bodies, fluids, and granular materials.
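The core idea, a physics simulator as the intermediate representation between the input image and the generated frames, can be sketched with a toy simulation. Everything below (the `Ball` state, the bounce rule, conditioning a renderer on simulated states) is an illustrative assumption, not RealWonder's actual pipeline; only the ~13.2 FPS step rate comes from the summary above.

```python
from dataclasses import dataclass

@dataclass
class Ball:
    y: float   # height above the floor (m)
    vy: float  # vertical velocity (m/s)

def physics_step(state, dt=1 / 13.2, g=9.81):
    """One simulation step at ~13.2 FPS: gravity plus a lossy floor bounce."""
    vy = state.vy - g * dt
    y = state.y + vy * dt
    if y < 0:                      # hit the floor: clamp and rebound
        y, vy = 0.0, -0.7 * vy     # lose 30% of speed per bounce
    return Ball(y, vy)

# "Video": roll the simulator forward. A real system would condition a
# frame generator on each simulated state; here we just record heights.
state = Ball(y=1.0, vy=0.0)
frames = []
for _ in range(30):
    state = physics_step(state)
    frames.append(state.y)

print(min(frames) >= 0.0)  # True: the floor constraint holds in every frame
```

The point of the intermediate representation is visible even in this toy: physical constraints (here, the floor) are enforced by the simulator rather than learned by the video model.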

research

MeanFlowSE enables single-step speech enhancement by learning mean velocity fields instead of instantaneous flows

Researchers introduce MeanFlowSE, a generative speech enhancement model that removes the computational bottleneck of multistep inference by learning the average velocity over a finite interval rather than the instantaneous velocity field. The single-step approach achieves quality comparable to multistep baselines on VoiceBank-DEMAND at substantially lower computational cost, and without knowledge distillation.
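The mean- versus instantaneous-velocity distinction can be shown with a toy 1-D flow. The field `v(z, t) = -z` below is an illustrative assumption, not MeanFlowSE's learned field; the sketch only demonstrates why a single step with the *average* velocity avoids the discretization error of a single Euler step with the *instantaneous* velocity.

```python
# Toy 1-D flow with instantaneous velocity v(z, t) = -z, so the exact
# trajectory is z(t) = z(0) * exp(-t). (Illustrative assumption only.)
def v(z, t):
    return -z

def simulate(z0, t_end, steps=10_000):
    """Fine-grained Euler integration as ground truth."""
    dt = t_end / steps
    z = z0
    for i in range(steps):
        z = z + dt * v(z, i * dt)
    return z

z0 = 1.0
z1 = simulate(z0, 1.0)            # state at t = 1

# The mean velocity over [0, 1] is the average displacement rate,
# u = (z1 - z0) / (1 - 0). A model trained to predict u can map z1
# back to z0 in ONE step, with no coarse-discretization error.
u = (z1 - z0) / 1.0
one_step = z1 - 1.0 * u           # single step with the mean velocity

# A single Euler step with the instantaneous field, by contrast,
# incurs a large error at this step size.
euler_one_step = z1 - 1.0 * v(z1, 1.0)

print(abs(one_step - z0))         # ~0: exact by construction
print(abs(euler_one_step - z0))   # noticeably larger
```

This is the intuition behind replacing multistep inference with a single step: the average velocity already accounts for how the field changes along the interval.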

research

Diffusion language models memorize less training data than autoregressive models, study finds

A new arXiv study systematically characterizes memorization behavior in diffusion language models (DLMs) and finds they exhibit substantially lower memorization-based leakage of personally identifiable information compared to autoregressive language models. The research establishes a theoretical framework showing that sampling resolution directly correlates with exact training data extraction.

research

CoDAR framework shows continuous diffusion language models can match discrete approaches

A new paper identifies token rounding as the primary bottleneck limiting continuous diffusion language models (DLMs) and proposes CoDAR, a two-stage framework that combines continuous embedding-space diffusion with a contextual autoregressive decoder. Experiments on LM1B and OpenWebText show CoDAR achieves competitive performance with discrete diffusion approaches while offering tunable fluency-diversity trade-offs.
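The rounding bottleneck the paper identifies can be illustrated with a toy embedding table: independent nearest-neighbor lookup discretizes each position in isolation, which is what CoDAR's contextual autoregressive decoder replaces. The vocabulary, embeddings, and noise level below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

vocab = ["a", "b", "c", "d"]
E = rng.normal(size=(4, 3))                    # toy token-embedding table

def round_tokens(latents):
    """Baseline discretization: per-position nearest-embedding lookup,
    with no context shared between positions."""
    d = np.linalg.norm(latents[:, None, :] - E[None, :, :], axis=-1)
    return [vocab[i] for i in d.argmin(axis=1)]

clean = E[[0, 2, 1]]                           # exact embeddings for "a c b"
noisy = clean + rng.normal(scale=0.8, size=clean.shape)

print(round_tokens(clean))                     # ['a', 'c', 'b']
print(round_tokens(noisy))                     # may flip tokens under noise
```

Because each position is rounded independently, noise in the continuous latents can flip tokens with no way for neighboring positions to veto an implausible choice; a contextual decoder conditions each discretization step on the tokens already emitted.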

research

Researchers propose DiSE, a self-evaluation method for diffusion language models

Researchers have proposed DiSE, a self-evaluation method designed to assess output quality in diffusion language models (dLLMs) by computing token regeneration probabilities. The technique enables efficient confidence quantification for models that generate text bidirectionally rather than sequentially, addressing a key limitation in quality assessment.
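The regeneration-probability idea can be sketched as follows: re-mask each position of a generated sequence and score how likely the model is to reproduce the original token there. `model_prob` is a hypothetical stand-in for a real dLLM's masked-position predictor, and the aggregation (mean log-probability) is an illustrative choice, not necessarily DiSE's exact scoring rule.

```python
import math

def model_prob(tokens, pos):
    # Hypothetical stand-in for a dLLM's masked-position predictor:
    # pretend the model is more confident about familiar tokens.
    toy_confidence = {"the": 0.9, "cat": 0.7, "sat": 0.6, "qux": 0.1}
    return toy_confidence.get(tokens[pos], 0.3)

def self_evaluate(tokens):
    """Mean log-probability of regenerating each token when re-masked."""
    logps = [math.log(model_prob(tokens, i)) for i in range(len(tokens))]
    return sum(logps) / len(logps)

fluent = ["the", "cat", "sat"]
odd = ["the", "qux", "sat"]
print(self_evaluate(fluent) > self_evaluate(odd))  # True
```

The appeal for bidirectional generators is that every position can be scored this way, whereas sequential models only expose left-to-right token probabilities.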

research

LaDiR uses latent diffusion to improve LLM reasoning beyond autoregressive limits

Researchers propose LaDiR, a framework that replaces traditional autoregressive decoding with latent diffusion models to improve LLM reasoning. The approach encodes reasoning steps into compressed latent representations and uses bidirectional attention to refine solutions iteratively, enabling parallel exploration of diverse reasoning paths.

via arxiv.org
model release

Segmind releases SegMoE, a mixture-of-experts diffusion model for faster image generation

Segmind has released SegMoE, a mixture-of-experts (MoE) diffusion model designed to accelerate image generation while reducing computational overhead. The model applies MoE techniques traditionally used in large language models to the diffusion model architecture, enabling selective expert activation during inference.
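The selective-activation idea SegMoE borrows from LLM-style MoE layers can be sketched as top-k routing: a gate scores all experts per input, but only the k highest-scoring experts run, so compute scales with k rather than the total expert count. Shapes, the gating projection, and expert weights below are illustrative, not SegMoE's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_experts, k = 8, 4, 2
gate_w = rng.normal(size=(d, n_experts))           # gating projection
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def moe_layer(x):
    logits = x @ gate_w                            # one score per expert
    top = np.argsort(logits)[-k:]                  # indices of the top-k
    weights = np.exp(logits[top])
    weights /= weights.sum()                       # softmax over the selected
    # Only the k selected experts are actually evaluated.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d)
y = moe_layer(x)
print(y.shape)  # (8,)
```

With k=2 of 4 experts active, half the expert compute is skipped on every call; the same routing applied inside diffusion blocks is what lets total parameter count grow without a proportional inference cost.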