LLM News

Every LLM release, update, and milestone.


Alignment tuning shrinks LLM output diversity by 2-5x, new research shows

A new arXiv paper introduces the Branching Factor (BF), a metric quantifying output diversity in large language models, and finds that alignment tuning reduces this diversity by 2-5x overall, and by up to 10x at early generation positions. The research suggests alignment doesn't fundamentally change model behavior but instead steers outputs toward lower-entropy token sequences already present in base models.
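To make the idea concrete, a branching-factor-style metric is often formalized as the perplexity of the next-token distribution, i.e., the exponential of its Shannon entropy, which reads as the "effective number" of plausible next tokens. The sketch below assumes that definition; the paper's exact formulation may differ.

```python
import math

def branching_factor(probs):
    """Effective number of likely next tokens: exp(Shannon entropy).

    `probs` is a next-token probability distribution. Defining the
    branching factor as exp(entropy) is an assumption for illustration;
    the paper's exact definition may differ.
    """
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return math.exp(entropy)

# A uniform distribution over 4 tokens: all 4 continuations are live.
print(branching_factor([0.25, 0.25, 0.25, 0.25]))  # 4.0 (up to float error)

# A peaked, lower-entropy distribution (as alignment tuning is said to
# induce) collapses the effective branching toward 1.
print(branching_factor([0.97, 0.01, 0.01, 0.01]))
```

Under this reading, a 2-5x reduction in BF means an aligned model behaves as if it has 2-5x fewer plausible continuations at each step than its base model.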