LLM News

Every LLM release, update, and milestone.

research

Researchers Identify 'Contextual Inertia' Bug in LLMs During Multi-Turn Conversations

Researchers have identified a critical failure mode in large language models called 'contextual inertia': in multi-turn conversations, models ignore new information and rigidly stick to their earlier reasoning. A new training method called RLSTA uses single-turn performance as an anchor to stabilize multi-turn reasoning and recover the performance lost to this phenomenon.
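The article does not describe RLSTA's actual objective, but the idea of "single-turn performance as an anchor" can be sketched generically: penalize a policy whenever its single-turn score drifts below a reference baseline, while still rewarding multi-turn gains. Everything below (the function name `anchored_objective`, the hinge-style penalty, and the `beta` weight) is a hypothetical illustration, not the published method.

```python
def anchored_objective(multi_turn_reward: float,
                       single_turn_reward: float,
                       anchor: float,
                       beta: float = 1.0) -> float:
    """Illustrative anchored training signal (NOT the actual RLSTA loss).

    multi_turn_reward:  score on the multi-turn conversation task
    single_turn_reward: score on the same capability measured single-turn
    anchor:             reference single-turn score (e.g. the base model's)
    beta:               weight on the anchor penalty (assumed hyperparameter)
    """
    # Hinge penalty: only triggers when single-turn skill regresses
    # below the anchor, so the model is free to improve multi-turn
    # behavior as long as single-turn reasoning stays stable.
    penalty = max(0.0, anchor - single_turn_reward)
    return multi_turn_reward - beta * penalty


# No regression below the anchor: the penalty term is zero.
print(anchored_objective(0.80, 0.90, anchor=0.85))  # 0.8
# Single-turn score drops below the anchor: objective is reduced.
print(round(anchored_objective(0.80, 0.70, anchor=0.85, beta=2.0), 2))  # 0.5
```

The hinge form is one plausible choice; a real implementation might instead use a KL constraint to a single-turn reference policy or a hard performance floor.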