LLM News

Every LLM release, update, and milestone.

research

Researchers identify 'Lazy Attention' problem in multimodal AI training, boost reasoning by 7%

A new arXiv paper identifies a critical flaw in how multimodal large reasoning models initialize training: they fail to properly attend to visual tokens, a phenomenon the researchers call Lazy Attention Localization. The team proposes AVAR, a framework that corrects this through visual-anchored data synthesis and attention-guided objectives, achieving a 7% average improvement across seven multimodal reasoning benchmarks when applied to Qwen2.5-VL-7B.

research

Reasoning models fail at theory of mind tasks despite math excellence

A systematic study of nine advanced language models reveals that large reasoning models, which are designed to excel at step-by-step math and coding, underperform or merely match non-reasoning models on theory of mind tasks. The research identifies a critical weakness: longer reasoning chains actively harm social reasoning performance, suggesting that current reasoning architectures don't transfer to socio-cognitive skills.

benchmark

AttackSeqBench measures LLM capabilities for cybersecurity threat analysis

Researchers introduced AttackSeqBench, a benchmark for evaluating how well large language models understand and reason about cyber attack sequences in threat intelligence reports. The evaluation tested 7 LLMs and 5 reasoning models across multiple tasks, revealing gaps in their ability to extract actionable security insights from unstructured cybersecurity data.

research

ByteDance study: reasoning models know when to stop, but sampling methods force continued thinking

A new ByteDance study reveals that large reasoning models actually know when they have reached the correct answer, but common sampling methods prevent them from stopping. The models engage in unnecessary cross-checking and reformulation despite having already solved the problem correctly.