LLM News

Every LLM release, update, and milestone.

benchmark

AMA-Bench reveals major gaps in LLM agent memory systems with real-world evaluation

Researchers introduce AMA-Bench, a benchmark for evaluating long-horizon memory in LLM-based autonomous agents, built from real-world trajectories and synthetic scaling. Existing memory systems underperform due to a lack of causal structure and their reliance on lossy similarity-based retrieval. The proposed AMA-Agent system, which combines causality graphs with tool-augmented retrieval, achieves 57.22% accuracy, outperforming baselines by 11.16 percentage points.

2 min read · via arxiv.org
research

Researchers introduce Super Research benchmark for complex multi-step LLM reasoning

Researchers have introduced Super Research, a benchmark designed to evaluate how well large language models handle highly complex questions requiring long-horizon planning, large-scale evidence gathering, and synthesis across heterogeneous sources. The benchmark comprises 300 expert-written questions across diverse domains, each requiring up to 100+ retrieval steps and the reconciliation of conflicting evidence across 1,000+ web pages.

2 min read · via arxiv.org