LLM News

Every LLM release, update, and milestone.

research

Knowledge graphs enable smaller models to outperform GPT-5.2 on complex reasoning

A new training approach that uses knowledge graphs as implicit reward models enables a 14-billion-parameter model to outperform much larger systems, including GPT-5.2 and Gemini 3 Pro, on complex multi-hop reasoning tasks. The researchers combined supervised fine-tuning and reinforcement learning with knowledge-graph path signals to ground the model in verifiable domain facts.
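The core idea of a knowledge-graph path signal can be illustrated with a minimal sketch: score a model's reasoning chain by the fraction of its hops that correspond to verifiable edges in the graph. All names here (the toy edges, the `path_reward` function) are illustrative assumptions, not the paper's actual implementation.

```python
# Toy knowledge graph as a set of (head, relation, tail) edges.
# These example facts are assumptions for illustration only.
kg_edges = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
}

def path_reward(hops):
    """Fraction of reasoning hops that match a verifiable KG edge."""
    if not hops:
        return 0.0
    verified = sum(1 for hop in hops if hop in kg_edges)
    return verified / len(hops)

# A two-hop reasoning chain; both hops are grounded in the graph,
# so the reward is 1.0. An invented hop would lower the score.
chain = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]
print(path_reward(chain))  # 1.0
```

In a reinforcement-learning setup, a signal like this could replace or supplement a learned reward model, penalizing chains whose intermediate steps are not supported by the graph.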

2 min read · via arxiv.org