LLM News

Every LLM release, update, and milestone.

research

Knowledge graphs enable smaller models to outperform GPT-5.2 on complex reasoning

A new training approach using knowledge graphs as implicit reward models enables a 14-billion-parameter model to outperform much larger systems like GPT-5.2 and Gemini 3 Pro on complex multi-hop reasoning tasks. Researchers combined supervised fine-tuning and reinforcement learning with knowledge graph path signals to ground models in verifiable domain facts.
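The core idea of using knowledge graph paths as an implicit reward can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: a toy graph stored as a dictionary of edges, and a reward function that pays out only when every hop in a model's reasoning chain is a verifiable edge.

```python
# Toy knowledge graph: (head, relation) -> tail
KG = {
    ("Paris", "capital_of"): "France",
    ("France", "continent"): "Europe",
}

def path_reward(chain):
    """Return 1.0 only if every hop in the chain matches a KG edge, else 0.0."""
    for head, relation, tail in chain:
        if KG.get((head, relation)) != tail:
            return 0.0  # hop contradicts the graph -> no reward
    return 1.0

good = [("Paris", "capital_of", "France"), ("France", "continent", "Europe")]
bad = [("Paris", "capital_of", "Germany")]
print(path_reward(good))  # 1.0
print(path_reward(bad))   # 0.0
```

In a reinforcement learning loop, such a verifiable reward grounds the model in domain facts rather than in another model's judgment.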

2 min read · via arxiv.org
model release

Segmind releases SegMoE, a mixture-of-experts diffusion model for faster image generation

Segmind has released SegMoE, a mixture-of-experts (MoE) diffusion model designed to accelerate image generation while reducing computational overhead. The model applies MoE techniques traditionally used in large language models to the diffusion model architecture, enabling selective expert activation during inference.
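Selective expert activation means only a few experts compute per input, which is where the inference savings come from. A minimal sketch of top-k MoE routing, using NumPy and invented layer shapes purely for illustration (SegMoE's actual routing is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d = 4, 8
top_k = 2

# Each "expert" is a small linear layer; a gate scores which ones should run.
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))

def moe_forward(x):
    """Route x through only the top-k experts (sparse activation)."""
    scores = x @ gate_w                    # one gate score per expert
    top = np.argsort(scores)[-top_k:]      # indices of the k best experts
    weights = np.exp(scores[top])
    weights /= weights.sum()               # softmax over the selected experts only
    # Only the top-k experts compute; the other experts are skipped entirely.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d))
print(y.shape)  # (8,)
```

With top_k=2 of 4 experts, half the expert compute is skipped on every forward pass; total parameter count stays large while per-step cost stays small.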