LLM News

Every LLM release, update, and milestone.

Filtered by: visual-reasoning
research

Researchers develop data synthesis method to improve multimodal AI reasoning on charts and documents

A new research paper proposes COGS (COmposition-Grounded data Synthesis), a framework that decomposes questions into primitive perception and reasoning factors to generate synthetic training data. The method substantially improves multimodal model performance on chart reasoning and document understanding tasks with minimal human annotation.
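The composition idea can be sketched in a few lines: keep small banks of perception primitives and reasoning primitives, then generate questions by composing one reasoning factor with grounded perception factors. Everything below (the factor names, templates, and function) is illustrative, not taken from the COGS paper.

```python
import itertools

# Hypothetical factor banks -- names and templates are illustrative only.
PERCEPTION_FACTORS = {
    "read_value": "the value of the '{a}' bar",
}
REASONING_FACTORS = {
    "compare": "Is {x} greater than {y}?",
    "difference": "What is the difference between {x} and {y}?",
}

def synthesize_questions(entities):
    """Compose each reasoning factor with pairs of perception factors
    to produce synthetic, compositionally labeled chart questions."""
    questions = []
    for r_name, r_template in REASONING_FACTORS.items():
        for a, b in itertools.combinations(entities, 2):
            x = PERCEPTION_FACTORS["read_value"].format(a=a)
            y = PERCEPTION_FACTORS["read_value"].format(a=b)
            questions.append({
                "reasoning": r_name,
                "perception": ["read_value", "read_value"],
                "question": r_template.format(x=x, y=y),
            })
    return questions

# Example: 3 chart entities, 2 reasoning factors -> 2 * C(3,2) = 6 questions.
qs = synthesize_questions(["Q1", "Q2", "Q3"])
```

Because each synthetic question records which primitives it was built from, a model's errors can be traced back to a specific perception or reasoning factor, which is what makes this kind of decomposition useful for targeted training data.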

research

VC-STaR: Researchers use visual contrast to reduce hallucinations in VLM reasoning

Researchers propose Visual Contrastive Self-Taught Reasoner (VC-STaR), a self-improving framework that addresses a fundamental challenge in vision-language models: hallucinations in visual reasoning. The approach uses contrastive VQA pairs (visually similar images paired with equivalent questions) to improve how VLMs identify relevant visual cues and generate more accurate reasoning paths.
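A STaR-style filtering loop over contrastive pairs can be sketched as follows. The `model_generate` stub below stands in for sampling a (rationale, answer) from a VLM; the function names and data layout are assumptions for illustration, not the paper's actual pipeline.

```python
import random

def model_generate(image, question, rng):
    # Stub VLM: randomly guesses an answer with a canned rationale.
    answer = rng.choice(["red", "blue"])
    return f"The relevant region in {image} looks {answer}.", answer

def self_train_round(contrastive_pairs, n_samples=8, seed=0):
    """For each contrastive pair (two visually similar images, the same
    question, different gold answers), keep a sampled rationale only if
    its answer is correct for its own image -- filtering out reasoning
    that ignores the visual evidence distinguishing the two images."""
    rng = random.Random(seed)
    kept = []
    for pair in contrastive_pairs:
        for image, gold in [(pair["img_a"], pair["gold_a"]),
                            (pair["img_b"], pair["gold_b"])]:
            for _ in range(n_samples):
                rationale, answer = model_generate(image, pair["question"], rng)
                if answer == gold:
                    kept.append({"image": image,
                                 "question": pair["question"],
                                 "rationale": rationale,
                                 "answer": answer})
    return kept  # the VLM would then be fine-tuned on these verified traces

pairs = [{"img_a": "chart_v1.png", "gold_a": "red",
          "img_b": "chart_v2.png", "gold_b": "blue",
          "question": "What color is the tallest bar?"}]
data = self_train_round(pairs)
```

The contrastive structure is what does the work: because the two images are nearly identical but have different correct answers, a rationale can only survive the filter for both images if it actually attends to the visual difference between them.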