LLM News

Every LLM release, update, and milestone.

Filtered by: hallucination-mitigation
research

Research proposes MoD-DPO to reduce cross-modal hallucinations in multimodal LLMs

Researchers have introduced Modality-Decoupled Direct Preference Optimization (MoD-DPO), a framework designed to reduce cross-modal hallucinations in omni-modal large language models. The method adds modality-aware regularization to enforce sensitivity to relevant modalities while reducing reliance on spurious correlations, showing consistent improvements across audiovisual benchmarks.
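To make the idea concrete, here is a minimal sketch of what a DPO objective with a modality-aware regularizer could look like. The `modality_sensitivity_penalty` term and its hinge form are illustrative assumptions, not the paper's actual formulation; only the base DPO loss is standard.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard Direct Preference Optimization loss on one preference pair:
    -log sigmoid(beta * (policy margin - reference margin))."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def modality_sensitivity_penalty(logp_full, logp_ablated, tau=0.5):
    """Hypothetical regularizer (our assumption): if masking the relevant
    modality barely changes the chosen answer's log-probability, the model
    is likely leaning on spurious correlations, so small gaps are penalized."""
    gap = logp_full - logp_ablated   # large gap => answer is modality-grounded
    return max(0.0, tau - gap)       # hinge: only gaps below the margin tau cost

def mod_dpo_objective(logp_w, logp_l, ref_logp_w, ref_logp_l,
                      logp_w_ablated, lam=1.0, beta=0.1, tau=0.5):
    """DPO plus the sensitivity penalty, weighted by lam."""
    return (dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta)
            + lam * modality_sensitivity_penalty(logp_w, logp_w_ablated, tau))
```

In practice the log-probabilities would come from the policy and frozen reference models, with the ablated term computed on inputs whose relevant modality (e.g. the audio track) has been masked.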

research

VC-STaR: Researchers use visual contrast to reduce hallucinations in VLM reasoning

Researchers propose Visual Contrastive Self-Taught Reasoner (VC-STaR), a self-improving framework that addresses a fundamental challenge in vision-language models: hallucinations in visual reasoning. The approach uses contrastive VQA pairs—visually similar images paired with equivalent questions—to improve how VLMs identify relevant visual cues and generate more accurate reasoning paths.
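A rough sketch of the self-training filter implied by this setup, in the STaR style: keep a generated rationale only when the model answers both images of a contrastive pair correctly, with different answers, so the rationale must have used the distinguishing visual cue. The function and data layout below are our assumptions, not the paper's interface.

```python
def filter_contrastive_rationales(pairs, generate):
    """Hypothetical VC-STaR-style filter. Each pair holds two visually
    similar images, one shared question, and the gold answer per image.
    `generate(img, question)` returns (rationale, answer) from the VLM."""
    kept = []
    for img_a, img_b, question, gold_a, gold_b in pairs:
        rat_a, ans_a = generate(img_a, question)
        rat_b, ans_b = generate(img_b, question)
        # Keep only rationales that are correct on each image AND differ
        # across the pair -- a modality-blind shortcut would answer both
        # images identically and be filtered out here.
        if ans_a == gold_a and ans_b == gold_b and ans_a != ans_b:
            kept.append((img_a, question, rat_a))
            kept.append((img_b, question, rat_b))
    return kept
```

The surviving (image, question, rationale) triples would then serve as fine-tuning data for the next self-improvement round.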

research

Steer2Edit converts LLM steering vectors into targeted weight edits without retraining

Researchers propose Steer2Edit, a training-free framework that converts steering vectors into component-level weight edits targeting individual attention heads and MLP neurons. The method achieves up to 17.2% safety improvements, 9.8% gains in truthfulness, and 12.2% reduction in reasoning length while maintaining standard inference compatibility.
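For intuition, one simple training-free conversion (an illustrative assumption, not necessarily Steer2Edit's actual procedure): activation steering adds a vector to a component's output, h' = Wx + b + αv, which is exactly equivalent to editing that component's bias to b' = b + αv, so the steering effect persists at standard inference with no runtime hooks. Component-level targeting would restrict such edits to the weights of specific attention heads or MLP neurons.

```python
def linear(W, b, x):
    """Plain affine map y = Wx + b over Python lists."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def fold_steering_into_bias(W, b, steering_vec, alpha=1.0):
    """Fold a steering vector into a layer's bias: since steering computes
    Wx + b + alpha*v, setting b' = b + alpha*v reproduces it exactly.
    (A minimal sketch; the paper targets individual heads/neurons.)"""
    b_edit = [bi + alpha * vi for bi, vi in zip(b, steering_vec)]
    return W, b_edit
```

The weight matrix is untouched here; richer edits (e.g. low-rank updates that make the added direction input-dependent) would modify W as well.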