LLM News

Every LLM release, update, and milestone.

benchmark

OmniVideoBench: New 1,000-question benchmark exposes gaps in audio-visual AI reasoning

Researchers have introduced OmniVideoBench, a large-scale evaluation framework comprising 1,000 manually verified question-answer pairs derived from 628 videos, ranging from a few seconds to 30 minutes in length, designed to measure synergistic audio-visual reasoning in multimodal large language models (MLLMs). Testing reveals a significant performance gap between open-source and closed-source MLLMs on genuine cross-modal reasoning tasks.
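A benchmark of this shape is typically scored as accuracy over verified QA pairs. The sketch below illustrates a plausible evaluation loop; the schema and field names are assumptions for illustration, not the released OmniVideoBench format.

```python
from dataclasses import dataclass

@dataclass
class AVQuestion:
    """One manually verified audio-visual QA pair.

    Field names are hypothetical, not taken from the benchmark release.
    """
    video_id: str
    question: str
    answer: str  # gold choice label, e.g. "B"

def accuracy(predictions: dict, items: list) -> float:
    """Fraction of benchmark questions the model answered correctly.

    `predictions` maps (video_id, question) -> predicted choice label.
    """
    if not items:
        return 0.0
    correct = sum(
        1 for it in items
        if predictions.get((it.video_id, it.question)) == it.answer
    )
    return correct / len(items)
```

Reporting a single accuracy number like this is what makes the open-source vs. closed-source gap directly comparable across models.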

research

Perception-R1 uses visual reward signals to improve multimodal AI reasoning

Researchers propose Perception-R1, a method that adds visual perception reward signals to reinforcement learning training for multimodal AI models. The approach achieves state-of-the-art results on multiple reasoning benchmarks using just 1,442 training examples by explicitly teaching models to accurately perceive visual content before reasoning about it.
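The core idea of combining an answer-correctness reward with a perception bonus can be sketched as simple reward shaping. This is a minimal illustration under assumed names (`perception_reward`, `answer_reward`, a word-overlap proxy), not the paper's actual reward function.

```python
def perception_reward(predicted_caption: str, reference_caption: str) -> float:
    """Toy perception score: word overlap between the model's description
    of the visual content and a reference description. (Illustrative
    stand-in for a learned or rule-based perception reward.)"""
    pred = set(predicted_caption.lower().split())
    ref = set(reference_caption.lower().split())
    if not ref:
        return 0.0
    return len(pred & ref) / len(ref)

def answer_reward(predicted_answer: str, gold_answer: str) -> float:
    """Binary correctness reward on the final answer."""
    return 1.0 if predicted_answer.strip() == gold_answer.strip() else 0.0

def total_reward(pred_caption: str, ref_caption: str,
                 pred_answer: str, gold_answer: str,
                 perception_weight: float = 0.5) -> float:
    """Shaped RL reward: the policy is credited for perceiving the image
    correctly, not only for the final answer being right."""
    return (answer_reward(pred_answer, gold_answer)
            + perception_weight * perception_reward(pred_caption, ref_caption))
```

Under a shaped reward like this, a rollout that answers correctly but describes the image poorly earns less than one that does both, which is the intuition behind rewarding perception explicitly before reasoning.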

research

MedXIAOHE: New medical vision-language model claims state-of-the-art performance on clinical benchmarks

Researchers have published MedXIAOHE, a medical multimodal foundation model designed for clinical applications. According to the authors, the model achieves state-of-the-art performance across diverse medical benchmarks and surpasses several closed-source multimodal systems on multiple clinical capabilities.