LLM News

Every LLM release, update, and milestone.

Filtered by: llm-research
research

Study shows LLMs can fact-check using internal knowledge without external retrieval

A new arXiv paper challenges the dominant retrieval-based fact-checking approach by demonstrating that LLMs can verify factual claims using only their parametric knowledge. The study introduces INTRA, a method leveraging internal model representations that outperforms logit-based approaches and shows robust generalization across long-tail knowledge, multilingual claims, and long-form generation.
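The core idea of leveraging internal representations rather than output logits can be illustrated with a toy probe. The sketch below is an assumption-laden stand-in (INTRA's actual method is not public in this summary): it uses a simple mass-mean probe, where a "truth direction" is estimated from hidden states of known-true and known-false claims, and unseen claims are scored by projection onto that direction. All data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
# Hypothetical "truth" direction in activation space; in practice this
# structure would have to emerge from the model's own representations.
truth_dir = rng.normal(size=d)
# Synthetic hidden states: true claims shifted along truth_dir, false opposite.
true_h = rng.normal(size=(200, d)) + 1.5 * truth_dir
false_h = rng.normal(size=(200, d)) - 1.5 * truth_dir

# Mass-mean probe: score each claim by h · (mean_true - mean_false),
# using the first 100 examples of each class as "training" data.
w = true_h[:100].mean(0) - false_h[:100].mean(0)

def probe(h):
    return h @ w > 0

# Evaluate on held-out examples.
acc = (probe(true_h[100:]).mean() + (~probe(false_h[100:])).mean()) / 2
print(f"probe accuracy: {acc:.2f}")
```

A logit-based baseline, by contrast, would only see the model's final token probabilities; the paper's claim is that the intermediate representations carry a cleaner verification signal.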

research

Researchers use LLMs to simulate misinformation susceptibility across demographics with 92% accuracy

Researchers have developed BeliefSim, a framework that uses large language models to simulate how different demographic groups respond to misinformation by modeling their underlying beliefs. The approach achieved 92% accuracy in predicting susceptibility across multiple datasets and conditioning strategies.
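Belief-conditioned simulation typically works by putting a persona's demographic profile and core beliefs into the prompt before asking the model to rate a claim. The template below is purely illustrative; the field names, scale, and wording are assumptions, not BeliefSim's actual prompts.

```python
# Illustrative sketch (not the paper's code): conditioning an LLM on a
# demographic persona's beliefs before asking it to rate a claim.
PERSONA_TEMPLATE = (
    "You are simulating a respondent with this profile:\n"
    "Age group: {age}. Education: {education}.\n"
    "Core beliefs: {beliefs}\n\n"
    "Claim: {claim}\n"
    "On a scale of 1 (certainly false) to 7 (certainly true), "
    "how would this respondent rate the claim? Answer with one number."
)

def build_prompt(persona: dict, claim: str) -> str:
    """Fill the persona template; the LLM's numeric answer would then be
    aggregated over many personas to estimate group-level susceptibility."""
    return PERSONA_TEMPLATE.format(**persona, claim=claim)

prompt = build_prompt(
    {"age": "18-29", "education": "college degree",
     "beliefs": "trusts scientific institutions; skeptical of social media"},
    "A widely shared post claims a common food additive is toxic.",
)
print(prompt.splitlines()[0])
```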

research

Researchers map LLM reasoning as geometric flows in representation space

A new geometric framework models how large language models reason through embedding trajectories that evolve like physical flows. Researchers tested whether LLMs internalize logic beyond surface form by using identical logical propositions with varied semantic content, finding evidence that next-token prediction training leads models to encode logical invariants as higher-order geometry.
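One way to test whether two propositions with the same logical form but different content follow the same "flow" is to compare their layer-by-layer displacement vectors. The sketch below uses synthetic trajectories (the paper's actual geometry is richer): two trajectories sharing a common flow component stand in for logically identical claims, and a third, independent trajectory stands in for an unrelated claim.

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 12, 32  # hypothetical: 12 layers, 32-dim embeddings
# Simulated layer-wise hidden states for two claims that share a logical
# form but differ in surface content, plus one unrelated claim.
shared_flow = rng.normal(size=(L, d))            # common logical-form component
traj_a = shared_flow + 0.3 * rng.normal(size=(L, d))
traj_b = shared_flow + 0.3 * rng.normal(size=(L, d))
traj_c = rng.normal(size=(L, d))                 # different logical form

def step_similarity(x, y):
    """Mean cosine similarity between successive-layer displacement vectors,
    i.e. how closely two trajectories 'flow' in the same directions."""
    dx, dy = np.diff(x, axis=0), np.diff(y, axis=0)
    cos = (dx * dy).sum(1) / (np.linalg.norm(dx, axis=1) * np.linalg.norm(dy, axis=1))
    return cos.mean()

print(step_similarity(traj_a, traj_b))  # high: shared logical flow
print(step_similarity(traj_a, traj_c))  # near zero: unrelated flows
```

If next-token training really encodes logical invariants geometrically, trajectories of same-form claims should cluster the way `traj_a` and `traj_b` do here.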

research

Neural paging system reduces LLM context management complexity from O(N²) to O(N·K²)

A new research paper introduces Neural Paging, a hierarchical architecture that optimizes how LLMs manage their limited context windows by learning semantic caching policies. The approach reduces asymptotic complexity for long-horizon reasoning from O(N²) to O(N·K²) under bounded context window size K, addressing a fundamental bottleneck in deploying universal agents with external memory.
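The asymptotic claim is easy to make concrete with back-of-envelope arithmetic. The numbers below are illustrative choices, not the paper's benchmarks: they just compare the stated O(N²) and O(N·K²) cost terms for a long horizon N and a bounded window K.

```python
def full_context_cost(n: int) -> int:
    # Quadratic attention over the entire N-token history.
    return n * n

def paged_cost(n: int, k: int) -> int:
    # The paper's stated asymptotic under a bounded context window K:
    # O(N·K²) for long-horizon reasoning with learned semantic paging.
    return n * k * k

n, k = 1_000_000, 128  # hypothetical: 1M-token horizon, 128-token pages
print(f"full:    {full_context_cost(n):.2e} ops")
print(f"paged:   {paged_cost(n, k):.2e} ops")
print(f"speedup: {full_context_cost(n) / paged_cost(n, k):.0f}x")
```

The win scales as N/K²: the longer the horizon relative to the window, the larger the advantage, which is why this matters specifically for agents with long-lived external memory.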

research

Research proposes MoD-DPO to reduce cross-modal hallucinations in multimodal LLMs

Researchers have introduced Modality-Decoupled Direct Preference Optimization (MoD-DPO), a framework designed to reduce cross-modal hallucinations in omni-modal large language models. The method adds modality-aware regularization to enforce sensitivity to relevant modalities while reducing reliance on spurious correlations, showing consistent improvements across audiovisual benchmarks.
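The standard DPO objective, which MoD-DPO builds on, is well known; the modality-aware regularizer below is a guess at the general shape (reward sensitivity to relevant modalities, penalize spurious ones), since the summary does not give the exact form. Scalar log-probabilities stand in for model outputs.

```python
import math

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def dpo_loss(logp_w: float, logp_l: float,
             ref_w: float, ref_l: float, beta: float = 0.1) -> float:
    """Standard DPO: prefer the chosen response over the rejected one,
    measured as log-prob margins against a frozen reference model."""
    margin = beta * ((logp_w - ref_w) - (logp_l - ref_l))
    return -math.log(sigmoid(margin))

def mod_dpo_loss(logp_w: float, logp_l: float, ref_w: float, ref_l: float,
                 sens_relevant: float, sens_spurious: float,
                 beta: float = 0.1, lam: float = 0.5) -> float:
    """Hypothetical MoD-DPO-style objective: DPO plus a modality-aware
    penalty that is low when the model is sensitive to the relevant
    modality and high when it leans on spurious cross-modal correlations.
    (The regularizer's exact form is an assumption.)"""
    reg = lam * (sens_spurious - sens_relevant)
    return dpo_loss(logp_w, logp_l, ref_w, ref_l, beta) + reg

print(dpo_loss(-5.0, -9.0, -6.0, -8.0))  # ≈ 0.60
```

Under this shape, a model that grounds its answer in the audio track of an audiovisual clip (high `sens_relevant`) incurs less loss than one pattern-matching on the video alone.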

research

Alignment tuning shrinks LLM output diversity by 2-5x, new research shows

A new arXiv paper introduces the Branching Factor (BF), a metric quantifying output diversity in large language models, and finds that alignment tuning reduces this diversity by 2-5x overall, and by up to 10x at early generation positions. The research suggests alignment doesn't fundamentally change model behavior but instead steers outputs toward lower-entropy token sequences already present in base models.
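A natural way to define a branching factor, shown below as a sketch (the paper's exact definition may differ), is the exponential of the Shannon entropy of the next-token distribution: the "effective number" of continuations the model is choosing between. The toy distributions are illustrative, not measured.

```python
import math

def branching_factor(probs: list[float]) -> float:
    """Effective number of next-token continuations: exp of the Shannon
    entropy of the distribution. Uniform over n tokens gives exactly n."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return math.exp(h)

base = [0.25, 0.25, 0.25, 0.25]     # base model: 4 equally likely tokens
aligned = [0.85, 0.05, 0.05, 0.05]  # aligned model: mass concentrated

print(branching_factor(base))     # ≈ 4.0
print(branching_factor(aligned))  # ≈ 1.8
```

Concentrating probability mass the way alignment does shrinks the branching factor even when the same tokens remain possible, which matches the paper's reading that aligned models sample from low-entropy sequences already present in the base model.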