LLM News

Every LLM release, update, and milestone.


Researchers detect hallucinations in LLMs through computational traces

Researchers at Sapienza University of Rome have identified measurable computational traces that appear when large language models hallucinate. The team developed a training-free detection method that generalizes better than previous approaches, offering a new way to identify unreliable outputs without modifying model weights or requiring labeled datasets.
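The announcement doesn't describe what these computational traces are, so the sketch below is only a rough illustration of the training-free idea, not the paper's method: it scores a model's own output using per-token predictive entropy, a signal that needs no weight updates and no labeled data. The model name, the entropy statistic, and the threshold are all assumptions for demonstration.

```python
# Illustrative sketch only -- a generic training-free uncertainty signal,
# NOT the Sapienza team's actual trace-based detector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM that exposes logits
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def token_entropies(text: str) -> torch.Tensor:
    """Entropy of the model's next-token distribution at each position."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits, dim=-1)
    return -(probs * torch.log(probs + 1e-12)).sum(dim=-1).squeeze(0)

def flag_unreliable(text: str, threshold: float = 5.0) -> bool:
    """Flag text whose mean entropy exceeds a (hypothetical) threshold."""
    return token_entropies(text).mean().item() > threshold

print(flag_unreliable("The capital of France is Paris."))
```

Like the reported method, this requires no fine-tuning and no labeled hallucination data; unlike it, a raw entropy threshold is known to generalize poorly across tasks, which is exactly the gap the researchers claim their trace-based detector addresses.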