Study shows LLMs can fact-check using internal knowledge without external retrieval
A new arXiv paper challenges the dominant retrieval-based fact-checking paradigm by demonstrating that LLMs can verify factual claims using only their parametric knowledge. The study introduces INTRA, a method that reads claims' truth values from internal model representations; it outperforms logit-based approaches and generalizes robustly across long-tail knowledge, multilingual claims, and long-form generation.
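The summary does not detail how INTRA reads internal representations, but the general idea behind representation-based verification, as opposed to inspecting output logits, is to train a lightweight probe on a model's hidden-state vectors. The sketch below illustrates that idea only: it trains a linear probe on synthetic stand-ins for hidden states (all data, dimensions, and separability assumptions here are hypothetical, not the paper's method or results).

```python
import numpy as np

rng = np.random.default_rng(0)
d = 32  # hypothetical hidden-state dimension

# Synthetic stand-in for hidden states of true vs. false claims,
# assumed (for illustration) to be linearly separable along one direction.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
n = 200
true_h = rng.normal(size=(n, d)) + 2.0 * direction
false_h = rng.normal(size=(n, d)) - 2.0 * direction
X = np.vstack([true_h, false_h])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Linear probe: logistic regression trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(claim is true)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"probe accuracy on synthetic hidden states: {acc:.2f}")
```

The design choice the paper's framing points at is that such a probe consumes intermediate representations directly, so it can detect what the model "knows" internally even when that knowledge is not cleanly reflected in output-token logits.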