New benchmark reveals LLMs struggle with genuine knowledge discovery in biology
Researchers have introduced DBench-Bio, a dynamic benchmark that addresses a fundamental problem with existing AI evaluations: their static datasets were likely part of the models' training data. The new framework uses a three-stage pipeline to generate monthly-updated questions from recently published biomedical papers, testing whether leading LLMs can genuinely discover new knowledge rather than regurgitate what they memorized during training.
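The core idea behind such a dynamic benchmark can be sketched in code. This is a minimal illustration, not the paper's actual pipeline: the stage contents, function names, and toy data are all assumptions, but the structure shows how a monthly cutoff date keeps questions ahead of any model's training data.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Question:
    source_doi: str
    prompt: str
    answer: str

def fetch_recent_findings(cutoff: date) -> list[dict]:
    # Stage 1 (assumed): collect findings published after the cutoff,
    # so their content postdates the models' training data.
    # Toy records stand in for a real literature feed.
    corpus = [
        {"doi": "10.1000/a", "finding": "Protein X inhibits pathway Y",
         "published": date(2024, 6, 1)},
        {"doi": "10.1000/b", "finding": "Gene Z is linked to trait W",
         "published": date(2023, 1, 1)},
    ]
    return [p for p in corpus if p["published"] > cutoff]

def generate_questions(papers: list[dict]) -> list[Question]:
    # Stage 2 (assumed): template each new finding into a Q/A pair.
    return [
        Question(p["doi"],
                 f"Recent work reports: '{p['finding']}'. True or false?",
                 "true")
        for p in papers
    ]

def monthly_refresh(cutoff: date) -> list[Question]:
    # Stage 3 (assumed): assemble this month's question set.
    # Rerunning with a later cutoff yields a fresh set that no
    # model could have seen during training.
    return generate_questions(fetch_recent_findings(cutoff))
```

Rerunning `monthly_refresh` each month with an updated cutoff is what makes the benchmark "dynamic": only papers newer than the cutoff survive Stage 1, so the resulting questions cannot be answered by memorization alone.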