LLM News

Every LLM release, update, and milestone.

research

Researchers expose 'preference leakage' bias in LLM judging systems

Researchers have identified a contamination problem called preference leakage in LLM-as-a-judge evaluation systems, where judges systematically favor data generated by related models. The bias occurs when the judge LLM is the same as the generator, inherits from it, or belongs to the same model family—making it harder to detect than previous LLM evaluation biases.
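The mechanism described above can be illustrated with a toy simulation: a judge that gives even a small probability bump to candidates from its own model family will inflate that family's measured win rate, while an unrelated judge stays near the fair 50% baseline. This is a minimal sketch under assumed conditions; the family names, the bias magnitude, and the pairwise-judging setup are illustrative inventions, not details from the research.

```python
import random

def biased_judge(judge_family, cand_a, cand_b, bias=0.15, rng=random):
    """Return 'a' or 'b' for a pairwise comparison.

    Hypothetical model of preference leakage: a candidate from the
    judge's own family gets a probability bump of `bias` on top of a
    fair 50/50 baseline. The bias value is an assumption for illustration.
    """
    p_a = 0.5
    if cand_a["family"] == judge_family:
        p_a += bias
    if cand_b["family"] == judge_family:
        p_a -= bias
    return "a" if rng.random() < p_a else "b"

def win_rate(judge_family, family_a, family_b, trials=10_000, seed=0):
    """Measured win rate of family_a over family_b under this judge."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        verdict = biased_judge(judge_family,
                               {"family": family_a},
                               {"family": family_b},
                               rng=rng)
        wins += verdict == "a"
    return wins / trials

# A judge related to family "x" inflates x's win rate above the fair 50%;
# an unrelated judge does not. (Family names are placeholders.)
leaky = win_rate("x", "x", "y")
fair = win_rate("neutral", "x", "y")
```

Because the two generators are identical by construction, any deviation from 50% here is pure judge bias — which is why, in real evaluations, this contamination is hard to separate from genuine quality differences when the judge and generator are related.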

research · Apple

Apple Intelligence generates stereotyped summaries across hundreds of millions of devices

Apple Intelligence, which automatically summarizes notifications and messages on hundreds of millions of devices, systematically generates stereotyped and hallucinated content, according to an independent investigation by AI Forensics. The analysis of more than 10,000 AI-generated summaries reveals bias baked into the feature, which pushes problematic assumptions to users unprompted.

2 min read · via the-decoder.com