LLM News

Every LLM release, update, and milestone.

research

New method uses structural graphs to fix LLM reasoning collapse in multi-step theorem prediction

Researchers have identified and solved a critical scaling problem in LLM-based theorem prediction called Structural Drift, where in-context learning performance collapses as reasoning depth increases. Using Theorem Precedence Graphs to encode topological dependencies, they achieved 89.29% accuracy on the FormalGeo7k benchmark—matching state-of-the-art supervised approaches without any gradient-based training.
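The paper's exact graph construction isn't shown here, but the core idea of a precedence graph is that each theorem lists the theorems it depends on, and a topological sort orders them so every prerequisite appears before its dependents. A minimal sketch with hypothetical theorem names (Kahn's algorithm; not the authors' implementation):

```python
from collections import deque

def precedence_order(prereqs):
    """Topologically sort theorems so each appears after its prerequisites.

    prereqs: dict mapping a theorem name to the theorems it depends on.
    """
    # Number of unmet prerequisites per theorem.
    indegree = {t: len(ps) for t, ps in prereqs.items()}
    # Reverse edges: prerequisite -> theorems it unlocks.
    unlocks = {t: [] for t in prereqs}
    for t, ps in prereqs.items():
        for p in ps:
            unlocks[p].append(t)

    queue = deque(sorted(t for t, d in indegree.items() if d == 0))
    order = []
    while queue:
        t = queue.popleft()
        order.append(t)
        for nxt in unlocks[t]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    if len(order) != len(prereqs):
        raise ValueError("cycle detected: not a valid precedence graph")
    return order

# Hypothetical geometry dependencies for illustration.
graph = {
    "midpoint": [],
    "parallel": [],
    "similar_triangles": ["parallel"],
    "segment_ratio": ["midpoint", "similar_triangles"],
}
print(precedence_order(graph))
```

Ordering in-context examples this way keeps each reasoning step's prerequisites visible before the step itself, which is the kind of structural signal the paper credits for preventing drift at depth.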

research

Researchers introduce RDB-PFN, first relational database foundation model trained entirely on synthetic data

Researchers have developed RDB-PFN, the first foundation model designed specifically for relational databases, trained entirely on synthetic data to overcome the scarcity of high-quality public relational datasets. Pre-trained on over 2 million synthetic relational and single-table tasks, the model achieves competitive few-shot performance on 19 real-world relational prediction tasks, outperforming existing graph-based and single-table baselines.
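The paper's synthetic data generator isn't described here, but the basic shape of a synthetic relational task is easy to illustrate: generate linked tables at random, then define a prediction target that depends on an aggregate across the link. A toy sketch (hypothetical schema and label rule, not RDB-PFN's actual generator):

```python
import random

def make_synthetic_task(n_users=50, n_orders=200, seed=0):
    """Generate one toy relational task: predict a per-user label
    from rows in a linked orders table.

    Returns (users, orders); tables join on user_id, and the label
    depends on each user's total order amount.
    """
    rng = random.Random(seed)
    orders = [
        {"order_id": i, "user_id": rng.randrange(n_users), "amount": rng.uniform(1, 100)}
        for i in range(n_orders)
    ]
    # Aggregate the child table per user: total spend.
    spend = {}
    for o in orders:
        spend[o["user_id"]] = spend.get(o["user_id"], 0.0) + o["amount"]
    # The target the model must learn: does total spend exceed a threshold?
    users = [
        {"user_id": u, "label": int(spend.get(u, 0.0) > 100.0)}
        for u in range(n_users)
    ]
    return users, orders
```

Sampling millions of such tasks with varied schemas, link structures, and label rules is one plausible way to pre-train a model that transfers few-shot to real relational databases.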

research

Research shows many-shot in-context learning closes gap with dedicated fine-tuning

Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a method that enables moderately sized LLMs like Mistral 7B and Llama-3 8B to match dedicated fine-tuning performance while handling multiple downstream tasks with a single model. The approach treats in-context examples as training targets rather than prompts, significantly reducing the performance gap with task-specific models.
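The key difference from standard supervised fine-tuning is where the loss is applied. A word-level sketch of the idea, assuming a simple Q/A format (hypothetical; real implementations operate on subword tokens with a tokenizer):

```python
def build_manyicl_sequence(pairs):
    """Concatenate many (question, answer) shots into one training sequence
    and mark which positions receive loss.

    Plain SFT on a many-shot prompt would put loss only on the final answer;
    here every shot's answer is a training target, so each example in the
    context contributes gradient signal.
    """
    tokens, loss_mask = [], []
    for question, answer in pairs:
        for tok in f"Q: {question}".split():
            tokens.append(tok)
            loss_mask.append(0)  # no loss on the prompt side
        for tok in f"A: {answer}".split():
            tokens.append(tok)
            loss_mask.append(1)  # loss on every shot's answer
    return tokens, loss_mask

shots = [("2+2?", "4"), ("3+3?", "6"), ("5+1?", "6")]
tokens, mask = build_manyicl_sequence(shots)
```

Because one sequence packs many supervised targets, a single model can be trained on mixed-task contexts and still be steered at inference time purely by the examples in its prompt.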