LLM News

Every LLM release, update, and milestone.

research

Reinforcement fine-tuning preserves model knowledge better than supervised fine-tuning, study finds

A new study on Qwen2.5-VL finds that reinforcement fine-tuning (RFT) significantly outperforms supervised fine-tuning (SFT) at preserving a model's existing knowledge during post-training adaptation. While SFT enables faster task learning, it causes catastrophic forgetting; RFT learns more slowly but maintains prior knowledge by reinforcing samples that are naturally aligned with the base model's probability landscape.
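A toy sketch of why the two update rules behave differently (illustrative only, not the study's implementation): an SFT-style cross-entropy gradient pushes probability mass toward a fixed label no matter how unlikely the base model finds it, while an RFT-style REINFORCE update only reweights outputs the model itself samples, so outputs far from the base distribution receive tiny updates.

```python
# Toy contrast between SFT and RFT gradient signals on a 3-action policy.
# All numbers and reward definitions here are hypothetical illustrations.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sft_grad(logits, target):
    # Gradient of -log p(target): pushes toward `target` unconditionally,
    # even when the base model assigns it near-zero probability.
    p = softmax(logits)
    return [(1.0 if i == target else 0.0) - p[i] for i in range(len(logits))]

def rft_grad(logits, reward_fn, rng, n_samples=1000):
    # Monte Carlo REINFORCE estimate of E[reward * grad log p(a)].
    # Samples come from the current policy, so actions the base model
    # rarely produces are rarely reinforced.
    p = softmax(logits)
    grad = [0.0] * len(logits)
    for _ in range(n_samples):
        a = rng.choices(range(len(logits)), weights=p)[0]
        r = reward_fn(a)
        for i in range(len(logits)):
            grad[i] += r * ((1.0 if i == a else 0.0) - p[i]) / n_samples
    return grad

base_logits = [2.0, 0.0, -4.0]              # action 2 is very unlikely under the base model
reward = lambda a: 1.0 if a == 2 else 0.0   # but the new task rewards exactly action 2

print(sft_grad(base_logits, 2))                             # strong push toward the unlikely action
print(rft_grad(base_logits, reward, random.Random(0)))      # near-zero: action 2 is almost never sampled
```

The SFT gradient on the unlikely action is close to 1, while the RFT estimate stays near zero, mirroring the study's observation that RFT changes the base distribution far more conservatively.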

research

Research shows many-shot in-context learning closes gap with dedicated fine-tuning

Researchers propose Many-Shot In-Context Fine-tuning (ManyICL), a method that enables moderately sized LLMs such as Mistral 7B and Llama-3 8B to match dedicated fine-tuning performance while handling multiple downstream tasks with a single model. The approach treats in-context examples as training targets rather than mere prompt context, significantly reducing the performance gap with task-specific models.
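The core idea of supervising every in-context example, rather than only the final query, can be sketched as a loss mask over a many-shot prompt. The segment format and function below are hypothetical illustrations of that masking scheme, not code from the paper.

```python
# Hypothetical sketch: which answer segments of a many-shot prompt receive
# training loss under conventional fine-tuning vs. a ManyICL-style objective.

def loss_mask(segments, all_answers_are_targets):
    """segments: list of (text, kind) pairs with kind in {'question', 'answer'}.
    Returns a 0/1 mask per segment marking which ones receive loss."""
    last_answer = max(i for i, (_, kind) in enumerate(segments) if kind == "answer")
    mask = []
    for i, (_, kind) in enumerate(segments):
        if kind != "answer":
            mask.append(0)                  # questions are never targets
        elif all_answers_are_targets or i == last_answer:
            mask.append(1)                  # this answer contributes to the loss
        else:
            mask.append(0)                  # in-context answer used as prompt only
    return mask

prompt = [("Q1", "question"), ("A1", "answer"),
          ("Q2", "question"), ("A2", "answer"),
          ("Q3", "question"), ("A3", "answer")]

print(loss_mask(prompt, False))  # [0, 0, 0, 0, 0, 1] -- standard: final answer only
print(loss_mask(prompt, True))   # [0, 1, 0, 1, 0, 1] -- ManyICL-style: every answer
```

With every answer supervised, a single many-shot prompt yields as many training signals as it has examples, which is one intuition for why the gap with task-specific fine-tuning narrows.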