LLM News

Every LLM release, update, and milestone.

research

DiaBlo: Diagonal Block Fine-Tuning Matches Full Model Performance With Lower Cost

Researchers introduce DiaBlo, a parameter-efficient fine-tuning method that updates only the diagonal blocks of model weight matrices rather than all parameters. The approach matches full-model fine-tuning performance across reasoning, code generation, and safety tasks while keeping memory usage and training speed comparable to LoRA.
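
To make the idea concrete, here is a minimal sketch of diagonal-block fine-tuning, assuming DiaBlo adds a trainable block-diagonal update on top of a frozen pretrained weight matrix. The wrapper class `DiaBloLinear`, the `num_blocks` parameter, and the additive zero-initialized formulation are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class DiaBloLinear(nn.Module):
    """Sketch: freeze a pretrained Linear layer and train only a
    block-diagonal additive update to its weight matrix."""

    def __init__(self, base: nn.Linear, num_blocks: int = 4):
        super().__init__()
        out_f, in_f = base.weight.shape
        assert out_f % num_blocks == 0 and in_f % num_blocks == 0
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.num_blocks = num_blocks
        # One trainable block per diagonal position, zero-initialized so
        # training starts from exactly the pretrained model (assumption).
        self.blocks = nn.ParameterList([
            nn.Parameter(torch.zeros(out_f // num_blocks, in_f // num_blocks))
            for _ in range(num_blocks)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)  # frozen full-weight path
        # Block-diagonal path: slice the i-th input chunk, project it
        # through the i-th diagonal block, and emit the i-th output chunk.
        in_step = x.shape[-1] // self.num_blocks
        out_chunks = []
        for i, blk in enumerate(self.blocks):
            xi = x[..., i * in_step:(i + 1) * in_step]
            out_chunks.append(xi @ blk.T)
        return y + torch.cat(out_chunks, dim=-1)


# Hypothetical usage: wrap one projection and train only the blocks.
layer = DiaBloLinear(nn.Linear(1024, 1024), num_blocks=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 1024*1024/8 = 131072, i.e. 1/8 of full fine-tuning
```

Under these assumptions, the trainable parameter count is 1/`num_blocks` of the full matrix, and because the update is a plain additive block-diagonal matrix, it can be folded back into the frozen weights after training with no inference overhead, much like merging a LoRA adapter.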