LLM News

Every LLM release, update, and milestone.

Filtered by: parameter-efficient-finetuning
research

DiaBlo: Diagonal Block Finetuning Matches Full Model Performance With Lower Cost

Researchers propose DiaBlo, a parameter-efficient finetuning (PEFT) method that updates only the diagonal blocks of model weight matrices, achieving performance comparable to full-model finetuning while retaining LoRA-level memory and compute efficiency. Unlike LoRA, the approach avoids low-rank factor matrices entirely and comes with theoretical convergence guarantees.
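To make the idea concrete, here is a minimal NumPy sketch of diagonal-block updating: only entries inside the diagonal blocks of a square weight matrix are trainable, so the trainable fraction is 1/num_blocks. The matrix size, block count, and function names here are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def diagonal_block_mask(dim: int, num_blocks: int) -> np.ndarray:
    """Boolean mask selecting the diagonal blocks of a dim x dim matrix."""
    assert dim % num_blocks == 0, "dim must be divisible by num_blocks"
    bs = dim // num_blocks  # block size
    mask = np.zeros((dim, dim), dtype=bool)
    for i in range(num_blocks):
        mask[i * bs:(i + 1) * bs, i * bs:(i + 1) * bs] = True
    return mask

# Hypothetical 512x512 weight matrix split into 8 diagonal blocks.
dim, nb = 512, 8
mask = diagonal_block_mask(dim, nb)

rng = np.random.default_rng(0)
W = rng.standard_normal((dim, dim))
delta = rng.standard_normal((dim, dim)) * 0.01

# Only diagonal-block entries receive the update; the rest stay frozen.
W_finetuned = W + np.where(mask, delta, 0.0)

trainable = int(mask.sum())  # 8 blocks of 64x64 = 32768 entries
total = dim * dim            # 262144 entries
print(trainable, total, trainable / total)  # 32768 262144 0.125
```

In this toy setup only 12.5% of the weight entries are updated; off-block entries of `W_finetuned` are identical to the pretrained `W`, which is the frozen-weights property PEFT methods rely on.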