LLM News

Every LLM release, update, and milestone.

Filtered by: sparsity
research

1.58-bit BitNet models naturally support structured sparsity with minimal accuracy loss

Researchers have demonstrated that 1.58-bit quantized language models are naturally more compatible with semi-structured N:M sparsity than full-precision models. The Sparse-BitNet framework combines both techniques simultaneously, achieving up to 1.30x speedups in training and inference while incurring less accuracy degradation than full-precision baselines at equivalent sparsity levels.
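
For intuition, here is a minimal, self-contained sketch of the two ingredients being combined: BitNet-style absmean ternary quantization and 2:4 semi-structured pruning. This is an illustration only, not the Sparse-BitNet algorithm itself; the paper's training procedure and the order in which it applies pruning and quantization are not described here, and the function names below are placeholders.

```python
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """BitNet b1.58-style absmean quantization: scale weights by their
    mean absolute value, then round each entry to {-1, 0, +1}."""
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    return w_q, scale

def nm_prune(w: torch.Tensor, n: int = 2, m: int = 4):
    """Semi-structured N:M pruning: within every group of m consecutive
    weights along the last dim, keep only the n largest magnitudes."""
    rows, cols = w.shape
    assert cols % m == 0, "columns must be divisible by m"
    groups = w.reshape(rows, cols // m, m)
    # Zero the (m - n) smallest-magnitude entries in each group.
    _, drop_idx = groups.abs().topk(m - n, dim=-1, largest=False)
    mask = torch.ones_like(groups)
    mask.scatter_(-1, drop_idx, 0.0)
    return (groups * mask).reshape(rows, cols)

# Toy example: quantize a weight matrix to ternary values, then apply 2:4 sparsity.
w = torch.randn(4, 8)
w_q, scale = ternary_quantize(w)
w_sparse = nm_prune(w_q, n=2, m=4)
print(w_sparse)  # at most 2 nonzeros in every group of 4
print(scale)     # per-tensor scale for dequantization: w ~ scale * w_sparse
```

The intuition behind the compatibility claim is visible in the sketch: ternary quantization already pushes many weights to exactly zero, so enforcing a 2:4 pattern on top of it discards less information than it would for dense full-precision weights.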

2 min read · via arxiv.org