memory-optimization
2 articles tagged with memory-optimization
Gemma 4, DeepSeek V4, and ZAYA1 Deploy KV Cache Compression to Cut Long-Context Memory Costs
Recent open-weight LLM releases from Google, DeepSeek, and others are adopting architectural techniques that reduce KV cache size by approximately 50% at long contexts. These include cross-layer KV sharing in Gemma 4, which saves 2.7 GB at 128K context for the E2B model, and compressed convolutional attention in ZAYA1-8B.
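As a rough back-of-the-envelope check of the reported saving, the sketch below estimates per-sequence KV cache size and the effect of sharing KV entries across pairs of layers. All model dimensions are assumptions chosen only so the numbers land near the 2.7 GB figure; they are not the actual Gemma 4 E2B configuration.

```python
# Illustrative KV-cache sizing sketch. Layer count, KV heads, and head
# dimension below are assumed values, not the real Gemma 4 E2B config.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Bytes to store K and V tensors for one sequence (fp16/bf16 by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

layers, kv_heads, head_dim = 20, 4, 128   # assumed small-model shape
ctx = 128 * 1024                          # 128K-token context

baseline = kv_cache_bytes(layers, kv_heads, head_dim, ctx)

# Cross-layer KV sharing: adjacent layers reuse one cached K/V entry,
# so only half the layers keep distinct tensors (the ~50% saving).
shared = kv_cache_bytes(layers // 2, kv_heads, head_dim, ctx)

print(f"baseline cache:  {baseline / 1e9:.2f} GB")
print(f"with sharing:    {shared / 1e9:.2f} GB")
print(f"saved:           {(baseline - shared) / 1e9:.2f} GB")  # ~2.7 GB
```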
Google's TurboQuant compresses AI memory use by 6x, but won't ease DRAM shortage
Google has unveiled TurboQuant, a KV cache quantization technology that claims to cut memory consumption during AI inference by up to 6x, compressing stored data from 16-bit precision down to as low as 2.5 bits. While the technique delivers meaningful efficiency gains for inference providers, it is unlikely to resolve the DRAM shortage that has driven memory prices to record highs, as ever-expanding context windows offset the savings.
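The headline ratio follows directly from the bit widths: 16 bits per value divided by an effective 2.5 bits is roughly 6.4x. The snippet below is only that arithmetic applied to an assumed example cache size, not an implementation of TurboQuant's quantization scheme.

```python
# Compression-ratio arithmetic for low-bit KV cache quantization.
# The 40 GB cache size is an assumed example, not a figure from Google.

fp16_bits = 16
quant_bits = 2.5                      # most aggressive setting reported

ratio = fp16_bits / quant_bits
print(f"compression ratio: {ratio:.1f}x")          # ~6.4x, i.e. "up to 6x"

cache_gb_fp16 = 40.0                  # assumed fp16 KV cache for one workload
print(f"quantized cache:   {cache_gb_fp16 / ratio:.1f} GB")

# Why the DRAM math barely moves: KV cache grows linearly with context
# length, so serving ~6x longer contexts consumes the entire 6x saving.
```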