inference

12 articles tagged with inference

May 4, 2026
product update

AWS SageMaker adds automatic instance fallback to prevent GPU capacity failures

Amazon SageMaker AI now supports capacity-aware instance pools that automatically try alternative GPU instance types when primary choices lack capacity. The feature works across endpoint creation, autoscaling, and scale-in operations, eliminating the manual retry cycles that previously left endpoints stuck in failed states.
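
For context, here is a rough sketch of the manual fallback pattern this feature automates, using standard boto3 SageMaker calls; the endpoint names, instance-type priority list, and error handling are illustrative, not the new capacity-aware API itself.

```python
# Sketch of the manual retry loop that capacity-aware pools replace.
# Uses standard boto3 SageMaker calls; names and the instance-type
# priority list are illustrative placeholders.
import boto3
from botocore.exceptions import WaiterError

sm = boto3.client("sagemaker")
CANDIDATES = ["ml.p5.48xlarge", "ml.p4d.24xlarge", "ml.g5.48xlarge"]

def create_with_fallback(endpoint_name: str, model_name: str) -> str:
    for instance_type in CANDIDATES:
        config_name = f"{endpoint_name}-{instance_type.replace('.', '-')}"
        sm.create_endpoint_config(
            EndpointConfigName=config_name,
            ProductionVariants=[{
                "VariantName": "primary",
                "ModelName": model_name,
                "InstanceType": instance_type,
                "InitialInstanceCount": 1,
            }],
        )
        sm.create_endpoint(EndpointName=endpoint_name,
                           EndpointConfigName=config_name)
        try:
            sm.get_waiter("endpoint_in_service").wait(EndpointName=endpoint_name)
            return instance_type  # capacity found on this type
        except WaiterError:
            # Capacity shortfalls surface as a Failed endpoint; tear it
            # down and retry with the next instance type in the list.
            sm.delete_endpoint(EndpointName=endpoint_name)
            sm.get_waiter("endpoint_deleted").wait(EndpointName=endpoint_name)
    raise RuntimeError("no candidate instance type had available capacity")
```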

April 4, 2026
model release · Google DeepMind

NVIDIA releases Gemma 4 31B quantized model with 256K context, multimodal capabilities

NVIDIA has released a quantized version of Google DeepMind's Gemma 4 31B IT model, compressed to NVFP4 format for efficient inference on consumer GPUs. The 30.7B-parameter multimodal model supports a 256K-token context window, handles text and image inputs (including video frames), and maintains near-baseline performance across reasoning and coding benchmarks.
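
Back-of-envelope weight math suggests why 4-bit formats matter here; this ignores KV cache, activations, and NVFP4's per-block scaling overhead, which add a little on top.

```python
# Why NVFP4 makes a 30.7B model plausible on consumer GPUs: 4-bit weight math.
# Ignores KV cache/activations and NVFP4's per-block scale-factor overhead.
params = 30.7e9
for bits, label in [(16, "bf16"), (4, "nvfp4")]:
    gib = params * bits / 8 / 2**30
    print(f"{label:>5}: {gib:.1f} GiB")
# bf16 : ~57.2 GiB (needs multiple GPUs)
# nvfp4: ~14.3 GiB (within reach of a 16-24 GB consumer card)
```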

April 2, 2026
benchmark · NVIDIA

Nvidia claims 291 MLPerf wins with 288-GPU setup; AMD MI355X crosses 1M tokens/sec

MLCommons published MLPerf Inference v6.0 results on April 1, 2026, with Nvidia, AMD, and Intel each claiming top spots in different configurations. Nvidia's 288-GPU GB300-NVL72 system achieved 2.49 million tokens per second on DeepSeek-R1, while AMD's MI355X crossed one million tokens per second for the first time. Direct comparisons remain difficult as each chipmaker targets different market segments and benchmarks.
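
Normalizing the headline figures helps a little: Nvidia's number works out to roughly 8,600 tokens per second per GPU, while the article gives no GPU count for AMD's result, so a per-GPU comparison isn't possible from these figures alone.

```python
# Normalize the headline Nvidia figure to per-GPU throughput.
total_tokens_per_sec = 2_490_000   # GB300-NVL72 system on DeepSeek-R1
num_gpus = 288

per_gpu = total_tokens_per_sec / num_gpus
print(f"{per_gpu:,.0f} tokens/sec per GPU")  # ~8,646 tokens/sec per GPU
```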

April 1, 2026
research

Google's TurboQuant compresses AI memory use by 6x, but won't ease DRAM shortage

Google has unveiled TurboQuant, a KV cache quantization technology that claims to reduce memory consumption during AI inference by up to 6x by compressing data from 16-bit precision to as low as 2.5 bits. While the compression technique delivers meaningful efficiency gains for inference providers, it is unlikely to resolve the DRAM shortage that has driven memory prices to record highs, as expanding context windows offset memory savings.
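
The claimed ratio follows directly from the bit widths (16 / 2.5 = 6.4x). A back-of-envelope KV cache sizing makes the savings concrete; the model dimensions below are illustrative Llama-style values, not anything from the article.

```python
# Back-of-envelope KV cache sizing: 16-bit vs. 2.5-bit effective precision.
# Model dimensions are illustrative (Llama-8B-like), not from the article.
n_layers, n_kv_heads, head_dim = 32, 8, 128
seq_len = 128_000  # a long-context request

def kv_cache_gib(bits_per_value: float) -> float:
    # 2x for keys and values, one entry per layer/head/dim/position
    values = 2 * n_layers * n_kv_heads * head_dim * seq_len
    return values * bits_per_value / 8 / 2**30

print(f"fp16  : {kv_cache_gib(16):.1f} GiB")   # ~15.6 GiB
print(f"2.5bit: {kv_cache_gib(2.5):.1f} GiB")  # ~2.4 GiB, a 6.4x reduction
```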

March 25, 2026
research

Google's TurboQuant cuts AI inference memory by 6x using near-lossless compression

Google Research unveiled TurboQuant, a near-lossless memory compression algorithm that reduces AI inference working memory (the KV cache) by at least 6x without measurably impacting model performance. The technology combines a vector quantization method called PolarQuant with an optimization technique called QJL. Findings will be presented at ICLR 2026.
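
Neither PolarQuant nor QJL is reproduced here, but a generic uniform-quantization round trip makes the basic compress-and-restore idea concrete.

```python
# Generic uniform quantization round trip -- an illustration of the
# compress/restore idea, NOT the PolarQuant or QJL algorithms.
import numpy as np

def quantize(x: np.ndarray, bits: int):
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / (2**bits - 1)
    codes = np.round((x - lo) / scale).astype(np.uint8)  # stored small
    return codes, lo, scale

def dequantize(codes, lo, scale):
    return codes.astype(np.float32) * scale + lo

kv = np.random.randn(1024).astype(np.float32)   # stand-in KV cache values
codes, lo, scale = quantize(kv, bits=3)
err = np.abs(kv - dequantize(codes, lo, scale)).mean()
print(f"mean abs error at 3 bits: {err:.4f}")
```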

March 23, 2026
product update · NVIDIA

NVIDIA Nemotron 3 Super now available on Amazon Bedrock with 256K context window

NVIDIA Nemotron 3 Super, a hybrid Mixture of Experts model with 120B total parameters and 12B active parameters, is now available as a fully managed model on Amazon Bedrock. The model supports up to a 256K-token context length, with NVIDIA claiming 5x higher throughput efficiency than the previous Nemotron Super and 2x higher accuracy on reasoning tasks.
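
Invocation would presumably go through Bedrock's standard Converse API; the model ID below is a placeholder, since the article does not give the exact identifier.

```python
# Minimal Bedrock Converse call; the modelId is a placeholder -- check the
# Bedrock console for the actual Nemotron 3 Super identifier.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="nvidia.nemotron-3-super-v1:0",  # hypothetical ID
    messages=[{"role": "user",
               "content": [{"text": "Summarize mixture-of-experts routing."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```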

March 12, 2026
product update

Meta unveils four custom AI inference chips to cut costs and reduce Nvidia dependency

Meta has unveiled four generations of custom-designed AI chips focused on inference workloads, aiming to reduce inference costs across its platforms serving billions of users. The move represents a significant step toward reducing Meta's dependence on GPU manufacturers like Nvidia and AMD.

March 11, 2026
product update

Meta develops four custom AI chips to reduce Nvidia dependence

Meta has developed four new custom AI chips called MTIA (Meta Training and Inference Accelerator) processors designed to power its AI models and recommendation systems. The move represents the company's ongoing effort to reduce dependence on Nvidia's expensive processors while managing massive compute requirements.

March 9, 2026
product update · NVIDIA

NVIDIA Nemotron 3 Nano now available on Amazon Bedrock as serverless model

Amazon Bedrock now offers NVIDIA's Nemotron 3 Nano as a fully managed serverless model, expanding its Nemotron portfolio alongside previously available Nemotron 2 Nano 9B and Nemotron 2 Nano VL 12B variants. The addition enables developers to deploy NVIDIA's smallest inference-optimized model without managing infrastructure.

product update · Amazon Web Services

Anthropic Claude models now available in India via Amazon Bedrock with cross-region inference

Amazon Bedrock now enables access to Anthropic Claude models in India with global cross-region inference support. The service allows developers to build generative AI applications with Claude variants across AWS regions.
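
Cross-region inference in Bedrock routes requests through inference profile IDs rather than plain model IDs. A minimal sketch of discovering them from the Mumbai region follows; which profiles are visible depends on the account and region rollout.

```python
# The exact profile IDs available from India aren't given in the article;
# this lists whatever cross-region inference profiles the account can see.
import boto3

bedrock = boto3.client("bedrock", region_name="ap-south-1")  # Mumbai
for profile in bedrock.list_inference_profiles()["inferenceProfileSummaries"]:
    # Each profile ID can be passed as the modelId in bedrock-runtime calls,
    # letting Bedrock route the request across regions.
    print(profile["inferenceProfileId"], "-", profile["inferenceProfileName"])
```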

March 3, 2026
model release

Google releases Gemini 3.1 Flash-Lite, fastest model in 3 series

Google DeepMind has released Gemini 3.1 Flash-Lite, positioning it as the fastest and most cost-efficient model in the Gemini 3 series. The release targets applications requiring high-speed inference at scale, continuing Google's multi-tier model strategy across the Gemini family.
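
A minimal call sketch with the google-genai SDK; the model string follows the article's naming and may differ from the identifier Google actually publishes.

```python
# Minimal google-genai call; the model string follows the article's naming
# and may not match the published identifier.
from google import genai

client = genai.Client()  # reads the API key from the environment
response = client.models.generate_content(
    model="gemini-3.1-flash-lite",  # hypothetical identifier
    contents="Classify this ticket as billing, bug, or feature request: ...",
)
print(response.text)
```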

February 20, 2026
product update

Taalas serves Llama 3.1 8B at 17,000 tokens/second with custom silicon

Taalas, a new Canadian hardware startup, announced its first product: a custom silicon implementation of Meta's Llama 3.1 8B model running at 17,000 tokens/second. The chip uses aggressive quantization, mixing 3-bit and 6-bit weights. The system is accessible via chatjimmy.ai.
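
Rough arithmetic on what those numbers imply; the 50/50 bit-width split is an assumption, since the article does not give the actual mix.

```python
# Rough footprint math for Llama 3.1 8B with mixed 3-/6-bit weights.
# The 50/50 split is an assumption; the article doesn't give the mix.
params = 8.03e9
avg_bits = 0.5 * 3 + 0.5 * 6          # 4.5 bits/weight under the assumed split
weight_gib = params * avg_bits / 8 / 2**30
print(f"weights: {weight_gib:.1f} GiB")                     # ~4.2 GiB on chip

tokens_per_sec = 17_000
print(f"per-token latency: {1e3 / tokens_per_sec:.3f} ms")  # ~0.059 ms/token
```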