product update

G42 and Cerebras to deploy 8 exaflops of compute infrastructure in India

TL;DR

Abu Dhabi-based G42 has partnered with U.S. chipmaker Cerebras to deploy 8 exaflops of computational capacity through a new system in India. The partnership represents a significant infrastructure expansion for AI training and inference workloads in South Asia.


G42 and Cerebras Deploy 8 Exaflops in India

UAE-based technology company G42 has announced a partnership with American chipmaker Cerebras to build and operate an 8 exaflops compute system in India.

Partnership Details

The partnership combines G42's infrastructure and operational expertise with Cerebras' specialized AI accelerator technology. The deployment will establish significant compute capacity in India, supporting both training and inference workloads for large language models and other AI applications.

Eight exaflops equals 8 quintillion (8 × 10^18) floating-point operations per second. For context, compute at this scale typically supports enterprise-grade AI model training and deployment at production scale.
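To make that number concrete, here is a back-of-the-envelope sketch of how long a large training run might take on an 8-exaflop cluster. The FLOP budget and utilization figures below are illustrative assumptions for the example, not numbers from the G42/Cerebras announcement.

```python
# Back-of-the-envelope: what 8 exaflops means in raw throughput.
# The workload figures are hypothetical, chosen only for illustration.

EXA = 10**18
cluster_flops = 8 * EXA  # 8 exaflops = 8 quintillion FLOP/s

# Assumed training budget: 1e24 FLOP (the order of magnitude of a
# large LLM training run), at an assumed 40% hardware utilization.
training_budget_flop = 1e24
utilization = 0.40

seconds = training_budget_flop / (cluster_flops * utilization)
days = seconds / 86_400  # 86,400 seconds per day
print(f"{days:.1f} days")  # about 3.6 days under these assumptions
```

Under these assumptions the run completes in under four days; the same budget on a 1-exaflop system would take roughly eight times longer, which is why raw capacity figures like this matter for training timelines.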

Strategic Significance

The deployment in India reflects rising investment in AI infrastructure outside primary Western markets. India's expanding AI research ecosystem, combined with operational costs lower than in North America and Europe, makes it an attractive location for large-scale compute centers.

Cerebras, known for its wafer-scale AI processors, has been expanding its partnerships with cloud and infrastructure providers to deploy systems globally. G42, a diversified tech conglomerate with significant investment in AI and cloud computing, has been actively building infrastructure capacity across multiple regions.

What This Means

This infrastructure deployment signals continued regional expansion in AI compute capacity. With major models requiring increasingly large-scale training resources, partnerships between infrastructure providers (G42) and specialized chip makers (Cerebras) are becoming standard approaches to building competitive AI systems. The India location positions both companies to serve growing demand from South Asian enterprises and researchers, while potentially supporting AI workloads for the broader region.

Related Articles

product update

Anthropic silently tests 5x price increase for Claude Code, reverses within hours after backlash

Anthropic updated its pricing page on April 22, 2026, removing Claude Code from the $20/month Pro plan and restricting it to $100-200/month Max plans. The company reversed the change within hours after significant backlash across Reddit, Hacker News, and Twitter.

product update

Anthropic's Claude Cowork now runs on Amazon Bedrock with consumption-based pricing

Anthropic announced Claude Cowork is now available on Amazon Bedrock, allowing organizations to deploy the desktop AI assistant through their AWS infrastructure with consumption-based pricing. Unlike Claude Enterprise, pricing flows through existing AWS agreements with no per-seat licensing from Anthropic.

product update

OpenAI's ChatGPT Images 2.0 adds web search and multi-image generation with reasoning mode

OpenAI released ChatGPT Images 2.0, powered by the new GPT Image 2 model. The update enables web search integration for paid subscribers in thinking mode, generates up to eight images from a single prompt while maintaining visual consistency, and supports 2K resolution output.

product update

OpenRouter Launches Pareto Code Router with Dynamic Model Selection Based on Quality Threshold

OpenRouter has released Pareto Code Router, a dynamic routing system that automatically selects from a curated list of coding models based on a user-defined quality threshold. Users set a min_coding_score between 0 and 1, and the router selects an appropriate model from its shortlist without requiring commitment to a specific model.
