DeepSeek releases V4 preview, claims parity with GPT-4o and Claude 3.5 Sonnet

TL;DR

DeepSeek released a preview of its V4 model on April 24, 2026, claiming the open-source system matches leading closed-source models from Anthropic, Google, and OpenAI. The company emphasized improved coding capabilities and compatibility with domestic Huawei chips, but did not disclose training costs or hardware specifications.

DeepSeek V4 Preview Released with Claimed Parity to Leading US Models

Chinese AI company DeepSeek released a preview of its V4 model on April 24, 2026, claiming the open-source system can compete with leading closed-source models from Anthropic, Google, and OpenAI.

Key Details

DeepSeek did not disclose:

  • Training costs for V4
  • Hardware used for training
  • Specific benchmark scores
  • Context window size
  • Pricing information

The company emphasized that V4 represents a "major improvement" over prior models, particularly in coding capabilities. DeepSeek explicitly highlighted compatibility with domestic Huawei technology, marking what the company describes as a milestone for China's chip industry.

Context and Controversy

The V4 preview arrives approximately one year after DeepSeek's R1 model disrupted the US AI industry. DeepSeek claimed R1 was trained at a fraction of the cost of leading American systems, though specific cost figures were never independently verified.

US officials have accused DeepSeek of training on export-restricted Nvidia chips. Separately, Anthropic has claimed DeepSeek misused Claude to improve its own products, though neither allegation has been publicly detailed or substantiated.

Coding Focus

According to DeepSeek, V4's enhanced coding performance targets capabilities that have become central to AI agents and driven adoption of systems like ChatGPT Codex and Claude Code. The company did not provide specific benchmarks comparing V4's coding performance to competitors.

What This Means

DeepSeek's V4 preview continues the pattern established with R1: bold claims about competitive performance without disclosed training costs, hardware specifications, or independent benchmark verification. The emphasis on Huawei chip compatibility suggests China's AI industry is working to reduce dependence on restricted Western semiconductor technology, though the practical performance implications remain unclear. Until DeepSeek releases concrete benchmarks and technical details, the actual capabilities of V4 relative to GPT-4o, Gemini, and Claude 3.5 Sonnet cannot be independently assessed.

