LLM News

Every LLM release, update, and milestone.

research

Alibaba's HopChain framework fixes vision model failures in multi-step reasoning tasks

Researchers from Alibaba's Qwen team and Tsinghua University developed HopChain, a framework that automatically generates multi-step image questions to address how vision-language models fail during complex reasoning tasks. The method improved scores on 20 of 24 tested benchmarks by forcing models to re-examine the image at each reasoning step, preventing early perceptual errors from cascading through subsequent steps.
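
The re-examination loop described above can be sketched as follows. This is an illustrative sketch only: `vlm` is a hypothetical stand-in for a vision-language model call, and HopChain's automatic question generation is not reproduced here.

```python
def hopchain_answer(image, hops, vlm):
    """Answer a multi-hop visual question by re-querying the image at every
    hop, so an early perceptual error can be corrected at a later step
    instead of cascading."""
    context = []
    for question in hops:
        # The image is passed to the model on every hop, not just the first.
        answer = vlm(image, question, context)
        context.append((question, answer))
    return context[-1][1]

# Toy stand-in for a vision-language model call:
toy_vlm = lambda image, question, context: f"{image}:{question}"
print(hopchain_answer("img.png", ["what color?", "what object?"], toy_vlm))
```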

research

AI offensive cyber capabilities doubling every 5.7 months since 2024, study finds

AI offensive cybersecurity capabilities are accelerating faster than previously measured. Lyptus Research's new study finds the doubling time has compressed from 9.8 months (measured since 2019) to 5.7 months (since 2024), with GPT-5.3 Codex and Opus 4.6 now solving, at a 50% success rate, tasks that would take human security experts three hours.
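
As back-of-the-envelope arithmetic (an illustration, not part of the study), a fixed doubling time implies exponential capability growth, and the two reported doubling times diverge quickly:

```python
def growth_factor(months_elapsed: float, doubling_time_months: float) -> float:
    """Capability growth factor implied by a fixed doubling time."""
    return 2 ** (months_elapsed / doubling_time_months)

# Over 24 months, the old 9.8-month doubling time vs. the reported 5.7 months:
old = growth_factor(24, 9.8)   # ≈ 5.5x
new = growth_factor(24, 5.7)   # ≈ 18.5x
print(f"{old:.1f}x vs {new:.1f}x")
```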

research

Google study: AI benchmarks need 10+ human raters per example, not standard 3-5

A Google Research and Rochester Institute of Technology study reveals that standard AI benchmarking practices using three to five human evaluators per test example systematically underestimate human disagreement and produce unreliable model comparisons. The researchers found that at least ten raters per example are needed for statistically reliable results, and that budget allocation between test examples and raters matters as much as total budget size.
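
A quick toy simulation (i.i.d. Gaussian rater noise, not the study's actual analysis) shows why three to five raters leave each example's score noisy:

```python
import random

def mean_rating_se(n_raters: int, rater_sd: float = 1.0,
                   n_trials: int = 20000, seed: int = 0) -> float:
    """Empirical standard error of a per-example mean over n_raters noisy raters."""
    rng = random.Random(seed)
    means = [
        sum(rng.gauss(0.0, rater_sd) for _ in range(n_raters)) / n_raters
        for _ in range(n_trials)
    ]
    mu = sum(means) / n_trials
    return (sum((m - mu) ** 2 for m in means) / n_trials) ** 0.5

# Noise in the per-example score shrinks roughly as 1/sqrt(n_raters):
print(mean_rating_se(3), mean_rating_se(10))
```

Halving the noise requires roughly four times as many raters, which is why the split of a fixed budget between examples and raters matters as much as the budget's size.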

research

Alibaba's Qwen team develops algorithm that doubles reasoning chain length in math problems

Alibaba's Qwen team has developed Future-KL Influenced Policy Optimization (FIPO), a training algorithm that assigns different weights to tokens based on their influence on subsequent reasoning steps, rather than treating all tokens equally. Testing on Qwen2.5-32B-Base showed reasoning chains doubling from ~4,000 to 10,000+ tokens, with AIME 2024 accuracy improving from 50% to 58%, outperforming DeepSeek-R1-Zero-Math-32B (47%) and OpenAI's o1-mini (56%). The team plans to open-source the system.
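
The summary does not say how FIPO estimates a token's influence on later steps, so the weights below are assumed to be given; the sketch only shows the core idea of a non-uniformly weighted token-level objective:

```python
def weighted_token_loss(logprobs, advantages, weights):
    """REINFORCE-style sequence loss in which each token's contribution is
    scaled by an influence weight, rather than weighted uniformly."""
    return -sum(w * a * lp for w, a, lp in zip(weights, advantages, logprobs))

# With uniform weights this reduces to the standard objective:
uniform = weighted_token_loss([-0.5, -1.0], [1.0, 1.0], [1.0, 1.0])
# Up-weighting a token that shapes later steps changes the gradient signal:
skewed = weighted_token_loss([-0.5, -1.0], [1.0, 1.0], [2.0, 1.0])
```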

changelog · Anthropic

Anthropic charges Claude Code subscribers extra for OpenClaw usage starting today

Anthropic is enforcing separate billing for Claude Code subscribers using third-party tools like OpenClaw, starting April 4, 2026. Subscribers can no longer use their subscription limits for these integrations and must pay through a new pay-as-you-go model. The decision follows OpenClaw creator Peter Steinberger's move to OpenAI.

2 min read · via techcrunch.com
model release · Tencent

Tencent releases OmniWeaving, open-source video generation model with reasoning and multi-modal composition

Tencent's Hunyuan team released OmniWeaving on April 3, 2026, an open-source video generation model designed to compete with proprietary systems like Seedance-2.0. The model combines multimodal composition with reasoning-informed generation and supports eight video generation tasks, including text-to-video, image-to-video, video editing, and compositional generation.

model release

PrismML releases 1-bit Bonsai 8B model, claims 14x smaller and 5x more energy efficient than full-precision peers

PrismML, a Caltech-founded startup, has released Bonsai 8B, a 1-bit quantized large language model that the company claims is 14x smaller and 5x more energy efficient than full-precision counterparts while remaining competitive with standard 8B models. The model fits into 1.15GB of memory and uses a novel 1-bit weight representation (binary signs with shared scale factors per weight group) instead of traditional 16-bit or 32-bit precision.
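
The described representation (binary signs plus a shared scale per weight group) resembles BitNet-style binarization. A minimal sketch, assuming the shared scale is each group's mean absolute value (PrismML's exact scheme is not specified in the summary):

```python
def quantize_1bit(weights, group_size=4):
    """Quantize to sign bits plus one shared scale per group
    (here, scale = mean absolute value of the group)."""
    signs, scales = [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        scales.append(sum(abs(w) for w in group) / len(group))
        signs.append([1 if w >= 0 else -1 for w in group])
    return signs, scales

def dequantize_1bit(signs, scales):
    """Reconstruct each weight as its sign times the group's shared scale."""
    return [s * scale for group, scale in zip(signs, scales) for s in group]

signs, scales = quantize_1bit([0.3, -0.5, 0.2, -0.4])
# scale ≈ 0.35; reconstruction ≈ [0.35, -0.35, 0.35, -0.35]
print(dequantize_1bit(signs, scales))
```

As a rough sanity check on the quoted footprint: 8 billion sign bits alone occupy about 1 GB, so per-group scales and any layers kept at higher precision would plausibly account for the rest of the 1.15GB.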

model release · Google DeepMind

NVIDIA releases Gemma 4 31B quantized model with 256K context, multimodal capabilities

NVIDIA has released a quantized version of Google DeepMind's Gemma 4 31B IT model, compressed to NVFP4 format for efficient inference on consumer GPUs. The 30.7B-parameter multimodal model supports 256K token context windows, handles text and image inputs with video frame processing, and maintains near-baseline performance across reasoning and coding benchmarks.

2 min read · via huggingface.co
model release · Google DeepMind

Google DeepMind releases Gemma 4 with multimodal reasoning and up to 256K context window

Google DeepMind released Gemma 4, a multimodal model family supporting text, images, video, and audio with context windows up to 256K tokens. The release includes four sizes (E2B, E4B, 26B A4B, and 31B) designed for deployment from mobile devices to servers. The 31B dense model achieves 85.2% on MMLU Pro and 89.2% on AIME 2026.

3 min read · via huggingface.co
product update · Anthropic

Anthropic blocks Claude subscriptions from OpenClaw access, requires separate pay-as-you-go billing

Anthropic is effectively blocking Claude subscription access to third-party tools like OpenClaw starting April 4, 2026 at 3PM ET. Users will need to purchase separate pay-as-you-go usage bundles to continue using OpenClaw with Claude. The move comes as OpenClaw's popularity has strained Anthropic's infrastructure capacity.

2 min read · via theverge.com
model release · DeepSeek

DeepSeek v4 launching exclusively on Huawei chips, signaling progress toward China's AI independence

DeepSeek v4 is launching in the coming weeks running exclusively on Huawei chips, marking a major milestone in China's effort to reduce dependency on foreign semiconductors. Chinese tech giants including Alibaba, ByteDance, and Tencent have ordered hundreds of thousands of Huawei Ascend 950PR units to deploy the model through their cloud services.

2 min read · via the-decoder.com
analysis

Gemma 4 success hinges on tooling and fine-tuning ease, not benchmark scores

Google's Gemma 4 release marks a shift in open model strategy with Apache 2.0 licensing and competitive benchmarks, but real success depends on factors rarely measured: tooling stability, fine-tuning ease, and ecosystem adoption. The open model landscape is now crowded with alternatives like Qwen 3.5, Nemotron 3, and others—a maturation that changes what separates winners from the field.

product update · Anthropic

Anthropic attributes Claude Code usage drain to peak-hour caps and large context windows

Anthropic has identified two primary causes for Claude Code users hitting usage limits faster than expected: stricter rate limiting during peak hours and sessions with context windows exceeding 1 million tokens. The company also recommends switching from Opus, which consumes limits roughly twice as fast, to Sonnet 4.6.

product update · OpenAI

OpenAI shifts Codex to usage-based pricing, offers $500 credits to enterprise customers

OpenAI is replacing per-seat licensing with usage-based pricing for Codex in ChatGPT Business and Enterprise plans, eliminating upfront license costs. Eligible Business customers can claim up to $500 in promotional credit per workspace. The shift targets enterprises where coding tools typically expand from individual developers to full teams, positioning OpenAI against GitHub Copilot and Cursor.

1 min read · via the-decoder.com
product update · OpenAI

ChatGPT now integrates with Apple CarPlay for hands-free conversation

OpenAI's ChatGPT is now available directly on Apple CarPlay, letting drivers hold full hands-free voice conversations with the assistant while driving. The integration requires iOS 26.4, the latest ChatGPT app, and a compatible vehicle. Unlike Siri, ChatGPT cannot access device functions like email, messaging, or Maps, but it handles complex topics that Siri struggles with.

2 min read · via zdnet.com
model release · Zhipu AI

Zhipu AI releases GLM-5V-Turbo: multimodal model generates front-end code from design mockups

Zhipu AI released GLM-5V-Turbo, a multimodal coding model that converts design mockups directly into executable front-end code. The model processes images, video, and text with a 200,000-token context window and 128,000-token max output, priced at $1.20 per million input tokens and $4 per million output tokens.

product update

Cursor 3 rebuilds IDE around parallel AI agent fleets, moves away from classic editor layout

Cursor released version 3 of its AI coding tool with a complete interface redesign built around running multiple AI agents in parallel rather than individual code editing. The new "agent-first" interface allows developers to launch agents from desktop, mobile, web, Slack, GitHub, and Linear, with seamless switching between cloud and local environments.

2 min read · via the-decoder.com