Breaking

Z.ai releases GLM-5V Turbo, native multimodal model for vision-based coding

Z.ai has released GLM-5V Turbo, a native multimodal foundation model designed for vision-based coding and agent-driven tasks. The model supports image, video, and text inputs with a 202,752-token context window and is priced at $1.20 per million input tokens and $4 per million output tokens.

April 1, 2026
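As a quick sanity check, the listed rates translate into per-request costs as follows. This is a minimal sketch; the token counts are illustrative, not from the announcement.

```python
# Published GLM-5V Turbo rates: $1.20 per 1M input tokens, $4.00 per 1M output tokens.
INPUT_RATE = 1.20 / 1_000_000   # USD per input token
OUTPUT_RATE = 4.00 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of a single request at the published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative example: a vision-coding prompt using most of the
# 202,752-token context window (token counts are hypothetical).
cost = request_cost(input_tokens=180_000, output_tokens=8_000)
print(f"${cost:.4f}")  # cost for 180k input + 8k output tokens
```

Even a near-full context window stays under a dollar per request at these rates, which is the main selling point of the "Turbo" pricing tier.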

Latest News

research · Google DeepMind

Google DeepMind identifies six attack categories that can hijack autonomous AI agents

A Google DeepMind paper introduces the first systematic framework for 'AI agent traps': attacks that exploit the vulnerabilities autonomous agents acquire through their access to external tools and the internet. The researchers identify six attack categories targeting perception, reasoning, memory, actions, multi-agent networks, and human supervisors, with proof-of-concept demonstrations for each.

model release

Holo3 achieves 78.85% on OSWorld benchmark with only 10B active parameters

H Company unveiled Holo3, a computer-use model that scores 78.85% on OSWorld-Verified, the highest result to date on the leading desktop-automation benchmark. The model achieves this with only 10B active parameters (122B total), positioning it as a lower-cost alternative to proprietary models like GPT 5.4 and Opus 4.6.

product update · Anthropic

Claude Code source leak reveals Anthropic working on 'Proactive' mode and autonomous payments

Anthropic's Claude Code version 2.1.88 release accidentally included a source map exposing over 512,000 lines of code across 2,000 TypeScript files. Security researchers analyzing the leaked codebase found evidence of a planned 'Proactive' mode that would execute coding tasks without explicit user prompts, plus a potential crypto-based autonomous payment system.

product update

Elgato Stream Deck 7.4 adds MCP support, letting AI assistants control your buttons

Elgato has released Stream Deck 7.4 with Model Context Protocol (MCP) support, enabling AI assistants like Claude and ChatGPT to find and activate Stream Deck actions on your behalf. Users can now trigger macros and commands via natural language requests to their connected AI tools after enabling the feature in app preferences.

2 min read · via theverge.com
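For context, MCP is a JSON-RPC 2.0 protocol in which clients discover tools via a `tools/list` request and invoke them via `tools/call`. A minimal sketch of the two messages an assistant would send; the `press_key` tool name and its arguments are hypothetical, not Elgato's actual schema.

```python
import json

def mcp_request(method, params=None, req_id=1):
    """Build a JSON-RPC 2.0 request string in the shape MCP uses."""
    msg = {"jsonrpc": "2.0", "id": req_id, "method": method}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# Step 1: the assistant discovers which Stream Deck actions are exposed.
discover = mcp_request("tools/list")

# Step 2: it triggers one by name. Tool name and arguments below are
# illustrative placeholders, not Elgato's published schema.
trigger = mcp_request("tools/call", {
    "name": "press_key",
    "arguments": {"profile": "Streaming", "key": 3},
}, req_id=2)
```

The discover-then-call pattern is what lets a natural-language request like "start my stream" resolve to a concrete button press without the assistant knowing the deck layout in advance.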
model release

UAE's TIIUAE releases Falcon Perception: 0.6B early-fusion model for open-vocabulary grounding

TIIUAE has released Falcon Perception, a 0.6B-parameter early-fusion Transformer that combines image patches and text in a single sequence for open-vocabulary object grounding and segmentation. The model achieves 68.0 Macro-F1 on SA-Co (vs. 62.3 for SAM 3) and introduces PBench, a diagnostic benchmark that isolates performance across five capability levels. TIIUAE also released Falcon OCR, a 0.3B model reaching 80.3 on olmOCR and 88.6 on OmniDocBench.

product update · Anthropic

Anthropic's Claude Code leak exposes Tamagotchi pet and always-on agent features

A source code leak in Anthropic's Claude Code 2.1.88 update exposed more than 512,000 lines of TypeScript, revealing unreleased features including a Tamagotchi-like pet interface and a KAIROS feature for background agent automation. Anthropic confirmed the leak was caused by a packaging error, not a security breach, and has since fixed the issue.

2 min readvia theverge.com
product update · Amazon Web Services

Amazon Bedrock AgentCore Evaluations now generally available for testing AI agents

Amazon Bedrock AgentCore Evaluations, a fully managed service for assessing AI agent performance, is now generally available following its public preview debut at AWS re:Invent 2025. The service addresses the core challenge that LLMs are non-deterministic—the same user query can produce different tool selections and outputs across runs—making traditional single-pass testing inadequate for reliable agent deployment.

3 min read · via aws.amazon.com
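The non-determinism problem is easiest to see in code. Below is a minimal sketch, using a stand-in stochastic agent rather than the AgentCore API, of why a pass rate over many runs is more informative than a single pass/fail check.

```python
import random
from collections import Counter

def flaky_agent(query, rng):
    """Stand-in for an LLM-backed agent: the same query can yield
    different tool selections across runs (query is ignored here)."""
    return rng.choice(["search_tool", "search_tool", "calculator"])

def evaluate(query, expected_tool, runs=100, seed=0):
    """Score the agent over many runs and return the fraction of runs
    that picked the expected tool, rather than a single pass/fail."""
    rng = random.Random(seed)
    picks = Counter(flaky_agent(query, rng) for _ in range(runs))
    return picks[expected_tool] / runs

rate = evaluate("what is 2 + 2?", expected_tool="calculator")
```

A single-pass test of this agent would pass or fail essentially at random; the aggregate pass rate is what a managed evaluation service needs to report for a deployment decision.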
model release · xAI

xAI releases Grok 4.20 Multi-Agent with 2M context window and parallel agent reasoning

xAI has released Grok 4.20 Multi-Agent, a variant designed for collaborative agent-based workflows with a 2-million-token context window. The model scales from 4 agents at low/medium reasoning effort to 16 agents at high/xhigh effort levels, priced at $2 per million input tokens and $6 per million output tokens.

model release · xAI

xAI releases Grok 4.20 with 2M context window and native reasoning capabilities

xAI released Grok 4.20, its flagship model, on March 31, 2026. It features a 2-million-token context window, toggleable reasoning, and pricing of $2 per million input tokens and $6 per million output tokens. The model includes web search functionality at $5 per 1,000 queries and claims industry-leading speed with low hallucination rates.

2 min read · via openrouter.ai
