TPS — Tokens Per Second


Zhipu AI

Chinese AI lab behind the GLM series (GLM-4, GLM-5) and ChatGLM. Platform at bigmodel.cn.

https://zhipuai.cn

News

No articles yet.

Models

GLM-5V Turbo

Zhipu AI · active

Context: 203K
Input: $1.20 / 1M tokens

Apr 1, 2026

GLM 5 Turbo

Zhipu AI · active

Fast-inference variant of GLM 5, tuned for agent-driven environments: real-world agent workflows with long execution chains and improved complex reasoning.

Context: 203K
Input: $0.96 / 1M tokens

Mar 15, 2026

Open weights

GLM 5

Zhipu AI · active

Z.ai's flagship open-source foundation model, engineered for complex systems design and long-horizon agent workflows. Delivers production-grade performance on large-scale programming tasks, rivaling leading closed-source models.

Context: 80K
Input: $0.72 / 1M tokens

Feb 11, 2026

Open weights

GLM-4.7

Zhipu AI · active

~400B open-weight reasoning model for math, multi-file engineering, and agentic tasks.

Context: 200K
Input: $0.38 / 1M tokens

Dec 22, 2025

Open weights

GLM-4.6

Zhipu AI · active

355B-parameter MIT-licensed model with 200K context and strong coding capability.

Context: 200K
Input: $0.30 / 1M tokens

Sep 30, 2025

Open weights

GLM-4-9B

Zhipu AI · deprecated

9B open-source model from Zhipu AI supporting tool calling, web browsing, and a 1M long-context variant.

Context: 131K

Jun 5, 2024

Open weights
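
The per-1M input prices in the listing above can be prorated down to an actual prompt size to estimate request cost. A minimal sketch (prices copied from the cards above; the function name and dictionary are illustrative, not part of any Zhipu SDK):

```python
# USD per 1M input tokens, taken from the model listing above.
INPUT_PRICE_PER_1M = {
    "GLM-5V Turbo": 1.20,
    "GLM 5 Turbo": 0.96,
    "GLM 5": 0.72,
    "GLM-4.7": 0.38,
    "GLM-4.6": 0.30,
}

def input_cost(model: str, prompt_tokens: int) -> float:
    """Prorate the per-1M-token price to the actual prompt length."""
    return INPUT_PRICE_PER_1M[model] * prompt_tokens / 1_000_000

# A 50K-token prompt to GLM-4.6: 0.30 * 50_000 / 1e6 = $0.015
print(f"${input_cost('GLM-4.6', 50_000):.4f}")
```

Output-token pricing is not shown on this page, so total request cost would need the corresponding output rate as well.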

TPS — Tokens Per Second. The fastest LLM news on the internet — tracked automatically every 15 minutes.


© 2026 TPS — Tokens Per Second.

The fastest LLM news. All signal, no noise.