product update

OpenAI shifts Codex to usage-based pricing, offers $500 credits to enterprise customers

TL;DR

OpenAI is replacing per-seat licensing with usage-based pricing for Codex in ChatGPT Business and Enterprise plans, eliminating upfront license costs. Eligible Business customers can claim up to $500 in promotional credit per workspace. The shift targets enterprises where coding tools typically expand from individual developers to full teams, positioning OpenAI against GitHub Copilot and Cursor.

1 min read

OpenAI Shifts Codex to Usage-Based Pricing for Enterprise Plans

OpenAI is moving Codex to a usage-based pricing model across ChatGPT Business and Enterprise plans, removing per-seat licensing so organizations pay only for what they actually use.

Workspace administrators can enable Codex access across their organization and pay exclusively for consumption, with no upfront license commitment. As part of a limited-time promotion, eligible Business customers can claim up to $500 in promotional credit per workspace.

Strategic Rationale

OpenAI frames the shift as lowering adoption barriers for enterprises. "Coding tools typically spread from individual developers to full teams," the company stated. "This model gives organizations a simpler way to support that motion inside a managed workspace."

The pricing change directly targets GitHub Copilot and Cursor, both of which continue charging per-seat subscription fees. By eliminating upfront costs, OpenAI aims to reduce friction for teams evaluating coding assistants at scale.

Usage Context

OpenAI reports that over two million developers use Codex weekly. Business and Enterprise plan usage has grown sixfold since January 2026, according to the company.

Still, OpenAI has identified Anthropic's Claude Code as its biggest rival in this category.

What This Means

OpenAI is applying a proven SaaS playbook: lower initial friction to drive adoption, then monetize based on actual consumption. For enterprises, this removes a key barrier—the inability to enable coding tools company-wide without pre-purchasing licenses. The $500 credit effectively subsidizes initial trial usage.

The move bets that lock-in increases with usage: developers embedded in a coding assistant tend to stick with it. Success, however, depends on Codex's quality relative to Copilot and Cursor; a pricing advantage alone won't retain customers if the product lags.

Related Articles

product update

OpenAI embeds Codex plugin directly into Anthropic's Claude Code

OpenAI released a plugin that embeds its Codex coding assistant directly into Anthropic's Claude Code, the market-dominant code IDE. The plugin offers standard code review, adversarial review, and background task handoff capabilities, requiring only a ChatGPT subscription or OpenAI API key.

product update

ChatGPT now integrates with Apple CarPlay for hands-free conversation

OpenAI's ChatGPT is now available directly on Apple CarPlay, allowing drivers to conduct full voice conversations with the AI assistant while driving hands-free. The integration requires iOS 26.4, the latest ChatGPT app, and a compatible vehicle. Unlike Siri, ChatGPT cannot access device functions like email, messaging, or Maps, but provides information on complex topics Siri struggles with.

product update

Cursor 3 rebuilds IDE around parallel AI agent fleets, moves away from classic editor layout

Cursor released version 3 of its AI coding tool with a complete interface redesign built around running multiple AI agents in parallel rather than individual code editing. The new "agent-first" interface allows developers to launch agents from desktop, mobile, web, Slack, GitHub, and Linear, with seamless switching between cloud and local environments.

product update

Gemini 3.1 Pro launches in Augment Code at 2.6x cheaper than Claude Opus 4.6

Augment Code now offers Gemini 3.1 Pro alongside Claude Opus 4.6 and GPT-5.4. In head-to-head testing on structural refactoring tasks, Gemini matched or outperformed Opus while consuming 268 credits per task—46% cheaper than Opus's 488 credits—making it 2.6x more cost-effective per message in real-world usage.
