OpenAI internal memo reveals 'Spud' model, 'Frontier' agent platform, and accuses Anthropic of $8B revenue inflation

TL;DR

An internal memo from OpenAI Chief Revenue Officer Denise Dresser outlines a new model codenamed 'Spud' for improved reasoning and production reliability, an agent platform called 'Frontier,' and an expanded Amazon partnership. The memo also accuses Anthropic of inflating its $30B run rate by roughly $8B through gross revenue accounting.


An internal memo from OpenAI Chief Revenue Officer Denise Dresser, leaked by The Verge, outlines the company's Q2 strategic direction including a new model codenamed "Spud," an agent platform called "Frontier," and direct accusations that Anthropic inflates its revenue figures by approximately $8 billion.

New 'Spud' model promises stronger reasoning

According to Dresser's memo, "Spud" represents "an important step in the intelligence foundation for the next generation of work." Early customer feedback cited in the memo indicates the model delivers stronger reasoning, a better grasp of user intentions and task dependencies, and more reliable production results than current OpenAI models.

Dresser claims Spud will make all of OpenAI's core products "significantly better" as part of an iterative deployment strategy. Specific benchmark scores, context window size, and pricing have not been disclosed.

'Frontier' agent platform targets enterprise deployment

The memo reveals OpenAI is building an agent platform called "Frontier," positioned as "the default platform for enterprise agents." According to Dresser, the enterprise AI market has shifted from simple prompts to autonomous agents that require orchestration, control, security, and governance.

Dresser argues that better models increase platform value, deeper integration raises switching costs, and every workflow running through the system makes OpenAI "harder to rip out." This represents OpenAI's strategic shift "from product vendor to operating infrastructure."

Amazon partnership expands beyond Microsoft

The memo describes demand for OpenAI's Amazon Bedrock partnership, announced in late February, as "frankly staggering." Dresser outlines an "Amazon Stateful Runtime Environment" that enables memory, context, and continuity across interactions beyond simple model access.

OpenAI is also building a service called "DeployCo" to function as a deployment engine alongside "Frontier Alliance" partners, addressing what Dresser calls the biggest bottleneck in enterprise AI: whether companies can roll it out at scale.

Direct attack on Anthropic's revenue and compute

The memo's sharpest section targets Anthropic directly. Dresser accuses the competitor of inflating its stated $30 billion run rate by approximately $8 billion by booking revenue share payments to Amazon and Google on a gross rather than net basis. OpenAI claims it reports its Microsoft revenue share on a net basis, "which is more inline [sic] with standards we would be held to as a public company."
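The arithmetic behind the accusation is straightforward: under gross accounting, revenue-share payments owed to cloud partners still count toward the stated run rate, while net accounting excludes them. A minimal sketch of that difference, using only the figures in the memo (the actual revenue-share mechanics between Anthropic and its partners are not disclosed and are simplified here):

```python
# Illustrative only: the $30B run rate and ~$8B delta come from the memo;
# treating the delta as a single pass-through payment is a simplifying assumption.

def net_run_rate(gross_run_rate: float, revenue_share_paid: float) -> float:
    """Net-basis reporting excludes revenue share passed through to partners."""
    return gross_run_rate - revenue_share_paid

gross = 30e9              # stated run rate on a gross basis, per the memo
share_to_partners = 8e9   # payments to Amazon/Google allegedly booked as revenue

print(net_run_rate(gross, share_to_partners))  # 22000000000.0, i.e. ~$22B net
```

If the memo's characterization is accurate, the same business would report roughly $22B rather than $30B under the net-basis convention OpenAI says it applies to its own Microsoft revenue share.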

Dresser also claims Anthropic's "strategic mistake" of not securing enough compute is visible in customer experience through throttling, spotty availability, and less reliable performance. She argues that OpenAI "recognized the exponential compute curve earlier and moved faster."

None of these financial or performance claims can be independently verified, as neither OpenAI nor Anthropic is publicly traded. The Information has previously reported on accounting differences between the two companies.

What this means

This leaked memo reveals OpenAI's strategic pivot from discrete products to integrated platform infrastructure, directly competing with Anthropic not just on model performance but on enterprise deployment capabilities. The unusually aggressive financial accusations suggest intensifying competition in the enterprise AI market, where both companies are racing to lock in long-term customer relationships. The 'Spud' model and 'Frontier' platform indicate OpenAI is positioning for autonomous agent workflows rather than chat-based interactions, though specific technical details remain undisclosed.
