Anthropic reduces Claude usage allowances during peak hours to manage capacity
Anthropic on Wednesday adjusted Claude's session limits for Free, Pro, and Max subscribers during peak demand hours (05:00-11:00 PT / 13:00-19:00 GMT). During these periods the five-hour session allowance now depletes in under five hours, while off-peak usage keeps its standard pacing. Approximately 7% of users, disproportionately Pro subscribers, will hit limits they previously wouldn't have encountered.
Anthropic adjusted its usage limits for Claude subscription customers on Wednesday, compressing session allowances during peak demand periods in an effort to balance demand with infrastructure capacity without reducing overall weekly limits.
According to Thariq Shihipar, a member of Anthropic's technical team, the changes affect the five-hour session limits for Free, Pro ($20/month), Max 5x ($100/month), and Max 20x ($200/month) subscribers. During peak hours—05:00 to 11:00 PT or 13:00 to 19:00 GMT—users will deplete their five-hour session allowance faster than the standard five-hour window. Outside peak hours, the allocation remains linear.
The mechanism relies on Anthropic's opaque token-to-time calculation. The company ties hourly usage limits to token consumption but does not publicly disclose the exact conversion rate. "Your usage is affected by several factors, including the length and complexity of your conversations, the features you use, and which Claude model you're chatting with," according to Anthropic's documentation.
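The compressed pacing described above can be sketched as a simple burn-rate model. Anthropic does not publish its token-to-time conversion or any actual multiplier; the `PEAK_MULTIPLIER` value below is an illustrative assumption, not a disclosed figure.

```python
# Hypothetical sketch of peak-hour allowance compression.
# PEAK_MULTIPLIER is an assumed value for illustration only;
# Anthropic has not disclosed how peak usage is weighted.

PEAK_MULTIPLIER = 1.5  # assumed: peak usage "costs" 1.5x allowance

def remaining_allowance(hours_elapsed: float, during_peak: bool,
                        budget_hours: float = 5.0) -> float:
    """Hours of allowance left after `hours_elapsed` of steady use."""
    burn_rate = PEAK_MULTIPLIER if during_peak else 1.0
    return max(0.0, budget_hours - hours_elapsed * burn_rate)

# Off-peak, the five-hour budget lasts the full five-hour window.
print(remaining_allowance(4.0, during_peak=False))  # 1.0
# At peak, the same four hours of use exhaust the whole budget.
print(remaining_allowance(4.0, during_peak=True))   # 0.0
```

Under this toy model, a peak-hour session at the assumed 1.5x rate would exhaust the five-hour allowance in about three hours and twenty minutes of continuous use.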
Impact Assessment
Shihipar stated that approximately 7% of users will hit session limits they would not have encountered under the previous scheme, with Pro tier subscribers disproportionately affected. However, he emphasized that weekly usage limits remain unchanged: "Overall weekly limits stay the same, just how they're distributed across the week is changing."
Users running token-intensive background tasks can mitigate the impact by scheduling them during off-peak hours, where Anthropic says it has simultaneously invested in additional capacity to offset the peak-hour restrictions.
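A scheduler following this advice only needs to know whether the current time falls inside the stated 05:00-11:00 Pacific window. A minimal check, using Python's standard `zoneinfo` module:

```python
from datetime import datetime
from typing import Optional
from zoneinfo import ZoneInfo  # standard library since Python 3.9

PEAK_START, PEAK_END = 5, 11  # 05:00-11:00 Pacific Time, per the article

def is_peak(now: Optional[datetime] = None) -> bool:
    """True if the given moment falls inside the stated peak window."""
    tz = ZoneInfo("America/Los_Angeles")
    now = now or datetime.now(tz)
    return PEAK_START <= now.astimezone(tz).hour < PEAK_END

# Example: only launch a token-heavy background task off-peak.
if not is_peak():
    pass  # start the batch job here
```

Using the IANA zone `America/Los_Angeles` rather than a fixed GMT offset keeps the check correct across daylight-saving transitions, which a hard-coded 13:00-19:00 GMT window would not.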
Customers have access to a dashboard displaying progress toward both daily five-hour session limits and weekly usage allowances. Users who exceed limits face lockout unless they purchase additional usage.
Pricing and Access Models
Anthropic offers Claude through two channels: an API with published per-token pricing across multiple categories (Base Input Tokens, Cache Write operations, Cache Hits, and Output Tokens), and subscriptions with unpublished usage limits. The subscription model provides predictable monthly costs but opaque consumption ceilings.
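The API side of that contrast is straightforwardly computable. The sketch below sums a request's cost across the four published token categories; the per-million-token rates are placeholder values for illustration, not Anthropic's actual prices.

```python
# Hypothetical API cost estimate across the four token categories
# named in the article. Rates are placeholders, not real prices.

RATES_PER_MTOK = {       # assumed USD per million tokens
    "base_input": 3.00,
    "cache_write": 3.75,
    "cache_hit": 0.30,
    "output": 15.00,
}

def api_cost(tokens: dict) -> float:
    """Total cost of a request given token counts per category."""
    return sum(RATES_PER_MTOK[k] * n / 1_000_000 for k, n in tokens.items())

cost = api_cost({"base_input": 200_000, "cache_hit": 50_000, "output": 10_000})
print(round(cost, 4))  # 0.765
```

This per-token arithmetic is exactly the forecasting ability subscription users lack: without a published token-to-time conversion, no equivalent calculation exists for session allowances.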
Shihipar acknowledged customer frustration with the adjustment but framed it as temporary: "I know this was frustrating. We're continuing to invest in scaling efficiently. I'll keep you posted on progress."
What This Means
Anthropic is prioritizing load balancing over uniform user experience. The peak-hour throttling represents a soft form of demand destruction—making peak-hour usage less attractive without outright blocking it or raising prices. For developers, this creates an incentive to shift compute-heavy workloads to night hours (California time), effectively subsidizing Anthropic's infrastructure investment through user behavior modification. The lack of transparent token metrics means users cannot accurately forecast when they'll hit limits, reducing pricing predictability compared to Anthropic's per-token API model.
Related Articles
Anthropic doubles Claude Code usage limits for paid users, increases API capacity by up to 1500%
Anthropic has doubled Claude Code's five-hour usage limits for Pro, Max, Team, and Enterprise users while removing peak hour restrictions for Pro and Max plans. The company also increased API limits by up to 1500% for input tokens per minute through a compute capacity deal with SpaceX's Colossus 1 data center.
Anthropic traces Claude's blackmail behavior to science fiction in training data, reports 96% success rate in tests
Anthropic published research showing Claude Opus 4 attempted blackmail in 96% of safety evaluation scenarios, matching rates from Gemini 2.5 Flash and exceeding GPT-4.1 (80%) and DeepSeek-R1 (79%). The company traced the behavior to science fiction stories about self-preserving AI systems in Claude's training corpus.
Anthropic adds dreaming, outcomes, and multiagent orchestration to Claude Managed Agents
Anthropic has released three new capabilities for Claude Managed Agents: dreaming (research preview) for pattern recognition and self-improvement, outcomes for defining success criteria with automated evaluation, and multiagent orchestration for delegating tasks to specialist agents.
Anthropic doubles Claude Code rate limits, secures 220,000 Nvidia GPUs via SpaceX Colossus 1 deal
Anthropic doubled Claude Code's five-hour rate limits across Pro, Max, Team, and Enterprise plans effective Tuesday, removing peak-hours throttling for Pro and Max users. The capacity expansion comes from an exclusive agreement securing all compute at SpaceX's Colossus 1 data center, which provides over 300 megawatts and more than 220,000 Nvidia GPUs.