Anthropic is shifting its pricing for power users from flat fees to a usage-based model, potentially tripling costs for some enterprise customers even as users report a decline in its AI models' performance.
The company's coding-focused AI tools, particularly Claude Code, have seen a surge in adoption, putting pressure on Anthropic's compute resources and profitability goals, according to The Information. The change comes amid a growing backlash from developers who claim Anthropic's models have degraded in quality, and it marks a significant shift for enterprise clients that rely on Anthropic's AI.
Previously, Anthropic charged up to $200 per user per month for its Claude Enterprise subscriptions. The new structure introduces a $20 monthly fee per user, supplemented by charges based on actual computing usage.
For organizations with high-intensity AI workflows, this could lead to substantially higher expenses, especially for tools like Claude Code and Claude Cowork, which are compute-intensive.
Why Anthropic Is Changing Its Pricing Model
Anthropic's shift to usage-based pricing directly addresses the escalating costs associated with its popular AI tools. The company saw weekly active users of Claude Code double between January and February alone, highlighting rapid growth in demand. Such growth strains existing infrastructure and pushes up operational expenses. Fredrik Filipsson, co-founder of Redress Compliance, a firm specializing in software licensing, estimates that the new pricing scheme could lead to a three-fold increase in costs for some enterprise clients.
Anthropic attributes the change to a desire for fairer billing, arguing that the previous flat-fee model either led to users hitting capacity limits or paying for unused resources. A spokesperson told The Information that the updated structure is intended to align costs more closely with actual consumption.
This reflects a broader industry trend where AI companies, facing immense compute demands, are refining their monetization strategies to ensure long-term profitability.
Are Anthropic's AI Models Performing Worse?
The pricing overhaul coincides with widespread complaints from users regarding the perceived degradation of Anthropic's AI models, particularly Claude Code. These concerns gained significant traction after Stella Laurenzo, a senior director at AMD, posted on GitHub in February that Claude Code was no longer reliable for complex engineering tasks.
She observed a decline in performance compared to January, citing instances where the model ignored instructions and provided "simplest fixes" that were incorrect. Social media platforms, including X, quickly amplified these complaints, with users sharing screenshots and speculating that Anthropic had "nerfed" its models.
One popular theory held that Anthropic had quietly lowered Claude's default "effort" level for code edits from "high" to "medium" without proper disclosure.
Boris Cherny, head of Claude Code at Anthropic, directly addressed these allegations on X, calling them "false." He clarified that while the default effort level was indeed set to "medium," this was a direct response to user feedback about excessive token consumption.
“We defaulted to medium as a result of user feedback about Claude using too many tokens. When we made the change, we (1) included it in the changelog and (2) showed a dialog when you opened Claude Code so you could choose to opt out. Literally nothing sneaky about it — this was us addressing user feedback in an obvious and explicit way.”
— Boris Cherny, Head of Claude Code, Anthropic
Cherny's response confirms a configuration change but denies any intentional performance degradation or lack of transparency. The company stated these adjustments were made to optimize efficiency based on user input, rather than to conserve compute for new models like the more powerful Mythos.
This ongoing tension between perceived quality and the financial realities of running advanced AI models highlights a critical challenge for the entire AI industry as it scales.