Anthropic's Claude Code—once the undisputed heavyweight champion of AI coding assistants—is facing a severe crisis of confidence. According to a report by The Register, an official GitHub issue opened by AMD's AI group director outlines a steep, sudden drop in reasoning quality and reliability.
The degradation has become so pronounced that AMD's AI engineering team has officially abandoned the tool and switched providers. For a platform that largely built its reputation on complex engineering tasks, this erosion of enterprise trust raises critical questions about Anthropic's future trajectory.
The Data: Why Claude Got "Lazier"
Stella Laurenzo, director of AMD's AI group, brought receipts. In a detailed complaint on GitHub and a subsequent LinkedIn post, she concluded that Claude Code "cannot be trusted to perform complex engineering tasks" after months of heavy, consistent enterprise use.Her team didn't just guess—they analyzed 6,852 Claude Code sessions containing over 234,000 tool calls and nearly 18,000 thinking blocks. The resulting trends are alarming:
- Spiking "Laziness": "Stop-hook violations"—where the AI prematurely quits thinking or avoids complex work—went from zero before March 8th to 10 violations per day by late March.
- Plummeting Code Engagement: The average number of times Claude would cross-reference code before making changes plummeted from 6.6 reads down to just 2.
- Reckless Edits: Rather than making precise, targeted edits, the AI began lazily rewriting entire files from scratch.
Security Flaws and the "Dumbing Down" Effect
The reasoning collapse isn't Anthropic's only headache. Users had already started raising red flags in February, when a previous update intentionally truncated how the AI explained its reading process—sparking early fears that the model was being rapidly "dumbed down" for cost efficiency.If performance hits weren't enough, Anthropic has also faced fierce criticism over inexplicably massive token surges that have locked users out of the product entirely. And on top of it all, Claude Code recently suffered a source code leak.
Security firm Adversa AI immediately discovered a critical vulnerability following the leak. Attackers can completely bypass security checks if malicious commands are buried inside a 50+ subcommand pipeline, exposing users to unauthorized data exfiltration.
Laurenzo ultimately urged Anthropic to prioritize transparency and introduce a "max thinking tier," insisting enterprise engineers will gladly pay a premium for guaranteed deep reasoning. While her team has completely jumped ship to another provider, she left Anthropic with a stark warning: six months ago, Claude stood completely uncontested in reasoning quality. Today, they are actively handing their crown to the competition."Such leaks allow adversaries to build lookalikes that silently install malware or harvest credentials."
— Melissa Bischoping, Senior Director of Security Research at Tanium








