
How Anthropic’s silence fueled a Claude Code trust crisis

Complaints left the AI company scrambling for answers.
April 28, 2026



Key takeaways:

  • Anthropic silently degraded Claude Code for seven weeks before explaining why. Three engineering missteps quietly reduced quality while users assumed the problem was their own.
  • Silence doesn’t contain speculation; it amplifies it.
  • Postmortems are necessary but not sufficient: developers need real-time signal during an incident, not a detailed explanation several weeks later.

If you’ve been grumbling that Claude Code hasn’t been pulling its weight lately, you aren’t imagining it.

Anthropic has confirmed that a string of engineering missteps led to a noticeable drop in performance in its Claude Code tool. It’s a slide that has frustrated users and sparked a backlash over the past month.

In a blog post, the company said it reviewed user complaints about the quality of Claude Code and identified three issues likely contributing to a poorer user experience.

  • On March 4, the default reasoning effort was reduced from “high” to “medium” to lower latency, a tradeoff Anthropic later acknowledged was a mistake. 
  • On March 26, a bug caused the model to repeatedly discard its reasoning history mid-session, making it appear forgetful and consuming users’ usage limits faster than expected. 
  • On April 16, a system prompt capped responses at 25 words between tool calls, which measurably reduced coding quality before being reverted four days later.

“We take reports about degradation very seriously. We never intentionally degrade our models,” the post read. Anthropic stated that the underlying model was not affected and the issues were tweaks made at the product level.

The firm said the issues were fixed as of April 20, and that it had taken steps to prevent similar problems from occurring again.

“Publishing the postmortem is a constructive step. The timing is worth noting though: the first issue shipped in early March, and the postmortem appeared roughly seven weeks later, only days after the final fix went out,” says Rajesh Jethwa, CTO of software engineering consultancy Digiterre.

Silence is deafening

Across online platforms like Reddit, X, and GitHub, users described a noticeable decline in Claude’s performance.

They claimed the model increasingly failed to follow instructions, sometimes opted for inappropriate shortcuts, and made more mistakes on complex workflows it had previously handled well.

One widely circulated GitHub analysis, attributed to Stella Laurenzo, a senior director of AI at AMD, argued that the company had fundamentally changed how Claude Code works. According to her findings, Claude shifted from a “research-first” approach (reading files and gathering context before acting) to a faster but riskier “edit-first” style.

That change has made the system far more error-prone, she wrote. “Claude has regressed to the point [that] it cannot be trusted to perform complex engineering,” according to Laurenzo, citing an analysis of 6,852 Claude Code session files, 17,871 thinking blocks, and 234,760 tool calls. 

Laurenzo was far from the only person having issues with the tool. Several others said they canceled subscriptions, while cybersecurity professionals warned that the degraded code quality could pose serious risks.

Anthropic’s explanation came only after weeks of user complaints that Claude Code’s responses had noticeably degraded, with the company largely silent during that period.

“The frustrating part is that the Claude Code team, along with people deep in AI psychosis, have been gaslighting anyone who raises concerns about Claude Code’s recent issues,” Muratcan Koylan, a member of the technical staff at Sully.ai, said in a post on X.

“To their credit, Anthropic has leaned into this by creating a dedicated @ClaudeDevs presence on X, sharing implementation detail and outlining corrective actions. That kind of engagement helps, but it works best when it’s proactive rather than reactive,” Jethwa said.

Kris Kang, chief product officer at JetBrains, echoes similar sentiments. “When changes are not communicated, developers are left guessing whether the problem is their code, their prompt, the model, or the toolchain. That erodes trust and confidence quickly,” he says.

The developer community reacts swiftly to perceived trust and reliability issues, and the signal travels fast through GitHub threads, X, Reddit, and Discord communities, adds Jethwa. “Silence in that environment is rarely interpreted charitably; it tends to amplify speculation rather than contain it.”

Need for explanation 

Anthropic’s limited communication has weakened trust among developers, who depend on consistent signals from the tools they build on.

“Developers do not need every internal detail, but they do need visibility into changes that materially affect how they work. If a model, assistant, or coding agent changes in a way that affects output quality, latency, context handling, or task success, companies should explain what changed, what users should expect, and how they can adapt,” Kang says. 

There’s a clear obligation on providers to communicate with all users – especially when changes are unexpected, or when planned updates have enough unintended impact that they must be rolled back, Jethwa says. 

“Developers and enterprises increasingly build workflows on top of these tools, so behavioral shifts cascade into downstream systems, developer experience, and delivery commitments. The bar should include what changed, when it changed, who was affected, and what is being done to prevent recurrence,” he adds. 

Anthropic’s postmortem covers each of these, including specific commitments around tighter system prompt controls, broader eval coverage, soak periods, and gradual rollouts. That level of specificity is the standard the industry should be moving toward.

“That said, postmortems address the past,” Jethwa says. “What developers value most is real-time signal during an incident: status page updates, in-product notices, and acknowledgement that something is being investigated, even before the root cause is known.”