VentureBeat · Apr 23, 2026 9:50 PM

Mystery solved: Anthropic reveals changes to Claude's harnesses and operating instructions likely caused degradation

For several weeks, a growing chorus of developers and AI power users claimed that Anthropic's flagship models were losing their edge. Users across GitHub, X, and Reddit reported a phenomenon they described as "AI shrinkflation": a perceived degradation in which Claude seemed less capable of sustained reasoning, more prone to hallucinations, and increasingly wasteful with tokens. Critics pointed to a measurable shift in behavior, alleging that the model had moved from a "research-first…