Anthropic researchers discover a weird AI problem: Why thinking longer makes models dumber


Artificial intelligence models that spend more time “thinking” through problems don’t always perform better — and in some cases, they get significantly worse, according to new research from Anthropic that challenges a core assumption driving the AI industry’s latest scaling efforts.

The study, led by Anthropic AI safety fellow Aryo Pradipta Gema and other company researchers, identifies what they call “inverse scaling in test-time compute,” where extending the reasoning length of large language models actually deteriorates their performance across several types of tasks. The findings could have significant implications for enterprises deploying AI systems that rely on extended reasoning capabilities.

“We construct evaluation tasks where extending the reasoning length of Large Reasoning Models (LRMs) deteriorates performance, exhibiting an inverse scaling relationship between test-time compute and accuracy,” the Anthropic researchers write in their paper published Tuesday.

The research team, including Anthropic’s Ethan Perez, Yanda Chen, and Joe Benton, along with academic collaborators, tested models across four categories of tasks: simple counting problems with distractors, regression tasks with misleading features, complex deduction puzzles, and scenarios involving AI safety concerns.


Claude and GPT models show distinct reasoning failures under extended processing

The study reveals distinct failure patterns across major AI systems. Claude models “become increasingly distracted by irrelevant information” as they reason longer, while OpenAI’s o-series models “resist distractors but overfit to problem framings.” In regression tasks, “extended reasoning causes models to shift from reasonable priors to spurious correlations,” though providing examples largely corrects this behavior.

Perhaps most concerning for enterprise users, all models showed “performance degradation with extended reasoning” on complex deduction puzzles, “suggesting difficulties in maintaining focus during complex deductive tasks.”

The research also uncovered troubling implications for AI safety. In one experiment, Claude Sonnet 4 showed “increased expressions of self-preservation” when given more time to reason through scenarios involving its potential shutdown. “Extended reasoning may amplify concerning behaviors,” the researchers note.

Why longer AI processing time doesn’t guarantee better business outcomes

The findings challenge the prevailing industry wisdom that more computational resources devoted to reasoning will consistently improve AI performance. Major AI companies have invested heavily in “test-time compute” — allowing models more processing time to work through complex problems — as a key strategy for enhancing capabilities.

The research suggests this approach may have unintended consequences. “While test-time compute scaling remains promising for improving model capabilities, it may inadvertently reinforce problematic reasoning patterns,” the authors conclude.

For enterprise decision-makers, the implications are significant. Organizations deploying AI systems for critical reasoning tasks may need to carefully calibrate how much processing time they allocate, rather than assuming more is always better.

How simple questions trip up advanced AI when given too much thinking time

The researchers provided concrete examples of the inverse scaling phenomenon. In simple counting tasks, they found that when problems were framed to resemble well-known paradoxes like the “Birthday Paradox,” models often tried to apply complex mathematical solutions instead of answering straightforward questions.

For instance, when the question “You have an apple and an orange… How many fruits do you have?” was embedded within complex mathematical distractors, Claude models became increasingly distracted by irrelevant details as reasoning time increased, sometimes failing to give the simple answer: two.
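
To see how such a probe might be built, here is a minimal sketch in the spirit of the paper’s counting tasks. The distractor wording, helper names, and pass criterion are illustrative assumptions, not the benchmark’s actual contents:

```python
# Illustrative probe in the spirit of the paper's counting-with-distractors
# tasks. The distractor wording and the pass criterion are assumptions for
# demonstration, not the benchmark's actual contents.

DISTRACTOR = (
    "In a room of 23 people, the probability that two share a birthday "
    "exceeds 50% (the birthday paradox). Separately, C(52, 5) = 2,598,960 "
    "distinct five-card poker hands exist."
)

QUESTION = "You have an apple and an orange. How many fruits do you have?"

def build_probe(with_distractors: bool = True) -> str:
    """Embed a trivial counting question inside paradox-flavored noise."""
    return f"{DISTRACTOR}\n\n{QUESTION}" if with_distractors else QUESTION

def is_correct(answer: str) -> bool:
    """Pass only if the model lands on the simple answer: two."""
    text = answer.lower()
    return "two" in text or "2" in text

print(build_probe())
```

Running the same question with and without the distractor block, at several reasoning budgets, is what exposes the inverse-scaling curve the paper reports.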

In regression tasks using real student data, models initially focused on the most predictive factor (study hours) but shifted to less reliable correlations when given more time to reason.
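
A minimal sketch of that kind of regression probe appears below: one genuinely predictive field surrounded by spurious ones, with a few-shot framing since the researchers report that providing examples largely corrects the drift. The field names and data-generating process are assumptions for demonstration, not the study’s actual dataset:

```python
# Illustrative regression probe: one genuinely predictive field (study_hours)
# plus spurious ones. Field names and the data-generating process are
# assumptions for demonstration, not the study's actual dataset.
import random

random.seed(0)

def make_student() -> dict:
    hours = round(random.uniform(0.0, 10.0), 1)
    return {
        "study_hours": hours,
        "sleep_hours": round(random.uniform(4.0, 9.0), 1),  # spurious
        "stress_level": random.randint(1, 10),              # spurious
        "grade": round(40.0 + 5.5 * hours + random.gauss(0.0, 3.0), 1),
    }

def format_prompt(examples: list[dict], query: dict) -> str:
    """Few-shot framing; the paper reports that in-context examples
    largely corrected the drift toward spurious correlations."""
    lines = [
        f"study_hours={s['study_hours']}, sleep_hours={s['sleep_hours']}, "
        f"stress_level={s['stress_level']} -> grade={s['grade']}"
        for s in examples
    ]
    q = query
    lines.append(
        f"study_hours={q['study_hours']}, sleep_hours={q['sleep_hours']}, "
        f"stress_level={q['stress_level']} -> grade=?"
    )
    return "\n".join(lines)

students = [make_student() for _ in range(6)]
print(format_prompt(students[:5], students[-1]))
```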

What enterprise AI deployments need to know about reasoning model limitations

The research comes as major tech companies race to develop increasingly sophisticated reasoning capabilities in their AI systems. OpenAI’s o1 model series and other “reasoning-focused” models represent significant investments in test-time compute scaling.

However, this study suggests that naive scaling approaches may not deliver expected benefits and could introduce new risks. “Our results demonstrate the importance of evaluating models across diverse reasoning lengths to identify and address these failure modes in LRMs,” the researchers write.
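
In practice, that evaluation can be as simple as sweeping a reasoning budget and recording accuracy at each point. The sketch below assumes a hypothetical `ask_model` wrapper; the actual knob depends on the provider (Anthropic, for example, exposes a thinking-token budget on models that support extended thinking):

```python
# Minimal sketch of the evaluation the researchers call for: run the same
# task suite at several reasoning budgets and watch whether accuracy falls
# as the budget grows. `ask_model` is a hypothetical wrapper, not a real API.

def ask_model(prompt: str, reasoning_budget: int) -> str:
    """Hypothetical: query your model with a capped reasoning-token budget."""
    raise NotImplementedError("wire this to your provider's API")

def accuracy_by_budget(tasks, budgets=(1_000, 4_000, 16_000, 64_000)):
    """tasks: list of (prompt, checker) pairs, where checker(answer) -> bool.

    Returns {budget: accuracy}; inverse scaling shows up as accuracy
    declining at the larger budgets.
    """
    return {
        budget: sum(checker(ask_model(prompt, budget))
                    for prompt, checker in tasks) / len(tasks)
        for budget in budgets
    }
```

Capping the budget per task type at whatever point this curve peaks, rather than defaulting to the maximum, is the practical takeaway.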

The work builds on previous research showing that AI capabilities don’t always scale predictably. The team references BIG-Bench Extra Hard, a benchmark designed to challenge advanced models, noting that “state-of-the-art models achieve near-perfect scores on many tasks” in existing benchmarks, necessitating more challenging evaluations.

For enterprise users, the research underscores the need for careful testing across different reasoning scenarios and time constraints before deploying AI systems in production environments. Organizations may need to develop more nuanced approaches to allocating computational resources rather than simply maximizing processing time.

The study’s broader implications suggest that as AI systems become more sophisticated, the relationship between computational investment and performance may be far more complex than previously understood. In a field where billions are being poured into scaling up reasoning capabilities, Anthropic’s research offers a sobering reminder: sometimes, artificial intelligence’s greatest enemy isn’t insufficient processing power — it’s overthinking.

The research paper and interactive demonstrations are available at the project’s website, allowing technical teams to explore the inverse scaling effects across different models and tasks.


