The end of AI scaling may not be nigh: Here’s what’s next


As AI systems achieve superhuman performance in increasingly complex tasks, the industry is grappling with whether bigger models are even possible — or if innovation must take a different path.

The general approach to large language model (LLM) development has been that bigger is better, and that performance scales with more data and more computing power. However, recent media discussions have focused on whether LLMs are approaching their limits. “Is AI hitting a wall?” The Verge asked, while Reuters reported that “OpenAI and others seek new path to smarter AI as current methods hit limitations.”

The concern is that scaling, which has driven advances for years, may not extend to the next generation of models. Reporting suggests that the development of frontier models like GPT-5, which push the current limits of AI, may face challenges due to diminishing performance gains during pre-training. The Information reported on these challenges at OpenAI and Bloomberg covered similar news at Google and Anthropic. 

This issue has led to concerns that these systems may be subject to the law of diminishing returns, in which each added unit of input yields progressively smaller gains. As LLMs grow larger, the costs of acquiring high-quality training data and of scaling the necessary infrastructure rise steeply, shrinking the performance return on each new model. Compounding the challenge, high-quality new data is increasingly scarce, as much of the accessible information has already been incorporated into existing training datasets.
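To see why the economics bend this way, consider the power-law relationship that LLM scaling-law research has repeatedly reported: loss falls as a fractional power of scale, so each doubling buys a fixed fraction of a shrinking remainder. The Python sketch below is purely illustrative; the functional form follows the published scaling-law shape, but the constants are invented for demonstration and are not fitted to any real model.

```python
# Toy illustration of diminishing returns under a power-law scaling curve.
# The form loss(N) = E + A / N**ALPHA mirrors the shape reported in LLM
# scaling-law research; E, A and ALPHA below are invented for demonstration
# and are not fitted to any real model.

E = 1.7       # hypothetical irreducible loss
A = 400.0     # hypothetical scale coefficient
ALPHA = 0.34  # hypothetical scaling exponent

def loss(n_params: float) -> float:
    """Predicted loss for a model with n_params parameters."""
    return E + A / n_params**ALPHA

n = 1e9  # start at one billion parameters
prev = loss(n)
for _ in range(6):
    n *= 2  # each step doubles model size (and, roughly, cost)
    cur = loss(n)
    print(f"{n/1e9:6.0f}B params: loss {cur:.3f} (gain {prev - cur:.3f})")
    prev = cur
```

Each doubling of the hypothetical parameter count cuts the remaining reducible loss by the same fixed fraction, so the absolute gain per doubling keeps shrinking even as the cost of each doubling keeps growing.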

This does not mean the end of performance gains for AI. It does mean that sustaining progress will require engineering innovation in model architecture, optimization techniques and data use.

Learning from Moore’s Law

A similar pattern of diminishing returns played out in the semiconductor industry. For decades, the industry benefited from Moore’s Law, the prediction that transistor counts would double every 18 to 24 months, driving dramatic performance improvements through smaller and more efficient designs. That trajectory, too, eventually hit diminishing returns: somewhere between 2005 and 2007, Dennard scaling (the principle that shrinking transistors also reduces their power consumption) reached its limits, fueling predictions of the death of Moore’s Law.

I had a close-up view of this issue when I worked with AMD from 2012 to 2022. The problem did not mean that semiconductors, and by extension computer processors, stopped improving from one generation to the next. It did mean that the improvements came more from chiplet designs, high-bandwidth memory, optical switches, more cache memory and accelerated computing architectures than from shrinking transistors.

New paths to progress

Similar phenomena are already being observed with current LLMs. Multimodal AI models like GPT-4o, Claude 3.5 and Gemini 1.5 have proven the power of integrating text and image understanding, enabling advances in complex tasks like video analysis and contextual image captioning. Further tuning of algorithms, for both training and inference, will lead to additional performance gains. And agent technologies, which enable LLMs to perform tasks autonomously and coordinate seamlessly with other systems, will soon significantly expand their practical applications.

Future model breakthroughs might arise from hybrid AI architectures that combine symbolic reasoning with neural networks. OpenAI’s o1 reasoning model already shows the potential of such integration to extend performance. And quantum computing, while only now emerging from its early stage of development, holds promise for accelerating AI training and inference by addressing current computational bottlenecks.

The perceived scaling wall is unlikely to end future gains, as the AI research community has consistently proven its ingenuity in overcoming challenges and unlocking new capabilities and performance advances. 

In fact, not everyone agrees that there even is a scaling wall. OpenAI CEO Sam Altman was succinct in his views: “There is no wall.”

Source: X https://x.com/sama/status/1856941766915641580 

Speaking on the “Diary of a CEO” podcast, former Google CEO and Genesis co-author Eric Schmidt essentially agreed with Altman, saying he does not believe there is a scaling wall, at least not over the next five years. “In five years, you’ll have two or three more turns of the crank of these LLMs. Each one of these cranks looks like it’s a factor of two, factor of three, factor of four of capability, so let’s just say turning the crank on all these systems will get 50 times or 100 times more powerful,” he said.
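Schmidt’s numbers are straightforward compounding: independent capability multipliers across successive “turns of the crank” multiply together. A quick sketch, using only the ranges he stated rather than any measured values:

```python
# Compounding Schmidt's stated ranges: two or three "turns of the crank,"
# each worth a factor of 2x-4x. These inputs come from his quote, not from
# any measurement.
for turns in (2, 3):
    for factor in (2, 3, 4):
        print(f"{turns} turns at {factor}x each -> {factor ** turns}x overall")
```

At the top of his range, three turns at 4x each compound to 64x, which lands in the neighborhood of the “50 times or 100 times” he describes.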

Leading AI innovators remain optimistic about the pace of progress, as well as the potential for new methodologies. That optimism was evident in a recent conversation on “Lenny’s Podcast” with OpenAI CPO Kevin Weil and Anthropic CPO Mike Krieger.

Source: https://www.youtube.com/watch?v=IxkvVZua28k 

In this discussion, Krieger said that what OpenAI and Anthropic are working on today “feels like magic,” but acknowledged that in just 12 months, “we’ll look back and say, can you believe we used that garbage? … That’s how fast [AI development] is moving.”

It’s true — it does feel like magic, as I recently experienced when using OpenAI’s Advanced Voice Mode. Speaking with ‘Juniper’ felt entirely natural and seamless, showcasing how AI is evolving to understand and respond with emotion and nuance in real-time conversations.

Krieger also discussed the recent o1 model, referring to it as “a new way to scale intelligence, and we feel like we’re just at the very beginning.” He added: “The models are going to get smarter at an accelerating rate.”

These expected advancements suggest that while traditional scaling approaches may or may not face diminishing returns in the near term, the AI field is poised for continued breakthroughs through new methodologies and creative engineering.

Does scaling even matter?

While scaling challenges dominate much of the current discourse around LLMs, recent studies suggest that current models are already capable of extraordinary results, raising the provocative question of whether more scaling even matters.

A recent study tested whether ChatGPT could help doctors diagnose complicated patient cases. Conducted with an early version of GPT-4, the study compared ChatGPT’s diagnostic capabilities against those of doctors with and without AI help. The surprising outcome: ChatGPT alone substantially outperformed both groups, including the doctors using AI assistance. There are several possible reasons for this, from doctors’ lack of understanding of how best to use the bot to their belief that their own knowledge, experience and intuition were inherently superior.

This is not the first study to show bots achieving superior results compared to professionals. VentureBeat reported on a study earlier this year showing that LLMs can conduct financial statement analysis with accuracy rivaling, and even surpassing, that of professional analysts. That study, also based on GPT-4, tested the model’s ability to predict future earnings growth: GPT-4 achieved 60% accuracy in predicting the direction of future earnings, notably higher than the 53% to 57% range of human analyst forecasts.

Notably, both these examples are based on models that are already out of date. These outcomes underscore that even without new scaling breakthroughs, existing LLMs are already capable of outperforming experts in complex tasks, challenging assumptions about the necessity of further scaling to achieve impactful results. 

Scaling, skilling or both

These examples show that current LLMs are already highly capable, even if scaling is not the only path forward for future innovation. With more scaling still possible, and with other emerging techniques promising performance improvements, Schmidt’s optimism reflects the rapid pace of AI advancement: in just five years, he suggests, models could evolve into polymaths, seamlessly answering complex questions across multiple fields.

Whether through scaling, skilling or entirely new methodologies, the next frontier of AI promises to transform not just the technology itself, but its role in our lives. The challenge ahead is ensuring that progress remains responsible, equitable and impactful for everyone.

Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



