Google DeepMind CEO Demis Hassabis says AI scaling ‘must be pushed to the maximum’
In a recent discussion at the Axios AI+ Summit in San Francisco, Demis Hassabis, CEO of Google DeepMind, emphasized the critical role of scaling laws in advancing artificial intelligence (AI) technology. With the launch of Gemini 3, which has garnered significant attention, Hassabis reiterated his belief that maximizing the scaling of current AI systems is essential for achieving Artificial General Intelligence (AGI)—a theoretical AI that can reason and think like humans. “The scaling of the current systems, we must push that to the maximum,” he stated, suggesting that scaling could be the key component, if not the entirety, of a future AGI system. This perspective aligns with the prevailing view in the tech industry that increasing the amount of data and computational power available to AI models directly correlates with their intelligence and capabilities.
However, this notion of scaling as the primary path to AGI has sparked a debate within the AI community. Critics, including prominent figures like Yann LeCun, chief AI scientist at Meta, argue that there are significant limitations to this approach. LeCun, who is transitioning to launch his own startup focused on developing world models—an alternative to the current large-language models—has pointed out that “most interesting problems scale extremely badly.” He believes that simply increasing data and compute resources does not guarantee smarter AI. Instead, his startup aims to create systems that can better understand the physical world, possess persistent memory, and execute complex reasoning and planning tasks. This divergence in opinions highlights a growing concern that the AI industry may be reaching diminishing returns on its investments in scaling, prompting a search for innovative breakthroughs beyond mere data accumulation and computational power.
The conversation surrounding AI scaling laws is not just theoretical; it reflects the broader challenges faced by tech companies in their quest for AGI. As the industry pours vast sums into infrastructure and talent, the environmental impact of building extensive data centers and the finite nature of publicly available data present significant hurdles. While Hassabis remains optimistic about the potential of scaling, he acknowledges that achieving AGI may require “one or two” additional breakthroughs. This ongoing debate underscores the dynamic landscape of AI development, where innovation and new approaches may be necessary to push the boundaries of what artificial intelligence can achieve. As the industry evolves, the exploration of alternative methodologies could pave the way for the next significant advancements in AI, shifting the focus from sheer scaling to more holistic and effective strategies for intelligence development.
https://www.youtube.com/watch?v=MhNcWxUs-PQ
Demis Hassabis, Google DeepMind CEO (Dan Kitwood/Getty Images)
Google DeepMind CEO Demis Hassabis says scaling laws are vital to the tech’s progress.
Scaling requires feeding AI models ever more data and more compute.
Some other AI leaders, however, believe the industry needs to find another way.
There’s a debate rippling through Silicon Valley: How far can scaling laws take the technology?
Google DeepMind CEO Demis Hassabis, whose company just released Gemini 3 to widespread acclaim, has made it clear where he stands on the issue.
“The scaling of the current systems, we must push that to the maximum, because at the minimum, it will be a key component of the final AGI system,” he said at the Axios AI+ Summit in San Francisco last week. “It could be the entirety of the AGI system.”
AGI, or artificial general intelligence, is a still-theoretical version of AI that reasons as well as humans. It’s the goal all the leading AI companies are competing to reach, fueling huge amounts of spending on infrastructure and talent.
AI scaling laws suggest that the more data and compute an AI model is given, the smarter it will get.
Hassabis said that scaling alone will likely get the industry to AGI, but that he suspects there will need to be “one or two” other breakthroughs as well.
The problem with scaling alone is that there is a limit to publicly available data, and adding compute means building data centers, which is expensive and taxing on the environment.
Some AI watchers are also concerned that the AI companies behind the leading large-language models are beginning to show diminishing returns on their massive investments in scaling.
Researchers like Yann LeCun, the chief AI scientist at Meta who recently announced he was leaving to run his own startup, believe the industry needs to consider another way.
“Most interesting problems scale extremely badly,” he said at the National University of Singapore in April. “You cannot just assume that more data and more compute means smarter AI.”
LeCun is leaving Meta to work on building world models, an alternative to large-language models that relies on spatial data rather than language-based data.
“The goal of the startup is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences,” he wrote on LinkedIn in November.
Read the original article on Business Insider.