Google’s ‘Nested Learning’ paradigm could solve AI’s memory and continual learning problem

By Eric | November 22, 2025

Researchers at Google have made significant strides in addressing a major limitation of current large language models (LLMs) with their innovative AI paradigm called Nested Learning (NL). Traditional LLMs, while powerful, are largely static after their initial training, unable to update their knowledge or acquire new skills from ongoing interactions. This limitation is akin to a person who can’t form new long-term memories, relying solely on past knowledge and immediate context. The Nested Learning paradigm proposes a novel approach to model training, reframing it as a system of interconnected, multi-level optimization problems. This allows for a more dynamic learning process that mimics the brain’s ability to learn and adapt over time.

To demonstrate the effectiveness of Nested Learning, the researchers developed a new model named Hope, which builds on Google’s earlier architecture, Titans. Hope incorporates a “Continuum Memory System” (CMS) that facilitates unbounded in-context learning and can scale to handle larger context windows. This CMS operates like a series of memory banks that update at varying frequencies, allowing the model to manage immediate information while consolidating abstract knowledge over time. Initial experiments indicate that Hope outperforms traditional transformers in language modeling, continual learning, and long-context reasoning tasks. For instance, it shows lower perplexity—a measure of how well a model predicts subsequent words—and higher accuracy in tasks requiring the retrieval of specific information from extensive text, such as the “Needle-In-Haystack” challenge.

While the concept of Nested Learning holds promise for creating more adaptable AI systems, it does face challenges. Current AI infrastructure is heavily optimized for conventional deep learning and transformer architectures, necessitating significant adjustments to fully leverage the advantages of NL. If successfully implemented at scale, however, this new paradigm could revolutionize the efficiency of LLMs, enabling them to continuously learn and adapt to the ever-evolving demands of real-world applications. This advancement could be crucial for enterprises that require AI systems capable of responding to changing environments and user needs dynamically. As the research progresses, Nested Learning may pave the way for a new generation of AI that not only understands language but also learns from it in a more human-like manner.

Researchers at Google have developed a new AI paradigm aimed at solving one of the biggest limitations in today’s large language models: their inability to learn or update their knowledge after training.
The paradigm, called Nested Learning, reframes a model and its training not as a single process, but as a system of nested, multi-level optimization problems. The researchers argue that this approach can unlock more expressive learning algorithms, leading to better in-context learning and memory.
To prove their concept, the researchers used Nested Learning to develop a new model, called Hope. Initial experiments show that it has superior performance on language modeling, continual learning, and long-context reasoning tasks, potentially paving the way for efficient AI systems that can adapt to real-world environments.
The memory problem of large language models
Deep learning algorithms helped obviate the need for the careful engineering and domain expertise required by traditional machine learning. By feeding models vast amounts of data, they could learn the necessary representations on their own. However, this approach presented its own set of challenges that couldn’t be solved by simply stacking more layers or creating larger networks, such as generalizing to new data, continually learning new tasks, and avoiding suboptimal solutions during training.
Efforts to overcome these challenges led to the innovations behind Transformers, the foundation of today’s large language models (LLMs). These models have ushered in “a paradigm shift from task-specific models to more general-purpose systems with various emergent capabilities as a result of scaling the ‘right’ architectures,” the researchers write. Still, a fundamental limitation remains: LLMs are largely static after training and can’t update their core knowledge or acquire new skills from new interactions.
The only adaptable component of an LLM is its in-context learning ability, which allows it to perform tasks based on information provided in its immediate prompt. This makes current LLMs analogous to a person who can’t form new long-term memories. Their knowledge is limited to what they learned during pre-training (the distant past) and what’s in their current context window (the immediate present). Once a conversation exceeds the context window, that information is lost forever.
The problem is that today’s transformer-based LLMs have no mechanism for “online” consolidation. Information in the context window never updates the model’s long-term parameters — the weights stored in its feed-forward layers. As a result, the model can’t permanently acquire new knowledge or skills from interactions; anything it learns disappears as soon as the context window rolls over.
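To make that limitation concrete, here is a minimal, purely illustrative Python sketch (the class and its methods are hypothetical, not any real LLM API): the weights are fixed at inference time, and anything that scrolls out of the rolling context window is simply gone.

```python
from collections import deque

class FrozenLLM:
    def __init__(self, weights, context_window=8):
        self.weights = weights                        # fixed after pre-training
        self.context = deque(maxlen=context_window)   # rolling context window

    def observe(self, token):
        # New information only lands in the context window; it never
        # touches self.weights, so nothing is consolidated long-term.
        self.context.append(token)

    def visible_knowledge(self):
        # All the model can condition on: pre-trained weights (the distant
        # past) plus whatever still fits in the window (the immediate present).
        return list(self.context)

model = FrozenLLM(weights="pretrained", context_window=4)
for token in ["a", "b", "c", "d", "e", "f"]:
    model.observe(token)

print(model.visible_knowledge())   # ['c', 'd', 'e', 'f'] -- 'a' and 'b' are lost
```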
A nested approach to learning
Nested Learning (NL) is designed to allow computational models to learn from data using different levels of abstraction and time-scales, much like the brain. It treats a single machine learning model not as one continuous process, but as a system of interconnected learning problems that are optimized simultaneously at different speeds. This is a departure from the classic view, which treats a model’s architecture and its optimization algorithm as two separate components.
Under this paradigm, the training process is viewed as developing an “associative memory,” the ability to connect and recall related pieces of information. The model learns to map a data point to its local error, which measures how “surprising” that data point was. Even key architectural components like the attention mechanism in transformers can be seen as simple associative memory modules that learn mappings between tokens. By defining an update frequency for each component, these nested optimization problems can be ordered into different “levels,” forming the core of the NL paradigm.
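As a rough illustration of that idea, the toy sketch below (an assumption about how the levels might be arranged, not Google’s implementation) runs three associative-memory “levels” over the same data stream; each performs its own local-error update, but at a different frequency.

```python
levels = [
    {"name": "fast",   "period": 1,   "state": 0.0},   # updates every step
    {"name": "medium", "period": 10,  "state": 0.0},   # every 10th step
    {"name": "slow",   "period": 100, "state": 0.0},   # every 100th step
]

def local_error(state, observation):
    # "Surprise": how far the stored association is from the new data point.
    return observation - state

def train(stream, lr=0.5):
    for step, observation in enumerate(stream, start=1):
        for level in levels:
            if step % level["period"] == 0:
                # Each level is its own small optimization problem, updated on
                # its own time-scale; slower levels see a much coarser signal.
                level["state"] += lr * local_error(level["state"], observation)

train([float(i % 7) for i in range(1_000)])
for level in levels:
    print(level["name"], round(level["state"], 3))
```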
Hope for continual learning
The researchers put these principles into practice with Hope, an architecture designed to embody Nested Learning. Hope is a modified version of Titans, another architecture Google introduced in January to address the transformer model’s memory limitations. While Titans had a powerful memory system, its parameters were updated at only two different speeds: a long-term memory module and a short-term memory mechanism.
Hope is a self-modifying architecture augmented with a “Continuum Memory System” (CMS) that enables unbounded levels of in-context learning and scales to larger context windows. The CMS acts like a series of memory banks, each updating at a different frequency. Faster-updating banks handle immediate information, while slower ones consolidate more abstract knowledge over longer periods. This allows the model to optimize its own memory in a self-referential loop, creating an architecture with theoretically infinite learning levels.
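The sketch below is a loose approximation of that description, with hypothetical names and an invented update rule rather than the paper’s actual code: a chain of memory banks in which fast banks record nearly every step while slow banks consolidate only occasionally, leaving them with a sparser, more abstract trace of the stream.

```python
class MemoryBank:
    def __init__(self, period):
        self.period = period      # how often (in steps) this bank updates
        self.contents = []        # what it has consolidated so far

class ContinuumMemory:
    """Chain of banks: fast ones capture detail, slow ones keep a sparse trace."""
    def __init__(self, periods=(1, 8, 64)):
        self.banks = [MemoryBank(p) for p in periods]

    def step(self, step_idx, token):
        for bank in self.banks:
            if step_idx % bank.period == 0:
                # A slow bank updates only occasionally, so over time it holds
                # a coarser, more "abstract" record of the token stream.
                bank.contents.append(token)

memory = ContinuumMemory()
stream = ["tok%d" % i for i in range(256)]
for i, token in enumerate(stream):
    memory.step(i, token)

print([len(bank.contents) for bank in memory.banks])   # [256, 32, 4]
```

The self-referential loop that lets Hope optimize its own memory updates is beyond this toy example, but the different update frequencies are the core of the CMS idea as the researchers describe it.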
On a diverse set of language modeling and common-sense reasoning tasks, Hope demonstrated lower perplexity (a measure of how well a model predicts the next word in a sequence and maintains coherence in the text it generates) and higher accuracy compared to both standard transformers and other modern recurrent models. Hope also performed better on long-context “Needle-In-Haystack” tasks, where a model must find and use a specific piece of information hidden within a large volume of text. This suggests its CMS offers a more efficient way to handle long information sequences.
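For readers unfamiliar with the metric, perplexity can be computed directly from the probabilities a model assigns to the tokens that actually appear; the snippet below shows the standard calculation (the example probabilities are made up).

```python
import math

def perplexity(token_probs):
    """token_probs: the probability the model gave each actual next token."""
    nll = [-math.log(p) for p in token_probs]   # negative log-likelihoods
    return math.exp(sum(nll) / len(nll))        # exp of the average NLL

# A fairly confident model vs. one spreading its mass over a 10,000-word vocabulary:
print(round(perplexity([0.5, 0.4, 0.6, 0.3]), 2))      # ~2.3  (lower is better)
print(round(perplexity([1e-4, 1e-4, 1e-4, 1e-4]), 2))  # 10000.0
```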
This is one of several efforts to create AI systems that process information at different levels.
The Hierarchical Reasoning Model (HRM) by Sapient Intelligence used a hierarchical architecture to make the model more efficient at learning reasoning tasks. The Tiny Reasoning Model (TRM), a model by Samsung, builds on HRM with architectural changes that improve its performance while making it more efficient.
While promising, Nested Learning faces some of the same challenges as these other paradigms in realizing its full potential. Current AI hardware and software stacks are heavily optimized for classic deep learning architectures and Transformer models in particular. Adopting Nested Learning at scale may require fundamental changes. However, if it gains traction, it could lead to far more efficient LLMs that can continually learn, a capability crucial for real-world enterprise applications where environments, data, and user needs are in constant flux.
