Google’s ‘Nested Learning’ paradigm could solve AI’s memory and continual learning problem

By Eric | November 23, 2025

In a groundbreaking development, researchers at Google have introduced a new artificial intelligence paradigm known as Nested Learning (NL), aimed at addressing a significant limitation of current large language models (LLMs): their inability to learn or update knowledge after the initial training phase. Traditional LLMs, such as those built on transformer architectures, are largely static once trained, meaning they cannot absorb new information or acquire new skills from later interactions. Instead, they rely solely on their pre-trained knowledge and the immediate context provided in prompts, which makes them akin to individuals unable to form long-term memories. This limitation has sparked a need for innovative approaches that allow AI systems to evolve and maintain relevance in dynamic environments.

Nested Learning offers a fresh perspective by conceptualizing the training of a model not as a singular process but as a series of interconnected, multi-level optimization problems. This approach mirrors the way the human brain processes and retains information, allowing for different levels of abstraction and time scales in learning. The researchers demonstrated the efficacy of this paradigm through the development of a new model named Hope, which employs a “Continuum Memory System” (CMS). This system enables Hope to manage memory more effectively by incorporating multiple memory banks that update at varying frequencies—faster banks for immediate information and slower ones for long-term knowledge consolidation. Initial experiments have shown that Hope outperforms standard transformers in language modeling and long-context reasoning tasks, suggesting that this innovative architecture could revolutionize the way AI systems learn and adapt.

The implications of Nested Learning extend beyond mere academic interest; they hold the potential to transform AI applications across various industries. With the ability to continually learn and adapt, AI systems could better serve real-world needs, responding to ever-changing data and user requirements. However, the transition to this new paradigm is not without challenges, as current AI infrastructure is heavily optimized for traditional deep learning architectures. If successfully implemented, Nested Learning could lead to more efficient LLMs capable of ongoing learning, marking a significant step forward in the quest for truly intelligent systems that can thrive in complex, real-world environments. As this research progresses, it may pave the way for a new generation of AI that not only understands language but also evolves with it.

Researchers at Google have developed a new AI paradigm aimed at solving one of the biggest limitations in today’s large language models: their inability to learn or update their knowledge after training.
The paradigm, called Nested Learning, reframes a model and its training not as a single process, but as a system of nested, multi-level optimization problems. The researchers argue that this approach can unlock more expressive learning algorithms, leading to better in-context learning and memory.
To prove their concept, the researchers used Nested Learning to develop a new model, called Hope. Initial experiments show that it has superior performance on language modeling, continual learning, and long-context reasoning tasks, potentially paving the way for efficient AI systems that can adapt to real-world environments.
The memory problem of large language models
Deep learning algorithms helped obviate the need for the careful engineering and domain expertise required by traditional machine learning: fed vast amounts of data, models could learn the necessary representations on their own. However, this approach presented its own set of challenges that couldn’t be solved by simply stacking more layers or creating larger networks, such as generalizing to new data, continually learning new tasks, and avoiding suboptimal solutions during training.
Efforts to overcome these challenges led to Transformers, the foundation of today’s large language models (LLMs). These models have ushered in “a paradigm shift from task-specific models to more general-purpose systems with various emergent capabilities as a result of scaling the ‘right’ architectures,” the researchers write. Still, a fundamental limitation remains: LLMs are largely static after training and can’t update their core knowledge or acquire new skills from new interactions.
The only adaptable component of an LLM is its in-context learning ability, which allows it to perform tasks based on information provided in its immediate prompt. This makes current LLMs analogous to a person who can’t form new long-term memories. Their knowledge is limited to what they learned during pre-training (the distant past) and what’s in their current context window (the immediate present). Once a conversation exceeds the context window, that information is lost forever.
The problem is that today’s transformer-based LLMs have no mechanism for “online” consolidation. Information in the context window never updates the model’s long-term parameters — the weights stored in its feed-forward layers. As a result, the model can’t permanently acquire new knowledge or skills from interactions; anything it learns disappears as soon as the context window rolls over.
A nested approach to learning
Nested Learning (NL) is designed to allow computational models to learn from data using different levels of abstraction and time-scales, much like the brain. It treats a single machine learning model not as one continuous process, but as a system of interconnected learning problems that are optimized simultaneously at different speeds. This is a departure from the classic view, which treats a model’s architecture and its optimization algorithm as two separate components.
Under this paradigm, the training process is viewed as developing an “associative memory,” the ability to connect and recall related pieces of information. The model learns to map a data point to its local error, which measures how “surprising” that data point was. Even key architectural components like the attention mechanism in transformers can be seen as simple associative memory modules that learn mappings between tokens. By defining an update frequency for each component, these nested optimization problems can be ordered into different “levels,” forming the core of the NL paradigm.
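As a rough illustration of that framing (not the paper’s actual equations), one can picture each level as a small associative memory that takes gradient steps on its own local “surprise,” on its own schedule. Everything in the sketch below, including the dimensions, learning rates, and update periods, is an assumption made for clarity:

```python
import numpy as np

class AssociativeMemory:
    """One 'level': a linear key -> value memory with its own update schedule."""

    def __init__(self, dim, lr, update_every):
        self.W = np.zeros((dim, dim))      # this level's parameters
        self.lr = lr                       # step size for this level
        self.update_every = update_every   # how often this level is allowed to update

    def recall(self, key):
        return self.W @ key

    def maybe_update(self, step, key, value):
        if step % self.update_every != 0:
            return
        # local "surprise": how badly the memory predicts this association
        err = self.recall(key) - value
        # gradient step on 0.5 * ||W @ key - value||^2
        self.W -= self.lr * np.outer(err, key)

# Two levels with different time scales: a fast memory that updates every step
# and a slow memory that consolidates only every 8 steps.
fast = AssociativeMemory(dim=16, lr=0.5, update_every=1)
slow = AssociativeMemory(dim=16, lr=0.05, update_every=8)

rng = np.random.default_rng(0)
for step in range(64):
    key, value = rng.normal(size=16), rng.normal(size=16)
    fast.maybe_update(step, key, value)
    slow.maybe_update(step, key, value)
```

The point of the toy is only the scheduling: both levels see the same stream of data, but the slower one integrates it less often and more gently, which is the ordering-by-update-frequency that the paradigm describes.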
Hope for continual learning
The researchers put these principles into practice with Hope, an architecture designed to embody Nested Learning. Hope is a modified version of Titans, another architecture Google introduced in January to address the transformer model’s memory limitations. While Titans had a powerful memory system, its parameters were updated at only two different speeds, through a long-term memory module and a short-term memory mechanism.
Hope is a self-modifying architecture augmented with a “Continuum Memory System” (CMS) that enables unbounded levels of in-context learning and scales to larger context windows. The CMS acts like a series of memory banks, each updating at a different frequency. Faster-updating banks handle immediate information, while slower ones consolidate more abstract knowledge over longer periods. This allows the model to optimize its own memory in a self-referential loop, creating an architecture with theoretically infinite learning levels.
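A hedged sketch of what such a continuum of memory banks might look like in code follows; the bank count, update periods, and consolidation rule are illustrative assumptions, not the published CMS design:

```python
import numpy as np

class MemoryBank:
    """One bank in the continuum: updates its state only every `update_every` steps."""

    def __init__(self, dim, update_every, decay):
        self.state = np.zeros(dim)
        self.update_every = update_every   # update period (frequency level)
        self.decay = decay                 # how much existing content is retained

    def step(self, t, incoming):
        if t % self.update_every == 0:
            # fold the incoming signal into this bank's state
            self.state = self.decay * self.state + (1.0 - self.decay) * incoming

# A spectrum from a fast, volatile bank to a slow, stable one.
banks = [
    MemoryBank(dim=32, update_every=1,  decay=0.50),   # immediate information
    MemoryBank(dim=32, update_every=8,  decay=0.90),   # intermediate
    MemoryBank(dim=32, update_every=64, decay=0.99),   # long-term consolidation
]

rng = np.random.default_rng(1)
for t in range(256):
    signal = rng.normal(size=32)       # stand-in for a token representation
    for bank in banks:
        bank.step(t, signal)
        signal = bank.state            # slower banks consolidate the faster banks' state
```

In this toy version, each bank feeds the next, so the slowest bank ends up holding a heavily smoothed summary of everything the faster banks have seen, loosely mirroring the fast-to-slow consolidation the article describes.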
On a diverse set of language modeling and common-sense reasoning tasks, Hope demonstrated lower perplexity (a measure of how well a model predicts the next word in a sequence and maintains coherence in the text it generates) and higher accuracy compared to both standard transformers and other modern recurrent models. Hope also performed better on long-context “Needle-In-Haystack” tasks, where a model must find and use a specific piece of information hidden within a large volume of text. This suggests its CMS offers a more efficient way to handle long information sequences.
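For readers unfamiliar with the metric, perplexity is the exponential of the average negative log-likelihood a model assigns to the tokens that actually come next, so lower values mean better predictions. A tiny worked example with made-up probabilities:

```python
import math

# Probabilities a hypothetical model assigned to the correct next token at each step.
predicted_probs = [0.40, 0.25, 0.10, 0.55]

# Perplexity = exp(mean negative log-likelihood); lower is better.
avg_nll = sum(-math.log(p) for p in predicted_probs) / len(predicted_probs)
perplexity = math.exp(avg_nll)
print(round(perplexity, 2))   # ~3.67
```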
This is one of several efforts to create AI systems that process information at different levels. The Hierarchical Reasoning Model (HRM) by Sapient Intelligence used a hierarchical architecture to make models more efficient at learning reasoning tasks. The Tiny Reasoning Model (TRM), a model by Samsung, builds on HRM with architectural changes that improve both its performance and its efficiency.
While promising, Nested Learning faces some of the same challenges as these other paradigms in realizing its full potential. Current AI hardware and software stacks are heavily optimized for classic deep learning architectures and Transformer models in particular, so adopting Nested Learning at scale may require fundamental changes. However, if it gains traction, it could lead to far more efficient LLMs that can continually learn, a capability crucial for real-world enterprise applications where environments, data, and user needs are in constant flux.
