
Anthropic says it solved the long-running AI agent problem with a new multi-session Claude SDK

By Eric | November 30, 2025

**Enhancing Agent Memory: Anthropic’s Innovative Approach with the Claude Agent SDK**

In the rapidly evolving landscape of artificial intelligence, the challenge of agent memory has emerged as a significant hurdle for enterprises. As AI agents engage in extended tasks, they often forget previous instructions or conversations, leading to inconsistencies and errors. Anthropic, a leading AI research company, has recently unveiled a promising solution through its Claude Agent SDK, which aims to address these memory limitations with a two-part approach. According to Anthropic, traditional long-running agents typically operate in discrete sessions, with each new session starting without any recollection of prior interactions. This lack of continuity can hinder complex projects that require sustained engagement over multiple sessions.

To combat this issue, Anthropic has proposed a dual-agent system consisting of an initializer agent and a coding agent. The initializer agent sets up the environment and logs actions taken by previous agents, creating a foundation for ongoing tasks. Meanwhile, the coding agent is designed to make incremental progress during each session while leaving structured updates for future sessions. This innovative framework not only enhances the agent’s ability to remember and build upon past work but also mimics the effective practices of seasoned software engineers. By incorporating testing tools, the coding agent can better identify and rectify bugs, further ensuring a smooth development process.

This advancement positions Anthropic’s Claude Agent SDK alongside other emerging memory solutions in the AI sector, such as LangChain’s LangMem SDK and OpenAI’s Swarm. As the demand for reliable, persistent AI agents grows, the need for better memory solutions becomes increasingly critical. Anthropic’s research indicates that while its approach is a significant step forward, it is only a starting point in a broader exploration of agent memory. Future studies will seek to determine whether a single general-purpose coding agent or a multi-agent structure is more effective across various contexts, with potential impact on fields such as scientific research and financial modeling. As AI integrates deeper into business operations, solutions like Anthropic’s could redefine how organizations use intelligent agents for long-term projects.

Agent memory remains a problem that enterprises want to fix, as agents forget some instructions or conversations the longer they run. Anthropic believes it has solved this issue for its Claude Agent SDK, developing a two-fold solution that allows an agent to work across different context windows.
“The core challenge of long-running agents is that they must work in discrete sessions, and each new session begins with no memory of what came before,” Anthropic wrote in a blog post. “Because context windows are limited, and because most complex projects cannot be completed within a single window, agents need a way to bridge the gap between coding sessions.”
Anthropic engineers proposed a two-fold approach for its Agent SDK: an initializer agent to set up the environment, and a coding agent to make incremental progress in each session and leave artifacts for the next.
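To make that concrete, the sketch below shows one way such a harness could chain discrete sessions together. It is an illustration under stated assumptions rather than Anthropic’s implementation: `run_agent_session`, the `workspace` directory and `PROGRESS.md` are hypothetical names standing in for whatever SDK call and artifacts a real harness would use.

```python
# Hypothetical multi-session harness: each session starts with an empty
# context window, so all continuity lives in files on disk, not in the model.
from pathlib import Path

WORKSPACE = Path("workspace")              # shared project directory (illustrative)
PROGRESS_FILE = WORKSPACE / "PROGRESS.md"  # artifact passed between sessions


def run_agent_session(prompt: str) -> None:
    """Placeholder for one agent session (e.g., a single Claude Agent SDK run)."""
    raise NotImplementedError


def harness(goal: str, max_sessions: int = 20) -> None:
    WORKSPACE.mkdir(exist_ok=True)

    if not PROGRESS_FILE.exists():
        # Initializer session: scaffold the project and create the progress
        # log that every later session will read first.
        run_agent_session(
            f"Set up a fresh project for: {goal}. "
            f"Write the initial plan and file layout to {PROGRESS_FILE.name}."
        )

    for _ in range(max_sessions):
        # Coding session: read the shared log, implement one increment,
        # then record a structured update before the context is discarded.
        run_agent_session(
            f"Read {PROGRESS_FILE.name}, pick the next unfinished piece of "
            f"'{goal}', implement it, run the tests, and append an update "
            f"describing what changed and what remains."
        )
```

The key design point is that the loop itself carries no memory; everything the next session needs has to be written into the workspace.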
The agent memory problem
Since agents are built on foundation models, they remain constrained by the limited, although continually growing, context windows. For long-running agents, this could create a larger problem, leading the agent to forget instructions and behave abnormally while performing a task. Enhancing agent memory becomes essential for consistent, business-safe performance.
Several methods have emerged over the past year, all attempting to bridge the gap between context windows and agent memory. LangChain’s LangMem SDK, Memobase and OpenAI’s Swarm are examples of such memory offerings. Research on agentic memory has also exploded recently, with proposed frameworks like Memp and the Nested Learning Paradigm from Google offering new alternatives to enhance memory.
Many of the current memory frameworks are open source and can, in principle, adapt to the different large language models (LLMs) powering agents. Anthropic’s approach, by contrast, improves its own Claude Agent SDK.
How it works
Anthropic identified that even though the Claude Agent SDK had context management capabilities, meaning it “should be possible for an agent to continue to do useful work for an arbitrarily long time,” they were not sufficient. The company said in its blog post that a model like Opus 4.5 running the Claude Agent SDK can “fall short of building a production-quality web app if it’s only given a high-level prompt, such as ‘build a clone of claude.ai.’”
The failures manifested in two patterns, Anthropic said. In the first, the agent tries to do too much, causing the model to run out of context in the middle; the agent then has to guess what happened and cannot pass clear instructions to the next agent. The second failure occurs later on, after some features have already been built: the agent sees that progress has been made and simply declares the job done.
Anthropic researchers broke down the solution: setting up an initial environment to lay the foundation for features, and prompting each agent to make incremental progress toward the goal while still leaving things in a clean state at the end of each session.
This is where the two-part design of Anthropic’s agent comes in. The initializer agent sets up the environment, logging what agents have done and which files have been added. The coding agent is then prompted to make incremental progress and leave structured updates for the next session, along the lines of the sketch below.
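The post describes these updates as structured but does not reproduce their exact format, so the following is an assumed shape: a simple append-only JSON Lines log that a fresh session can read to reconstruct where the project stands. The file name and fields are illustrative, not Anthropic’s spec.

```python
# Assumed shape of a structured progress log (not Anthropic's actual format):
# each coding session appends one JSON record, and the next session reads the
# whole file to learn what was built, what is in flight and what is left.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("workspace/progress.jsonl")  # illustrative location


def append_update(feature: str, status: str, notes: str) -> None:
    """Record one session's increment for the benefit of the next session."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "feature": feature,   # e.g. "user authentication"
        "status": status,     # e.g. "done", "in_progress", "blocked"
        "notes": notes,       # anything the next session must know to continue
    }
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def load_updates() -> list[dict]:
    """Return the full history so a new session can summarize prior work in its prompt."""
    if not LOG.exists():
        return []
    return [json.loads(line) for line in LOG.read_text(encoding="utf-8").splitlines() if line]
```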
“Inspiration for these practices came from knowing what effective software engineers do every day,” Anthropic said. 
The researchers said they added testing tools to the coding agent, improving its ability to identify and fix bugs that weren’t obvious from the code alone. 
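Anthropic does not detail that tooling in the excerpt above, so the snippet below simply assumes a pytest-based project and shows one plausible hook: run the suite after each session and feed any failures into the next session’s prompt.

```python
# Assumed testing hook (pytest is an illustrative choice, not confirmed by the post):
# run the suite after a coding session so bugs that aren't visible from the code
# alone are surfaced to the next session.
import subprocess


def run_tests(workdir: str = "workspace") -> tuple[bool, str]:
    """Run the project's tests and return (passed, combined output)."""
    result = subprocess.run(
        ["pytest", "-q"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


# A harness might prepend failing output to the next session's prompt:
# passed, output = run_tests()
# if not passed:
#     next_prompt = "The test suite is failing:\n" + output + "\nFix the failures before adding features."
```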
Future research
Anthropic noted that its approach is “one possible set of solutions in a long-running agent harness.” However, this is just the beginning stage of what could become a wider research area for many in the AI space. 
The company said its experiments to boost long-term memory for agents haven’t shown whether a single general-purpose coding agent or a multi-agent structure works best across contexts.
Its demo also focused on full-stack web app development, so other experiments should focus on generalizing the results across different tasks.
“It’s likely that some or all of these lessons can be applied to the types of long-running agentic tasks required in, for example, scientific research or financial modeling,” Anthropic said.
