During sleep, the human brain sifts through memories, consolidating the important ones while discarding those that don't matter. What if AI could do the same?
Bilt, a company that offers local shopping and restaurant deals to renters, recently deployed several million agents in the hope of doing just that.
Bilt uses technology from a startup called Letta that allows agents to learn from previous conversations and share memories with one another. Through a process called "sleep-time compute," the agents decide what information to store in their long-term memory vault and what should be kept on hand for faster recall.
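Letta hasn't published the internals of how its agents route memories, but the basic idea can be sketched in a few lines. Everything below is invented for illustration (the `AgentMemory` class, the `importance` scores, the `keep_fast` cutoff); it is not Letta's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Toy two-tier memory store: a small fast tier for facts an agent
    may need soon, and a long-term vault for everything worth keeping."""
    fast: list = field(default_factory=list)       # kept in the prompt/context
    long_term: list = field(default_factory=list)  # retrieved on demand

    def consolidate(self, notes, keep_fast=2):
        """Offline 'sleep-time' pass: rank notes by an importance score,
        route the top ones to fast memory and the rest to the vault."""
        ranked = sorted(notes, key=lambda n: n["importance"], reverse=True)
        self.fast.extend(ranked[:keep_fast])
        self.long_term.extend(ranked[keep_fast:])

mem = AgentMemory()
mem.consolidate([
    {"text": "tenant prefers Thai food", "importance": 0.9},
    {"text": "asked about parking once", "importance": 0.2},
    {"text": "rent is due on the 1st", "importance": 0.8},
])
print(len(mem.fast), len(mem.long_term))  # prints: 2 1
```

The point of doing this "while asleep," between conversations, is that the ranking can use a slow, careful model pass without adding latency to any live chat.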
"We can make a single update to a memory block and change the behavior of hundreds of thousands of agents," says Andrew Fitz, an AI engineer at Bilt. "This is useful in any scenario where you want fine-grained control over agents' context," he adds, referring to the text prompt fed to the model at inference time.
Large language models can typically only "remember" things if information is included in the context window. If you want a chatbot to remember your most recent conversation, you need to paste it into the chat.
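A toy sketch of why that's true: the model sees only the text assembled into its prompt, so "memory" of earlier turns means literally pasting them back in on every call. The names here (`build_prompt`, the character-based window) are invented for illustration; real systems count tokens, not characters.

```python
def build_prompt(history, user_message, max_chars=400):
    """An LLM only 'remembers' what fits in its context window, so
    earlier turns must be re-sent as part of the prompt each time."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {user_message}")
    prompt = "\n".join(lines)
    # If the window is too small, the oldest turns silently fall out.
    return prompt[-max_chars:]

history = [
    ("user", "My name is Ada."),
    ("assistant", "Nice to meet you, Ada."),
]
# The earlier exchange survives only because we pasted it back in:
print(build_prompt(history, "What's my name?"))
```

Shrink `max_chars` and the oldest lines are truncated away, which is exactly the forgetting problem the memory startups are trying to solve.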
Most AI systems can only handle a limited amount of information in the context window before their ability to use the data falters and they hallucinate or become confused. The human brain, by contrast, is able to file away useful information and recall it later.
"Your brain is continuously improving, adding more information like a sponge," says Charles Packer, Letta's CEO. "With language models, it's like the exact opposite."
Packer and cofounder Sarah Wooders previously developed MemGPT, an open source project that aimed to help LLMs decide which information should be stored in short-term versus long-term memory. With Letta, the duo has expanded that approach so that agents can learn in the background.
Bilt's collaboration with Letta is part of a broader push to give AI the ability to store and recall useful information, which could make chatbots smarter and agents less error-prone. Memory remains underdeveloped in modern AI, and that undermines the intelligence and reliability of AI tools, according to the experts I spoke to.
Harrison Chase, cofounder and CEO of LangChain, another company that has developed methods for improving memory in AI agents, says he sees memory as a vital part of context engineering, wherein a user or engineer decides what information should be fed into the context window. LangChain offers companies several kinds of memory storage for agents, from long-term facts about users to memories of recent experiences. "Memory, I would argue, is a form of context," Chase says. "A big portion of an AI engineer's job is basically getting the model the right context [information]."
Consumer AI tools are gradually becoming less forgetful, too. This February, OpenAI announced that ChatGPT will store relevant information in order to provide a more personalized experience for users, although the company did not reveal how this works.
Letta and LangChain make the process of recall more transparent for the engineers who build AI systems.
"I think it's very important not only for the models to be open but also for the memory systems to be open," says Clem Delangue, CEO of the AI hosting platform Hugging Face and an investor in Letta.
Intriguingly, Letta's Packer suggests that it may also be important for AI models to learn what to forget. "If a user says, 'That project we were working on, wipe it out of your memory,' then the agent should be able to go back and retroactively rewrite every single memory."
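In its simplest possible form, a retroactive wipe like the one Packer describes is a filter over the memory store. This is a hypothetical sketch (the `forget` function and its substring matching are invented; a real system would need semantic matching, not string checks):

```python
def forget(memories, topic):
    """Drop every stored memory that mentions the topic the user
    asked the agent to erase (naive case-insensitive substring match)."""
    return [m for m in memories if topic.lower() not in m.lower()]

memories = [
    "Project Falcon kickoff notes",
    "User likes coffee",
    "Falcon deadline is Friday",
]
print(forget(memories, "falcon"))  # prints: ['User likes coffee']
```

The hard part, which the sketch dodges, is finding memories that refer to the project without naming it, and doing so across every copy an agent may have written down.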
The notion of artificial memories and dreams brings to mind Do Androids Dream of Electric Sheep? by Philip K. Dick, the mind-bending novel that inspired the dystopian movie Blade Runner. Large language models aren't yet as impressive as the story's rebellious replicants, but their memories, it seems, can be just as fragile.
This is an edition of Will Knight's AI Lab newsletter. Read previous newsletters here.