Memory-Augmented Large Language Models
Why and How Memory Matters for LLMs
Table of Contents
- What are Memory-Augmented Neural Networks (MANNs)?
- A Brief History of MANNs
- What’s Holding Back MANNs?
- The Rise of Memory in the LLM Era
- Why Memory Craves LLMs
- Why LLMs Thrive with Memory
- Memory-Augmented Large Language Models (MA-LLM)
- Working Memory
- String Memory: Enabling LLMs to Simulate Universal Turing Machines
- Neural Memory: Long-term Storage and Generalization Power
- Episodic Memory
- Rapid Knowledge Integration with Differentiable Memory
- Prompt Optimization with Nearest Neighbor Memory
- The Future of Memory
What are Memory-Augmented Neural Networks (MANNs)?
Memory is the essence of intelligence. Thanks to memory, humans can recognize objects, recall events, plan, explain, and reason. It allows us to learn continuously, adapt to new environments, and apply past knowledge to unfamiliar situations. For AI, memory could be equally transformative. In neural networks, memory enables more than just storing patterns: it provides a way to connect past experiences to current tasks, to adapt across contexts, and to retain knowledge over time [1].
Check out our papers: