You May Also Enjoy
Rethinking Memory: A Unified Linear Approach for Mindful Agents
In reinforcement learning (RL), memory isn't just a bonus; it's a necessity. When agents operate in environments where they can't directly observe everything they need (think navigating a maze), they must rely on memory to make decisions. This is where things get tricky: most current memory models fail under the weight of complex, long-horizon tasks where agents must selectively retain and erase memories based on relevance.
Many Hands Make Light Work: Leveraging Collective Intelligence to Align Large Language Models
Multi-Reference Preference Optimization (MRPO) for Large Language Models (AAAI 2025)
Memory-Augmented Large Language Models
Why and How Does Memory Matter for LLMs?
Extending Neural Networks to New Lengths: Enhancing Symbol Processing and Generalization
Plug, Play, and Generalize: Length Extrapolation with Pointer-Augmented Neural Memory