Memory in Autonomous Agents: From Reinforcement Learning to Large Language Models

Date:


Abstract: AI agents learn to make decisions by interacting with environments to achieve goals. Effective learning requires remembering and leveraging past experiences, yet many current AI agents lack efficient memory, leading to slow and resource-intensive training. This challenge is especially pronounced in complex or uncertain environments, where partial observability, noise, and long-term dependencies make learning and decision-making difficult. In this talk, we explore how memory mechanisms have evolved from classical reinforcement learning agents to modern large language model agents, enabling faster learning, richer context representation, and more strategic exploration. Understanding this progression provides valuable insights for designing more capable, adaptive, and sample-efficient AI agents that can better handle real-world challenges.

Bio: Dr. Hung Le is an ARC DECRA Fellow and a Lecturer at Deakin University, specializing in deep reinforcement learning. He supervises PhD research in machine learning, reinforcement learning, and large language models at the Applied Artificial Intelligence Initiative (A2I2). His work focuses on developing neural memory-augmented agents, with applications in health, robotics, dialogue systems, and natural language processing. Dr. Le regularly publishes in top AI venues such as NeurIPS, ICLR, and ICML.