You May Also Enjoy
The Best of Time-Series Forecasting (Part II): Advancements in Time Series Modeling Through Large Language Models
Part I of this series looked at how time-series forecasting has evolved, from traditional models like ARIMA to deep learning methods like Transformers. These approaches brought large improvements, especially in handling complex, long-range patterns, but they still struggle to adapt to new data and to generalize across very different domains.
The Best of Time-Series Forecasting (Part I): From Seasonal Patterns to Transformer Models
From finance to healthcare, energy, and climate science, time-series forecasting is a cornerstone of critical decision-making.
Rethinking Memory: A Unified Linear Approach for Mindful Agents
In reinforcement learning (RL), memory isn't just a bonus; it's a necessity. When agents operate in environments where they cannot directly observe everything they need (think navigating a maze), they must rely on memory to make decisions. This is where things get tricky: most current memory models break down on complex, long-horizon tasks where agents must selectively retain and erase memories based on relevance.
Many Hands Make Light Work: Leveraging Collective Intelligence to Align Large Language Models
Multi-Reference Preference Optimization (MRPO) for Large Language Models (AAAI 2025)