The Best of Time-Series Forecasting (Part II): Advancements in Time Series Modeling Through Large Language Models


Part 1 of my blog looked at how time-series forecasting has evolved, from traditional models like ARIMA to deep learning methods like Transformers. These approaches brought big improvements, especially in handling complex and long-range patterns. However, they also have limits, particularly when it comes to adapting to new data or generalizing across very different domains.

Now, a new wave of models is entering the scene: Large Language Models (LLMs). These models were originally built for language tasks—like chatbots, summarizing text, and answering questions. But recently, researchers have started using LLMs for time-series forecasting, too.
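To give a rough feel for what "using LLMs for forecasting" can mean in its simplest form, here is a minimal sketch of one common idea: serialize the numeric history as plain text, ask the model to continue the sequence, and parse the numbers back out of the reply. This is my own illustration rather than any specific paper's method, and `call_llm` is a hypothetical placeholder for whatever completion API you would actually use.

```python
# Minimal sketch (illustrative only): treat forecasting as text completion.
# `call_llm` is a hypothetical stand-in for a real LLM API call.

def series_to_prompt(values, horizon):
    """Render a numeric history as a plain-text prompt."""
    history = ", ".join(f"{v:.2f}" for v in values)
    return (
        f"Here is a time series: {history}. "
        f"Continue it with the next {horizon} values, comma-separated."
    )

def parse_forecast(completion, horizon):
    """Pull the first `horizon` numbers out of the model's text reply."""
    numbers = []
    for token in completion.replace("\n", " ").split(","):
        try:
            numbers.append(float(token.strip()))
        except ValueError:
            continue
    return numbers[:horizon]

# Example usage with a made-up history (the LLM call itself is assumed):
# prompt = series_to_prompt([112.0, 118.5, 121.3, 119.8], horizon=3)
# forecast = parse_forecast(call_llm(prompt), horizon=3)
```

Real research systems go well beyond this kind of prompting, but it captures the core shift in framing: the forecast becomes a sequence-continuation problem for a language model.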

In this post, we’ll explore:

✔ How LLMs are being adapted to handle time-series data

✔ Some recent research and early results

✔ Key challenges and open questions

LLMs won’t replace every forecasting model, but they’re opening up new ideas about how we can approach time-series problems. Let’s take a look at where this is all heading.
