
Time Series Forecasting: Navigating Non-Stationarity, Enhancing Interpretability, and Scaling with LLMs

Latest 50 papers on time series forecasting: Dec. 21, 2025

Time series forecasting is at the forefront of AI/ML research, driven by its critical role in everything from economic prediction and smart city management to personalized healthcare. Yet temporal data poses formidable hurdles: non-stationarity, intricate inter-dependencies, and, on the modeling side, the ever-present opacity of ‘black box’ architectures. Recent breakthroughs, illuminated by a collection of cutting-edge papers, are reshaping how we approach these challenges and pushing the boundaries of accuracy, efficiency, and interpretability.

The Big Idea(s) & Core Innovations

The overarching theme in recent research is the quest for more robust, adaptive, and understandable forecasting models. A major push is seen in addressing non-stationarity and concept drift. Researchers from Shanghai Jiao Tong University, Lifan Zhao and Yanyan Shen, in their paper “Proactive Model Adaptation Against Concept Drift for Online Time Series Forecasting”, introduce Proceed, a framework that proactively adapts models by estimating and translating concept drift into parameter adjustments. This directly tackles the feedback delay issue inherent in online forecasting. Similarly, Mert Sonmezer and Seyda Ertekin from Middle East Technical University present CANet in “CANet: ChronoAdaptive Network for Enhanced Long-Term Time Series Forecasting under Non-Stationarity”, which uses a Non-stationary Adaptive Normalization module to preserve temporal dynamics while adapting to statistical changes, effectively combating over-stationarization.
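To make the normalization idea concrete, here is a minimal sketch of instance-wise normalization and de-normalization wrapped around a forecaster, the generic building block that adaptive-normalization modules such as CANet's refine; the InstanceNorm class and the usage lines are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class InstanceNorm(nn.Module):
    """Normalize each input window by its own mean and standard deviation, then
    restore those statistics on the model's output. This is the generic building
    block that adaptive-normalization modules extend for non-stationary data."""
    def __init__(self, eps: float = 1e-5):
        super().__init__()
        self.eps = eps

    def normalize(self, x):                      # x: (batch, length, channels)
        self.mean = x.mean(dim=1, keepdim=True)
        self.std = x.std(dim=1, keepdim=True) + self.eps
        return (x - self.mean) / self.std

    def denormalize(self, y):                    # y: (batch, horizon, channels)
        return y * self.std + self.mean

# Hypothetical usage around any point forecaster `model`:
# norm = InstanceNorm()
# forecast = norm.denormalize(model(norm.normalize(history)))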

The integration of Large Language Models (LLMs) is another transformative trend. “Conversational Time Series Foundation Models: Towards Explainable and Effective Forecasting” by Defu Cao and collaborators from the University of Southern California and Amazon AWS introduces TSOrchestr, a framework that uses LLMs as intelligent judges to coordinate ensembles of time series models, combining interpretability with numerical precision through SHAP-based finetuning. Expanding on this, “FiCoTS: Fine-to-Coarse LLM-Enhanced Hierarchical Cross-Modality Interaction for Time Series Forecasting” by Yafei Lyu et al. proposes a hierarchical framework in which LLMs filter textual noise and align text tokens with time series patches, from fine to coarse granularity. Meanwhile, “STELLA: Guiding Large Language Models for Time Series Forecasting with Semantic Abstractions” by J. Fan et al. and “Can Slow-Thinking LLMs Reason Over Time? Empirical Studies in Time Series Forecasting” by Mingyue Cheng et al. explore how LLMs, especially ‘slow-thinking’ ones, can treat forecasting as a conditional reasoning task by integrating structured semantic abstractions or multi-step reasoning. This not only improves accuracy but also brings a new level of interpretability to a historically opaque field.
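As a deliberately simplified illustration of “forecasting as conditional reasoning”, the sketch below serializes a history window and some semantic context into a prompt and parses the numeric reply. The prompt format, the parse_forecast helper, and the llm callable are assumptions for illustration only, not the prompting schemes used in these papers.

def build_prompt(history, horizon, context=""):
    # Serialize the numeric history plus optional semantic context into text.
    values = ", ".join(f"{v:.2f}" for v in history)
    return (
        f"Context: {context}\n"
        f"Past observations: {values}\n"
        f"Predict the next {horizon} values as a comma-separated list."
    )

def parse_forecast(reply, horizon):
    # Recover a numeric forecast from the model's textual reply.
    return [float(tok) for tok in reply.replace("\n", " ").split(",")[:horizon]]

# Hypothetical usage with some LLM callable `llm`:
# forecast = parse_forecast(llm(build_prompt(window, 8, "hourly electricity load")), 8)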

Efficiency and scalability for long-term forecasting are also paramount. “DPWMixer: Dual-Path Wavelet Mixer for Long-Term Time Series Forecasting” by Qianyang Li and colleagues from Xi’an Jiaotong University introduces a Haar wavelet decomposition and dual-path modeling to efficiently disentangle trends and details with linear time complexity. Qingyuan Yang et al. from Northeastern University tackle this with “FRWKV: Frequency-Domain Linear Attention for Long-Term Time Series Forecasting”, achieving linear complexity by combining frequency-domain analysis with linear attention, outperforming traditional Transformers at long horizons. For resource-constrained scenarios, “TimeDistill: Efficient Long-Term Time Series Forecasting with MLP via Cross-Architecture Distillation” by Juntong Ni et al. from Emory University shows that knowledge distillation can enable lightweight MLP models to surpass even their complex teacher models in accuracy and efficiency. Even simpler models are making a comeback, as Ruslan Gokhman (Yeshiva University) reveals in “UrbanAI 2025 Challenge: Linear vs Transformer Models for Long-Horizon Exogenous Temperature Forecasting”, demonstrating that well-designed linear models can outperform complex Transformer-family architectures in long-horizon exogenous-only temperature forecasting.
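For intuition on the wavelet-based trend/detail split, here is a minimal one-level Haar decomposition in its unnormalized average/difference form; it illustrates the dual-path idea only and is not DPWMixer's implementation.

import numpy as np

def haar_decompose(x):
    """One level of (unnormalized) Haar decomposition: averages of adjacent pairs
    give a low-frequency trend path, their halved differences give a
    high-frequency detail path."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                # pad odd-length series by repeating the last value
        x = np.append(x, x[-1])
    pairs = x.reshape(-1, 2)
    trend = pairs.mean(axis=1)
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return trend, detail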

Multimodal and domain-specific challenges are also being addressed. “Adaptive Information Routing for Multimodal Time Series Forecasting” by Jun Seo et al. from LG AI Research introduces AIR, a framework that dynamically integrates textual information refined by LLMs into time series models. For specific applications, “Cross-Sample Augmented Test-Time Adaptation for Personalized Intraoperative Hypotension Prediction” by Kanxue Li et al. from Wuhan University proposes CSA-TTA to improve personalized intraoperative hypotension prediction by leveraging cross-sample augmentation, addressing rare event challenges in medical data.
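The flavor of test-time adaptation behind approaches like CSA-TTA can be sketched as a few gradient steps on samples related to the incoming case before predicting; the function below is an illustrative assumption, and the cross-sample retrieval and augmentation strategy of the actual paper is not reproduced here.

import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, support_x, support_y, steps=5, lr=1e-4):
    """Fine-tune a copy of the forecaster for a few gradient steps on support
    samples resembling the incoming case, then predict with the adapted copy."""
    adapted = copy.deepcopy(model)
    optimizer = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(adapted(support_x), support_y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return adapted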

Under the Hood: Models, Datasets, & Benchmarks

The recent advancements highlight a shift towards hybrid architectures that combine complementary modeling paradigms, such as wavelets with mixers, frequency-domain analysis with linear attention, and LLMs with numerical forecasters, alongside an increased focus on robust data handling, benchmarking, and evaluation protocols.

Impact & The Road Ahead

These advancements herald a new era for time series forecasting, making it more robust, interpretable, and efficient across diverse domains. From critical applications like personalized medicine, where CSA-TTA and guideline-based LLMs for sepsis prediction by Michael Staniek et al. (Heidelberg University, Google DeepMind) offer life-saving potential, to optimizing energy grids with causal feature selection in residential load forecasting, the real-world impact is immense. In finance, the re-evaluation of Time Series Foundation Models (TSFMs) by Eghbal Rahimikia et al. (University of Manchester, UCL) emphasizes domain-specific pre-training, paving the way for more accurate financial predictions. The accessibility provided by platforms like Forecaster by Aaron D. Mullen et al. (University of Kentucky) will democratize advanced forecasting, enabling clinicians and domain experts to leverage these tools without extensive technical expertise. Furthermore, techniques like speculative decoding for accelerating TSFMs, introduced by Pranav Subbaraman et al. (UCLA), promise to make large models viable for latency-sensitive applications.

The road ahead involves further refinement of LLM integration, exploring more sophisticated hybrid architectures, and developing universally robust evaluation protocols. The challenge of balancing model complexity with interpretability and efficiency remains, but with these groundbreaking developments, we are closer than ever to truly intelligent and trustworthy time series forecasting systems.
