Time Series Forecasting Takes a Quantum Leap: From Adaptive Models to Human-LLM Collaboration

Latest 50 papers on time series forecasting: Nov. 23, 2025

Time series forecasting, the art and science of predicting future data points from historical observations, is a cornerstone of decision-making across nearly every industry, from finance and energy to healthcare and climate science. However, the inherent complexities of real-world data, such as non-stationarity, missing values, and intricate cross-variable dependencies, pose persistent challenges to even the most advanced AI/ML models. This digest dives into recent breakthroughs that are pushing the boundaries of what’s possible, exploring novel architectures, adaptive mechanisms, and even the integration of human intelligence with large language models.

The Big Idea(s) & Core Innovations

Recent research is fundamentally rethinking how time series models perceive and process information. A significant theme is the development of adaptive and robust frameworks that move beyond static approaches. For instance, the Adapformer introduced by Yuchen Luo and colleagues from the University of Melbourne in their paper “Adapformer: Adaptive Channel Management for Multivariate Time Series Forecasting” balances channel-independent and channel-dependent strategies, adaptively deciding how much information to share across series. Similarly, DTAF, presented in “Towards Non-Stationary Time Series Forecasting with Temporal Stabilization and Frequency Differencing” by Junkai Lu and the team at East China Normal University, tackles non-stationarity by combining temporal stabilization with frequency differencing, capturing subtle shifts in both the time and frequency domains.
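
To make the channel-management idea concrete, here is a minimal PyTorch sketch of a learnable gate that blends a channel-independent path (each series processed on its own) with a channel-dependent path (information mixed across series). The `ChannelGate` module and its parameters are illustrative assumptions, not Adapformer’s actual architecture.

```python
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    """Toy blend of channel-independent and channel-dependent processing.

    A minimal sketch of the adaptive channel-management *idea*; Adapformer's
    actual mechanism is more sophisticated (see the paper).
    """
    def __init__(self, n_channels: int, d_model: int):
        super().__init__()
        self.indep = nn.Linear(d_model, d_model)       # applied per channel
        self.mix = nn.Linear(n_channels, n_channels)   # mixes across channels
        self.gate = nn.Parameter(torch.zeros(n_channels))  # learned blend

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, d_model)
        ci = self.indep(x)                                # channel-independent
        cd = self.mix(x.transpose(1, 2)).transpose(1, 2)  # channel-dependent
        g = torch.sigmoid(self.gate).view(1, -1, 1)       # per-channel in (0, 1)
        return g * ci + (1 - g) * cd
```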

The push for interpretability and reliability is also paramount. “Counterfactual Explanation for Multivariate Time Series Forecasting with Exogenous Variables” by Keita Kinjo from Kyoritsu Women’s University explores how counterfactual explanations can enhance model transparency by revealing the influence of exogenous variables on predictions. This complements work like “Towards Explainable and Reliable AI in Finance” by Albi Isufaj and colleagues from the National Institute of Informatics, which applies Time-LLM to prompt-based reasoning and adds reliability estimators for ‘corrective AI’ in financial forecasting.
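
The counterfactual recipe can be sketched generically: search for a small perturbation of the exogenous inputs that would have moved the forecast toward a desired outcome, then read the perturbation itself as the explanation. The gradient-based search below is an assumption for illustration; Kinjo’s actual method may compute counterfactuals differently.

```python
import torch

def exogenous_counterfactual(model, x_endo, x_exo, target,
                             steps=200, lr=0.05, lam=1.0):
    """Find a sparse perturbation of exogenous inputs that steers the
    forecast toward `target`. Generic sketch, not the paper's algorithm."""
    delta = torch.zeros_like(x_exo, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = model(x_endo, x_exo + delta)
        # Trade off closeness to the desired forecast against perturbation size.
        loss = (pred - target).pow(2).mean() + lam * delta.abs().mean()
        loss.backward()
        opt.step()
    return (x_exo + delta).detach()  # the counterfactual exogenous series
```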

Innovation extends to core architectural enhancements and novel learning paradigms. Soroush Omranpour and his team from Mila introduce Higher-Order Transformers (HOT) in “Higher-Order Transformers With Kronecker-Structured Attention” to efficiently model multiway tensor data using Kronecker factorization, significantly reducing computational costs. Meanwhile, “Naga: Vedic Encoding for Deep State Space Models” by Melanie Schaller and colleagues at Leibniz University Hannover showcases Naga, a deep state space model inspired by Vedic mathematics, using bidirectional inputs to enhance temporal dependency capture.
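
The efficiency claim behind HOT is easiest to see in code. Rather than flattening a (time x channel) grid and paying O((TC)^2) for full attention, a Kronecker-style factorization attends along each axis separately, costing O(CT^2 + TC^2). The module below is a simplified illustration of that structure, not the authors’ implementation.

```python
import torch
import torch.nn as nn

class AxisFactorizedAttention(nn.Module):
    """Attend over a (time x channel) grid one axis at a time.

    Simplified sketch of Kronecker-structured attention; HOT's factorization
    of the attention matrices themselves is more involved.
    """
    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.chan_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, C, d_model)
        b, t, c, d = x.shape
        xt = x.permute(0, 2, 1, 3).reshape(b * c, t, d)  # one sequence per channel
        xt, _ = self.time_attn(xt, xt, xt)               # attention along time
        x = xt.reshape(b, c, t, d).permute(0, 2, 1, 3)
        xc = x.reshape(b * t, c, d)                      # one sequence per timestep
        xc, _ = self.chan_attn(xc, xc, xc)               # attention along channels
        return xc.reshape(b, t, c, d)
```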

Perhaps one of the most exciting new frontiers is the integration of human and machine intelligence. The AlphaCast framework, detailed in “AlphaCast: A Human Wisdom-LLM Intelligence Co-Reasoning Framework for Interactive Time Series Forecasting” by Xiaohan Zhang and the University of Science and Technology of China team, redefines forecasting as an interactive process, combining human domain knowledge with LLM contextual reasoning to achieve superior accuracy and interpretability.

Under the Hood: Models, Datasets, & Benchmarks

These advancements are powered by ingenious models, sophisticated data handling, and robust evaluation benchmarks. Here’s a closer look:

  • Foundational Models & Architectures:
    • TOTO: A 151-million parameter zero-shot time series forecasting foundation model optimized for observability data, introduced in “This Time is Different: An Observability Perspective on Time Series Foundation Models” by Ben Cohen and Datadog AI Research. It leverages causal scaling and attention mechanisms.
    • TiRex: A pre-trained xLSTM-based model for zero-shot forecasting across long and short horizons, utilizing Contiguous Patch Masking (CPM) for enhanced in-context learning, from Andreas Auer and colleagues at NXAI GmbH in “TiRex: Zero-Shot Forecasting Across Long and Short Horizons with Enhanced In-Context Learning”.
    • SST: A Multi-Scale Hybrid Mamba-Transformer Experts architecture by Xiongxiao Xu and team from Illinois Institute of Technology, designed to overcome information interference by decomposing time series into long-range patterns (Mamba) and short-range variations (Transformer). (Code: https://github.com/XiongxiaoXu/SST)
    • HYDRA: A dual-memory architecture that uses EGD-MEMORY for multivariate time series analysis, capturing both temporal and variate dependencies, by Asal Meskin and researchers from Sharif University of Technology. (Paper: https://arxiv.org/pdf/2511.00989)
    • IMTS-Mixer: A novel MLP-based architecture for irregularly sampled multivariate time series with missing values, featuring ISCAM for channel-wise encoding and ConTP for continuous-time forecasting, presented by Christian Klötergens and the University of Hildesheim team. (Paper: https://arxiv.org/pdf/2502.11816)
    • AWEMixer: An adaptive wavelet-enhanced mixer network that combines wavelet transforms with mixer architectures for improved long-term forecasting. (Code: https://github.com/hit636/AWEMixer)
    • StochDiff: The first diffusion-based model to integrate the diffusion process directly into time series modeling for highly stochastic data, from Yuansan Liu and University of Melbourne researchers. (Paper: https://arxiv.org/pdf/2406.02827)
    • EMAformer: Enhances the Transformer with “Embedding Armor” by introducing global stability, phase sensitivity, and cross-axis specificity as inductive biases for multivariate time series forecasting. (Code: https://github.com/PlanckChang/EMAformer)
    • MDMLP-EIA: A Multi-domain Dynamic MLP with Energy Invariant Attention that captures weak seasonal signals and ensures signal energy consistency during fusion for robust forecasting. (Code: https://github.com/zh1985csuccsu/MDMLP-EIA)
    • TempoPFN: A foundation model based on linear RNNs with GatedDeltaProduct recurrence, exclusively pre-trained on synthetic data for zero-shot forecasting. (Code: https://github.com/fla-org/flash-linear-attention)
    • ARIMA_PLUS: A large-scale, automatic, and interpretable in-database forecasting and anomaly detection framework for Google BigQuery. (Paper: https://arxiv.org/pdf/2510.24452)
  • Novel Loss Functions & Learning Strategies:
    • DBLoss: A Decomposition-Based Loss function that refines the characterization of time series by separately calculating losses for seasonal and trend components, as detailed by Xiangfei Qiu and the East China Normal University team (a minimal sketch follows this list). (Code: https://github.com/decisionintelligence/DBLoss)
    • RI-Loss: A learnable residual-informed loss function that explicitly models noise structure using the Hilbert-Schmidt Independence Criterion (HSIC), from Jieting Wang and colleagues at Shanxi University. (Paper: https://arxiv.org/pdf/2511.10130)
    • OCE-TS: Replaces Mean Squared Error (MSE) with Ordinal Cross-Entropy (OCE) for improved uncertainty quantification and robustness in probabilistic time series forecasting, proposed by Jieting Wang and team. (Paper: https://arxiv.org/pdf/2511.10200)
    • Selective Learning: A strategy that identifies and excludes non-generalizable timesteps during optimization to mitigate overfitting, presented by Yisong Fu and colleagues from the Chinese Academy of Sciences. (Code: https://github.com/GestaltCogTeam/selective-learning)
    • Repetitive Contrastive Learning (RCL): Enhances Mamba’s selectivity in time series prediction by using contrastive learning and sequence augmentation, proposed by Wenbo Yan and team from Peking University. (Paper: https://arxiv.org/pdf/2504.09185)
    • Self-Correction with Adaptive Mask (SCAM): A self-supervised labeling technique combined with Spectral Norm Regularization (SNR) to improve generalization in time series forecasting, introduced by Yuxuan Yang and colleagues from Zhejiang University. (Code: https://github.com/SuDIS-ZJU/SCAM)
  • Data Handling & Augmentation:
    • APT: An Affine Prototype-Timestamp plug-in module that enhances forecasting under distribution shift by dynamically generating affine parameters based on timestamp-conditioned prototype learning, from Yujie Li and the Chinese Academy of Sciences team. (Code: https://github.com/blisky-li/APT)
    • IMA: An Imputation-Based Mixup Augmentation technique for time series data, combining imputation with Mixup for enhanced generalization (see the Mixup sketch after this list). (Code: https://github.com/dangnha/IMA)
    • CRIB: A novel direct-prediction approach based on the Information Bottleneck principle for multivariate time series forecasting with missing values, by Jie Yang and colleagues at the University of Illinois at Chicago. (Code: https://github.com/Muyiiiii/CRIB)
    • ZOO-PCA: An embedding-space augmentation technique to prevent Membership Inference Attacks in clinical time series forecasting while preserving predictive performance, introduced by Marius Fracarolli and colleagues from Heidelberg University. (Code: https://github.com/MariusFracarolli/ML4H_2025)
  • Ensemble & Hybrid Approaches:
    • Multi-layer Stack Ensembles: An empirical study demonstrating that ensembling techniques significantly enhance predictive accuracy in time series forecasting, from Ali M. and co-authors. (Paper: https://arxiv.org/pdf/2511.15350)
    • Synapse: A dynamic arbitration framework that adaptively selects and weights multiple foundational models to improve prediction accuracy by adapting to changing patterns over time, introduced by Zhenyu Xu and a team including Google Research and Penn State University. (Paper: https://arxiv.org/pdf/2511.05460)
    • CaReTS: A multi-task framework unifying classification and regression for improved accuracy and interpretability by separating macro-level trends from micro-level deviations, from Fulong Yao and colleagues at Cardiff University. (Code: https://anonymous.4open.science/r/CaReTS-6A8F/README.md)
    • ForecastGAN: A decomposition-based adversarial framework that improves multi-horizon time series forecasting by integrating decomposition, model selection, and adversarial training. (Paper: https://arxiv.org/pdf/2511.04445)
  • Domain-Specific & Niche Applications:
    • LiteCast: A lightweight forecaster for carbon optimizations, predicting grid carbon intensity with minimal historical data, from Mathew Joseph and Google Research. (Code: https://github.com/AbelSouza/LiteCast)
    • DeltaLag: An end-to-end deep learning method that discovers dynamic lead-lag relationships in financial markets for enhanced portfolio construction, by Wanyun Zhou and colleagues at the Hong Kong University of Science and Technology. (Code: https://github.com/hkust-gz/DeltaLag)
    • Multi-period Learning Framework (MLF): Enhances financial time series forecasting by integrating multi-period inputs, from Xu Zhang and colleagues at Fudan University and Ant Group. (Code: https://github.com/Meteor-Stars/MLF)
  • Novel Paradigms & Tools:
    • Spectral Predictability (ℙ): A signal processing metric for efficiently selecting among time series forecasting models, introduced by Oliver Wang and colleagues at UCLA (a toy proxy appears after this list). (Paper: https://arxiv.org/pdf/2511.08884)
    • FreDN: A frequency-domain approach that addresses spectral entanglement and computational challenges through a learnable Frequency Disentangler and ReIm Block, from Zhongde An and colleagues at Shanghai University of Finance and Economics. (Paper: https://arxiv.org/pdf/2511.11817)
    • CometNet: A contextual motif-guided network for long-term time series forecasting that leverages recurring patterns to overcome receptive field bottlenecks, from Weixu Wang and colleagues at Tianjin University. (Paper: https://arxiv.org/pdf/2511.08049)
    • PFRP (Predicting the Future by Retrieving the Past): Enhances univariate time series forecasting by leveraging global historical patterns stored in a Global Memory Bank (GMB), from Dazhao Du and colleagues at the Hong Kong University of Science and Technology. (Code: https://github.com/ddz16/PFRP)
    • OneCast: A structured decomposition and modular generation framework for cross-domain time series forecasting, from Tingyue Pan and the University of Science and Technology of China. (Code: https://github.com/pty12345/OneCast)
    • DMMV: A framework leveraging large vision models (LVMs) and adaptive decomposition to integrate multi-modal views for long-term time series forecasting, by ChengAo Shen and colleagues at the University of Houston and NEC Laboratories America. (Code: https://github.com/D2I-Group/dmmv)
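
To ground a few of the list items above, here is a minimal sketch of the decomposition idea behind DBLoss: split prediction and target into a moving-average trend plus a seasonal remainder, then penalize each component separately. The moving-average decomposition, kernel size, and weights are illustrative assumptions; consult the paper and repository for the actual formulation.

```python
import torch

def db_loss(pred, target, kernel=25, w_trend=1.0, w_seasonal=1.0):
    """Decomposition-based loss sketch: separate trend and seasonal errors.

    pred, target: (batch, length) tensors. Assumes an odd `kernel` so the
    moving average preserves sequence length.
    """
    pad = kernel // 2

    def trend(x):
        # Moving-average trend along time, with replication padding at edges.
        x = torch.nn.functional.pad(x.unsqueeze(1), (pad, pad), mode="replicate")
        return torch.nn.functional.avg_pool1d(x, kernel, stride=1).squeeze(1)

    pred_t, targ_t = trend(pred), trend(target)
    trend_err = (pred_t - targ_t).pow(2).mean()
    seasonal_err = ((pred - pred_t) - (target - targ_t)).pow(2).mean()
    return w_trend * trend_err + w_seasonal * seasonal_err
```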
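
Likewise, the augmentation step of IMA can be illustrated with plain Mixup applied to two already-imputed series. The imputation stage and pairing strategy from the paper are not shown, and the Beta(alpha, alpha) sampling is the standard Mixup convention rather than a detail taken from IMA.

```python
import torch

def mixup_series(x1, x2, y1, y2, alpha=0.4):
    """Convex-combine two (already imputed) series and their targets.

    Standard Mixup applied to time series; a sketch of IMA's augmentation
    step only, not its imputation component.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```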
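
Finally, a toy stand-in for a spectral-predictability-style score: the fraction of spectral energy concentrated in the k largest frequency components, a rough proxy for how forecastable a series looks. Wang et al.’s actual metric ℙ may be defined quite differently; this is purely illustrative.

```python
import numpy as np

def spectral_concentration(x, k=5):
    """Share of spectral energy in the k dominant frequencies (0 to 1).

    Higher values suggest a more regular, easier-to-forecast series.
    Illustrative proxy only, not the paper's definition of the metric.
    """
    spec = np.abs(np.fft.rfft(x - np.mean(x))) ** 2  # power spectrum
    spec = np.sort(spec)[::-1]                       # strongest first
    return spec[:k].sum() / (spec.sum() + 1e-12)     # guard constant series
```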

Impact & The Road Ahead

The collective impact of this research is profound, ushering in an era of more intelligent, robust, and interpretable time series forecasting. The shift towards adaptive, multi-modal, and hybrid models capable of discerning intricate patterns and handling real-world complexities like non-stationarity and missing data marks a significant leap. The development of specialized loss functions and augmentation techniques further fine-tunes models for specific challenges, demonstrating that even subtle changes can yield substantial improvements.

Looking ahead, the integration of human expertise with large language models, as seen with AlphaCast, opens up fascinating possibilities for more nuanced and context-aware predictions, especially in high-stakes domains like finance. The emphasis on explainability and reliability will be crucial for building trust and enabling widespread adoption of AI in critical decision-making processes. Furthermore, the creation of domain-specific foundation models and benchmarks, such as TOTO and its companion BOOM benchmark for observability data, points towards a future where highly specialized AI can tackle bespoke industry challenges with unprecedented precision.

The field is rapidly evolving, moving beyond monolithic, black-box models to a mosaic of adaptive, interpretable, and collaborative systems. These advancements promise not only more accurate forecasts but also a deeper understanding of the temporal dynamics that shape our world, empowering us to make smarter, more informed decisions across every sector.
