Multivariate time series forecasting has become an increasingly popular topic
across a wide range of applications. Recently, contrastive learning and
Transformer-based models have achieved good performance in many long-term
series forecasting tasks. However, existing methods still have several issues.
First, the training paradigm of contrastive learning is inconsistent with
downstream prediction tasks, leading to inaccurate predictions.
Second, existing Transformer-based models, which rely on similar patterns in
historical time series data to predict future values, generally suffer from
severe distribution shift and, compared to self-supervised methods, do not
fully leverage the sequence information. To address these issues, we
propose a novel framework named Ti-MAE, in which the input time series are
assumed to follow an integrated distribution. Specifically, Ti-MAE randomly masks
out embedded time series data and learns an autoencoder to reconstruct them at
the point level. Ti-MAE adopts masked modeling (rather than contrastive
learning) as its auxiliary task and bridges existing representation learning
with generative Transformer-based methods, reducing the gap between the
upstream task and downstream forecasting while fully utilizing the original
time series data. Experiments on several public
real-world datasets demonstrate that our masked autoencoding framework can
learn strong representations directly from the raw data, yielding better
performance in time series forecasting and classification tasks.
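
To make the point-level masked-reconstruction objective concrete, below is a
minimal PyTorch sketch: time points are embedded, a random subset is replaced
with a mask token, a Transformer encoder processes the sequence, and the raw
values at masked positions are reconstructed. All module choices, names,
shapes, and hyperparameters (e.g. the 75% mask ratio and the two-layer
encoder) are illustrative assumptions, not the authors' implementation;
positional encodings are omitted for brevity.

import torch
import torch.nn as nn

class MaskedTSAutoencoder(nn.Module):
    """Hypothetical sketch: embed time points, mask a random subset,
    encode with a Transformer, and reconstruct the raw values."""
    def __init__(self, n_vars, d_model=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_vars, d_model)       # point-wise embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_vars)        # reconstruct raw points

    def forward(self, x):                             # x: (batch, time, n_vars)
        z = self.embed(x)
        # Randomly mask a fraction of embedded time points per sample.
        mask = torch.rand(z.shape[:2], device=z.device) < self.mask_ratio
        z = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(z), z)
        recon = self.head(self.encoder(z))
        # Point-level MSE, computed only on the masked positions.
        loss = ((recon - x) ** 2)[mask].mean()
        return loss, recon

# Usage: one pre-training step on a random batch (shapes are assumptions).
model = MaskedTSAutoencoder(n_vars=7)
x = torch.randn(8, 96, 7)                             # (batch, length, variables)
loss, _ = model(x)
loss.backward()

Computing the loss only over masked positions mirrors standard masked
autoencoding practice, in which visible points serve as context and the model
is scored solely on how well it recovers what it could not see.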