Ti-MAE: Self-Supervised Masked Time Series Autoencoders
Multivariate Time Series forecasting has been an increasingly popular topic
in various applications and scenarios. Recently, contrastive learning and
Transformer-based models have achieved good performance in many long-term
series forecasting tasks. However, several issues remain in existing
methods. First, the training paradigm of contrastive learning is
inconsistent with the downstream prediction task, leading to inaccurate
predictions. Second, existing Transformer-based models, which resort to
similar patterns in historical time series data to predict future values,
generally suffer from severe distribution shift and do not fully leverage
the sequence information compared with self-supervised methods. To address
these issues, we propose a novel framework named Ti-MAE, in which the input
time series are assumed to follow an integrated distribution. Concretely,
Ti-MAE randomly masks out embedded time series data and learns an
autoencoder to reconstruct them at the point level. Ti-MAE adopts mask
modeling (rather than contrastive learning)
as the auxiliary task and bridges the gap between existing representation
learning and generative Transformer-based methods, reducing the mismatch
between upstream and downstream forecasting tasks while preserving
the utilization of original time series data. Experiments on several public
real-world datasets demonstrate that our framework of masked autoencoding could
learn strong representations directly from the raw data, yielding better
performance in time series forecasting and classification tasks.
Comment: 20 pages, 7 figures
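The masking-and-reconstruction idea from the abstract can be illustrated with a minimal sketch. All module names, layer sizes, and the mask ratio below are illustrative assumptions rather than the paper's actual architecture, and for simplicity the encoder here processes mask tokens in place instead of dropping masked positions:

```python
import torch
import torch.nn as nn

class TinyMaskedTSAutoencoder(nn.Module):
    """Minimal masked-autoencoding sketch for multivariate time series.

    Hypothetical simplification of the Ti-MAE idea: embed each time step,
    randomly mask a fraction of steps, encode with a Transformer, and
    reconstruct the original values at the point level.
    """

    def __init__(self, n_vars=7, d_model=64, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(n_vars, d_model)        # point-wise embedding
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_vars)         # point-level reconstruction

    def forward(self, x):                  # x: (batch, seq_len, n_vars)
        z = self.embed(x)
        # randomly mask out a fraction of the embedded time steps
        mask = torch.rand(x.shape[:2], device=x.device) < self.mask_ratio
        z[mask] = self.mask_token          # replace masked steps with a learned token
        recon = self.head(self.encoder(z))
        # reconstruction loss is computed on the masked positions only
        return ((recon - x) ** 2)[mask].mean()

x = torch.randn(8, 96, 7)                 # a dummy batch of series
loss = TinyMaskedTSAutoencoder()(x)
loss.backward()
```

Because the same reconstruction objective is used upstream, a forecasting head can reuse the learned encoder directly, which is the consistency between pre-training and prediction that the abstract emphasizes.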
Exploiting Counter-Examples for Active Learning with Partial Labels
This paper studies a new problem, \emph{active learning with partial labels}
(ALPL). In this setting, an oracle annotates the query samples with partial
labels, relaxing the oracle from the demanding accurate labeling process. To
address ALPL, we first build an intuitive baseline that can be seamlessly
incorporated into existing AL frameworks. Though effective, this baseline
is still susceptible to \emph{overfitting} and falls short of selecting
representative partial-label-based samples during the query process. Drawing
inspiration from human inference in cognitive science, where accurate
inferences can be explicitly derived from \emph{counter-examples} (CEs), our
objective is to leverage this human-like learning pattern to tackle the
\emph{overfitting} while enhancing the process of selecting representative
samples in ALPL. Specifically, we construct CEs by reversing the partial labels
for each instance, and then we propose a simple but effective WorseNet to
directly learn from this complementary pattern. By leveraging the distribution
gap between WorseNet and the predictor, this adversarial evaluation scheme
enhances both the predictor itself and the sample selection process,
allowing the predictor to capture more accurate patterns in
the data. Experimental results on five real-world datasets and four benchmark
datasets show that our proposed method achieves comprehensive improvements over
ten representative AL frameworks, highlighting the superiority of WorseNet. The
source code will be available at \url{https://github.com/Ferenas/APLL}.
Comment: 29 pages, under review
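The counter-example construction described in the abstract, reversing the partial (candidate) labels of each instance, can be sketched as follows. The complementary loss and all names below are illustrative assumptions, not the paper's actual WorseNet objective:

```python
import torch

def reverse_partial_labels(partial):
    """Construct counter-examples (CEs) by reversing a partial label set.

    `partial` is a (batch, num_classes) 0/1 mask of candidate labels; the
    returned mask marks the complementary classes, i.e. the labels the
    sample definitely does NOT have.
    """
    return 1 - partial

def worsenet_loss(logits, partial):
    """Toy complementary objective (hypothetical): push WorseNet's
    probability mass onto the reversed label set, so it learns the
    complementary pattern of 'which classes the sample is not'."""
    ce_mask = reverse_partial_labels(partial).float()
    log_probs = torch.log_softmax(logits, dim=1)
    # average log-likelihood over each sample's complementary labels
    return -(log_probs * ce_mask).sum(1).div(ce_mask.sum(1).clamp(min=1)).mean()

partial = torch.tensor([[1, 0, 1, 0], [0, 1, 0, 0]])   # candidate-label masks
logits = torch.randn(2, 4, requires_grad=True)          # WorseNet outputs
worsenet_loss(logits, partial).backward()
```

Under this setup, the gap between the predictor's distribution and WorseNet's complementary distribution could serve as the adversarial evaluation signal the abstract describes for scoring query candidates.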