Computer Vision Self-supervised Learning Methods on Time Series

Abstract

Self-supervised learning (SSL) has had great success in both computer vision and natural language processing. These approaches often rely on cleverly crafted loss functions and training setups to avoid feature collapse. In this study, we evaluate the effectiveness of mainstream SSL frameworks from computer vision, along with several SSL frameworks designed for time series, on the UCR, UEA, and PTB-XL datasets, and we show that computer vision SSL frameworks can be effective for time series. In addition, we propose a new method that improves on the recently proposed VICReg method. Our method improves the covariance term proposed in VICReg, and we further augment the head of the architecture with an IterNorm layer that accelerates the convergence of the model.
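For context, the covariance term in the original VICReg loss penalizes the squared off-diagonal entries of the embedding covariance matrix, discouraging redundancy between embedding dimensions. A minimal PyTorch sketch of that original term (not the improved variant proposed in this work) might look like the following; the function name and shapes are illustrative.

```python
import torch

def vicreg_covariance_loss(z: torch.Tensor) -> torch.Tensor:
    """Sum of squared off-diagonal covariance entries, scaled by embedding dim.

    z: batch of embeddings with shape (N, D).
    """
    n, d = z.shape
    z = z - z.mean(dim=0)                          # center each dimension
    cov = (z.T @ z) / (n - 1)                      # (D, D) sample covariance
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return off_diag.pow(2).sum() / d
```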
