Medical time series data are indispensable in healthcare, providing critical
insights for disease diagnosis, treatment planning, and patient management. The
exponential growth in data complexity, driven by advanced sensor technologies,
has presented significant challenges for data labeling. Self-supervised learning
(SSL) has emerged as a transformative approach to address these challenges,
eliminating the need for extensive human annotation. In this study, we
introduce a novel framework for Medical Time Series Representation Learning,
known as MTS-LOF. MTS-LOF leverages the strengths of contrastive learning and
Masked Autoencoder (MAE) methods, offering a unique approach to representation
learning for medical time series data. By combining these techniques, MTS-LOF
produces more sophisticated, context-rich representations, enhancing the
potential of healthcare applications. Additionally, MTS-LOF employs a
multi-masking strategy to facilitate occlusion-invariant feature learning. This
approach allows the model to create multiple views of the data by masking
portions of it. By minimizing the discrepancy between the representations of
masked and fully visible patches, MTS-LOF learns to capture
rich contextual information within medical time series datasets. The results of
experiments conducted on diverse medical time series datasets demonstrate the
superiority of MTS-LOF over other methods. These findings hold promise for
significantly enhancing healthcare applications by improving representation
learning. Furthermore, our work delves into the integration of joint-embedding
SSL and MAE techniques, shedding light on the intricate interplay between
temporal and structural dependencies in healthcare data. This understanding is
crucial, as it allows us to grasp the complexities of healthcare data analysis.
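
To make the multi-masking strategy described above more concrete, the sketch
below illustrates occlusion-invariant representation alignment in PyTorch. It
is a minimal illustration under stated assumptions, not the MTS-LOF
implementation: the zero-masking scheme, the stop-gradient target, the
cosine-distance objective, and all names (MultiMaskAligner, MeanPoolEncoder,
num_views, mask_ratio) are illustrative choices rather than details taken from
the paper.

```python
# Illustrative sketch only: aligns several masked views of a time series
# with the fully visible view. Assumes a generic encoder mapping
# (batch, time, channels) -> (batch, dim); not the MTS-LOF architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiMaskAligner(nn.Module):
    """Minimize the discrepancy between masked views and the full view."""

    def __init__(self, encoder: nn.Module, num_views: int = 4, mask_ratio: float = 0.5):
        super().__init__()
        self.encoder = encoder        # maps (batch, time, channels) to (batch, dim)
        self.num_views = num_views    # number of masked views generated per sample
        self.mask_ratio = mask_ratio  # fraction of time steps occluded in each view

    def _mask(self, x: torch.Tensor) -> torch.Tensor:
        # Occlude a random subset of time steps by zeroing them out.
        batch, time, _ = x.shape
        keep = (torch.rand(batch, time, device=x.device) > self.mask_ratio).float()
        return x * keep.unsqueeze(-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The fully visible sequence provides the target representation
        # (detached so only the masked views receive gradients this step).
        with torch.no_grad():
            target = F.normalize(self.encoder(x), dim=-1)

        # Average the cosine distance between each masked view and the target.
        loss = x.new_zeros(())
        for _ in range(self.num_views):
            z = F.normalize(self.encoder(self._mask(x)), dim=-1)
            loss = loss + (2 - 2 * (z * target).sum(dim=-1)).mean()
        return loss / self.num_views


class MeanPoolEncoder(nn.Module):
    """Toy encoder: per-step projection followed by mean pooling over time."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 32))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).mean(dim=1)  # (batch, time, dim) -> (batch, dim)


# Example usage: 8 signals, 100 time steps, 3 channels.
model = MultiMaskAligner(MeanPoolEncoder())
loss = model(torch.randn(8, 100, 3))
```

In this hypothetical setup, driving the loss toward zero encourages the
encoder to produce the same representation whether or not parts of the signal
are occluded, which is the occlusion-invariance property the multi-masking
strategy targets.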