Human Activity Recognition (HAR) systems have been extensively studied by the
vision and ubiquitous computing communities due to their practical applications
in daily life, such as smart homes, surveillance, and health monitoring.
Typically, such systems are developed in a supervised manner and require access
to large quantities of annotated data.
However, the high cost and difficulty of obtaining good-quality annotations
have made self-supervised methods an attractive alternative, and contrastive
learning is one such method.
A major component of successful contrastive learning, however, is the
selection of good positive and negative samples.
Although positive samples are directly obtainable, sampling good negative
samples remains a challenge.
As human activities can be recorded by several modalities, such as cameras and
IMU sensors, we propose a hard negative sampling method for multimodal HAR,
together with a hard negative sampling loss for skeleton and IMU data pairs.
We exploit hard negatives, i.e., samples whose labels differ from the anchor's
but that are projected close to it in the latent space, using an adjustable
concentration parameter.
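As a minimal sketch of this idea (assuming a PyTorch implementation with a
cosine-similarity InfoNCE objective; the function name hard_negative_infonce
and the parameters beta and temperature are illustrative rather than the
paper's exact formulation), negatives can be re-weighted according to their
similarity to the anchor:

    import torch
    import torch.nn.functional as F

    def hard_negative_infonce(anchor, positive, negatives, beta=1.0, temperature=0.1):
        """Sketch of an InfoNCE-style loss in which negatives are re-weighted
        by a concentration parameter beta: negatives that lie close to the
        anchor in the latent space receive larger weights.

        anchor:    (D,) embedding of the anchor sample (e.g. skeleton)
        positive:  (D,) embedding of the paired sample (e.g. IMU)
        negatives: (N, D) embeddings of candidate negative samples
        """
        anchor = F.normalize(anchor, dim=-1)
        positive = F.normalize(positive, dim=-1)
        negatives = F.normalize(negatives, dim=-1)

        pos_sim = anchor @ positive / temperature      # scalar similarity
        neg_sim = negatives @ anchor / temperature     # (N,) similarities

        # Concentration on hard negatives: weights grow with similarity to the anchor.
        weights = torch.softmax(beta * neg_sim, dim=0)  # (N,), sums to 1

        # Weighted negative term replaces the uniform average over negatives.
        neg_term = (weights * torch.exp(neg_sim)).sum() * negatives.shape[0]
        return -pos_sim + torch.log(torch.exp(pos_sim) + neg_term)

With beta = 0 the weights are uniform and the sketch reduces to a standard
InfoNCE loss; larger values of beta concentrate the loss on the negatives that
lie closest to the anchor.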
Through extensive experiments on two benchmark datasets, UTD-MHAD and MMAct,
we demonstrate the robustness of our approach in learning strong feature
representations for HAR tasks, including in the limited-data setting.
We further show that our model outperforms all other state-of-the-art methods
on the UTD-MHAD dataset, and self-supervised methods on the MMAct cross-session
setting, even when unimodal data are used during downstream activity
recognition.