Self-supervised Learning for ECG-based Emotion Recognition
We present an electrocardiogram (ECG)-based emotion recognition system using
self-supervised learning. Our proposed architecture consists of two main
networks, a signal transformation recognition network and an emotion
recognition network. First, unlabelled data are used to successfully train the
former network to detect specific pre-determined signal transformations in the
self-supervised learning step. Next, the weights of the convolutional layers of
this network are transferred to the emotion recognition network, and two dense
layers are trained in order to classify arousal and valence scores. We show
that our self-supervised approach helps the model learn the ECG feature
manifold required for emotion recognition, performing as well as or better than the
fully-supervised version of the model. Our proposed method outperforms the
state-of-the-art in ECG-based emotion recognition with two publicly available
datasets, SWELL and AMIGOS. Further analysis highlights the advantage of our
self-supervised approach in requiring significantly less data to achieve
acceptable results.
Comment: Accepted at the 45th IEEE International Conference on Acoustics, Speech,
and Signal Processing (ICASSP).
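The pretext step above can be sketched as follows: unlabelled ECG windows are expanded into (transformed signal, transformation id) pairs, which the signal transformation recognition network then learns to classify. This is a minimal NumPy illustration; the transformation set shown here is an assumption for demonstration and may differ from the paper's exact pre-determined set.

```python
import numpy as np

def make_pretext_dataset(signals, rng=None):
    """Build a self-labelled pretext dataset from unlabelled ECG windows.

    Each window is passed through a set of pre-determined transformations;
    the transformation index serves as the pseudo-label. The transformations
    below are illustrative stand-ins, not the paper's exact set.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    transforms = [
        lambda x: x,                                 # 0: identity (no change)
        lambda x: x + rng.normal(0, 0.05, x.shape),  # 1: additive noise
        lambda x: 1.5 * x,                           # 2: amplitude scaling
        lambda x: -x,                                # 3: negation
        lambda x: x[::-1],                           # 4: temporal inversion
    ]
    xs, ys = [], []
    for sig in signals:
        for label, t in enumerate(transforms):
            xs.append(t(sig))
            ys.append(label)
    return np.stack(xs), np.array(ys)

# Four synthetic "unlabelled" windows of 256 samples each.
windows = [np.sin(np.linspace(0, 8 * np.pi, 256)) for _ in range(4)]
X, y = make_pretext_dataset(windows)
print(X.shape, y.shape)  # (20, 256) (20,)
```

A network trained on (X, y) never needs human annotation; its convolutional weights are what get transferred to the emotion recognition network.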
Self-Supervised Representation Learning for Detection of ACL Tear Injury in Knee MR Videos
The success of deep learning based models for computer vision applications
requires large scale human annotated data which are often expensive to
generate. Self-supervised learning, a subset of unsupervised learning, handles
this problem by learning meaningful features from unlabeled image or video
data. In this paper, we propose a self-supervised learning approach to learn
transferable features from MR video clips by enforcing the model to learn
anatomical features. The pretext task models are designed to predict the
correct ordering of the jumbled image patches that the MR video frames are
divided into. To the best of our knowledge, none of the existing supervised
learning models performing injury classification from MR video provide any
explanation for their decisions, which makes our work the first of its kind on
MR video data. Experiments on the pretext task show that
this proposed approach enables the model to learn spatial context invariant
features which help for reliable and explainable performance in downstream
tasks like classification of Anterior Cruciate Ligament tear injury from knee
MRI. The efficiency of the novel Convolutional Neural Network proposed in this
paper is reflected in the experimental results obtained in the downstream task.
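The patch-ordering pretext task described above can be sketched as follows: each frame is cut into a grid of patches, the patches are rearranged by a fixed permutation, and the permutation index becomes the pretext label the model must predict. The 2x2 grid and the permutation set here are illustrative assumptions, not the paper's exact configuration.

```python
import itertools
import numpy as np

def jumble_patches(frame, grid=2, perm_id=0):
    """Split a square frame into grid*grid patches and reorder them.

    The pretext label is the index of the permutation applied, so the
    model can be trained to recover the correct patch ordering.
    """
    perms = list(itertools.permutations(range(grid * grid)))
    perm = perms[perm_id]
    h, w = frame.shape[0] // grid, frame.shape[1] // grid
    patches = [frame[r * h:(r + 1) * h, c * w:(c + 1) * w]
               for r in range(grid) for c in range(grid)]
    shuffled = [patches[i] for i in perm]
    rows = [np.hstack(shuffled[r * grid:(r + 1) * grid]) for r in range(grid)]
    return np.vstack(rows), perm_id

frame = np.arange(16).reshape(4, 4)          # toy 4x4 "MR frame"
jumbled, label = jumble_patches(frame, grid=2, perm_id=1)
restored, _ = jumble_patches(frame, grid=2, perm_id=0)  # identity permutation
```

With a 2x2 grid there are 4! = 24 possible orderings, so the pretext task is a 24-way classification problem per patch arrangement.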
SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning
A steady momentum of innovations and breakthroughs has convincingly pushed
the limits of unsupervised image representation learning. Compared to static 2D
images, video has one more dimension (time). The inherent supervision existing
in such sequential structure offers a fertile ground for building unsupervised
learning models. In this paper, we compose a trilogy of exploring the basic and
generic supervision in the sequence from spatial, spatiotemporal and sequential
perspectives. We materialize the supervisory signals through determining
whether a pair of samples is from one frame or from one video, and whether a
triplet of samples is in the correct temporal order. We uniquely regard the
signals as the foundation in contrastive learning and derive a particular form
named Sequence Contrastive Learning (SeCo). SeCo shows superior results under
the linear protocol on action recognition (Kinetics), untrimmed activity
recognition (ActivityNet) and object tracking (OTB-100). More remarkably, SeCo
demonstrates considerable improvements over recent unsupervised pre-training
techniques, surpassing fully-supervised ImageNet pre-training in accuracy by
2.96% and 6.47% on the action recognition task with UCF101 and HMDB51,
respectively. Source code is available at
\url{https://github.com/YihengZhang-CV/SeCo-Sequence-Contrastive-Learning}.
Comment: AAAI 2021.
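The three supervisory signals described in the abstract can be sketched as label-generation rules over a single video: two augmented views of one frame form an intra-frame positive pair, two different frames of the same video form an inter-frame positive pair, and a frame triplet is labelled by whether it appears in the correct temporal order. The function and variable names below are illustrative assumptions, not the SeCo codebase's API.

```python
import numpy as np

def seco_signals(video, i, j, k):
    """Toy construction of SeCo-style supervisory signals for one video.

    video: array of per-frame features, shape (num_frames, dim).
    i, j, k: frame indices used to form the pairs and the triplet.
    Returns an intra-frame pair, an inter-frame pair, and a temporal-order
    label (1 if i < j < k, else 0). Augmentations are stand-ins.
    """
    crop_a = video[i] + 0.01   # stand-in for one random augmentation
    crop_b = video[i] - 0.01   # stand-in for a second augmentation
    intra_pair = (crop_a, crop_b, 1)      # positive: same frame
    inter_pair = (video[i], video[j], 1)  # positive: same video
    order_label = int(i < j < k)          # 1 iff correct temporal order
    return intra_pair, inter_pair, order_label

video = np.random.default_rng(0).normal(size=(8, 16))  # 8 frames, 16-dim feats
_, _, ordered = seco_signals(video, 1, 3, 6)    # increasing indices
_, _, scrambled = seco_signals(video, 5, 2, 6)  # out of order
```

In the actual method these pseudo-labels drive a contrastive objective (pairs pulled together or pushed apart in embedding space) plus an order-prediction head, all without any human annotation.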