A Knowledge Distillation Framework For Enhancing Ear-EEG Based Sleep Staging With Scalp-EEG Data
Sleep plays a crucial role in human well-being. Traditional sleep studies
using polysomnography are associated with discomfort and often reduced sleep
quality caused by the acquisition setup. Previous works have focused on
developing less obtrusive methods for conducting high-quality sleep studies,
and ear-EEG is among the popular alternatives. However, the performance of
ear-EEG based sleep staging is still inferior to that of scalp-EEG based
sleep staging. To address this performance gap, we propose a cross-modal
knowledge distillation strategy, which is a domain adaptation approach. Our
experiments and analysis validate the effectiveness of the proposed approach
with existing architectures, where it improves the accuracy of ear-EEG based
sleep staging by 3.46% and Cohen's kappa coefficient by a margin of 0.038.

Comment: Code available at: https://github.com/Mithunjha/EarEEG_KnowledgeDistillatio
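The abstract does not detail the distillation objective, so the following is a minimal sketch of standard cross-modal knowledge distillation in PyTorch, assuming a pretrained scalp-EEG teacher and an ear-EEG student trained on time-aligned epochs. The function name, temperature T, and weighting alpha are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
# Minimal sketch of cross-modal knowledge distillation: a scalp-EEG teacher
# provides soft targets for an ear-EEG student. Names and hyperparameters
# (T, alpha) are illustrative assumptions, not taken from the paper's code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend hard-label cross-entropy with soft-label KL divergence."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 so its gradients keep a magnitude
    # comparable to the cross-entropy term (Hinton et al., 2015).
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# One training step: the teacher sees scalp-EEG, the student sees the
# time-aligned ear-EEG recording of the same epoch.
# with torch.no_grad():
#     teacher_logits = teacher(scalp_eeg)
# loss = distillation_loss(student(ear_eeg), teacher_logits, stage_labels)
```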
Towards Interpretable Sleep Stage Classification Using Cross-Modal Transformers
Accurate sleep stage classification is important for sleep health
assessment. In recent years, several machine-learning based sleep staging
algorithms have been developed, and in particular, deep-learning based
algorithms have achieved performance on par with human annotation. Despite
this improved performance, most deep-learning based algorithms exhibit
black-box behavior, which has restricted their use in clinical settings.
Here, we propose a cross-modal transformer, which is a transformer-based method
for sleep stage classification. The proposed cross-modal transformer consists
of a novel cross-modal transformer encoder architecture along with a
multi-scale one-dimensional convolutional neural network for automatic
representation learning. Our method outperforms the state-of-the-art methods
and eliminates the black-box behavior of deep-learning models by utilizing the
interpretability aspect of the attention modules. Furthermore, our method
provides considerable reductions in the number of parameters and training time
compared to the state-of-the-art methods. Our code is available at
https://github.com/Jathurshan0330/Cross-Modal-Transformer.

Comment: 11 pages, 7 figures, 6 tables
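The abstract names two components: a multi-scale one-dimensional CNN for representation learning and a cross-modal transformer encoder whose attention weights can be inspected for interpretation. The sketch below renders both ideas in PyTorch under assumed shapes, kernel sizes, and head counts; it is not the paper's actual architecture, which is available in the linked repository.

```python
# Sketch of the two ingredients named in the abstract: a multi-scale 1-D CNN
# and a cross-modal attention block in which one modality queries another.
# Kernel sizes, dimensions, and head counts are assumptions for illustration.
import torch
import torch.nn as nn

class MultiScaleConv1d(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):  # x: (batch, channels, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)

class CrossModalBlock(nn.Module):
    """Tokens of modality A attend to modality B; weights are inspectable."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, a, b):  # a, b: (batch, tokens, dim)
        attended, weights = self.attn(query=a, key=b, value=b)
        # Returning the attention map is what enables the kind of
        # interpretability analysis the abstract refers to.
        return self.norm(a + attended), weights
```

For instance, one modality's token sequence (e.g. EEG) can query another's (e.g. EOG), and the returned attention map indicates which segments of the second modality influenced each staging decision; the specific modality pairing here is an assumption.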
Improving the attenuation of moving interfering objects in videos using shifted-velocity filtering
Three-dimensional space-time velocity filters may be used to enhance dynamic passband objects of interest in videos while attenuating moving interfering objects based on their velocities. In this paper, we show that the attenuation of interfering stopband objects may be significantly improved using recently proposed shifted-velocity filters. It is shown that an improvement of approximately 20 dB in signal-to-interference ratio may be achieved for stopband-to-passband velocity differences of only 1 pixel/frame. More importantly, this improvement is achieved without increasing the computational complexity.
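As a concrete illustration of the underlying model (though not of the paper's shifted-velocity refinement), the sketch below builds a 3-D frequency-domain stopband mask in NumPy. It relies on the standard fact that an object translating at (vx, vy) pixels/frame has its spectrum concentrated on the plane f_t + vx f_x + vy f_y = 0; the Gaussian notch and its width sigma are illustrative assumptions.

```python
# Sketch of a basic 3-D frequency-domain velocity filter: attenuate spectral
# energy near the plane f_t + vx*f_x + vy*f_y = 0, on which a uniformly
# translating object's spectrum lies. The notch shape and width are
# illustrative assumptions; the paper's shifted-velocity design refines this.
import numpy as np

def velocity_stopband_mask(shape, vx, vy, sigma=0.02):
    """Gaussian notch around the stopband-velocity plane."""
    nt, ny, nx = shape
    ft = np.fft.fftfreq(nt)[:, None, None]  # temporal frequency
    fy = np.fft.fftfreq(ny)[None, :, None]  # vertical spatial frequency
    fx = np.fft.fftfreq(nx)[None, None, :]  # horizontal spatial frequency
    offset = ft + vx * fx + vy * fy         # signed offset from the plane
    return 1.0 - np.exp(-(offset ** 2) / (2.0 * sigma ** 2))

# Apply to a video volume of shape (frames, height, width); an interfering
# object moving at 1 pixel/frame horizontally would use vx=1.0, vy=0.0.
# V = np.fft.fftn(video)
# filtered = np.real(np.fft.ifftn(V * velocity_stopband_mask(video.shape, 1.0, 0.0)))
```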