Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals
An electroencephalography (EEG) based Brain Computer Interface (BCI) enables
people to communicate with the outside world by interpreting the EEG signals of
their brains to interact with devices such as wheelchairs and intelligent
robots. More specifically, motor imagery EEG (MI-EEG), which reflects a
subject's active intent, is attracting increasing attention for a variety of BCI
applications. Accurate classification of MI-EEG signals, while essential for
effective operation of BCI systems, is challenging due to the significant noise
inherent in the signals and the lack of informative correlation between the
signals and brain activities. In this paper, we propose a novel deep neural
network based learning framework that affords perceptive insights into the
relationship between the MI-EEG data and brain activities. We design a joint
convolutional recurrent neural network that simultaneously learns robust
high-level feature representations through low-dimensional dense embeddings from
raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various
artifacts such as background activities. The proposed approach has been
evaluated extensively on a large-scale public MI-EEG dataset and a limited but
easy-to-deploy dataset collected in our lab. The results show that our approach
outperforms a series of baselines and competitive state-of-the-art
methods, yielding a classification accuracy of 95.53%. The applicability of our
proposed approach is further demonstrated with a practical BCI system for
typing.
Comment: 10 page
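The joint convolutional-recurrent idea in the abstract above can be sketched in a few lines: a 1-D convolution extracts local features from raw multi-channel EEG, and a recurrent pass then compresses the feature sequence into a dense embedding. This is an illustrative numpy toy, not the paper's actual architecture; all sizes (64 electrodes, 400 samples, 8 filters, 16-d embedding) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """x: (channels, time), w: (filters, channels, k), b: (filters,)."""
    f, c, k = w.shape
    t_out = x.shape[1] - k + 1
    out = np.empty((f, t_out))
    for i in range(t_out):
        # Correlate every filter with the current window, summing over
        # channels and kernel positions.
        out[:, i] = np.tensordot(w, x[:, i:i + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(out, 0.0)  # ReLU

def rnn_embed(seq, wx, wh, bh):
    """seq: (features, time) -> final hidden state as a dense embedding."""
    h = np.zeros(wh.shape[0])
    for t in range(seq.shape[1]):
        h = np.tanh(wx @ seq[:, t] + wh @ h + bh)
    return h

eeg = rng.standard_normal((64, 400))        # stand-in raw MI-EEG trial
w = rng.standard_normal((8, 64, 5)) * 0.01  # 8 conv filters, kernel width 5
feat = conv1d(eeg, w, np.zeros(8))          # (8, 396) feature sequence
emb = rnn_embed(feat,
                rng.standard_normal((16, 8)) * 0.1,
                rng.standard_normal((16, 16)) * 0.1,
                np.zeros(16))
print(feat.shape, emb.shape)                # (8, 396) (16,)
```

A classifier head (and the denoising autoencoder layer the abstract mentions) would sit on top of the embedding; they are omitted here to keep the sketch minimal.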
Understanding and Improving Recurrent Networks for Human Activity Recognition by Continuous Attention
Deep neural networks, including recurrent networks, have been successfully
applied to human activity recognition. Unfortunately, the final representation
learned by recurrent networks might encode some noise (irrelevant signal
components, unimportant sensor modalities, etc.). Besides, it is difficult to
interpret the recurrent networks to gain insight into the models' behavior. To
address these issues, we propose two attention models for human activity
recognition: temporal attention and sensor attention. These two mechanisms
adaptively focus on important signals and sensor modalities. To further improve
the understandability and mean F1 score, we add continuity constraints,
considering that continuous sensor signals are more robust than discrete ones.
We evaluate the approaches on three datasets and obtain state-of-the-art
results. Furthermore, qualitative analysis shows that the attention learned by
the models agrees well with human intuition.
Comment: 8 pages. Published in the International Symposium on Wearable
Computers (ISWC) 201
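The temporal-attention mechanism with a continuity constraint described above can be sketched as follows: softmax weights over per-timestep recurrent features, plus a penalty on adjacent-weight differences so the attention varies smoothly over time. The scoring vector `v` and all sizes are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_attention(h, v):
    """h: (T, d) per-timestep hidden states; v: (d,) scoring vector.
    Returns the attended summary (d,) and attention weights (T,)."""
    scores = h @ v
    scores -= scores.max()   # subtract max for numerical stability
    a = np.exp(scores)
    a /= a.sum()             # softmax over the time axis
    return a @ h, a

def continuity_penalty(a):
    """Smoothness regularizer: sum of squared adjacent differences."""
    return float(np.sum(np.diff(a) ** 2))

h = rng.standard_normal((50, 32))   # 50 timesteps of 32-d features
context, a = temporal_attention(h, rng.standard_normal(32))
print(context.shape, round(a.sum(), 6))
```

In training, `continuity_penalty(a)` would be added to the classification loss with a small weight; sensor attention works the same way with the softmax taken over modalities instead of time.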
Atrial signal extraction in atrial fibrillation ECGs exploiting spatial constraints
The accuracy in the extraction of the atrial activity (AA) from electrocardiogram (ECG) signals recorded during atrial fibrillation (AF) episodes plays an important role in the analysis and characterization of atrial arrhythmias. The present contribution puts forward a new method for automatic AA signal extraction based on a blind source separation (BSS) formulation that exploits spatial information about the AA during the T-Q segments. This prior knowledge is used to optimize the spectral content of the AA signal estimated by BSS on the full ECG recording. The comparative performance of the method is evaluated on real data recorded from AF sufferers. The AA extraction quality of the proposed technique is comparable to that of previous algorithms, but is achieved at a reduced cost and without manual selection of parameters.
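The spatial-constraint idea can be illustrated with a toy computation: T-Q segments are free of ventricular activity, so their covariance can guide a spatial filter that emphasizes atrial sources in the full recording. The sketch below solves a generalized eigenvalue problem (maximizing T-Q-segment power relative to total power); it is an illustration of the prior-knowledge idea under assumed data, not the paper's BSS algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

n_ch, n_samp = 8, 2000
x = rng.standard_normal((n_ch, n_samp))    # stand-in multichannel ECG
tq_mask = np.zeros(n_samp, dtype=bool)
tq_mask[::5] = True                        # stand-in T-Q sample index

c_tq = np.cov(x[:, tq_mask])               # covariance within T-Q segments
c_all = np.cov(x)                          # covariance of the full record

# Leading generalized eigenvector of c_all^{-1} c_tq maximizes the ratio
# of T-Q-segment power to total power for the filtered signal w @ x.
evals, evecs = np.linalg.eig(np.linalg.solve(c_all, c_tq))
w = np.real(evecs[:, np.argmax(np.real(evals))])   # spatial filter
aa_estimate = w @ x                                # extracted 1-D AA signal
print(aa_estimate.shape)                           # (2000,)
```

On real data, `tq_mask` would come from QRS detection, and the paper's method additionally uses the T-Q information to shape the spectral content of the BSS estimate rather than a single linear filter.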
Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification
There are a number of studies about extraction of bottleneck (BN) features
from deep neural networks (DNNs) trained to discriminate speakers, pass-phrases
and triphone states for improving the performance of text-dependent speaker
verification (TD-SV). However, only moderate success has been achieved. A recent
study [1] presented a time contrastive learning (TCL) concept to explore the
non-stationarity of brain signals for classification of brain states. Speech
signals have a similar non-stationarity property, and TCL further has the
advantage of having no need for labeled data. We therefore present a TCL based
BN feature extraction method. The method uniformly partitions each speech
utterance in a training dataset into a predefined number of multi-frame
segments. Each segment in an utterance corresponds to one class, and class
labels are shared across utterances. DNNs are then trained to discriminate all
speech frames among the classes to exploit the temporal structure of speech. In
addition, we propose a segment-based unsupervised clustering algorithm to
re-assign class labels to the segments. TD-SV experiments were conducted on the
RedDots challenge database. The TCL-DNNs were trained using speech data of
fixed pass-phrases that were excluded from the TD-SV evaluation set, so the
learned features can be considered phrase-independent. We compare the
performance of the proposed TCL bottleneck (BN) feature with those of
short-time cepstral features and BN features extracted from DNNs discriminating
speakers, pass-phrases, speaker+pass-phrase, as well as monophones whose labels
and boundaries are generated by three different automatic speech recognition
(ASR) systems. Experimental results show that the proposed TCL-BN outperforms
cepstral features and speaker+pass-phrase discriminant BN features, and its
performance is on par with those of ASR-derived BN features.
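The TCL labeling scheme described in the abstract can be sketched directly: each utterance is uniformly split into a fixed number of multi-frame segments, every frame in segment k receives class label k, and the labels are shared across utterances of different lengths. Frame counts below are illustrative.

```python
import numpy as np

def tcl_labels(n_frames, n_segments):
    """Return per-frame class labels for one utterance: uniform
    partition into n_segments classes, labels shared across utterances."""
    edges = np.linspace(0, n_frames, n_segments + 1).astype(int)
    labels = np.empty(n_frames, dtype=int)
    for k in range(n_segments):
        labels[edges[k]:edges[k + 1]] = k   # all frames in segment k -> class k
    return labels

y1 = tcl_labels(100, 10)   # utterance with 100 frames, 10 classes
y2 = tcl_labels(73, 10)    # shorter utterance reuses the same 10 classes
print(sorted(set(y1.tolist())), sorted(set(y2.tolist())))
```

A DNN trained to classify frames into these shared segment classes is then truncated at a bottleneck layer to yield the TCL-BN features; the segment-based clustering variant would re-assign these initial labels rather than keep the uniform ones.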