SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification
Automatic classification of epileptic seizure types in electroencephalograms
(EEGs) data can enable more precise diagnosis and efficient management of the
disease. This task is challenging due to factors such as low signal-to-noise
ratios, signal artefacts, high variance in seizure semiology among epileptic
patients, and limited availability of clinical data. To overcome these
challenges, in this paper, we present SeizureNet, a deep learning framework
which learns multi-spectral feature embeddings using an ensemble architecture
for cross-patient seizure type classification. We used the recently released
TUH EEG Seizure Corpus (V1.4.0 and V1.5.2) to evaluate the performance of
SeizureNet. Experiments show that SeizureNet can reach a weighted F1 score of
up to 0.94 for seizure-wise cross validation and 0.59 for patient-wise cross
validation for scalp EEG based multi-class seizure type classification. We also
show that the high-level feature embeddings learnt by SeizureNet considerably
improve the accuracy of smaller networks through knowledge distillation for
memory-constrained applications.
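The abstract above mentions transferring the ensemble's feature embeddings to smaller networks via knowledge distillation. As a minimal, generic sketch of that technique (not SeizureNet's actual training code), the standard distillation loss compares temperature-softened teacher and student output distributions:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # KL divergence between softened teacher (target) and student
    # distributions, scaled by T^2 as is conventional in distillation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return (T ** 2) * kl.mean()
```

In practice this term is mixed with the ordinary cross-entropy on hard labels; the temperature T here is an illustrative choice, not a value taken from the paper.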
Dynamical Component Analysis (DyCA) and its application on epileptic EEG
Dynamical Component Analysis (DyCA) is a recently proposed method for finding
projection vectors that reduce the dimensionality of multivariate deterministic
datasets. It is based on the solution of a generalized eigenvalue problem and
is therefore straightforward to implement. DyCA is introduced and applied to EEG
data of epileptic seizures. The obtained eigenvectors are used to project the
signal and the corresponding trajectories in phase space are compared with PCA
and ICA-projections. The eigenvalues of DyCA are utilized for seizure detection
and the obtained results in terms of specificity, false discovery rate and miss
rate are compared to other seizure detection algorithms.
Comment: 5 pages, 4 figures, accepted for the IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP) 201
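The generalized eigenvalue problem at the core of DyCA can be sketched in a few lines. The following is a rough illustration, assuming the commonly stated formulation with correlation matrices of the signal and its time derivative (C1 C0^{-1} C1^T u = lambda C2 u); the data, mixing matrix, and reduction of the generalized problem to an ordinary one are illustrative choices, not the paper's implementation:

```python
import numpy as np

def dyca(X, dt, n_components=2):
    """Rough DyCA-style sketch; X has shape (channels, samples)."""
    dX = np.gradient(X, dt, axis=1)        # finite-difference time derivative
    n = X.shape[1]
    C0 = X @ X.T / n                       # signal correlation   <x x^T>
    C1 = dX @ X.T / n                      # cross correlation    <x' x^T>
    C2 = dX @ dX.T / n                     # derivative correlation <x' x'^T>
    # Generalized eigenvalue problem C1 C0^{-1} C1^T u = lambda C2 u,
    # reduced here to an ordinary one by multiplying with C2^{-1}.
    M = C1 @ np.linalg.solve(C0, C1.T)
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(C2, M))
    order = np.argsort(-eigvals.real)
    return eigvals.real[order], eigvecs[:, order[:n_components]].real

# Toy example: two noisy sinusoids mixed into four channels.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
S = np.vstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
X = rng.standard_normal((4, 2)) @ S + 0.01 * rng.standard_normal((4, t.size))
vals, U = dyca(X, dt=0.01)
```

The eigenvalues (near 1 for well-resolved deterministic components) are what the abstract describes using for seizure detection, while the eigenvectors define the projection compared against PCA and ICA.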
Cross-Modal Data Programming Enables Rapid Medical Machine Learning
Labeling training datasets has become a key barrier to building medical
machine learning models. One strategy is to generate training labels
programmatically, for example by applying natural language processing pipelines
to text reports associated with imaging studies. We propose cross-modal data
programming, which generalizes this intuitive strategy in a
theoretically-grounded way that enables simpler, clinician-driven input,
reduces required labeling time, and improves with additional unlabeled data. In
this approach, clinicians generate training labels for models defined over a
target modality (e.g. images or time series) by writing rules over an auxiliary
modality (e.g. text reports). The resulting technical challenge consists of
estimating the accuracies and correlations of these rules; we extend a recent
unsupervised generative modeling technique to handle this cross-modal setting
in a provably consistent way. Across four applications in radiography, computed
tomography, and electroencephalography, and using only several hours of
clinician time, our approach matches or exceeds the efficacy of
physician-months of hand-labeling with statistical significance, demonstrating
a fundamentally faster and more flexible way of building machine learning
models in medicine.
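The core idea of writing rules over an auxiliary modality can be sketched with hypothetical labeling functions over text reports whose votes are aggregated into weak labels for the paired images or time series. The report phrases and labeling functions below are invented for illustration, and simple majority vote stands in for the paper's generative model of rule accuracies and correlations:

```python
from collections import Counter

ABSTAIN = -1  # a labeling function may decline to vote

# Hypothetical clinician-written rules over EEG text reports (the auxiliary
# modality); each votes 1 (abnormal), 0 (normal), or abstains.
def lf_mentions_seizure(report):
    return 1 if "seizure" in report.lower() else ABSTAIN

def lf_mentions_spikes(report):
    return 1 if "spike" in report.lower() else ABSTAIN

def lf_explicitly_normal(report):
    return 0 if "no epileptiform activity" in report.lower() else ABSTAIN

LFS = [lf_mentions_seizure, lf_mentions_spikes, lf_explicitly_normal]

def weak_label(report):
    # Majority vote over non-abstaining rules; the paper instead fits an
    # unsupervised generative model to estimate rule accuracies/correlations.
    votes = [v for v in (lf(report) for lf in LFS) if v != ABSTAIN]
    if not votes:
        return ABSTAIN
    return Counter(votes).most_common(1)[0][0]
```

The resulting weak labels would then train a discriminative model on the target modality (e.g. the EEG signal itself), which is how a few hours of rule-writing can substitute for months of hand-labeling.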