A Scalable Automated Diagnostic Feature Extraction System for EEGs
Researchers using electroencephalograms (EEGs) to diagnose clinical outcomes often run into computational bottlenecks. In particular, extracting complex, sometimes nonlinear, features from a large number of time series often requires substantial processing time. In this paper we describe a distributed system that leverages modern cloud-based technologies and tools, and demonstrate that it can support clinical research both effectively and efficiently. Specifically, we compare three types of clusters, showing their relative costs (in both time and money) for developing a distributed machine-learning pipeline that predicts gestation time from features extracted from these EEGs.
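The abstract does not specify the pipeline internals, but its core pattern, fanning per-recording feature extraction out across a pool of workers, can be sketched briefly. The feature set and the thread-pool stand-in for cloud cluster workers below are illustrative assumptions, not the paper's actual system:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def extract_features(signal):
    """A few simple per-recording features from one EEG time series."""
    return {
        "mean": float(np.mean(signal)),
        "std": float(np.std(signal)),
        "line_length": float(np.sum(np.abs(np.diff(signal)))),
    }

def extract_all(signals, max_workers=4):
    """Fan per-recording feature extraction out across a worker pool."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(extract_features, signals))

rng = np.random.default_rng(0)
signals = [rng.standard_normal(1024) for _ in range(8)]
features = extract_all(signals)
print(len(features), sorted(features[0]))
```

In a real cluster the worker pool would be replaced by distributed executors, but the map-over-recordings structure stays the same.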
Real-Time Non-Invasive Imaging and Detection of Spreading Depolarizations through EEG: An Ultra-Light Explainable Deep Learning Approach
A core aim of neurocritical care is to prevent secondary brain injury.
Spreading depolarizations (SDs) have been identified as an important
independent cause of secondary brain injury. SDs are usually detected using
invasive electrocorticography recorded at a high sampling frequency. Recent
pilot studies suggest that scalp-recorded electroencephalography (EEG) may be
useful for non-invasive SD detection. However, noise and attenuation of EEG
signals make this detection task extremely challenging. Previous methods focus
on detecting temporal power changes of EEG over a fixed high-density map of
scalp electrodes, which is not always clinically feasible. By feeding a
specialized spectrogram into the automatic SD detection model, this study is
the first to transform the SD identification problem from a detection task on
a 1-D time-series waveform into a task on a sequence of 2-D rendered images.
We present a novel ultra-lightweight multi-modal deep-learning network that
fuses EEG spectrogram images and temporal power vectors to enhance SD
identification accuracy at each single electrode, allowing flexible EEG
montages and paving the way for SD detection on ultra-low-density EEG with
variable electrode positioning. Our proposed model has an ultra-fast
processing speed (<0.3 s). Compared to conventional methods (2 hours), this is
a major advance towards early SD detection and instant brain-injury prognosis.
By viewing SDs along a new dimension, frequency, on spectrograms, we
demonstrate that this additional dimension can improve SD detection accuracy,
providing preliminary evidence for the hypothesis that SDs may exhibit
implicit features across the frequency profile.
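As a rough illustration of the 1-D-to-2-D transformation this abstract describes, a single EEG trace can be rendered as a time-frequency image with a short-time Fourier transform. The window length, hop size, and sampling rate below are assumptions for the sketch, not the paper's settings:

```python
import numpy as np

def spectrogram(signal, fs, win_len=256, hop=128):
    """Short-time Fourier magnitudes: turn a 1-D trace into a 2-D
    time-frequency image (rows = frequency bins, columns = time frames)."""
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    spec = np.array(frames).T
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    return freqs, spec

fs = 128                              # assumed sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
x = np.sin(2 * np.pi * 2.0 * t)       # 2 Hz test tone standing in for EEG
freqs, spec = spectrogram(x, fs)
peak = freqs[np.argmax(spec.mean(axis=1))]
print(spec.shape, peak)               # energy concentrates near 2 Hz
```

A 2-D image like `spec` is what a convolutional model can consume in place of the raw waveform.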
Deep learning with convolutional neural networks for decoding and visualization of EEG pathology
We apply convolutional neural networks (ConvNets) to the task of
distinguishing pathological from normal EEG recordings in the Temple University
Hospital EEG Abnormal Corpus. We use two basic ConvNet architectures, one
shallow and one deep, both recently shown to decode task-related information
from EEG at
least as well as established algorithms designed for this purpose. In decoding
EEG pathology, both ConvNets reached substantially better accuracies (about 6%
better, ~85% vs. ~79%) than the only published result for this dataset, and
were still better when using only 1 minute of each recording for training and
only six seconds of each recording for testing. We used automated methods to
optimize architectural hyperparameters and found intriguingly different ConvNet
architectures, e.g., with max pooling as the only nonlinearity. Visualizations
of the ConvNet decoding behavior showed that they used spectral power changes
in the delta (0-4 Hz) and theta (4-8 Hz) frequency range, possibly alongside
other features, consistent with expectations derived from spectral analysis of
the EEG data and from the textual medical reports. Analysis of the textual
medical reports also highlighted the potential for accuracy increases by
integrating contextual information, such as the age of subjects. In summary,
the ConvNets and visualization techniques used in this study constitute a next
step towards clinically useful automated EEG diagnosis and establish a new
baseline for future work on this topic.
Comment: Published at IEEE SPMB 2017, https://www.ieeespmb.org/2017
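The delta (0-4 Hz) and theta (4-8 Hz) spectral-power features that the visualizations point to can be approximated with a plain FFT band-power estimate. This is an illustrative sketch, not the paper's ConvNet or visualization code:

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean FFT power of `signal` within a frequency band (lo, hi) in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 100                              # assumed sampling rate (Hz)
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 6.0 * t)       # dominant theta-range oscillation
delta = band_power(x, fs, (0.5, 4.0))
theta = band_power(x, fs, (4.0, 8.0))
print(theta > delta)                  # energy concentrates in the theta band
```

Per-band powers computed this way are the kind of spectral features a decoding model can be checked against.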
Hardware Implementation of Deep Network Accelerators Towards Healthcare and Biomedical Applications
With the advent of dedicated Deep Learning (DL) accelerators and neuromorphic
processors, new opportunities are emerging for applying deep and Spiking Neural
Network (SNN) algorithms to healthcare and biomedical applications at the edge.
This can facilitate the advancement of medical Internet of Things (IoT)
systems and Point of Care (PoC) devices. In this paper, we provide a tutorial
describing how various technologies ranging from emerging memristive devices,
to established Field Programmable Gate Arrays (FPGAs), and mature Complementary
Metal Oxide Semiconductor (CMOS) technology can be used to develop efficient DL
accelerators to solve a wide variety of diagnostic, pattern recognition, and
signal processing problems in healthcare. Furthermore, we explore how spiking
neuromorphic processors can complement their DL counterparts for processing
biomedical signals. After providing the required background, we unify the
sparsely distributed research on neural network and neuromorphic hardware
implementations as applied to the healthcare domain. In addition, we benchmark
various hardware platforms by performing a biomedical electromyography (EMG)
signal processing task and drawing comparisons among them in terms of inference
delay and energy. Finally, we provide our analysis of the field and share a
perspective on the advantages, disadvantages, challenges, and opportunities
that different accelerators and neuromorphic processors introduce to healthcare
and biomedical domains. This paper can serve a broad audience, ranging from
nanoelectronics researchers to biomedical and healthcare practitioners, in
grasping the fundamental interplay between hardware, algorithms, and clinical
adoption of these tools, as we shed light on the future of deep networks and
spiking neuromorphic processing systems as drivers of biomedical circuits and
systems.
Comment: Submitted to IEEE Transactions on Biomedical Circuits and Systems (21
pages, 10 figures, 5 tables)
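A minimal sketch of the inference-delay side of the benchmarking this abstract mentions; the toy "model" and the repeat count are assumptions for illustration, not the paper's actual benchmark:

```python
import time

def benchmark(fn, *args, repeats=50):
    """Median wall-clock latency of one inference call, in milliseconds."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return times[len(times) // 2]

# Toy "model": a dot product standing in for an EMG classifier forward pass.
weights = [0.1] * 64
def infer(x):
    return sum(w * v for w, v in zip(weights, x))

latency_ms = benchmark(infer, [1.0] * 64)
print(latency_ms)
```

Using the median rather than the mean makes the measurement robust to occasional scheduler jitter; energy measurement, the paper's other axis, requires platform-specific counters and is not shown here.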
Classifying sleep-wake stages through recurrent neural networks using pulse oximetry signals
The regulation of the autonomic nervous system changes with the sleep stages,
causing variations in physiological variables. We exploit these changes with
the aim of classifying sleep stages as awake or asleep using pulse-oximeter
signals. We applied a recurrent neural network to heart rate and
peripheral oxygen saturation signals to classify the sleep stage every 30
seconds. The network architecture consists of two stacked layers of
bidirectional gated recurrent units (GRUs) and a softmax layer to classify the
output. In this paper, we used 5000 patients from the Sleep Heart Health Study
dataset. 2500 patients were used to train the network, and two subsets of 1250
were used to validate and test the trained models. In the test stage, the best
result obtained was 90.13% accuracy, 94.13% sensitivity, 80.26% specificity,
92.05% precision, and 84.68% negative predictive value. Further, the Cohen's
Kappa coefficient was 0.74 and the average absolute error percentage to the
actual sleep time was 8.9%. The performance of the proposed network is
comparable with the state-of-the-art algorithms when they use much more
informative signals (except those with EEG).
Comment: 12 pages, 4 figures, 2 tables
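The reported metrics (accuracy, sensitivity, specificity, and Cohen's kappa) can all be recovered from a binary confusion matrix. This is a minimal, self-contained sketch, not the paper's evaluation code:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and Cohen's kappa for 0/1 labels
    (1 = asleep, 0 = awake, say)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    n = tp + tn + fp + fn
    acc = (tp + tn) / n
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    # Cohen's kappa: agreement corrected for chance agreement pe.
    pe = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (acc - pe) / (1 - pe) if pe < 1 else 1.0
    return acc, sens, spec, kappa

acc, sens, spec, kappa = binary_metrics([1, 1, 0, 0], [1, 0, 0, 0])
print(acc, sens, spec, kappa)
```

Reporting kappa alongside accuracy matters here because awake/asleep epochs are imbalanced, and kappa discounts the agreement a trivial majority-class predictor would get by chance.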