One-shot Learning for iEEG Seizure Detection Using End-to-end Binary Operations: Local Binary Patterns with Hyperdimensional Computing
This paper presents an efficient binarized algorithm for both learning and
classification of human epileptic seizures from intracranial
electroencephalography (iEEG). The algorithm combines local binary patterns
with brain-inspired hyperdimensional computing to enable end-to-end learning
and inference with binary operations. The algorithm first transforms iEEG time
series from each electrode into local binary pattern codes. Then atomic
high-dimensional binary vectors are used to construct composite representations
of seizures across all electrodes. For the majority of our patients (10 out of
16), the algorithm quickly learns from one or two seizures (i.e., one-/few-shot
learning) and perfectly generalizes on 27 further seizures. For other patients,
the algorithm requires three to six seizures for learning. Overall, our
algorithm surpasses the state-of-the-art methods for detecting 65 novel
seizures with higher specificity and sensitivity, and lower memory footprint.
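As a rough illustration of the pipeline this abstract describes, the sketch below shows the two stages in plain Python/NumPy: symbolizing each electrode's time series into local binary pattern (LBP) codes, then composing random binary hypervectors into one per-window representation. It is a minimal sketch, not the authors' reference implementation; the dimensionality, the LBP length, and the majority-vote composition across electrodes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000          # hypervector dimensionality (assumed)
LBP_BITS = 6        # LBP code length (assumed)

# Item memory: one random binary hypervector per possible LBP code.
item_memory = rng.integers(0, 2, size=(2 ** LBP_BITS, D), dtype=np.uint8)

def lbp_codes(signal, bits=LBP_BITS):
    # Each bit of a code records whether the signal rose between two
    # consecutive samples, so a code summarizes a short local waveform.
    rises = (np.diff(signal) > 0).astype(np.int64)
    n = len(rises) - bits + 1
    codes = np.zeros(n, dtype=np.int64)
    for b in range(bits):
        codes |= rises[b:b + n] << b
    return codes

def encode_window(window):  # window: (num_electrodes, num_samples) array
    # Bundle the LBP-code hypervectors within each electrode, then bundle
    # the per-electrode vectors into one composite binary hypervector.
    per_electrode = [
        (item_memory[lbp_codes(sig)].mean(axis=0) > 0.5).astype(np.uint8)
        for sig in window
    ]
    return (np.mean(per_electrode, axis=0) > 0.5).astype(np.uint8)

def hamming(a, b):
    # Normalized Hamming distance; both learning and detection reduce to
    # comparing composite vectors against bundled class prototypes.
    return np.count_nonzero(a != b) / a.size

Because every step after the item-memory lookup is a comparison, a shift, or a majority vote, the whole chain stays binary end to end, which is what makes this family of methods attractive for low-power hardware.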
An EMG Gesture Recognition System with Flexible High-Density Sensors and Brain-Inspired High-Dimensional Classifier
EMG-based gesture recognition shows promise for human-machine interaction.
Systems are often afflicted by signal and electrode variability which degrades
performance over time. We present an end-to-end system combating this
variability using a large-area, high-density sensor array and a robust
classification algorithm. EMG electrodes are fabricated on a flexible substrate
and interfaced to a custom wireless device for 64-channel signal acquisition
and streaming. We use brain-inspired high-dimensional (HD) computing for
processing EMG features in one-shot learning. The HD algorithm is tolerant to
noise and electrode misplacement and can quickly learn from a few gestures
without gradient descent or back-propagation. We achieve an average
classification accuracy of 96.64% for five gestures, with only 7% degradation
when training and testing across different days. Our system maintains this
accuracy when trained with only three trials of gestures; it also achieves accuracy comparable to the state of the art when trained with a single trial.
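A minimal sketch of the one-shot HD learning flow described above, under assumed details (the EMG feature encoder that produces the binary trial vectors is out of scope, and HDGestureClassifier is a hypothetical name): class prototypes are formed by a single bundling of a few encoded trials, and inference is a nearest-prototype search, with no gradient descent or back-propagation.

import numpy as np

def bundle(hvs):
    # Majority-vote bundling of binary hypervectors into one prototype.
    return (np.mean(hvs, axis=0) > 0.5).astype(np.uint8)

class HDGestureClassifier:
    def __init__(self):
        self.prototypes = {}  # associative memory: label -> hypervector

    def learn(self, label, encoded_trials):
        # One-/few-shot learning: a single bundling step per gesture.
        self.prototypes[label] = bundle(encoded_trials)

    def predict(self, hv):
        # Nearest prototype in Hamming distance wins.
        dists = {label: np.count_nonzero(hv != proto)
                 for label, proto in self.prototypes.items()}
        return min(dists, key=dists.get)

The tolerance to noise and electrode misplacement claimed above comes from the distance computation: flipping a modest fraction of the bits of a high-dimensional vector barely changes its distances to the prototypes.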
Training a HyperDimensional Computing Classifier using a Threshold on its Confidence
Hyperdimensional computing (HDC) has become popular for light-weight and
energy-efficient machine learning, suitable for wearable Internet-of-Things
(IoT) devices and near-sensor or on-device processing. HDC is computationally
less complex than traditional deep learning algorithms and achieves moderate to
good classification performance. This article proposes to extend the training
procedure in HDC by taking into account not only wrongly classified samples,
but also samples that are correctly classified by the HDC model but with low
confidence. As such, a confidence threshold is introduced that can be tuned for
each dataset to achieve the best classification accuracy. The proposed training
procedure is tested on the UCIHAR, CTG, ISOLET, and HAND datasets, for which the
performance consistently improves compared to the baseline across a range of
confidence threshold values. The extended training procedure also results in a
shift of the correctly classified samples towards higher confidence values, making the classifier not only more accurate but also more confident in its predictions.
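A sketch of how such a training rule could look, under assumptions spelled out here: prototypes are kept as integer/float accumulators, confidence is read as the similarity margin between the best and second-best class (the article may define it differently), and train_epoch and its arguments are illustrative names.

import numpy as np

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def train_epoch(prototypes, encoded_samples, labels, threshold):
    # prototypes: dict mapping class label -> accumulator vector
    for hv, y in zip(encoded_samples, labels):
        sims = {c: cosine(hv, p) for c, p in prototypes.items()}
        pred = max(sims, key=sims.get)
        margin = sims[pred] - max(s for c, s in sims.items() if c != pred)
        if pred != y:
            # Classic HDC retraining: pull the true class towards the
            # sample and push the wrongly predicted class away.
            prototypes[y] = prototypes[y] + hv
            prototypes[pred] = prototypes[pred] - hv
        elif margin < threshold:
            # The extension: also reinforce predictions that are correct
            # but fall below the confidence threshold.
            prototypes[y] = prototypes[y] + hv
    return prototypes

Tuning the threshold per dataset, as the article proposes, then amounts to a one-dimensional hyperparameter search.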
Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations
The Internet of Things (IoT) has facilitated many applications utilizing
edge-based machine learning (ML) methods to analyze locally collected data.
Unfortunately, popular ML algorithms often require intensive computations
beyond the capabilities of today's IoT devices. Brain-inspired hyperdimensional
computing (HDC) has been introduced to address this issue. However, existing
HDCs use static encoders, requiring extremely high dimensionality and hundreds
of training iterations to achieve reasonable accuracy. This results in a huge
efficiency loss, severely impeding the application of HDCs in IoT systems. We
observed that a main cause is that the encoding module of existing HDCs lacks
the capability to utilize and adapt to information learned during training. In
contrast, neurons in human brains dynamically regenerate all the time and
provide more useful functionalities when learning new information. While the
goal of HDC is to exploit the high-dimensionality of randomly generated base
hypervectors to represent the information as a pattern of neural activity, it
remains challenging for existing HDCs to support behavior similar to the brain's
neural regeneration. In this work, we present dynamic HDC learning frameworks
that identify and regenerate undesired dimensions to provide adequate accuracy
with significantly lowered dimensionalities, thereby accelerating both the
training and inference.
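A minimal sketch of the regeneration step this abstract alludes to, with assumed details: class hypervectors are kept non-binary, each dimension is scored by how much it varies across the class hypervectors (low variance meaning low discriminative value), and the worst-scoring dimensions of the encoder's basis are re-drawn at random so that subsequent training can repopulate them with useful information.

import numpy as np

rng = np.random.default_rng(0)

def regenerate_dimensions(base_hvs, class_hvs, frac=0.1):
    # base_hvs:  (num_features, D) basis used by the encoder
    # class_hvs: (num_classes,  D) learned class hypervectors
    # A dimension whose values barely differ across the class
    # hypervectors contributes little to telling the classes apart.
    score = class_hvs.var(axis=0)
    k = int(frac * base_hvs.shape[1])
    worst = np.argsort(score)[:k]
    # "Regenerate" the least informative basis dimensions at random,
    # mimicking the neural-regeneration analogy in the text.
    base_hvs[:, worst] = rng.standard_normal((base_hvs.shape[0], k))
    return base_hvs, worst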
Laelaps: An Energy-Efficient Seizure Detection Algorithm from Long-term Human iEEG Recordings without False Alarms
We propose Laelaps, an energy-efficient and fast learning algorithm with no false alarms for epileptic seizure detection from long-term intracranial electroencephalography (iEEG) signals. Laelaps uses end-to-end binary operations by exploiting symbolic dynamics and brain-inspired hyperdimensional computing. Laelaps's results surpass those yielded by state-of-the-art (SoA) methods [1], [2], [3], including deep learning, on a new very large dataset containing 116 seizures of 18 drug-resistant epilepsy patients in 2656 hours of recordings - each patient implanted with 24 to 128 iEEG electrodes. Laelaps trains 18 patient-specific models by using only 24 seizures: 12 models are trained with one seizure per patient, the others with two seizures. The trained models detect 79 out of 92 unseen seizures without any false alarms across all the patients, a big step forward in practical seizure detection. Importantly, a simple implementation of Laelaps on the Nvidia Tegra X2 embedded device achieves 1.7x-3.9x faster execution and 1.4x-2.9x lower energy consumption compared to the best result from the SoA methods. Our source code and anonymized iEEG dataset are freely available at http://ieeg-swez.ethz.ch.
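To make the shape of such a detector concrete, here is a heavily simplified sketch, not the released Laelaps code: the symbolization below is a stand-in for the symbolic dynamics the abstract names, and the consecutive-window rule is one common way to trade detection delay for the zero false alarms reported above.

import numpy as np

def symbolize(signal):
    # Stand-in symbolization: each symbol records whether the signal rose
    # between consecutive samples; the actual algorithm derives richer
    # symbols from the iEEG dynamics.
    return (np.diff(signal) > 0).astype(np.uint8)

def raise_alarm(window_flags, k=5):
    # Declare a seizure only after k consecutive seizure-labelled windows;
    # returns the index of the alarm, or None if none is raised.
    run = 0
    for t, flag in enumerate(window_flags):
        run = run + 1 if flag else 0
        if run >= k:
            return t
    return None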
QubitHD: A Stochastic Acceleration Method for HD Computing-Based Machine Learning
Machine Learning algorithms based on Brain-inspired Hyperdimensional (HD)
computing imitate cognition by exploiting statistical properties of
high-dimensional vector spaces. It is a promising solution for achieving high
energy-efficiency in different machine learning tasks, such as classification,
semi-supervised learning and clustering. A weakness of existing HD
computing-based ML algorithms is that they must be binarized to achieve
very high energy-efficiency, yet binarized models reach lower
classification accuracies. To resolve this trade-off between
energy-efficiency and classification accuracy, we propose the QubitHD
algorithm. It stochastically binarizes HD-based algorithms, while maintaining
comparable classification accuracies to their non-binarized counterparts. The
FPGA implementation of QubitHD provides a 65% improvement in terms of
energy-efficiency, and a 95% improvement in terms of the training time, as
compared to state-of-the-art HD-based ML algorithms. It also outperforms
state-of-the-art low-cost classifiers (like Binarized Neural Networks) in terms
of speed and energy-efficiency by an order of magnitude during training and
inference.
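The core trick named in the title can be illustrated in a few lines. This is a hedged sketch of stochastic binarization in general, not QubitHD's exact rule: each component is snapped to +1 with a probability that grows with its scaled value, and to -1 otherwise, so the magnitude information that hard thresholding at zero discards is preserved in expectation.

import numpy as np

rng = np.random.default_rng(0)

def stochastic_binarize(hv):
    # Scale components into [-1, 1], turn them into probabilities in
    # [0, 1], then sample the sign; E[output] is proportional to hv.
    scale = np.max(np.abs(hv)) + 1e-12
    p_plus = (hv / scale + 1.0) / 2.0
    return np.where(rng.random(hv.shape) < p_plus, 1, -1).astype(np.int8)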
Hyperdimensional Computing-based Multimodality Emotion Recognition with Physiological Signals
Emotion recognition is one of the most important functions for realizing advanced human-computer interaction devices that interact naturally and empathetically with humans. Because emotion is highly correlated with involuntary physiological changes, physiological signals are a prime candidate for emotion analysis. However, the huge amount of training data needed for a high-quality machine learning model makes computational complexity a major bottleneck. To overcome this issue, brain-inspired hyperdimensional (HD) computing, an energy-efficient and fast-learning computational paradigm, has high potential to balance accuracy against the amount of necessary training data. We propose HD Computing-based Multimodality Emotion Recognition (HDC-MER). HDC-MER maps real-valued features to binary HD vectors using a random nonlinear function, further encodes them over time, and fuses them across the different modalities, including GSR, ECG, and EEG. The experimental results show that, compared to the best method using the full training data, HDC-MER achieves higher classification accuracy for both valence (83.2% vs. 80.1%) and arousal (70.1% vs. 68.4%) using only 1/4 of the training data. HDC-MER also achieves at least 5% higher average accuracy than all the other methods at any point along the learning curve.
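A minimal sketch of the encoding chain this abstract outlines, with assumed details: the random nonlinear feature map below is a random-Fourier-style projection chosen for illustration, and the fusion across GSR, ECG, and EEG is done here by simple majority bundling; the paper's exact temporal encoding is not reproduced.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000

def make_encoder(num_features):
    # Fixed random nonlinear projection from real-valued features to a
    # binary hypervector.
    W = rng.standard_normal((D, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, D)
    return lambda x: (np.cos(W @ x + b) > 0).astype(np.uint8)

def bundle(hvs):
    # Majority-vote bundling, usable both over time and across modalities.
    return (np.mean(hvs, axis=0) > 0.5).astype(np.uint8)

def fuse(gsr_hv, ecg_hv, eeg_hv):
    # Multimodal fusion by bundling the per-modality hypervectors.
    return bundle(np.stack([gsr_hv, ecg_hv, eeg_hv]))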
The Hyperdimensional Transform for Distributional Modelling, Regression and Classification
Hyperdimensional computing (HDC) is an increasingly popular computing
paradigm with immense potential for future intelligent applications. Although
the main ideas already took form in the 1990s, HDC recently gained significant
attention, especially in the field of machine learning and data science. Next
to efficiency, interoperability and explainability, HDC offers attractive
properties for generalization as it can be seen as an attempt to combine
connectionist ideas from neural networks with symbolic aspects. In recent work,
we introduced the hyperdimensional transform, revealing deep theoretical
foundations for representing functions and distributions as high-dimensional
holographic vectors. Here, we present the power of the hyperdimensional
transform to a broad data science audience. We use the hyperdimensional
transform as a theoretical basis and provide insight into state-of-the-art HDC
approaches for machine learning. We show how existing algorithms can be
modified and how this transform can lead to a novel, well-founded toolbox. Next
to the standard regression and classification tasks of machine learning, our
discussion includes various aspects of statistical modelling, such as
representation, learning and deconvolving distributions, sampling, Bayesian
inference, and uncertainty estimation.
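For orientation, the transform has the shape of an integral transform against a field of hypervector encodings; the schematic form below omits the normalizations and conditions of the cited work and uses assumed notation:

\[
\mathcal{H}f \;=\; \int_{\mathcal{X}} f(x)\,\varphi(x)\,\mathrm{d}x,
\qquad
\mathcal{H}\hat{p} \;\approx\; \frac{1}{N}\sum_{i=1}^{N}\varphi(x_i),
\]

where \(\varphi\colon \mathcal{X}\to\mathbb{R}^{D}\) encodes inputs as (suitably normalized) hypervectors. A function is thus represented as a single D-dimensional holographic vector, and an empirical distribution over samples \(x_1,\dots,x_N\) as the average of their encodings, which links the transform to the bundling operation of standard HDC classifiers.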