36 research outputs found
Training a HyperDimensional Computing Classifier using a Threshold on its Confidence
Hyperdimensional computing (HDC) has become popular for lightweight and
energy-efficient machine learning, suitable for wearable Internet-of-Things
(IoT) devices and near-sensor or on-device processing. HDC is computationally
less complex than traditional deep learning algorithms and achieves moderate to
good classification performance. This article proposes to extend the training
procedure in HDC by taking into account not only wrongly classified samples,
but also samples that are correctly classified by the HDC model but with low
confidence. As such, a confidence threshold is introduced that can be tuned for
each dataset to achieve the best classification accuracy. The proposed training
procedure is tested on the UCIHAR, CTG, ISOLET and HAND datasets, for which the
performance consistently improves compared to the baseline across a range of
confidence threshold values. The extended training procedure also results in a
shift towards higher confidence values of the correctly classified samples
making the classifier not only more accurate but also more confident about its
predictions.
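The extended update rule described above can be sketched in a few lines. The following is a minimal illustration assuming bipolar class hypervectors, a random-projection encoder, and the margin between the top two class similarities as the confidence measure; the toy data, names, and choice of confidence metric are illustrative assumptions, not the article's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000           # hypervector dimensionality (illustrative)
N_CLASSES = 3

def encode(x, basis):
    # Illustrative encoder: random projection followed by the sign function
    # yields a bipolar hypervector.
    return np.sign(basis @ x)

def confidence(sims):
    # Margin between the best and second-best class similarity, used here
    # as the tunable confidence measure.
    top2 = np.sort(sims)[-2:]
    return top2[1] - top2[0]

def train_epoch(prototypes, encoded, labels, threshold):
    for h, y in zip(encoded, labels):
        sims = prototypes @ h / D
        pred = int(np.argmax(sims))
        # Update not only on misclassified samples, but also on correctly
        # classified samples whose confidence falls below the threshold.
        if pred != y or confidence(sims) < threshold:
            prototypes[y] += h
            if pred != y:
                prototypes[pred] -= h
    return prototypes

# Toy data: three classes with distinct random mean directions.
means = rng.standard_normal((N_CLASSES, 16))
X = np.concatenate([means[c] + 0.3 * rng.standard_normal((50, 16))
                    for c in range(N_CLASSES)])
y = np.repeat(np.arange(N_CLASSES), 50)
basis = rng.standard_normal((D, 16))
encoded = np.array([encode(x, basis) for x in X])

prototypes = np.zeros((N_CLASSES, D))
for _ in range(3):
    prototypes = train_epoch(prototypes, encoded, y, threshold=0.05)

preds = np.argmax(encoded @ prototypes.T, axis=1)
print("train accuracy:", (preds == y).mean())
```

Raising the threshold makes more correctly classified samples contribute to the prototypes, which is what shifts the confidence distribution upward in the article's experiments.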
Hyperdimensional computing: a fast, robust and interpretable paradigm for biological data
Advances in bioinformatics are primarily due to new algorithms for processing
diverse biological data sources. While sophisticated alignment algorithms have
been pivotal in analyzing biological sequences, deep learning has substantially
transformed bioinformatics, addressing sequence, structure, and functional
analyses. However, these methods are incredibly data-hungry, compute-intensive
and hard to interpret. Hyperdimensional computing (HDC) has recently emerged as
an intriguing alternative. The key idea is that random vectors of high
dimensionality can represent concepts such as sequence identity or phylogeny.
These vectors can then be combined using simple operators for learning,
reasoning or querying by exploiting the peculiar properties of high-dimensional
spaces. Our work reviews and explores the potential of HDC for bioinformatics,
emphasizing its efficiency, interpretability, and adeptness in handling
multimodal and structured data. HDC holds substantial promise for omics data
searching, biosignal analysis, and health applications.
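As a concrete illustration of the key idea, the sketch below encodes DNA k-mers into random bipolar hypervectors, binds position by cyclic shift, and bundles the k-mers into a single sequence vector whose normalized dot product reflects sequence identity. This is a common HDC recipe chosen for illustration, not a scheme taken from the review itself.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000
# One random bipolar hypervector per nucleotide.
alphabet = {c: rng.choice([-1, 1], size=D) for c in "ACGT"}

def encode_kmer(kmer):
    # Bind symbol and position: cyclically shift each symbol vector by its
    # position within the k-mer, then combine by elementwise multiplication.
    v = np.ones(D, dtype=int)
    for i, c in enumerate(kmer):
        v *= np.roll(alphabet[c], i)
    return v

def encode_sequence(seq, k=4):
    # Bundle (sum) all k-mer hypervectors and binarize back to bipolar form.
    acc = np.zeros(D)
    for i in range(len(seq) - k + 1):
        acc += encode_kmer(seq[i:i + k])
    return np.sign(acc)

def similarity(a, b):
    return (a @ b) / D    # normalized dot product in [-1, 1]

s1 = encode_sequence("ACGTACGTACGTAGCT")
s2 = encode_sequence("ACGTACGTACGTAGCA")   # one nucleotide changed
s3 = encode_sequence("TTTTGGGGCCCCAAAA")   # unrelated sequence
print(similarity(s1, s2), similarity(s1, s3))
```

Similar sequences share most k-mer hypervectors, so their bundled vectors remain highly correlated, while unrelated sequences land near orthogonality; this is the "peculiar property" of high-dimensional spaces that the querying operations exploit.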
Efficient Personalized Learning for Wearable Health Applications using HyperDimensional Computing
Health monitoring applications increasingly rely on machine learning
techniques to learn end-user physiological and behavioral patterns in everyday
settings. Considering the significant role of wearable devices in monitoring
human body parameters, on-device learning can be utilized to build personalized
models for behavioral and physiological patterns, and provide data privacy for
users at the same time. However, resource constraints on most of these wearable
devices preclude online learning directly on them. To address this issue, the
machine learning models must be rethought from an algorithmic perspective so
that they are suitable to run on wearable devices.
Hyperdimensional computing (HDC) offers a well-suited on-device learning
solution for resource-constrained devices and provides support for
privacy-preserving personalization. Our HDC-based method offers flexibility,
high efficiency, resilience, and performance while enabling on-device
personalization and privacy protection. We evaluate the efficacy of our
approach using three case studies and show that our system improves the energy
efficiency of training compared with state-of-the-art Deep Neural Network (DNN)
algorithms while offering comparable accuracy.
Efficient emotion recognition using hyperdimensional computing with combinatorial channel encoding and cellular automata
In this paper, a hardware-optimized approach to emotion recognition based on
the efficient brain-inspired hyperdimensional computing (HDC) paradigm is
proposed. Emotion recognition provides valuable information for human-computer
interactions; however, the large number of input channels (>200) and modalities
(>3) involved makes emotion recognition significantly expensive from a memory
perspective. To address this, methods for memory reduction and optimization are
proposed, including a novel approach that takes advantage of the combinatorial
nature of the encoding process, and an elementary cellular automaton. HDC with
early sensor fusion is implemented alongside the proposed techniques achieving
two-class multi-modal classification accuracies of >76% for valence and >73%
for arousal on the multi-modal AMIGOS and DEAP datasets, almost always better
than the state of the art. The required vector storage is reduced by 98%
and the frequency of vector requests by at least 1/5. The results demonstrate
the potential of efficient hyperdimensional computing for low-power,
multi-channeled emotion recognition tasks.
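The abstract above mentions an elementary cellular automaton as one of the storage-reduction techniques. One plausible reading is that pseudo-random hypervector bits are regenerated on the fly from a short stored seed rather than kept in memory; the sketch below illustrates that idea with Rule 90, and is an assumption rather than the paper's exact scheme.

```python
import numpy as np

def ca_step(row, rule=90):
    # One update of an elementary cellular automaton with wrap-around,
    # applying the Wolfram rule number to every 3-cell neighborhood.
    left, right = np.roll(row, 1), np.roll(row, -1)
    idx = (left << 2) | (row << 1) | right
    table = (rule >> np.arange(8)) & 1
    return table[idx].astype(np.uint8)

def expand_seed(seed_bits, steps):
    # Regenerate a long pseudo-random bit stream from a short stored seed,
    # so full hypervectors need not be kept in memory.
    row = np.asarray(seed_bits, dtype=np.uint8)
    stream = [row]
    for _ in range(steps - 1):
        row = ca_step(row)
        stream.append(row)
    return np.concatenate(stream)

seed = np.random.default_rng(3).integers(0, 2, size=64)
hv = expand_seed(seed, steps=32)     # 64 * 32 = 2048 bits from a 64-bit seed
print(hv.size, round(hv.mean(), 3))  # roughly balanced bit stream
```

Only the seed and the rule number need to be stored, which is how a cellular automaton can trade a small amount of recomputation for a large cut in vector storage.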
Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESN) that can be
efficiently implemented on digital hardware based on the mathematics of
hyperdimensional computing. The reservoir of the proposed Integer Echo State
Network (intESN) is a vector containing only n-bit integers (where n < 8 is
normally sufficient for satisfactory performance). The recurrent matrix
multiplication is replaced with an efficient cyclic shift operation. The intESN
architecture is verified with typical tasks in reservoir computing: memorizing
a sequence of inputs, classifying time-series, and learning dynamic processes.
Such an architecture results in dramatic improvements in memory footprint and
computational efficiency, with minimal performance loss.
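The core intESN state update — a cyclic shift in place of the recurrent matrix multiply, integer addition of a bipolar input vector, and clipping to the n-bit range — can be sketched as follows. The scalar-to-hypervector quantization used here is a simplified stand-in for the paper's input encoding.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 2000          # reservoir dimensionality
N_BITS = 6        # reservoir states are n-bit signed integers
LIMIT = 2 ** (N_BITS - 1) - 1

# A small codebook of bipolar item vectors for quantized input values.
levels = [rng.choice([-1, 1], size=D) for _ in range(8)]

def quantize_input(u):
    # Map a scalar in [-1, 1] to one of the item vectors (an assumption).
    idx = int(np.clip((u + 1) / 2 * (len(levels) - 1), 0, len(levels) - 1))
    return levels[idx]

def step(state, u):
    # The recurrent matrix multiply of a classical ESN is replaced by a
    # cyclic shift; the bipolar input vector is added and the state is
    # clipped to the n-bit range.
    state = np.roll(state, 1) + quantize_input(u)
    return np.clip(state, -LIMIT, LIMIT)

state = np.zeros(D, dtype=int)
for u in np.sin(np.linspace(0, 4 * np.pi, 50)):
    state = step(state, u)

print(state.min(), state.max())   # bounded by the n-bit range
```

A linear readout trained on the collected states (e.g., ridge regression) would complete the network; it is omitted here for brevity.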
Brain-Inspired Hyperdimensional Computing: How Thermal-Friendly for Edge Computing?
Brain-inspired hyperdimensional computing (HDC) is an emerging machine
learning (ML) method. It is based on large vectors of binary or bipolar
symbols and a few simple mathematical operations. The promise of HDC is a
highly efficient implementation for embedded systems like wearables. While fast
implementations have been presented, other constraints have not been considered
for edge computing. In this work, we aim to answer how thermal-friendly HDC is
for edge computing. Devices like smartwatches, smart glasses, or even mobile
systems have a restrictive cooling budget due to their limited volume. Although
HDC operations are simple, the vectors are large, resulting in a high number of
CPU operations and thus a heavy load on the entire system, potentially causing
temperature violations. In this work, the impact of HDC on the chip's
temperature is investigated for the first time. We measure the temperature and
power consumption of a commercial embedded system and compare HDC with a
conventional CNN. We reveal that HDC causes up to 6.8 °C higher
temperatures and leads to up to 47% more CPU throttling. Even when both HDC and
CNN aim for the same throughput (i.e., perform a similar number of
classifications per second), HDC still causes higher on-chip temperatures due
to the larger power consumption.
Optimized Biosignals Processing Algorithms for New Designs of Human Machine Interfaces on Parallel Ultra-Low Power Architectures
The aim of this dissertation is to explore Human Machine Interfaces (HMIs) in a variety of biomedical scenarios. The research addresses typical challenges in wearable and implantable devices for diagnostic, monitoring, and prosthetic purposes, suggesting a methodology for tailoring such applications to cutting-edge embedded architectures.
The main challenge is the enhancement of high-level applications, also introducing Machine Learning (ML) algorithms, using parallel programming and specialized hardware to improve performance.
The majority of these algorithms are computationally intensive, posing significant challenges for deployment on embedded devices, which have several limitations in terms of memory size, maximum operating frequency, and battery life.
The proposed solutions take advantage of a Parallel Ultra-Low Power (PULP) architecture, tailoring the processing to the specific target architectures and heavily optimizing execution by exploiting both software and hardware resources.
The thesis starts by describing a methodology that can be considered a guideline to efficiently implement algorithms on embedded architectures.
This is followed by several case studies in the biomedical field. The first analyzes hand gesture recognition based on the Hyperdimensional Computing algorithm, which allows fast on-chip re-training, and compares it with the state-of-the-art Support Vector Machine (SVM). A Brain-Computer Interface (BCI) to detect the response of the brain to a visual stimulus follows in the manuscript. Furthermore, a seizure detection application is also presented, exploring different solutions for the dimensionality reduction of the input signals. The last part is dedicated to an exploration of typical modules for the development of optimized ECG-based applications.
Optimizing AI at the Edge: from network topology design to MCU deployment
The first topic analyzed in the thesis will be Neural Architecture Search (NAS).
I will focus on two different tools that I developed, one to optimize the architecture of Temporal Convolutional Networks (TCNs), a convolutional model for time-series processing that has recently emerged, and one to optimize the data precision of tensors inside CNNs.
The first NAS proposed explicitly targets the optimization of the most peculiar architectural parameters of TCNs, namely dilation, receptive field, and the number of features in each layer. Note that this is the first NAS that explicitly targets these networks.
The second NAS proposed instead focuses on finding the most efficient data format for a target CNN, at the granularity of individual layer filters. Note that applying these two NASes in sequence allows an "application designer" to minimize the structure of the neural network employed, minimizing the number of operations or the memory usage of the network.
After that, the second topic described is the optimization of neural network deployment on edge devices. Importantly, exploiting edge platforms' scarce resources is critical for efficient NN execution on MCUs.
To do so, I will introduce DORY (Deployment Oriented to memoRY) -- an automatic tool to deploy CNNs on low-cost MCUs.
DORY, in different steps, can manage the different levels of memory inside the MCU automatically, offload the computation workload (i.e., the different layers of a neural network) to dedicated hardware accelerators, and automatically generate ANSI C code that orchestrates off- and on-chip transfers with the computation phases.
On top of this, I will introduce two optimized computation libraries that DORY can exploit to deploy TCNs and Transformers efficiently at the edge.
I conclude the thesis with two different applications in bio-signal analysis, i.e., heart rate tracking and sEMG-based gesture recognition.
Novel Muscle Monitoring by Radiomyography (RMG) and Application to Hand Gesture Recognition
Conventional electromyography (EMG) measures the continuous neural activity
during muscle contraction, but lacks explicit quantification of the actual
contraction. Mechanomyography (MMG) and accelerometers only measure body
surface motion, while ultrasound, CT-scan and MRI are restricted to in-clinic
snapshots. Here we propose a novel radiomyography (RMG) for continuous muscle
actuation sensing that can be wearable and touchless, capturing both
superficial and deep muscle groups. We verified RMG experimentally with a forearm
wearable sensor for detailed hand gesture recognition. We first converted the
radio sensing outputs to the time-frequency spectrogram, and then employed the
vision transformer (ViT) deep learning network as the classification model,
which can recognize 23 gestures with an average accuracy of up to 99% on 8
subjects. By transfer learning, high adaptivity to user differences and sensor
variations was achieved at an average accuracy of up to 97%. We further
demonstrated RMG to monitor eye and leg muscles and achieved high accuracy for
eye movement and body posture tracking. RMG can be used with synchronous EMG
to derive stimulation-actuation waveforms for many future applications in
kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.