357 research outputs found
An EMG Gesture Recognition System with Flexible High-Density Sensors and Brain-Inspired High-Dimensional Classifier
EMG-based gesture recognition shows promise for human-machine interaction.
Such systems are often afflicted by signal and electrode variability, which degrades
performance over time. We present an end-to-end system combating this
variability using a large-area, high-density sensor array and a robust
classification algorithm. EMG electrodes are fabricated on a flexible substrate
and interfaced to a custom wireless device for 64-channel signal acquisition
and streaming. We use brain-inspired high-dimensional (HD) computing for
processing EMG features in one-shot learning. The HD algorithm is tolerant to
noise and electrode misplacement and can learn quickly from a few gestures
without gradient descent or back-propagation. We achieve an average
classification accuracy of 96.64% for five gestures, with only 7% degradation
when training and testing across different days. Our system maintains this
accuracy when trained with only three trials of gestures; it also demonstrates
accuracy comparable to the state of the art when trained with a single trial
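The HD classification step can be sketched roughly as follows. The dimensionality, the bipolar bind-and-bundle encoding, and the 8-level feature quantization are illustrative assumptions for this sketch, not the paper's exact pipeline:

```python
import numpy as np

D = 10_000                      # hypervector dimensionality
rng = np.random.default_rng(0)

def rand_hv():
    # random bipolar hypervector; pairs are near-orthogonal in high dimensions
    return rng.choice([-1, 1], size=D)

N_CHANNELS, N_LEVELS = 64, 8    # 64-channel EMG, 8 quantization levels (assumed)
channel_hvs = [rand_hv() for _ in range(N_CHANNELS)]
level_hvs = [rand_hv() for _ in range(N_LEVELS)]

def encode(features):
    # bind each channel's id vector with its quantized-level vector
    # (elementwise product), then bundle (sum + sign) across channels
    levels = np.clip((np.asarray(features) * N_LEVELS).astype(int), 0, N_LEVELS - 1)
    bound = [channel_hvs[c] * level_hvs[levels[c]] for c in range(N_CHANNELS)]
    return np.sign(np.sum(bound, axis=0))

def train(examples_by_class):
    # one-/few-shot learning: a class prototype is just the bundled
    # encoding of its training examples -- no gradients involved
    return {label: np.sign(np.sum([encode(x) for x in xs], axis=0))
            for label, xs in examples_by_class.items()}

def classify(prototypes, features):
    # nearest prototype by dot-product similarity
    q = encode(features)
    return max(prototypes, key=lambda label: np.dot(prototypes[label], q))
```

Because a few corrupted channels only flip a small fraction of the bundled components, classification of the distributed hypervector degrades gracefully under noise and electrode misplacement.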
User Identification Using Gait Patterns on UbiFloorII
This paper presents a system for identifying individuals by their gait patterns. We take into account various distinguishing features that can be extracted from a user's gait and divide them into two classes: walking pattern and stepping pattern. We assume that the target environments are domestic areas, that the number of users is smaller than 10, and that all users walk barefoot, reflecting the everyday lifestyle of the Korean home. Under these conditions, we have developed a system that identifies individuals' gait patterns using our biometric sensor, UbiFloorII. We created UbiFloorII to collect walking samples and built software modules to extract the user's gait pattern. To identify users based on the gait patterns extracted from walking samples over UbiFloorII, we deployed a multilayer perceptron, a feedforward artificial neural network model. The results show that both the walking pattern and the stepping pattern extracted from users' gait over UbiFloorII are distinguishable enough to identify the users, and that fusing the two classifiers at the matching-score level improves recognition accuracy. Our proposed system may therefore provide unobtrusive and automatic user identification in ubiquitous computing environments, particularly in domestic areas
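Matching-score-level fusion of the two gait classifiers can be sketched as below; the min-max normalization and the weighted-sum rule are common choices assumed here for illustration, not necessarily the paper's exact scheme:

```python
import numpy as np

def minmax_norm(scores):
    # map raw matching scores to [0, 1] so the two modalities are comparable
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

def identify(walking_scores, stepping_scores, users, w=0.5):
    # weighted-sum fusion at the matching-score level: normalize each
    # classifier's per-user scores, combine, and pick the best user
    fused = w * minmax_norm(walking_scores) + (1 - w) * minmax_norm(stepping_scores)
    return users[int(np.argmax(fused))]
```

Fusing at the score level (rather than at the decision level) lets a confident stepping-pattern match outvote a weak walking-pattern match, which is why it can improve accuracy over either classifier alone.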
Human Activity Recognition for AI-Enabled Healthcare Using Low-Resolution Infrared Sensor Data
This paper explores the feasibility of using low-resolution infrared (LRIR) image streams for human activity recognition (HAR), with potential application in e-healthcare. Two datasets based on synchronized multichannel LRIR sensor systems are considered for a comprehensive study of optimal data acquisition. A novel noise reduction technique is proposed to alleviate the effects of horizontal and vertical periodic noise in the 2D spatiotemporal activity profiles created by vectorizing and concatenating the LRIR frames. Two main analysis strategies are explored for HAR: (1) manual feature extraction using texture-based and orthogonal-transformation-based techniques, followed by classification with support vector machine (SVM), random forest (RF), k-nearest neighbor (k-NN), and logistic regression (LR) classifiers, and (2) a deep neural network (DNN) strategy based on a convolutional long short-term memory (LSTM). The proposed periodic noise reduction technique yields accuracy improvements of up to 14.15% across the different models. In addition, for the first time, the optimal number of sensors, sensor layout, and distance to subjects are studied, with the best results obtained using a single side-mounted sensor at close distance. Reasonable accuracy is maintained under sensor displacement, and detection remains robust with multiple subjects. Furthermore, the models prove suitable for data collected in different environments
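Building the 2D spatiotemporal profile, and one simple way to suppress horizontal/vertical banding in it, can be sketched as follows. The mean-subtraction destriping here is an illustrative stand-in, not the paper's novel noise reduction technique:

```python
import numpy as np

def build_profile(frames):
    # vectorize each low-resolution IR frame and stack over time:
    # rows are time steps, columns are pixels
    return np.stack([np.asarray(f).ravel() for f in frames])

def destripe(profile):
    # remove horizontal/vertical banding by subtracting per-row and
    # per-column offsets around the global mean (a simple stand-in
    # for the paper's periodic noise reduction)
    p = profile.astype(float)
    g = p.mean()
    return p - p.mean(axis=1, keepdims=True) - p.mean(axis=0, keepdims=True) + g
```

Purely additive row/column stripe patterns cancel exactly under this correction, while activity structure that varies jointly in time and space is preserved.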
A Novel Two Stream Decision Level Fusion of Vision and Inertial Sensors Data for Automatic Multimodal Human Activity Recognition System
This paper presents a novel multimodal human activity recognition system. It
uses a two-stream decision level fusion of vision and inertial sensors. In the
first stream, raw RGB frames are passed to a part affinity field-based pose
estimation network to detect the user's keypoints. These keypoints are
then pre-processed and fed in a sliding-window fashion to a specially
designed convolutional neural network for spatial feature extraction,
followed by regularized LSTMs that compute the temporal features. The
outputs of the LSTM networks are then passed to fully connected layers for
classification. In the second stream, data obtained from the inertial
sensors are pre-processed and fed to regularized LSTMs for feature
extraction, followed by fully connected layers for classification. The
softmax scores of the two streams are then combined by decision-level
fusion, which yields the final prediction. Extensive experiments are
conducted to evaluate performance on four standard multimodal benchmark
datasets: UP-Fall Detection, UTD-MHAD, Berkeley-MHAD, and C-MHAD. The
proposed system achieves accuracies of 96.9%, 97.6%, 98.7%, and 95.9%,
respectively, on these datasets. These results are far superior to
current state-of-the-art methods
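The decision-level fusion of the two streams' softmax scores can be sketched as below; the equal-weight average is an assumed fusion rule for illustration (a product rule or learned weights are equally plausible):

```python
import numpy as np

def softmax(logits):
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())    # shift for numerical stability
    return e / e.sum()

def fuse(vision_logits, inertial_logits, w=0.5):
    # weighted average of per-class softmax scores from the two streams;
    # the final prediction is the arg-max of the fused scores
    fused = w * softmax(vision_logits) + (1 - w) * softmax(inertial_logits)
    return int(np.argmax(fused))
```

Because each stream contributes a full probability vector rather than a hard label, a confident inertial prediction can override an uncertain vision prediction, and vice versa.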
Neuromorphic hardware for somatosensory neuroprostheses
In individuals with sensory-motor impairments, missing limb functions can be restored using neuroprosthetic devices that interface directly with the nervous system. However, restoring a natural tactile experience through electrical neural stimulation requires complex encoding strategies, which are presently limited by bandwidth constraints in the tactile sensations they can convey or restore. Neuromorphic technology, which mimics the natural behavior of neurons and synapses, holds promise for replicating the encoding of natural touch and could inform neurostimulation design. In this perspective, we propose that incorporating neuromorphic technologies into neuroprostheses could be an effective approach for developing more natural human-machine interfaces, potentially leading to advancements in device performance, acceptability, and embeddability. We also highlight ongoing challenges and the actions required to facilitate the future integration of these advanced technologies
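The kind of spike-based touch encoding that neuromorphic hardware mimics can be illustrated with a leaky integrate-and-fire (LIF) neuron turning a pressure trace into a spike train. This is a toy sketch with assumed parameters, not the stimulation strategy of any particular neuroprosthesis:

```python
def lif_encode(signal, dt=1e-3, tau=20e-3, v_th=1.0, gain=100.0):
    # leaky integrate-and-fire encoding of a pressure trace into a spike
    # train: membrane voltage leaks toward zero and integrates the input;
    # crossing the threshold emits a spike and resets the voltage
    v, spikes = 0.0, []
    for x in signal:
        v += dt * (-v / tau + gain * x)   # leaky membrane integration
        if v >= v_th:
            spikes.append(1)
            v = 0.0                       # reset after a spike
        else:
            spikes.append(0)
    return spikes
```

Stronger pressure drives the membrane to threshold more often, so stimulus intensity is carried by spike rate, which is the sparse, event-driven code that neuromorphic stimulation aims to reproduce.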
Error-triggered Three-Factor Learning Dynamics for Crossbar Arrays
Recent breakthroughs suggest that local, approximate gradient descent
learning is compatible with Spiking Neural Networks (SNNs). Although SNNs can
be scalably implemented using neuromorphic VLSI, an architecture that can learn
in-situ as accurately as conventional processors is still missing. Here, we
propose a subthreshold circuit architecture designed through insights obtained
from machine learning and computational neuroscience that could achieve such
accuracy. Using a surrogate gradient learning framework, we derive local,
error-triggered learning dynamics compatible with crossbar arrays and the
temporal dynamics of SNNs. The derivation reveals that circuits used for
inference and training dynamics can be shared, which simplifies the circuit and
suppresses the effects of fabrication mismatch. We present SPICE simulations
in the XFAB 180 nm process, as well as large-scale simulations of spiking
neural networks on event-based benchmarks, including a gesture recognition
task. Our results show that the number of updates can be reduced a
hundred-fold compared to the standard rule while achieving performance on
par with the state-of-the-art
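The shape of an error-triggered three-factor update can be sketched as follows. The surrogate derivative, trace definitions, and threshold value are illustrative assumptions, not the paper's derived circuit dynamics:

```python
import numpy as np

def surrogate_grad(u, u_th=1.0, beta=2.0):
    # smooth surrogate for the derivative of the spike nonlinearity,
    # peaked at the firing threshold u_th
    return 1.0 / (beta * np.abs(u - u_th) + 1.0) ** 2

def error_triggered_update(w, pre_trace, u_post, error, lr=0.1, theta=0.05):
    # three factors: presynaptic trace, postsynaptic surrogate gradient,
    # and a top-down error signal; a neuron's weights are written only
    # when its error crosses the threshold, which is what cuts the
    # number of updates relative to the standard rule
    trigger = np.abs(error) > theta
    dw = lr * np.outer(error * surrogate_grad(u_post), pre_trace)
    return w + dw * trigger[:, None], int(trigger.sum())
```

Gating the write on the error magnitude is what makes the rule crossbar-friendly: sub-threshold errors cost no programming pulses, so most time steps touch no weights at all.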
- …