Systematic Review of Intelligent Algorithms in Gait Analysis and Prediction for Lower Limb Robotic Systems
The rate of development of robotic technologies has been meteoric, as a result of compounded advancements in hardware and software. Amongst these robotic technologies are active exoskeletons and orthoses, used in the assistive and rehabilitative fields. Artificial intelligence techniques are increasingly being utilised in gait analysis and prediction. This review paper systematically explores the current use of intelligent algorithms in gait analysis for robotic control, specifically the control of active lower limb exoskeletons and orthoses. Two databases, IEEE and Scopus, were screened for papers published between 1989 and May 2020. A total of 41 papers met the eligibility criteria and were included in this review. Of the identified studies, 66.7% used classification models for the classification of gait phases and locomotion modes, while 33.3% implemented regression models for the estimation/prediction of kinematic parameters, such as joint angles and trajectories, and kinetic parameters, such as moments and torques. Deep learning algorithms were deployed in ∼15% of the machine learning implementations. Other methodological parameters were also reviewed, such as sensor selection and the sample sizes used for training the models.
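The review's split between classification models (discrete gait phases) and regression models (continuous kinematics) can be illustrated with a deliberately minimal sketch; the threshold, the linear weights, and the pressure values below are all invented for illustration and stand in for the far richer models the surveyed papers fit:

```python
# Classification: discrete gait phase from a toy foot-pressure feature
def gait_phase(heel_pressure, threshold=0.2):
    """Stance when the heel loads the ground, swing otherwise."""
    return "stance" if heel_pressure > threshold else "swing"

# Regression: continuous kinematic estimate (knee angle in degrees)
def knee_angle(gait_cycle_pct, w0=5.0, w1=0.5):
    """A linear stand-in for the regression models surveyed; real
    systems fit learned models to multi-sensor features."""
    return w0 + w1 * gait_cycle_pct

print(gait_phase(0.8))    # → stance
print(knee_angle(50.0))   # → 30.0
```

The distinction matters for control: a classifier's discrete output typically switches controller modes, while a regressor's continuous output can drive a joint trajectory directly.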
On the Utility of Representation Learning Algorithms for Myoelectric Interfacing
Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer: a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms.
Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG signals from multi-articulate gestures, intended to reduce training burden.
Biomechatronics: Harmonizing Mechatronic Systems with Human Beings
This eBook provides a comprehensive treatise on modern biomechatronic systems centred around human applications. A particular emphasis is given to exoskeleton designs for assistance and training with advanced interfaces in human-machine interaction. Some of these designs are validated with experimental results, which the reader will find very informative as building blocks for designing such systems. This eBook is ideally suited to those researching in the biomechatronics area with bio-feedback applications, or those involved in high-end research on man-machine interfaces. It may also serve as a textbook for biomechatronic design at the post-graduate level.
Fused mechanomyography and inertial measurement for human-robot interface
Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion.
Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time.
This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMU to derive body kinematics in real-time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated using specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled.
Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
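The pipeline described above, per-channel features extracted from MMG windows followed by LDA classification, can be sketched in plain Python. This is a toy illustration, not the thesis's implementation: it uses RMS features, a shared diagonal covariance (a simplification of full LDA), and invented gesture data:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one MMG channel window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def extract_features(sample):
    """One RMS feature per MMG channel."""
    return [rms(ch) for ch in sample]

def fit_lda(X, y):
    """LDA with shared *diagonal* covariance: per-class means plus a
    pooled per-feature variance (full LDA pools the full covariance)."""
    classes = sorted(set(y))
    means = {c: [sum(x[j] for x, t in zip(X, y) if t == c) /
                 sum(1 for t in y if t == c)
                 for j in range(len(X[0]))] for c in classes}
    var = [sum((x[j] - means[t][j]) ** 2 for x, t in zip(X, y)) /
           max(len(X) - len(classes), 1) for j in range(len(X[0]))]
    return means, [max(v, 1e-9) for v in var]  # guard zero variance

def predict(model, x):
    """Pick the class whose mean is nearest in Mahalanobis distance."""
    means, var = model
    return min(means, key=lambda c: sum((xi - mi) ** 2 / vi
               for xi, mi, vi in zip(x, means[c], var)))

# Toy raw MMG: 6 channels, each a short oscillation of amplitude a
def window(amps):
    return [[a, -a, a, -a] for a in amps]

fist = [window([0.9, 0.8, 0.1, 0.1, 0.1, 0.1]) for _ in range(5)]
open_hand = [window([0.1, 0.1, 0.9, 0.8, 0.1, 0.1]) for _ in range(5)]
X = [extract_features(s) for s in fist + open_hand]
y = ["fist"] * 5 + ["open"] * 5
model = fit_lda(X, y)
print(predict(model, extract_features(window([0.85, 0.75, 0.15, 0.1, 0.1, 0.1]))))  # → fist
```

A Support Vector Machine would slot into the same pipeline by replacing `fit_lda`/`predict` with a margin-based decision rule over the same feature vectors.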
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also established that arm pose changes the measured signal, and it introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference.
Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment.
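The abstract does not specify the thesis's orientation algorithm, but the general idea of IMU orientation estimation can be sketched with a classic complementary filter: gyroscope integration is smooth but drifts, while the accelerometer's gravity-derived tilt is noisy but drift-free, so the two are blended. All values below are invented:

```python
import math

def acc_tilt(ax, az):
    """Drift-free but noisy pitch estimate from the gravity direction (rad)."""
    return math.atan2(ax, az)

def complementary_filter(samples, dt, alpha=0.98):
    """Blend gyro integration (weight alpha) with accelerometer tilt
    (weight 1 - alpha) at every time step."""
    angle = 0.0
    for gyro_rate, ax, az in samples:
        angle = alpha * (angle + gyro_rate * dt) + (1 - alpha) * acc_tilt(ax, az)
    return angle

# Stationary IMU pitched 45 deg: gyro reads ~0, gravity splits between x and z
g = 0.7071
samples = [(0.0, g, g)] * 500
print(math.degrees(complementary_filter(samples, dt=0.01)))  # converges toward 45
```

The single gain `alpha` sets the crossover between trusting the gyro (short-term) and the accelerometer (long-term); more sophisticated estimators replace this fixed blend with an adaptive one.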
Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues.
There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and as the desire for a simple, universal interface increases. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
Deep learning approach to control of prosthetic hands with electromyography signals
Natural muscles provide mobility in response to nerve impulses.
Electromyography (EMG) measures the electrical activity of muscles in response
to a nerve's stimulation. In the past few decades, EMG signals have been used
extensively in the identification of user intention to potentially control
assistive devices such as smart wheelchairs, exoskeletons, and prosthetic
devices. In the design of conventional assistive devices, developers optimize
multiple subsystems independently. Feature extraction and feature description
are essential subsystems of this approach. Therefore, researchers proposed
various hand-crafted features to interpret EMG signals. However, the
performance of conventional assistive devices is still unsatisfactory. In this
paper, we propose a deep learning approach to control prosthetic hands with raw
EMG signals. We use a novel deep convolutional neural network to eschew the
feature-engineering step. Removing the feature extraction and feature
description is an important step toward the paradigm of end-to-end
optimization. Fine-tuning and personalization are additional advantages of our
approach. The proposed approach is implemented in Python with the TensorFlow
deep learning library, and it runs in real time on the general-purpose graphics
processing unit of the NVIDIA Jetson TX2 developer kit. Our results demonstrate
the ability of our system to predict finger positions from raw EMG signals. We
anticipate our EMG-based control system to be a starting point to design more
sophisticated prosthetic hands. For example, a pressure measurement unit can be
added to transfer the perception of the environment to the user. Furthermore,
our system can be modified for other prosthetic devices.
Comment: Conference. Houston, Texas, USA. September, 201
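The abstract does not give the network's architecture, but the end-to-end idea, learned filters applied directly to raw EMG instead of hand-crafted features, rests on the 1-D convolution stage sketched below in plain Python. The kernel here is hand-picked for illustration; in the paper's approach such kernels are learned by backpropagation:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_avg(xs):
    return sum(xs) / len(xs)

def emg_feature(signal, kernel):
    """One 'learned feature': conv -> ReLU -> global average pooling.
    A real CNN stacks many such filters and layers before a final
    regression head that outputs finger positions."""
    return global_avg(relu(conv1d(signal, kernel)))

# Toy raw EMG window and an illustrative difference kernel
raw = [0.0, 0.1, -0.1, 0.9, -0.8, 0.7, -0.9, 0.1, 0.0]
print(emg_feature(raw, kernel=[1.0, -1.0]))
```

Because the filters are trained jointly with the rest of the network, the feature extraction and feature description subsystems mentioned above disappear as separate design steps.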
Machine Learning for Biomedical Application
Biomedicine is a multidisciplinary branch of medical science comprising many scientific disciplines, e.g., biology, biotechnology, bioinformatics, and genetics; moreover, it covers various medical specialties. In recent years, this field of science has developed rapidly, and a large amount of data has been generated, due to (among other reasons) the processing, analysis, and recognition of a wide range of biomedical signals and images obtained through increasingly advanced medical imaging devices. The analysis of these data requires advanced IT methods, including those related to artificial intelligence, and in particular machine learning. This is a summary of the Special Issue “Machine Learning for Biomedical Application”, briefly outlining selected applications of machine learning in the processing, analysis, and recognition of biomedical data, mostly regarding biosignals and medical images.
Learning Biosignals with Deep Learning
The healthcare system, ubiquitously recognized as one of the most influential systems in society, has faced new challenges since the start of the decade. The myriad of physiological data generated by individuals, namely within the healthcare system, places a burden on physicians and reduces the effectiveness of patient data collection. Information systems and, in particular, novel deep learning (DL) algorithms offer a promising way to tackle this problem.
This thesis aims to have an impact on biosignal research and industry by presenting DL solutions that could empower this field. For this purpose, an extensive study of how to incorporate and implement Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Fully Connected Networks in biosignal studies is discussed.
Different architecture configurations were explored for signal processing and decision making and were implemented in three scenarios: (1) biosignal learning and synthesis; (2) electrocardiogram (ECG) biometric systems; and (3) ECG anomaly detection systems. In (1), an RNN-based architecture was able to replicate three types of biosignals autonomously with a high degree of confidence. In (2), three CNN-based architectures and an RNN-based architecture (the same as in (1)) were applied to biometric identification, reaching accuracies above 90% for electrode-based datasets (Fantasia, ECG-ID and MIT-BIH) and 75% for an off-the-person dataset (CYBHi), and to biometric authentication, achieving Equal Error Rates (EER) of near 0% for Fantasia and MIT-BIH and below 4% for CYBHi. In (3), an abstraction of the healthy, clean ECG signal was learned and deviations from it detected, tested in two scenarios: in the presence of noise, using an autoencoder and a fully connected network (reaching 99% accuracy for binary classification and 71% for multi-class); and for arrhythmia events, by adding an RNN to the previous architecture (57% accuracy and 61% sensitivity).
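Scenario (3) rests on a simple principle: a model trained only on healthy signals reconstructs healthy beats well and anomalous beats poorly, so a threshold on reconstruction error flags anomalies. The sketch below keeps that thresholding logic but, for brevity, replaces the trained autoencoder with a mean template of the healthy beats; all beat values are invented:

```python
def mean_template(beats):
    """Average of healthy training beats; stands in for a trained
    autoencoder's reconstruction of a normal beat."""
    n = len(beats)
    return [sum(b[i] for b in beats) / n for i in range(len(beats[0]))]

def reconstruction_error(beat, template):
    """Mean squared error between a beat and its 'reconstruction'."""
    return sum((x - t) ** 2 for x, t in zip(beat, template)) / len(beat)

def is_anomalous(beat, template, threshold):
    return reconstruction_error(beat, template) > threshold

# Toy beats: a stylized QRS-like bump (values invented)
healthy = [[0, 0.10, 1.00, 0.10, 0], [0, 0.12, 0.95, 0.08, 0]]
template = mean_template(healthy)
normal = [0, 0.11, 0.98, 0.09, 0]
weird = [0, 0.90, 0.10, 0.90, 0]   # morphology unlike the training data
print(is_anomalous(normal, template, threshold=0.01))  # False
print(is_anomalous(weird, template, threshold=0.01))   # True
```

An autoencoder generalizes this by reconstructing each beat through a learned bottleneck rather than comparing against one fixed template, which lets it tolerate normal beat-to-beat variation.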
In sum, these systems are shown to be capable of producing novel results. The incorporation of several AI systems into one could prove to be the next generation of preventive medicine: with access to different physiological and anatomical states, the machines could produce better-informed solutions to the issues that one may face in the future, increasing the performance of autonomous preventive systems that could be used in everyday life in remote places where access to medicine is limited. These systems will also help the study of signal behaviour in real-life contexts, as explainable AI could provide this perception and link the inner states of a network with
the biological traits.
Multi-modal EMG-based hand gesture classification for the control of a robotic prosthetic hand
Upper-limb myoelectric prosthesis control utilises electromyography (EMG)
signals as input and applies statistical and machine learning techniques to intuitively identify the user’s intended grasp. Surface EMG signals recorded with
electrodes attached to the user’s skin have been successfully used for prosthesis control in controlled lab conditions for decades. However, due to the stochastic and non-stationary nature of the EMG signal, clinical use of pattern-recognition myoelectric control in everyday-life conditions is limited.
This thesis performs an extensive literature review presenting the main causes of the drift of EMG signals over time, ways of detecting such drifts, and possible techniques to counteract their effects in upper limb prosthesis applications. Three approaches are investigated to provide more robust classification performance under conditions of EMG signal drift: improving the classifier, incorporating extra sensory modalities, and utilising transfer learning techniques to improve between-subjects classification performance.
Linear Discriminant Analysis (LDA) is the baseline algorithm in myoelectric grasp classification applications, providing good performance with low computational requirements. However, it assumes Gaussian distributions with a shared covariance between the different classes, and its performance relies on hand-engineered features. Deep Neural Networks (DNNs) have the advantage of learning the features while training the classifier. In this thesis, two deep learning models have been successfully implemented for grasp classification of EMG signals, achieving better performance than the baseline LDA algorithm. Moreover, deep neural networks provide a convenient basis for transferring learned knowledge and improving the adaptation capabilities of the classifier. An adaptation approach is suggested and tested on the inter-subject classification task, demonstrating better performance when utilising pre-trained neural networks. Finally, research has suggested that adding extra sensory modalities alongside EMG, such as Inertial Measurement Unit (IMU) data, improves classification performance compared with training on EMG data alone. In this thesis, ways of incorporating different sensory modalities have been suggested, both for the LDA classifier and the DNNs, demonstrating the benefit of multi-modal grasp classifiers.
The Edinburgh Centre for Robotics and EPSR
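One simple way to realize the multi-modal idea described above is feature-level fusion: EMG features and IMU readings are concatenated into a single vector before being handed to any classifier. The sketch below is illustrative only; the channel count, the choice of RMS features, and all values are invented:

```python
import math

def rms(window):
    """Root-mean-square amplitude of one EMG channel window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def fuse_features(emg_channels, imu_orientation):
    """Feature-level fusion: RMS per EMG channel concatenated with an
    IMU orientation reading (roll, pitch, yaw). The combined vector
    feeds any classifier (LDA, DNN, ...) unchanged."""
    return [rms(ch) for ch in emg_channels] + list(imu_orientation)

# Toy window: 4 EMG channels x 3 samples, plus one orientation reading
emg = [[0.3, -0.3, 0.3], [0.1, -0.1, 0.1],
       [0.5, -0.5, 0.5], [0.0, 0.0, 0.0]]
imu = (0.05, 1.2, -0.4)  # radians, invented
features = fuse_features(emg, imu)
print(len(features))  # → 7 (4 EMG features + 3 IMU features)
```

Fusion can instead happen later, e.g. by combining the decisions of separate per-modality classifiers; feature-level concatenation is merely the simplest option and works with both LDA and DNN back ends.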