Fused mechanomyography and inertial measurement for human-robot interface
Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator as the supervisor with a machine as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion.
Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which causes signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time.
This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled.
Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
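As an illustration of the offline classification step, a minimal sketch of the LDA/SVM comparison on a synthetic stand-in for the MMG feature matrix (the feature dimensions, class structure and all data here are invented; this toy does not reproduce the thesis's reported accuracies):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in MMG feature matrix: 600 windows x 18 features
# (e.g. 6 sensors x 3 time-domain features), 12 gesture classes.
rng = np.random.default_rng(0)
y = rng.integers(0, 12, size=600)
X = rng.normal(size=(600, 18)) + y[:, None] * 0.8  # separable synthetic classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)
results = {}
for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=10.0))]:
    results[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)
print(results)
```

Both classifiers are trained on identical train/test folds so their accuracies are directly comparable, mirroring the thesis's side-by-side evaluation.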
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis establishes that arm pose also changes the measured signal, and introduces a new method of fusing IMU and MMG data to provide classification that is robust to both of these sources of interference.
Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment.
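The thesis's own orientation estimation algorithm is not specified in this abstract; as a generic illustration of IMU orientation estimation, a complementary filter that blends integrated gyroscope rate with the accelerometer's gravity reference might look like:

```python
import numpy as np

def complementary_tilt(gyro, acc, dt=0.01, alpha=0.98):
    """Estimate pitch (rad) from a pitch-rate gyro stream and 3-axis
    accelerometer samples. This is a textbook complementary filter,
    not the thesis's specific algorithm; alpha weights the gyro path."""
    theta = 0.0
    out = []
    for g, a in zip(gyro, acc):
        acc_theta = np.arctan2(a[0], a[2])          # pitch from gravity direction
        theta = alpha * (theta + g * dt) + (1 - alpha) * acc_theta
        out.append(theta)
    return np.array(out)
```

The gyro term tracks fast motion while the accelerometer term corrects slow drift, which is the basic trade-off any IMU orientation estimator must manage.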
Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues.
There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly improve the quality of life of prosthetic users and others.
Development and optimization of a low-cost myoelectric upper limb prosthesis
Tese de Mestrado Integrado (Integrated Master's thesis), Engenharia Biomédica e Biofísica (Engenharia Clínica e Instrumentação Médica), 2022, Universidade de Lisboa, Faculdade de Ciências
In recent years, the increase in the number of accidents, in chronic diseases such as diabetes, and
the impoverishment of certain developing countries have contributed to a significant increase in
prosthesis users. The loss of a limb entails numerous changes in the daily life of each user,
which are amplified when the user loses their hand. Therefore, replacing the hand is an urgent necessity.
Developing upper limb prostheses will allow the re-establishment of the physical and motor functions
of the upper limb as well as a reduction in rates of depression. Therefore, the prosthetic industry has
been reinventing itself and evolving. It is already possible to control a prosthesis through the user's
myoelectric signals, an approach known as pattern recognition control. In addition, additive manufacturing
technologies such as 3D printing have gained strength in prosthetics. The use of this type of technology
allows the product to reach the user much faster and reduces the weight of the devices.
Despite these advances, the rejection rate of this type of device is still high, since most prostheses
available on the market are slow, expensive and heavy. Because of that, academia and institutions have
been investigating ways to overcome these limitations. Nevertheless, the dependence on the number of
acquisition channels is still limiting, since most users do not have a large forearm surface area available
for acquiring myoelectric signals.
This work intends to solve some of these problems and answer the questions posed by
industry and researchers. The main objective is to test whether it is possible to develop a subject-independent, fast and
simple microcontroller. To that end, we recorded data from forty volunteers through the
BIOPAC acquisition system. After that, the signals were filtered through two different processes. The
first was digital filtering and the application of wavelet threshold noise reduction. Later, the signal was
divided into smaller windows (100 and 250 milliseconds) and thirteen features were extracted in the
time domain. During all these steps, the MATLAB® software was used. After extraction, three feature
selection methods were used to optimize the classification process, where machine learning algorithms
are implemented. The classification was divided into different parts. First, the classifier had to
distinguish whether the volunteer was making some movement or was at rest. In the case of detected
movement, the classifier would have to, at a second level, determine whether the volunteer was moving only
one finger or performing a movement that involved the flexion of more than one finger (grip). If the
volunteer was performing a grip on the third level, the classifier would have to identify whether the
volunteer was performing a spherical or triad grip. Finally, to understand the influence of the database
on the classification, two methods were used: cross-validation and split validation.
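The windowing and time-domain feature extraction described above were done in MATLAB; an equivalent Python sketch (showing four classic time-domain features rather than the dissertation's full set of thirteen, with an assumed amplitude threshold for zero crossings) is:

```python
import numpy as np

def window_signal(x, fs, win_ms):
    """Split a 1-D signal into non-overlapping windows of win_ms milliseconds."""
    n = int(fs * win_ms / 1000)
    usable = (len(x) // n) * n
    return x[:usable].reshape(-1, n)

def td_features(w, thresh=0.01):
    """Four classic time-domain sEMG features per window: mean absolute
    value (MAV), waveform length (WL), zero crossings (ZC, with an
    assumed amplitude threshold) and root mean square (RMS)."""
    mav = np.mean(np.abs(w), axis=1)
    wl = np.sum(np.abs(np.diff(w, axis=1)), axis=1)
    zc = np.sum((w[:, :-1] * w[:, 1:] < 0) &
                (np.abs(np.diff(w, axis=1)) > thresh), axis=1)
    rms = np.sqrt(np.mean(w ** 2, axis=1))
    return np.column_stack([mav, wl, zc, rms])
```

With a 1 kHz sampling rate, the dissertation's 100 ms and 250 ms windows correspond to 100 and 250 samples per row of the windowed matrix.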
After analysing the results, the e-NABLE Unlimbited arm was printed on The Original Prusa i3
MK3 using polylactic acid (PLA).
This dissertation showed that the results obtained with the 250-millisecond window were better than
those obtained with the 100-millisecond window. In general, the best classifier was the K-Nearest
Neighbours (KNN) with k=2, except at the first level, where LDA performed best. The best results were obtained for
the first classification level, with an accuracy greater than 90%. Although the results obtained for the
second and third levels were close to 80%, it was concluded that it was not possible to develop a
microcontroller dependent on only one acquisition channel. These results agree with the anatomical
characteristics, since the movements studied originate from the same muscle group. The cross-validation results were
lower than those obtained with the train-test methodology, which allowed us to conclude that the inter-subject variability significantly affects the classification performance.
Furthermore, both the dominant and non-dominant arms were used in this work, which also increased
the discrepancy between signals. Indeed, the results showed that it is not possible to develop a
microcontroller adaptable to all users. Therefore, in the future, the best path will be to opt for
customization of the prototype. In order to test the implementation of a microcontroller in the printed model, it was necessary to design a support structure in SolidWorks to hold the motors used
to flex the fingers and the Arduino that controls them. Consequently, the e-NABLE model was re-adapted, making it possible to develop a clinical training prototype. Even though it is a training
prototype, it is lighter than those on the market and cheaper.
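The gap between split validation and cross-validation reported above stems from inter-subject variability. A synthetic sketch (invented subjects, offsets and class structure) of why accuracy drops when test subjects are unseen during training:

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GroupKFold, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic sEMG features for 8 hypothetical subjects: each subject's
# feature distribution is shifted by a subject-specific offset to mimic
# inter-subject variability (all numbers here are invented).
rng = np.random.default_rng(1)
X_parts, y, groups = [], [], []
for s in range(8):
    offset = rng.normal(scale=4.0, size=13)          # subject "fingerprint"
    for c in range(3):                               # three movement classes
        X_parts.append(rng.normal(size=(30, 13)) + c * 1.5 + offset)
        y += [c] * 30
        groups += [s] * 30
X, y, groups = np.vstack(X_parts), np.array(y), np.array(groups)

knn = KNeighborsClassifier(n_neighbors=2)

# Split validation: train and test windows come from the same subjects.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
split_acc = accuracy_score(y_te, knn.fit(X_tr, y_tr).predict(X_te))

# Grouped cross-validation: test subjects are unseen during training.
cv_acc = cross_val_score(knn, X, y, groups=groups,
                         cv=GroupKFold(n_splits=4)).mean()
print(round(split_acc, 3), round(cv_acc, 3))
```

When test subjects share their offsets with the training set, nearest neighbours come from the same subject and accuracy stays high; grouping by subject removes that shortcut.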
The objectives of this work have been fulfilled and many answers have been given. However, there is always room for improvement. Although this dissertation has some limitations, it has certainly contributed to clarifying many of the doubts that still exist in the scientific community and will hopefully help to further develop the prosthetic industry.

In recent years, the rise in the number of accidents, in chronic diseases such as diabetes, and the impoverishment of certain developing countries have contributed to a significant increase in the number of prosthesis users. The loss of a limb brings countless changes to a user's daily life, and these are amplified when the loss concerns the hand or part of the forearm. The hand is an essential everyday tool, since it is with the hand that basic activities such as bathing, brushing teeth, eating and preparing meals are performed. Replacing this tool is therefore a necessity, not only because it restores the physical and motor functions of the upper limb, but also because it reduces users' dependence on others and, consequently, rates of depression. To meet users' needs, the prosthetics industry has been reinventing itself and evolving, developing increasingly sophisticated upper limb prostheses. Indeed, it is already possible to control a prosthesis by reading and analysing the user's own myoelectric signals, an approach many researchers call pattern recognition control. This type of control is customizable and allows the prosthesis to be adapted to each user. Alongside the use of electrical signals from the user's muscles, 3D printing, an additive manufacturing technique, has gained ground in prosthetics. In recent years researchers have printed numerous models in different materials, from thermoplastics to flexible materials. This technology enables not only fast delivery of the product to the user but also a shorter build time, making the prosthesis lighter and cheaper. Moreover, 3D printing allows more sustainable prototypes, since less material is wasted. Although numerous solutions already exist, the rejection rate of these devices is still quite high, as most prostheses on the market, particularly myoelectric ones, are slow, expensive and heavy. Even though some studies address these technologies and their scientific evolution, the number of electrodes used is still significant. Since most users do not have enough forearm surface area for the acquisition of myoelectric signals, academic work has not proved as valuable to the prosthetics industry as initially promised.

This work aims to solve some of these problems and to answer the questions most often posed by industry and researchers, so that in the future the number of users may grow, along with their satisfaction with the product. To this end, myoelectric signals were collected from forty volunteers using the BIOPAC acquisition system. After collection, the signals of six volunteers were filtered using two different processes: the first used digital filters, and the second applied the wavelet transform for noise reduction. The signal was then segmented into smaller windows of 100 and 250 milliseconds and thirteen features were extracted in the time domain. To optimize the classification process, three feature selection methods were applied. Classification was divided into three levels, at each of which two machine learning algorithms were implemented individually. At the first level, the objective was to distinguish the moments when the volunteer was moving from those at rest. If the classifier output was the movement class, it then had to determine, at a second level, whether the volunteer was moving a single finger or performing a movement involving the flexion of more than one finger (a grip). In the case of a grip, the third level had to identify whether the volunteer was performing a spherical or a triad grip. For all classification levels, results were obtained both for cross-validation and for the train-test method, in which 70% of the data were used for training and 30% for testing. After analysing the results, one of the e-NABLE community models was chosen and printed on The Original Prusa i3 MK3S printer in polylactic acid (PLA). To test the implementation of a microcontroller in a model that originally depends on elbow flexion by the user, it was necessary to design a support structure to hold both the motors used to flex the fingers and the Arduino. The support was printed with the same material on the same printer.

The results showed that the 250-millisecond window performed best and that, in general, the best classifier was K-Nearest Neighbours (KNN) with k=2, except at the first level, where Linear Discriminant Analysis (LDA) performed best. The best results were obtained at the first classification level, with an accuracy above 90%. Although the results for the second and third levels were close to 80%, it was concluded that it was not possible to develop a microcontroller that depends on a single acquisition channel. This was expected, since the movements studied originate from the same muscle group and inter-subject variability is a significant factor. The cross-validation results were less accurate than those obtained with the train-test methodology, which led to the conclusion that inter-subject variability significantly affects the classification process. In addition, volunteers used both the dominant and the non-dominant arm, which further increased the discrepancy between the collected signals. Indeed, the results showed that it is not possible to develop a microcontroller adaptable to all users, so in the future the best path will be to customize the prototype. With this in mind, the prototype developed in this work will serve only as a training prototype for the user. Even so, it is much lighter and much cheaper than those on the market, and it allows some of the components that will later form part of the complete prosthesis to be tested and controlled, preventing accidents.

Notwithstanding the fulfilment of the objectives of this work and the many answers it provided, there is always room for improvement. Owing to time constraints, it was not possible to test the microcontroller in real time or to perform mechanical tests of the flexibility and strength of the prosthesis materials. It would therefore be interesting in the future to run real-time performance tests and to subject the prosthesis to extreme conditions, so that the elastic tension and the tension of the pins are tested. Testing the safety mechanisms of the prosthesis when the user must exert considerable force is also essential; testing these parameters will prevent failures that could hurt the user or damage the objects the prosthesis interacts with. Finally, the cosmetic appearance of prostheses needs to improve. To that end, polymers with a colour close to the user's skin tone could be used; alternatively, the user's healthy arm could be scanned and flexible materials used for the joints and fingers which, together with a palm of resistant thermoplastics and a microcontroller, would allow fairly natural, near-biological movement.

In short, despite some limitations, this work has contributed to clarifying many of the doubts that still existed in the scientific community and will help to develop the prosthetics industry.
Support vector machines to detect physiological patterns for EEG and EMG-based human-computer interaction: a review
Support vector machines (SVMs) are widely used classifiers for detecting physiological patterns in human-computer interaction (HCI). Their success is due to their versatility, robustness and large availability of free dedicated toolboxes. Frequently in the literature, insufficient details about the SVM implementation and/or parameter selection are reported, making it impossible to reproduce study analysis and results. In order to perform an optimized classification and report a proper description of the results, it is necessary to have a comprehensive critical overview of the applications of SVM. The aim of this paper is to provide a review of the usage of SVM in the determination of brain and muscle patterns for HCI, by focusing on electroencephalography (EEG) and electromyography (EMG) techniques. In particular, an overview of the basic principles of SVM theory is outlined, together with a description of several relevant literature implementations. Furthermore, details concerning reviewed papers are listed in tables and statistics of SVM use in the literature are presented. Suitability of SVM for HCI is discussed and critical comparisons with other classifiers are reported.
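In the reproducibility spirit of this review, a minimal sketch of selecting and then explicitly reporting SVM hyperparameters (synthetic data; the parameter grid and preprocessing are illustrative assumptions, not the review's recommendations):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic two-class "physiological feature" data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Reporting the kernel, C, gamma and the scaling applied is exactly the
# detail the review finds missing in many papers.
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    param_grid={"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Printing `best_params_` alongside the cross-validated score makes the classifier configuration reproducible from the published text alone.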
Kernel density estimation of electromyographic signals and ensemble learning for highly accurate classification of a large set of hand/wrist motions
The performance of myoelectric control highly depends on the features extracted from surface electromyographic (sEMG) signals. We propose three new sEMG features based on the kernel density estimation. The trimmed mean of density (TMD), the entropy of density, and the trimmed mean absolute value of derivative density were computed for each sEMG channel. These features were tested for the classification of single tasks as well as of two tasks concurrently performed. For single tasks, correlation-based feature selection was used, and the features were then classified using linear discriminant analysis (LDA), non-linear support vector machines, and multi-layer perceptron. The eXtreme gradient boosting (XGBoost) classifier was used for the classification of two movements simultaneously performed. The second and third versions of the Ninapro dataset (conventional control) and Ameri’s movement dataset (simultaneous control) were used to test the proposed features. For the Ninapro dataset, the overall accuracy of LDA using the TMD feature was 98.99 ± 1.36% and 92.25 ± 9.48% for able-bodied and amputee subjects, respectively. Using ensemble learning of the three classifiers, the average macro and micro-F-score, macro recall, and precision on the validation sets were 98.23 ± 2.02, 98.32 ± 1.93, 98.32 ± 1.93, and 98.88 ± 1.31%, respectively, for the intact subjects. The movement misclassification percentage was 1.75 ± 1.73 and 3.44 ± 2.23 for the intact subjects and amputees. The proposed features were significantly correlated with the movement classes [Generalized Linear Model (GLM); P-value < 0.05]. An accurate online implementation of the proposed algorithm was also presented. For the simultaneous control, the overall accuracy was 99.71 ± 0.08 and 97.85 ± 0.10 for the XGBoost and LDA classifiers, respectively. The proposed features are thus promising for conventional and simultaneous myoelectric control.
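A plausible reading of the kernel-density idea behind the TMD feature can be sketched as follows (the paper's exact definition, trimming fraction and evaluation grid are not given here, so this is an assumption-laden illustration, not the published formula):

```python
import numpy as np
from scipy.stats import gaussian_kde

def trimmed_mean_density(window, trim=0.25, n_grid=256):
    """Estimate the amplitude density of one sEMG window with a Gaussian
    KDE, then average the density values after trimming the lowest and
    highest fractions. Parameters (trim, n_grid) are illustrative."""
    kde = gaussian_kde(window)
    grid = np.linspace(window.min(), window.max(), n_grid)
    dens = np.sort(kde(grid))
    k = int(trim * n_grid)
    return dens[k:n_grid - k].mean()
```

Because the density integrates to one, low-variance (quiet) windows concentrate their density and yield larger feature values than high-variance (active) ones, giving the feature discriminative power over contraction level.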
Algorithms for Neural Prosthetic Applications
In the last 15 years, there has been a significant increase in the number of motor neural prostheses used for restoring limb function lost due to neurological disorders or accidents. The aim of this technology is to enable patients to control a motor prosthesis using their residual neural pathways (central or peripheral). Recent studies in non-human primates and humans have shown the possibility of controlling a prosthesis for accomplishing varied tasks such as self-feeding, typing, reaching, grasping, and performing fine dexterous movements. A neural decoding system mainly comprises three components: (i) sensors to record neural signals, (ii) an algorithm to map neural recordings to upper limb kinematics and (iii) a prosthetic arm actuated by control signals generated by the algorithm. Machine learning algorithms that map input neural activity to the output kinematics (like finger trajectory) form the core of the neural decoding system. The choice of the algorithm is thus mainly imposed by the neural signal of interest and the output parameter being decoded. The various parts of a neural decoding system are neural data, feature extraction, feature selection, and the machine learning algorithm. There have been significant advances in the field of neural prosthetic applications, but there are challenges for translating a neural prosthesis from a laboratory setting to a clinical environment. To achieve a fully functional prosthetic device with maximum user compliance and acceptance, these factors need to be addressed and taken into consideration. Three challenges in developing robust neural decoding systems were addressed by exploring neural variability in the peripheral nervous system for dexterous finger movements, feature selection methods based on clinically relevant metrics and a novel method for decoding dexterous finger movements based on ensemble methods.
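As an illustration of the ensemble-based decoding mentioned above, a soft-voting ensemble over three base decoders on a synthetic stand-in for neural features (the feature layout, class count and all numbers are invented):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in "neural feature" matrix (e.g. per-channel firing rates) with
# five hypothetical finger-movement classes; channel c is boosted for class c.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 5, size=300)
X[np.arange(300), y] += 2.0

# Soft voting averages the base decoders' class probabilities.
ens = VotingClassifier(
    estimators=[("lda", LinearDiscriminantAnalysis()),
                ("svm", SVC(probability=True, random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft",
)
acc = cross_val_score(ens, X, y, cv=3).mean()
print(round(acc, 3))
```

Averaging probabilities lets complementary decoders compensate for each other's errors, which is the usual motivation for ensemble decoding of movement intent.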
Understanding Affected Muscle Activity in Children with Unilateral Congenital Below-Elbow Deficiency for Intuitive Control of Dexterous Prostheses
There are many complex factors that will affect whether children with a unilateral congenital below-elbow deficiency (UCBED) will use a prosthetic limb to interact within their environment. Children face higher rates of prosthesis abandonment at 35-45%, compared to adults at 23-26%. Ultimately, for a child to wear and use their prosthesis, it must facilitate the effective performance of daily tasks and promote healthy social interactions. Although beginning to emerge, multiarticulate upper limb prostheses for children remain sparse despite the continued advancement of mechatronic technologies that have benefited adults with upper limb amputations. In contrast, pediatric devices typically provide a single open-close grasp (if a grasping function is available at all) and often offer non-anthropomorphic appearances, falling short of meeting the criteria essential to prosthesis adoption. Moreover, this population presents unique challenges, as they were born never having actuated a hand, and with forearm musculature that never fully developed, a stark departure from those with acquired limb absence. Due to the lack of investigation into how children with UCBED actuate their muscles, coupled with the limited advancement in pediatric upper limb devices, the effective translation of dexterous prostheses remains a prominent issue. This dissertation builds the fundamental groundwork necessary for the effective translation of dexterous prosthetic hands for children with UCBED. It begins with an examination of how typically developing children use their hands to interact within their environment to inform dexterous device development (Chapter 3). Here we found that children, like adults, use a small subset of hand movements to perform object manipulation in home settings. Subsequently, a child-sized dexterous prosthetic hand was developed to serve as a dedicated research platform (Chapter 4).
A thorough benchmark of this research platform was performed to validate its functional grasping ability, and it was shown to be a robust device within a research environment. Prior to using this device, a cohort of children with UCBED was recruited, and an in-depth analysis of state-of-the-art prosthetic control, namely surface electromyography (sEMG) as a measure of affected muscle electrical activity, was conducted (Chapter 5). Upon investigation, participants exhibited a measurable degree of consistency and repeatability of their affected musculature as obtained through sEMG when they attempted missing hand and wrist movements. Furthermore, through tuning features, i.e., sEMG characteristics, and classification algorithms, we found a novel generalized feature set that improved classification of hand motor intent (Chapter 6). Moreover, we benchmarked the real-time performance of these children in executing hand movements, adding a translational dimension to our findings (Chapter 7). This forms a crucial foundation for understanding muscle actuation and use of advanced prostheses among children with UCBED.
Through this work, we have laid the foundation to understand the capacity of children with UCBED to control their affected musculature. This begins to address the translational aspect of child-size dexterous upper limb devices and has the potential to remove barriers to device acceptance
Pattern recognition-based real-time myoelectric control for anthropomorphic robotic systems : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mechatronics at Massey University, Manawatū, New Zealand
Advanced human-computer interaction (HCI) or human-machine interaction (HMI) aims to help
humans interact with computers smartly. Biosignal-based technology is one of the most promising
approaches in developing intelligent HCI systems. As a means of convenient and non-invasive
biosignal-based intelligent control, myoelectric control identifies human movement intentions from
electromyogram (EMG) signals recorded on muscles to realise intelligent control of robotic systems.
Although myoelectric control has been researched for more than half a century, commercial
myoelectric-controlled devices are still mostly based on early threshold-based methods. The
emerging pattern recognition-based myoelectric control has remained an active research topic in
laboratories because of insufficient reliability and robustness. This research focuses on pattern
recognition-based myoelectric control. Up to now, most of the effort in pattern recognition-based
myoelectric control research has been invested in improving EMG pattern classification accuracy.
However, high classification accuracy cannot directly lead to high controllability and usability for
EMG-driven systems. This suggests that a complete system that is composed of relevant modules,
including EMG acquisition, pattern recognition-based gesture discrimination, output equipment and its
controller, is desirable and helpful as a developing and validating platform that is able to closely emulate
real-world situations to promote research in myoelectric control.
This research aims at investigating feasible and effective EMG signal processing and pattern
recognition methods to extract useful information contained in EMG signals to establish an intelligent,
compact and economical biosignal-based robotic control system. The research work includes in-depth
study on existing pattern recognition-based methodologies, investigation on effective EMG signal
capturing and data processing, EMG-based control system development, and anthropomorphic robotic
hand design. The contributions of this research are mainly in the following three aspects:
Developed precision electronic surface EMG (sEMG) acquisition methods that are able to
collect high quality sEMG signals. The first method was designed in a single-ended signalling
manner by using monolithic instrumentation amplifiers to determine and evaluate the analog
sEMG signal processing chain architecture and circuit parameters. This method was then
evolved into a fully differential analog sEMG detection and collection method that uses
common commercial electronic components to implement all analog sEMG amplification and
filtering stages in a fully differential way. The proposed fully differential sEMG detection and collection method is capable of offering a higher signal-to-noise ratio in noisy environments
than the single-ended method by making full use of inherent common-mode noise rejection
capability of balanced signalling. To the best of my knowledge, no similar methods have been
reported that implement the entire analog sEMG amplification and filtering chain
in a fully differential way using common commercial electronic components.
Investigated and developed a reliable EMG pattern recognition-based real-time gesture
discrimination approach. Necessary functional modules for real-time gesture discrimination
were identified and implemented using appropriate algorithms. Special attention was paid to
the investigation and comparison of representative features and classifiers for improving
accuracy and robustness. A novel EMG feature set was proposed to improve the performance
of EMG pattern recognition.
Designed an anthropomorphic robotic hand construction methodology for myoelectric control
validation on a physical platform that approximates real-world conditions. The natural anatomical
structure of the human hand was imitated to kinematically model the robotic hand. The
proposed robotic hand is a highly underactuated mechanism, featuring 14 degrees of freedom
and three degrees of actuation.
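Underactuation of this kind means a small number of actuator inputs drives many coupled joints. The sketch below maps 3 actuation inputs to 14 joint angles through a fixed coupling matrix; the 14-DOF/3-DOA split follows the thesis, but the joint grouping and coupling ratios are illustrative assumptions, not the thesis's actual design.

```python
# joints: thumb (2), index (3), middle (3), ring (3), little (3) = 14
# actuators (assumed): thumb tendon, index tendon, shared tendon for
# the middle/ring/little fingers
COUPLING = [
    # thumb  index  shared
    [1.0, 0.0, 0.0],  # thumb MCP
    [0.8, 0.0, 0.0],  # thumb IP
    [0.0, 1.0, 0.0],  # index MCP
    [0.0, 0.9, 0.0],  # index PIP
    [0.0, 0.6, 0.0],  # index DIP
] + [[0.0, 0.0, r] for r in (1.0, 0.9, 0.6)] * 3  # middle/ring/little chains

def joint_angles(actuation):
    """Map 3 actuator displacements to 14 coupled joint angles."""
    assert len(actuation) == 3
    return [sum(c * a for c, a in zip(row, actuation)) for row in COUPLING]

angles = joint_angles([0.5, 1.0, 1.0])  # e.g. a power-grasp command
print(len(angles))  # 14 joint angles from only 3 actuation inputs
```

A linear coupling matrix is the simplest model; a physical tendon-driven hand adds joint limits and contact-dependent coupling that this sketch omits.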
This research carried out an in-depth investigation into EMG data acquisition and EMG signal pattern
recognition. A series of experiments was conducted on EMG signal processing and system
development. The final myoelectric-controlled robotic hand system and its testing confirmed
the effectiveness of the proposed methods for surface EMG acquisition and human hand gesture
discrimination. To verify and demonstrate the proposed myoelectric control system, real-time tests were
conducted on the anthropomorphic prototype robotic hand. Currently, the system identifies
five patterns in real time: hand open, hand close, wrist flexion, wrist extension, and the rest
state. With more motion patterns added, the system has the potential to identify further hand
movements. The research has generated a few journal and international conference publications.
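A typical real-time scheme of this kind slides a short analysis window over the streaming signal and smooths the per-window decisions with a majority vote. The sketch below uses a trivial amplitude-threshold stand-in for the trained classifier, and the window/step/vote sizes are hypothetical, not taken from the thesis.

```python
from collections import Counter, deque

# the five patterns the system recognizes
PATTERNS = ["rest", "hand open", "hand close", "wrist flexion", "wrist extension"]

def classify_window(window):
    """Stand-in for the trained pattern classifier: a trivial amplitude
    threshold, purely for illustration. A real classifier would return
    any label from PATTERNS."""
    level = sum(abs(x) for x in window) / len(window)
    return "rest" if level < 0.1 else "hand close"

def realtime_decisions(stream, win=150, step=50, vote=5):
    """Sliding-window classification with majority voting over the last
    few decisions to suppress spurious single-window misclassifications."""
    recent = deque(maxlen=vote)
    out = []
    for start in range(0, len(stream) - win + 1, step):
        recent.append(classify_window(stream[start:start + win]))
        out.append(Counter(recent).most_common(1)[0][0])
    return out

quiet = [0.01] * 300
active = [0.5, -0.5] * 300
print(realtime_decisions(quiet + active)[-1])  # settles on "hand close"
```

The vote length trades responsiveness against stability: a longer vote rejects more glitches but delays transitions between patterns.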
Kohti yläraaja-proteesien ohjausta pintaelektromyografialla (Towards control of upper-limb prostheses with surface electromyography)
The loss of an upper limb is a life-altering accident which makes everyday life more difficult. A multifunctional prosthetic hand with a user-friendly control interface may significantly improve the quality of life of amputees. However, many amputees do not use their prosthetic hand regularly because of its low functionality and poor controllability. This situation calls for the development of versatile prosthetic limbs that allow amputees to perform tasks that are necessary for activities of daily living.
The non-pattern-based control scheme of commercial state-of-the-art prostheses is rather limited and unnatural. Usually, a pair of muscles is used to control one degree of freedom. A promising alternative to the conventional control methods is pattern-recognition-based control, which identifies the different intended hand postures by utilizing the information in the surface electromyography (sEMG) signals. The control of the prosthesis thereby becomes natural and easy.
The objective of this thesis was to find the features that yield the highest classification accuracy in identifying 7 classes of hand postures with a Linear Discriminant Classifier. The sEMG signals were measured on the skin surface of the forearm of 8 able-bodied subjects. The following features were investigated: 16 time-domain features, two time-serial-domain features, the Fast Fourier Transform (FFT), and the Discrete Wavelet Transform (DWT). The second objective of this thesis was to study the effect of the sampling rate on the classification accuracy. A preprocessing technique, Independent Component Analysis (ICA), was also briefly examined. The classification was based on the steady-state signal. The signal processing, features, and classification were implemented in Matlab.
The results of this study suggest that DWT and FFT did not outperform the simple and computationally efficient time-domain features in classification accuracy. Thus, at least in a noise-free environment, high classification accuracy (>90%) can be achieved with a small number of simple TD features. More reliable control may be achieved if the features are selected individually from a subset of the effective features. Using a sampling rate of 400 Hz instead of the commonly used 1 kHz may not only save data processing time and memory in the prosthesis controller but also slightly improve the classification accuracy. ICA was not found to improve the classification accuracy, which may be because the measurement channels were placed relatively far from each other.
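The sampling-rate finding is intuitive for amplitude-domain features: a feature such as mean absolute value averages sample magnitudes, so it is largely insensitive to the sampling rate as long as the signal band is preserved. The sketch below compares MAV of a synthetic burst sampled at 1 kHz and at 400 Hz; the 120 Hz dominant component and decay constant are illustrative assumptions, not parameters from the thesis.

```python
import math

def mav(x):
    """Mean absolute value: an amplitude-domain time-domain feature."""
    return sum(abs(v) for v in x) / len(x)

def synth_burst(fs, duration=0.4, f0=120.0, tau=0.4):
    """Synthetic decaying 'sEMG' burst of given duration, sampled at fs Hz."""
    n = int(fs * duration)
    return [math.sin(2 * math.pi * f0 * k / fs) * math.exp(-k / (fs * tau))
            for k in range(n)]

sig_1k = synth_burst(1000.0)   # commonly used rate
sig_400 = synth_burst(400.0)   # reduced rate studied in the thesis
print(round(mav(sig_1k), 3), round(mav(sig_400), 3))  # nearly equal
```

Frequency-domain features such as FFT bins, by contrast, must be recomputed for the new Nyquist range, which is one plausible reading of why the simple TD features remained competitive at 400 Hz.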