30 research outputs found

    Comparison between low-cost and high-end sEMG sensors for the control of a transradial myoelectric prosthesis

    Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2017.

Amputation can completely change anyone's life. The autonomy to perform everyday tasks, which most of us take for granted, is drastically reduced. Beyond the added difficulty in such tasks, the individual's self-confidence also suffers a severe blow, which can even lead to depression. For all these reasons, the quality of life of a transradial amputee is severely and negatively affected. Fortunately, several types of prosthetic solutions already exist to address the obstacles that follow an amputation, among them myoelectric prostheses. This type of prosthesis can use pattern recognition algorithms to associate patterns observed in sEMG signals from the stump with different hand gestures, offering the transradial amputee the possibility of regaining some autonomy by using a device with functionality similar to that of the human hand. However, there are obstacles related to the accessibility of these devices, namely their price. The prices of commercially available myoelectric prostheses are currently too high, which is a major setback for economically disadvantaged individuals living with transradial amputation. There is therefore a need to reduce production costs and, consequently, the market price. Some efforts are already being made to lower these costs, such as 3D printing of some components. To the same end, it may also be possible to use low-cost sEMG sensors instead of high-end sEMG sensors. However, it must be ensured that the control performance of a myoelectric prosthesis achieved with low-cost sensors can be as good as, or better than, that achieved with high-end sensors. This is precisely the main focus of this dissertation. To perform this comparison, the Myo Armband and OttoBock sensors were used. The Myo Armband is a low-cost commercial bracelet that allows the control of multimedia applications and contains eight sEMG sensors. The OttoBock sensors, on the other hand, are the electrodes of choice for prosthetic applications. These two types of sensors were applied in two distinct sEMG systems, and two experiments were carried out to evaluate the performance of each. In the first experiment, sEMG measurements were taken on the forearms of nine able-bodied subjects using both systems. Different pattern recognition algorithms were used to classify segments of the sEMG signal corresponding to four different hand gestures. Five sensors were used in each system. The experiment was divided into two sessions. The protocol followed in each session was exactly the same, and data acquisition was performed continuously. Each subject was asked to watch a video and replicate each of the gestures shown in it. Each of the four selected gestures was repeated 10 times, for 10 seconds each. This procedure was repeated for each system in each session. Although each gesture was recorded for 10 seconds, only the last 6 seconds were used for classification.
This was done in order to use only the steady-state sEMG signal and not the transient portion originated by the subject's movement between gestures. Different signal processing and feature extraction techniques were applied to the acquired signals. The resulting data were classified by six different algorithms, including Linear Discriminant Analysis, Naïve Bayes, k-Nearest Neighbours and three variations of Support Vector Machines. The purpose of this experiment was therefore to assess which combinations of signal processing techniques and classifiers would be the most favourable for obtaining the highest possible classification accuracy. Two evaluation methods were used to assess the computed accuracies: 10-fold cross-validation and train-test evaluation. Statistical tests on the results showed no significant differences between the two systems, which supports the main hypothesis proposed in this dissertation. However, this hypothesis still needed to be validated with data from transradial amputees, the end users of this type of system. In the second experiment, sEMG measurements were performed on twelve transradial amputees and twelve able-bodied subjects. As in the first experiment, two sessions with an identical protocol were carried out. Compared to the previous experiment, however, the protocol underwent some changes. The number of sensors used in each system was increased to eight and the number of hand gestures was increased to five. Data were acquired discontinuously, and the duration of each acquisition for each gesture was changed to 2 seconds, so as to capture only the steady-state sEMG signal. Fifteen acquisitions were made for each of the five hand gestures, for a total of 75 acquisitions. The combinations of signal processing techniques and classifiers used in this experiment were selected according to the results of the first. In total, four different combinations of signal processing techniques, drawn from the six used in the previous experiment, and two classifiers were used: one of the Support Vector Machine variations and k-Nearest Neighbours. The computed accuracies were again evaluated by means of 10-fold cross-validation and train-test evaluation. The results showed no significant differences between the accuracies obtained with each system, except in the cross-validation results, where the OttoBock system yielded higher accuracies than the Myo Armband system. Even so, the accuracies of the latter proved to be quite competitive. Higher accuracy values were observed for the able-bodied subjects with both systems. This was to be expected, since the lack of daily use of the phantom limb (the sensation that the amputated limb is still present) leads the amputee to "forget" how certain gestures were performed with the amputated hand. Overall, no significant differences were found between the results obtained with the two systems, which supports the main hypothesis proposed in this dissertation. Indeed, the low-cost sensors yielded classification results as good as those obtained with high-end sensors.
It should be noted, however, that this is only possible when certain signal processing techniques are applied to the data obtained with the Myo sensors, namely an envelope and a low-pass filter with a cut-off frequency of 1 Hz. Without any processing, the results obtained with these sensors were rather poor. The OttoBock sensors, on the other hand, yielded very high accuracies even without any signal processing, because they output a signal that is already filtered, enveloped and amplified, i.e. a high-quality signal. Considering the results obtained, applying low-cost sensors to a myoelectric prosthesis control system can indeed provide performance as good as that offered by high-end sensors, provided that appropriate signal processing and an appropriate classifier are used. In short, the sensors currently used in prosthetic applications can be replaced by lower-cost sensors, yielding more affordable devices without compromising the quality of their operation. However, before these sensors are applied in a myoelectric prosthesis, the system must be tested in real time and a robust control strategy must be designed that allows good communication between the user's intentions and the inherent functionalities of the prosthesis.

The loss of a hand due to amputation can completely change anyone's life. The autonomy to perform daily tasks, which most of us take for granted, is drastically reduced, as is one's quality of life. Fortunately, the use of a myoelectric prosthesis can help in overcoming the problems a transradial amputee must face every day. However, the current cost of such devices can limit their accessibility for economically less favoured people. In this dissertation, it is hypothesized that low-cost sensors can perform as well as, or even better than, the high-end sensors currently used to control a myoelectric prosthesis. If this hypothesis can be validated, it may help to decrease the cost of a myoelectric prosthesis and make it more accessible for the final user, the transradial amputee. To compare both types of sensors, two experiments were performed. The first was performed only on able-bodied subjects and had the objective of selecting the best combination of signal processing techniques and classifiers to use on the obtained sEMG signals. In the second experiment, sEMG measurements were performed on both able-bodied and transradially amputated subjects. The signal processing techniques and classifiers that yielded the best results in the first experiment were used to classify the data acquired from all the subjects. Overall, the accuracies obtained with the low-cost sensors, using some of the signal processing techniques, proved not to be significantly different from those obtained with the high-end sensors. This indicates that low-cost sensors in systems for controlling a myoelectric prosthesis might indeed provide performance as good as that of high-end sensors, and may make it possible to lower the overall cost of the currently available devices.
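
Since the dissertation's central result depends on enveloping the raw Myo signals and low-pass filtering them at 1 Hz before feature extraction and classification, a minimal sketch of such a pipeline may help make the processing chain concrete. It uses SciPy and scikit-learn; the filter order, the 200 Hz sampling rate, the time-domain features and the classifier settings are illustrative assumptions rather than the dissertation's exact parameters.

```python
# Illustrative sEMG pipeline: envelope extraction followed by classifier comparison.
# Filter order, sampling rate, window layout and features are sketch assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 200  # assumed sEMG sampling rate in Hz

def envelope(emg, cutoff=1.0, fs=FS, order=2):
    """Rectify the signal and low-pass filter it at `cutoff` Hz."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg), axis=0)

def features(window):
    """Simple per-channel time-domain features (mean absolute value, RMS)."""
    return np.concatenate([np.mean(np.abs(window), axis=0),
                           np.sqrt(np.mean(window ** 2, axis=0))])

def evaluate(windows, labels):
    """10-fold cross-validated accuracy for several candidate classifiers."""
    X = np.array([features(envelope(w)) for w in windows])  # w: (samples, channels)
    y = np.array(labels)
    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "Naive Bayes": GaussianNB(),
        "kNN": KNeighborsClassifier(n_neighbors=5),
        "SVM (RBF)": SVC(kernel="rbf"),
    }
    return {name: cross_val_score(clf, X, y, cv=10).mean()
            for name, clf in classifiers.items()}
```

With the OttoBock electrodes, which already output a filtered, enveloped and amplified signal, the envelope() step could simply be skipped.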

    Biceps brachii synergy and its contribution to target reaching tasks within a virtual cube

    In recent years, important work has been done in the development of prosthetic control to help upper-limb amputees improve their quality of life on a daily basis. Some modern, commercially available upper-limb myoelectric prostheses have many degrees of freedom and require many control signals to perform several tasks commonly used in everyday life. To obtain several control signals, many muscles are required, but for people with upper-limb amputation the number of available muscles is reduced to a greater or lesser extent depending on the level of amputation. To increase the number of control signals, this work focused on the biceps brachii, since it is anatomically composed of two heads and the presence of compartments has been observed on its internal face. Physiologically, it has been found that the motor units of the biceps are activated at different places in the muscle during the production of various functional tasks. In addition, it appears that the central nervous system can use muscle synergy to easily produce multiple movements. In this research, muscle synergy was first identified in the biceps of normal subjects, and it was shown that the characteristics of this synergy allowed the static posture of the hand to be identified from the recorded biceps signals. In a second investigation, it was demonstrated that it was possible to control online the position of a sphere within a virtual cube presented on a screen, reaching various targets by using the muscle synergy of the biceps.
Classification techniques were used to improve the classification of muscle-synergy features, and these techniques can be integrated with a control algorithm that produces dynamic movements of myoelectric prostheses to facilitate the training of prosthetic control.
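
The abstract does not state which decomposition was used to identify the biceps synergy; non-negative matrix factorization (NMF) is a common choice in the synergy literature and serves here only as an illustrative stand-in. The function name and the number of synergies are assumptions, not details from the thesis.

```python
# Illustrative muscle-synergy extraction via non-negative matrix factorization (NMF).
# NMF is a common choice for synergy analysis; the thesis may use a different method.
import numpy as np
from sklearn.decomposition import NMF

def extract_synergies(envelopes, n_synergies=2):
    """Factorize rectified/enveloped EMG (samples x channels, non-negative) into
    activation coefficients W (samples x synergies) and synergy patterns H
    (synergies x channels)."""
    model = NMF(n_components=n_synergies, init="nndsvda", max_iter=500)
    W = model.fit_transform(envelopes)   # time-varying activation coefficients
    H = model.components_                # per-channel synergy structure
    return W, H

# The rows of H (or time-averaged W) can then serve as features for a classifier
# that maps synergy patterns to static hand postures or cursor/sphere targets.
```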

    Immersive augmented reality system for the training of pattern classification control with a myoelectric prosthesis

    Background: Hand amputation can have a truly debilitating impact on the life of the affected person. A multifunctional myoelectric prosthesis controlled using pattern classification can be used to restore some of the lost motor abilities. However, learning to control an advanced prosthesis can be a challenging task; virtual and augmented reality (AR) provide the means to create engaging and motivating training. Methods: In this study, we present a novel training framework that integrates virtual elements within a real scene (AR) while allowing a first-person view. The framework was evaluated in 13 able-bodied subjects and a limb-deficient person, divided into intervention (IG) and control (CG) groups. The IG received training by performing a simulated clothespin task, and both groups completed a pre- and posttest with a real prosthesis. When training with the AR system, the subjects received visual feedback on the generated grasping force. The main outcome measure was the number of pins successfully transferred within 20 min (task duration); the numbers of dropped and broken pins were also recorded. The participants were asked to score the difficulty of the real task (posttest), the fun factor and motivation, as well as the utility of the feedback. Results: The performance (median/interquartile range) consistently increased during the training sessions (4/3 to 22/4). While the results were similar for the two groups in the pretest, performance improved in the posttest only in the IG. In addition, the subjects in the IG transferred significantly more pins (28/10.5 versus 14.5/11), and dropped (1/2.5 versus 3.5/2) and broke (5/3.8 versus 14.5/9) significantly fewer pins in the posttest compared to the CG. The participants in the IG assigned (mean ± std) significantly lower scores to the difficulty compared to the CG (5.2 ± 1.9 versus 7.1 ± 0.9), and they rated the fun factor (8.7 ± 1.3) and the usefulness of the feedback (8.5 ± 1.7) highly. Conclusion: The results demonstrated that the proposed AR system allows the transfer of skills from the simulated to the real task while providing a positive user experience. The present study demonstrates the effectiveness and flexibility of the proposed AR framework. Importantly, the developed system is open source and available for download and further development.

    On the Utility of Representation Learning Algorithms for Myoelectric Interfacing

    Electrical activity produced by muscles during voluntary movement is a reflection of the firing patterns of relevant motor neurons and, by extension, the latent motor intent driving the movement. Once transduced via electromyography (EMG) and converted into digital form, this activity can be processed to provide an estimate of the original motor intent and is as such a feasible basis for non-invasive efferent neural interfacing. EMG-based motor intent decoding has so far received the most attention in the field of upper-limb prosthetics, where alternative means of interfacing are scarce and the utility of better control apparent. Whereas myoelectric prostheses have been available since the 1960s, available EMG control interfaces still lag behind the mechanical capabilities of the artificial limbs they are intended to steer, a gap at least partially due to limitations in current methods for translating EMG into appropriate motion commands. As the relationship between EMG signals and concurrent effector kinematics is highly non-linear and apparently stochastic, finding ways to accurately extract and combine relevant information from across electrode sites is still an active area of inquiry. This dissertation comprises an introduction and eight papers that explore issues afflicting the status quo of myoelectric decoding and possible solutions, all related through their use of learning algorithms and deep Artificial Neural Network (ANN) models. Paper I presents a Convolutional Neural Network (CNN) for multi-label movement decoding of high-density surface EMG (HD-sEMG) signals. Inspired by the successful use of CNNs in Paper I and the work of others, Paper II presents a method for the automatic design of CNN architectures for use in myocontrol. Paper III introduces an ANN architecture with an accompanying training framework from which simultaneous and proportional control emerges. Paper IV introduces a dataset of HD-sEMG signals for use with learning algorithms. Paper V applies a Recurrent Neural Network (RNN) model to decode finger forces from intramuscular EMG. Paper VI introduces a Transformer model for myoelectric interfacing that does not need additional training data to function with previously unseen users. Paper VII compares the performance of a Long Short-Term Memory (LSTM) network to that of classical pattern recognition algorithms. Lastly, Paper VIII describes a framework for synthesizing EMG from multi-articulate gestures, intended to reduce the training burden.
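
As a concrete, if generic, illustration of the kind of model Paper I describes, the sketch below defines a small convolutional network that classifies windowed multi-channel sEMG. The channel count, window length, layer sizes and number of classes are placeholder assumptions, not the architectures actually proposed in the papers.

```python
# Minimal CNN for windowed sEMG gesture classification (PyTorch).
# Shapes and layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    def __init__(self, n_channels=64, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.BatchNorm1d(128),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):
        # x: (batch, channels, time) window of sEMG
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

model = EMGConvNet()
logits = model(torch.randn(4, 64, 200))  # 4 windows, 64 electrodes, 200 samples
```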

    Deep Vision for Prosthetic Grasp

    Ph.D. thesis. The loss of the hand can limit the natural ability of individuals to grasp and manipulate objects and affect their quality of life. Prosthetic hands can aid users in overcoming these limitations and regaining this ability. Despite considerable technical advances, the control of commercial hand prostheses is still limited to a few degrees of freedom. Furthermore, switching a prosthetic hand into a desired grip mode can be tiring. Therefore, the performance of hand prostheses should improve greatly. The main aim of this thesis is to improve the functionality, performance and flexibility of current hand prostheses by augmenting current commercial prosthetic hands with a vision module. By offering the prosthesis the capacity to see objects, appropriate grip modes can be determined autonomously and quickly. Several deep learning-based approaches were designed in this thesis to realise such a vision-reinforced prosthetic system. Importantly, the user, interacting with this learning structure, may act as a supervisor to accept or correct the suggested grasp. Amputee participants evaluated the designed system and provided feedback. The following objectives for prosthetic hands were met: 1. Chapter 3: design, implementation and real-time testing of a semi-autonomous vision-reinforced prosthetic control structure, empowered with a baseline convolutional neural network deep learning structure. 2. Chapter 4: development of an advanced deep learning structure to simultaneously detect and estimate grasp maps for unknown objects, in the presence of ambiguity. 3. Chapter 5: design and development of several deep learning set-ups for concurrent depth and grasp map prediction, as well as human grasp type prediction. Publicly available datasets of common graspable objects, namely the Amsterdam Library of Object Images (ALOI) and the Cornell grasp library, were used within this thesis. Moreover, to have access to real data, a small dataset of household objects, the Newcastle Grasp Library, was gathered for the experiments. EPSRC; School of Engineering, Newcastle University.
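
One common way to realise the vision module described above is to fine-tune a pretrained image backbone to predict a grip mode, which the user can then accept or correct. The sketch below assumes a ResNet-18 backbone, four hypothetical grasp classes and a suggest_grasp helper; none of these are taken from the thesis itself.

```python
# Sketch of a vision module that suggests a grip mode from an object image.
# Backbone, class set and helper name are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

N_GRASP_TYPES = 4  # e.g. power, precision, tripod, lateral (hypothetical label set)

# Pretrained backbone (torchvision >= 0.13 weights API) with a new classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, N_GRASP_TYPES)

def suggest_grasp(image_tensor: torch.Tensor) -> int:
    """image_tensor: (1, 3, 224, 224) normalised RGB crop of the target object.
    Returns the index of the suggested grip mode; in a semi-autonomous set-up
    the user acts as supervisor and can accept or correct this suggestion."""
    backbone.eval()
    with torch.no_grad():
        return backbone(image_tensor).argmax(dim=1).item()
```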

    Biomechatronics: Harmonizing Mechatronic Systems with Human Beings

    This eBook provides a comprehensive treatise on modern biomechatronic systems centred around human applications. Particular emphasis is given to exoskeleton designs for assistance and training, with advanced interfaces for human-machine interaction. Some of these designs are validated with experimental results, which the reader will find very informative as building blocks for designing such systems. This eBook will be ideally suited to those researching in the biomechatronics area with biofeedback applications, or those involved in high-end research on man-machine interfaces. It may also serve as a textbook for biomechatronic design at the postgraduate level.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion-tracking sensors on the arms, head or legs, electroencephalographic (EEG) brain-activity sensors, electromyographic (EMG) muscular-activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6%, respectively, were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis also established that arm pose changes the measured signal. A new method of fusing IMU and MMG data is introduced that provides a classification robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that is naturally indicative of the intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
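
A minimal sketch of the kind of IMU/MMG fusion described above: per-sensor MMG features are concatenated with the arm orientation so the classifier can compensate for pose-dependent changes, and the result is scored with LDA and an SVM. The feature choices, the simple fusion-by-concatenation scheme and the function names are assumptions for illustration, not the thesis' exact method.

```python
# Illustrative fusion of MMG and IMU-derived features for gesture classification.
# Features, fusion scheme and settings are sketch assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def mmg_features(window):
    """Per-sensor RMS of a (samples x 6) mechanomyography window."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def fuse(mmg_window, arm_orientation_rpy):
    """Concatenate MMG features with arm orientation (roll, pitch, yaw) so the
    classifier can account for pose-dependent changes in the MMG signal."""
    return np.concatenate([mmg_features(mmg_window), arm_orientation_rpy])

def evaluate(mmg_windows, orientations, labels):
    X = np.array([fuse(w, o) for w, o in zip(mmg_windows, orientations)])
    y = np.array(labels)
    lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    return {"LDA": cross_val_score(lda, X, y, cv=10).mean(),
            "SVM": cross_val_score(svm, X, y, cv=10).mean()}
```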

    Machine Learning for Hand Gesture Classification from Surface Electromyography Signals

    Classifying hand gestures from Surface Electromyography (sEMG) is a process with applications in human-machine interaction, rehabilitation and prosthetic control. Reductions in cost and increases in the availability of the necessary hardware over recent years have made sEMG a more viable solution for hand gesture classification. The research challenge is the development of processes that robustly and accurately predict the current gesture from incoming sEMG data. This thesis presents a set of methods, techniques and designs that improve both the evaluation of, and the performance on, the classification problem as a whole. These are brought together to set a new baseline for classification performance. Evaluation is improved by careful choice of metrics and by the design of cross-validation techniques that account for data bias caused by common experimental techniques. A landmark study is re-evaluated with these improved techniques, and it is shown that data augmentation can significantly improve performance with conventional classification methods. A novel neural network architecture and supporting improvements are presented that further improve performance; the network is refined such that it can achieve similar performance with many fewer parameters than competing designs. Supporting techniques such as subject adaptation and smoothing algorithms are then explored to improve overall performance and to provide more nuanced trade-offs between various aspects of performance, such as incurred latency and prediction smoothness. A new study is presented which compares the performance potential of medical-grade electrodes and a low-cost commercial alternative, showing that they can compete for a modest-sized gesture set. The data are also used to explore data labelling in experimental design and to evaluate the numerous aspects of performance that must be traded off.
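
One widely known source of the evaluation bias mentioned above is that overlapping sEMG windows cut from the same repetition can land in both the training and test folds when data are split randomly. The sketch below shows repetition-grouped cross-validation as one way to avoid that leakage; the grouping variable, classifier and fold count are assumptions, and the thesis' own cross-validation design may differ.

```python
# Grouped cross-validation to keep windows from the same repetition out of
# both train and test folds; settings are illustrative assumptions.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import SVC

def grouped_accuracy(X, y, repetition_ids, n_splits=5):
    """X: (n_windows, n_features), y: gesture labels,
    repetition_ids: which gesture repetition each window was cut from."""
    cv = GroupKFold(n_splits=n_splits)
    clf = SVC(kernel="rbf")
    scores = cross_val_score(clf, X, y, cv=cv, groups=repetition_ids)
    return scores.mean()
```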

    Addressing the challenges posed by human machine interfaces based on force sensitive resistors for powered prostheses

    Despite the advancements in the mechatronic aspects of prosthetic devices, prosthesis control still lacks an interface that satisfies the needs of the majority of users. The research community has put great effort into advancing prosthesis control techniques to address users' needs. However, most of these efforts are focused on the development and assessment of technologies in the controlled environments of laboratories, and such findings do not fully transfer to the daily use of prosthetic systems. The objectives of this thesis focus on factors that affect the use of Force Myography (FMG) controlled prostheses in practical scenarios. The first objective of this thesis assessed the use of FMG as an alternative or synergist Human Machine Interface (HMI) to the more traditional HMI, i.e. surface Electromyography (sEMG). The assessment for this study was conducted in conditions that are relatively close to the real use case of prosthetic applications. The HMI was embedded in a custom prosthetic prototype developed for the pilot participant of the study using an off-the-shelf prosthetic end effector. Moreover, prosthesis control was assessed as the user moved their limb in a dynamic protocol. The results of this study motivated the second objective of this thesis: to investigate the possibility of reducing the complexity of high-density FMG systems without sacrificing classification accuracy. This was achieved through a design method that uses a high-density FMG apparatus and feature selection to determine the number and location of sensors that can be eliminated without significantly sacrificing the system's performance. The third objective of this thesis investigated two of the factors that contribute to increased errors in the force-sensitive resistor (FSR) signals used in FMG-controlled prostheses: bending of the force sensors and variations in the volume of the residual limb. Two studies were conducted that proposed solutions to mitigate the negative impact of these factors, and the incorporation of these solutions into prosthetic devices is discussed. It was demonstrated that FMG is a promising HMI for prosthesis control, and the facilitation of pattern recognition with FMG showed potential for intuitive prosthetic control. Moreover, a design method was proposed and tested that determines the required number of sensors and their locations on each individual, achieving a simpler system with performance comparable to high-density FMG systems. The effects of the two factors considered in the third objective were determined, and it was demonstrated that the solutions proposed in the corresponding studies can be used to increase the accuracy of the signals commonly used in FMG-controlled prostheses.
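
To make the sensor-reduction idea concrete, the sketch below selects a subset of FSR channels from a high-density FMG array using forward sequential feature selection wrapped around an LDA classifier. The selector, estimator, channel count and function name are illustrative assumptions rather than the design method actually proposed in the thesis.

```python
# Illustrative sensor-reduction step for a high-density FMG (FSR) array:
# keep the subset of channels that preserves classification accuracy.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_sensors(X, y, n_sensors=8):
    """X: (n_samples, n_fsr_channels) FMG readings, y: gesture labels.
    Returns the indices of the retained FSR channels."""
    selector = SequentialFeatureSelector(
        LinearDiscriminantAnalysis(),
        n_features_to_select=n_sensors,
        direction="forward",
        cv=5,
    )
    selector.fit(X, y)
    return np.flatnonzero(selector.get_support())
```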