
    Biosignal-based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs that take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, in order to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application into six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition over the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance; however, they also increase the HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    Design of a low-cost sensor matrix for use in human-machine interactions on the basis of myographic information

    Myographic sensor matrices in the field of human-machine interfaces are often poorly developed and do not push the limits of spatial resolution. Many studies use sensor matrices merely as a tool to access myographic data for intention-prediction algorithms, regardless of human anatomy and the sensor principles used. More sophisticated sensor matrices are essential for myographic human-machine interfaces, and the community has already called for new sensor solutions. This work follows human neuromechanics and designs customized sensor principles to acquire the occurring phenomena. Three low-cost sensor modalities (Electromyography, Mechanomyography, and Force Myography) were developed in a miniaturized size and tested in a pre-evaluation study. Each sensor captures the characteristic myographic information of its modality. Based on the pre-evaluated sensors, a sensor matrix with 32 exchangeable, high-density sensor modules was designed. The sensor matrix can be applied around the human limbs and takes the human anatomy into account. A data transmission protocol was customized for interfacing the sensor matrix to the periphery with reduced wiring. The designed sensor matrix offers high-density, multimodal myographic information for the field of human-machine interfaces. The fields of prosthetics and telepresence in particular can benefit from the higher spatial resolution of the sensor matrix.
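
    The abstract does not specify the transmission protocol itself; the following is only a minimal sketch of how samples from 32 multimodal modules could be packed into a single frame and verified over one shared link. All field names, sample widths, the start byte, and the checksum rule are assumptions made for illustration, not the protocol designed in the paper.

        # Hypothetical frame layout: start byte, 32 modules x (EMG, MMG, FMG) 16-bit samples,
        # one-byte checksum. 32 * 6 = 192 payload bytes per acquisition cycle.
        import struct

        N_MODULES = 32
        FRAME_FMT = "<B" + "HHH" * N_MODULES + "B"   # start byte, payload, checksum

        def pack_frame(samples):
            """samples: list of 32 (emg, mmg, fmg) tuples from one acquisition cycle."""
            flat = [v for triple in samples for v in triple]
            payload = struct.pack("<" + "HHH" * N_MODULES, *flat)
            checksum = sum(payload) & 0xFF
            return struct.pack("<B", 0xAA) + payload + struct.pack("<B", checksum)

        def unpack_frame(frame):
            start, *flat, checksum = struct.unpack(FRAME_FMT, frame)
            assert start == 0xAA and sum(frame[1:-1]) & 0xFF == checksum
            return [tuple(flat[i:i + 3]) for i in range(0, 3 * N_MODULES, 3)]

        # Round-trip example with dummy data:
        frame = pack_frame([(100 + i, 200 + i, 300 + i) for i in range(N_MODULES)])
        assert unpack_frame(frame)[0] == (100, 200, 300)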

    Classification of 41 Hand and Wrist Movements via Surface Electromyogram Using Deep Neural Network

    Surface electromyography (sEMG) is a non-invasive and straightforward way to allow the user to actively control a prosthesis. However, results reported by previous studies on using sEMG for hand and wrist movement classification vary by a large margin, due to several factors including, but not limited to, the number of classes and the acquisition protocol. The objective of this paper is to investigate a deep neural network approach for the classification of 41 hand and wrist movements based on the sEMG signal. The proposed models were trained and evaluated using the publicly available database from the Ninapro project, one of the largest public sEMG databases for advanced hand myoelectric prosthetics. Two datasets were used for this study: DB5, with a low-cost 16-channel, 200 Hz sampling rate setup, and DB7, with a 12-channel, 2 kHz sampling rate setup. Our approach achieved an overall accuracy of 93.87 ± 1.49% and 91.69 ± 4.68%, with a balanced accuracy of 84.00 ± 3.40% and 84.66 ± 4.78%, for DB5 and DB7, respectively. We also observed a performance gain when considering only a subset of the movements, namely the six main hand movements based on six prehensile patterns from the Southampton Hand Assessment Procedure (SHAP), a clinically validated hand functional assessment protocol. Classification of only the SHAP movements in DB5 attained an overall accuracy of 98.82 ± 0.58% with a balanced accuracy of 94.48 ± 2.55%. With the same set of movements, our model also achieved an overall accuracy of 99.00% with a balanced accuracy of 91.27% on data from one of the amputee participants in DB7. These results suggest that, with more data on amputee subjects, our proposal could be a promising approach for controlling versatile prosthetic hands with a wide range of predefined hand and wrist movements.
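
    The abstract does not describe the authors' exact network, so the sketch below only illustrates the general kind of model such a study uses: a small 1-D convolutional classifier over windowed sEMG. The 16-channel input, 40-sample (200 ms at 200 Hz) window, layer sizes, and 41-class output are assumptions matching the DB5 description, not the published architecture.

        import torch
        import torch.nn as nn

        class EmgCnn(nn.Module):
            """Illustrative 1-D CNN over a window of multichannel sEMG."""
            def __init__(self, n_channels: int = 16, n_classes: int = 41):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
                    nn.BatchNorm1d(64),
                    nn.ReLU(),
                    nn.Conv1d(64, 128, kernel_size=5, padding=2),
                    nn.BatchNorm1d(128),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),   # collapse the time axis
                )
                self.classifier = nn.Linear(128, n_classes)

            def forward(self, x):              # x: [batch, channels, samples]
                z = self.features(x).squeeze(-1)
                return self.classifier(z)      # raw logits; train with CrossEntropyLoss

        model = EmgCnn()
        window = torch.randn(8, 16, 40)        # eight dummy 200 ms windows, DB5-like setup
        logits = model(window)                 # shape: [8, 41]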

    Non-intrusive guidance of a robotic arm using a dry-electrode myoelectric armband

    For several years, robotics has been seen as a key solution to improve the quality of life of people living with upper-limb disabilities. To create new, smart prostheses that can easily be integrated into everyday life and accepted by these people, they must be non-intrusive, reliable and inexpensive. Surface electromyography provides an intuitive, non-intrusive interface based on the user's muscle activity for interacting with robots. However, despite extensive research in the field of sEMG signal classification, current classifiers still lack reliability because they are not robust to short-term noise (e.g. small electrode displacement, muscle fatigue) or long-term noise (e.g. changes in muscle mass and adipose tissue). In practice, this means that the classifier must be periodically re-calibrated to remain useful, a time-consuming process. The goal of my research project is to propose a human-robot myoelectric interface based on transfer learning and domain adaptation algorithms, to increase the long-term reliability of the system while reducing the intrusiveness (in terms of hardware and preparation time) of this kind of system. The non-intrusive aspect is achieved with a dry-electrode armband featuring ten channels. This armband, named the 3DC Armband, was conceived by Dr. Gabriel Gagnon-Turcotte, my co-directors and myself, and was realized during my doctorate. At the time of writing, the 3DC Armband offers the best performance among currently available dry-electrode surface electromyographic armbands. Unlike gel-based electrodes, which require intrusive skin preparation (i.e. shaving, cleaning the skin and applying conductive gel), the 3DC Armband can simply be placed on the forearm without any preparation. However, this ease of use results in a decrease in signal quality, because the signal recorded by dry electrodes is inherently noisier than that of gel-based ones. In addition, other systems use invasive methods (intramuscular electromyography) to capture a cleaner signal and reduce sources of noise (e.g. electrode shift). To remedy this degradation of information resulting from the non-intrusiveness of the armband, this research project relies on deep learning, and more specifically on convolutional networks. The research project was divided into three phases. The first is the design of a classifier allowing the recognition of hand gestures in real time. The second is the implementation of a transfer learning algorithm to take advantage of data recorded across multiple users, thereby improving the system's accuracy for a new individual while decreasing the preparation time required to use the system. The third phase is the development and implementation of domain adaptation and self-supervised learning algorithms to make the classifier robust to long-term changes.
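
    The abstract names a transfer-learning phase without giving its algorithm; the sketch below shows one common recipe under that heading: freeze a convolutional feature extractor pre-trained on recordings from source users and re-train only a small classification head on the new user's short calibration session. The network layout, the seven-gesture output, and the weight file name are hypothetical, not the thesis' actual method.

        import torch
        import torch.nn as nn

        # Backbone assumed pre-trained on many source users (weight loading commented out;
        # the file name is hypothetical). Ten input channels match the 3DC Armband.
        backbone = nn.Sequential(
            nn.Conv1d(10, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        head = nn.Linear(128, 7)                   # fresh head for the target user's gestures
        # backbone.load_state_dict(torch.load("multi_user_backbone.pt"))

        for p in backbone.parameters():            # keep the shared representation fixed
            p.requires_grad = False

        optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        def calibrate(windows, labels, epochs=5):
            """Fine-tune the head on a short calibration session from the new user."""
            for _ in range(epochs):
                optimizer.zero_grad()
                feats = backbone(windows)          # windows: [batch, 10 channels, samples]
                loss = loss_fn(head(feats), labels)
                loss.backward()
                optimizer.step()

        # Example call with dummy data (real windows would come from the armband):
        calibrate(torch.randn(32, 10, 50), torch.randint(0, 7, (32,)))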

    Application of wearable sensors in actuation and control of powered ankle exoskeletons: a comprehensive review

    Powered ankle exoskeletons (PAEs) are robotic devices developed for gait assistance, rehabilitation, and augmentation. To fulfil their purposes, PAEs rely heavily on their sensor systems. Human–machine interface sensors collect biomechanical signals from the human user to inform the higher level of the control hierarchy about the user's locomotion intention and requirements, whereas machine–machine interface sensors monitor the output of the actuation unit to ensure precise tracking of the high-level control commands via the low-level control scheme. The current article aims to provide a comprehensive review of how wearable sensor technology has contributed to the actuation and control of the PAEs developed over the past two decades. The control schemes and actuation principles employed in the reviewed PAEs, as well as their interaction with the integrated sensor systems, are investigated in this review. Further, the role of wearable sensors in overcoming the main challenges in developing fully autonomous portable PAEs is discussed. Finally, a brief discussion is provided on how recent technological advancements in wearable sensors, including environment–machine interface sensors, could promote the future generation of fully autonomous portable PAEs.
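
    To make the two-layer control hierarchy described above concrete, the sketch below pairs a high-level proportional-myoelectric torque rule (driven by a human–machine interface signal) with a low-level PID torque tracker (driven by machine–machine interface feedback). Gains, thresholds, and the intent rule are illustrative assumptions only, not a design taken from the reviewed literature.

        def high_level_torque(soleus_emg: float, gait_phase: float, max_torque: float = 40.0) -> float:
            """Proportional myoelectric strategy: scale plantarflexion assistance with EMG
            during stance (gait_phase in [0, 1); stance assumed to end at 0.6)."""
            if gait_phase < 0.6:
                return max_torque * min(max(soleus_emg, 0.0), 1.0)
            return 0.0

        class PidTorqueController:
            """Low-level loop tracking the commanded torque from actuator torque feedback."""
            def __init__(self, kp=5.0, ki=1.0, kd=0.05, dt=0.001):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, torque_cmd: float, torque_meas: float) -> float:
                error = torque_cmd - torque_meas
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # One 1 kHz control tick; EMG and torque readings would come from the wearable sensors.
        pid = PidTorqueController()
        motor_command = pid.step(high_level_torque(soleus_emg=0.4, gait_phase=0.3), torque_meas=10.0)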