41 research outputs found

    Design of a low-cost sensor matrix for use in human-machine interactions on the basis of myographic information

    Myographic sensor matrices in the field of human-machine interfaces are often poorly developed and do not push the limits of spatial resolution. Many studies use sensor matrices as a tool to access myographic data for intention-prediction algorithms, regardless of human anatomy and the sensor principles used. More sophisticated sensor matrices for myographic human-machine interfaces are essential, and the community has already called for new sensor solutions. This work follows human neuromechanics and designs customized sensor principles to capture the occurring phenomena. Three low-cost sensor modalities (electromyography, mechanomyography, and force myography) were developed in miniaturized form and tested in a pre-evaluation study. All three sensors capture the characteristic myographic information of their modality. Based on the pre-evaluated sensors, a sensor matrix with 32 exchangeable, high-density sensor modules was designed. The sensor matrix can be applied around human limbs and takes human anatomy into account. A data transmission protocol was customized to interface the sensor matrix to the periphery with reduced wiring. The designed sensor matrix offers high-density, multimodal myographic information for the field of human-machine interfaces. The fields of prosthetics and telepresence in particular can benefit from its higher spatial resolution.

    Advances in Integrated Circuits and Systems for Wearable Biomedical Electrical Impedance Tomography

    Electrical impedance tomography (EIT) is an impedance mapping technique that can be used to image the inner impedance distribution of the subject under test. It is non-invasive, inexpensive and radiation-free, and it facilitates long-term, real-time dynamic monitoring. EIT therefore lends itself particularly well to the development of a bio-signal monitoring/imaging system in the form of wearable technology. This work focuses on EIT system hardware advancement using complementary metal-oxide-semiconductor (CMOS) technology. It presents the design and testing of application-specific integrated circuits (ASICs) and their successful use in two biomedical applications, namely neonatal lung function monitoring and a human-machine interface (HMI) for prosthetic hand control. Each year fifteen million babies are born prematurely, and up to 30% suffer from lung disease. Although respiratory support, especially mechanical ventilation, can improve their survival, it can also injure their vulnerable lungs, resulting in severe and chronic pulmonary morbidity lasting into adulthood; an integrated wearable EIT system for neonatal lung function monitoring is therefore urgently needed. In this work, two wearable belt systems are presented. The first belt features a miniaturized active electrode module built around an analog front-end ASIC fabricated in a 0.35-µm high-voltage process with ±9 V power supplies and a total die area of 3.9 mm². The ASIC offers a high-power active current driver capable of up to 6 mA p-p output, and a wideband active buffer for EIT recording as well as contact impedance monitoring. The belt has a bandwidth of 500 kHz and an image frame rate of 107 frames/s. To further improve the system, the active electrode module is integrated into one ASIC.
It contains a fully differential current driver, a current-feedback instrumentation amplifier (IA), a digital controller and multiplexers, with a total die area of 9.6 mm². Compared to the conventional active electrode architecture employed in the first EIT belt, the second belt features a new architecture. It allows programmable, flexible electrode current-drive and voltage-sense patterns under simple digital control. It has intimate connections to the electrodes for the current drive and to the IA for direct differential voltage measurement, providing a superior common-mode rejection ratio (CMRR) of up to 74 dB; with active gain, the noise level can be reduced by a factor of √3 using the adjacent scan. The second belt has a wider operating bandwidth of 1 MHz and multi-frequency operation. The image frame rate is 122 frames/s, the fastest wearable EIT reported to date. It measures impedance with 98% accuracy and has less than 0.5 Ω and 1° variation across all channels. In addition, the ASIC facilitates several other functions that provide supplementary clinical information at the bedside. With the advancement of technology and the ever-increasing fusion of computers and machines into daily life, a seamless HMI system that can recognize hand gestures and motions and allow the control of robotic machines or prostheses to perform dexterous tasks is a target of research. Originally developed as an imaging technique, EIT can be combined with machine learning to track bone and muscle movement, towards understanding the human user's intentions and ultimately controlling prosthetic hand applications. For this application, an analog front-end ASIC was designed in a 0.35-µm standard process with ±1.65 V power supplies. It comprises a current driver capable of differential drive and a low-noise (9 µVrms) IA with a CMRR of 80 dB. The function modules occupy an area of 0.07 mm².
Using the ASIC, a complete HMI system based on the EIT principle for hand prosthesis control is presented, in which the bio-impedance redistribution inside the user's forearm is assessed. Using artificial neural networks, the bio-impedance redistribution can be learned so as to recognise the user's intention in real time for prosthesis operation. In this work, eleven hand motions are designed for prosthesis operation. Experiments with five subjects show that the system can achieve an overall recognition accuracy of 95.8%.
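The intention-recognition step described above can be sketched in miniature. The stand-in below is not the authors' neural network: it uses a simple nearest-centroid rule over per-channel impedance vectors, and the channel values and gesture labels are invented for illustration.

```python
import math

# Synthetic "impedance frames": one vector of channel readings per sample,
# grouped by the gesture that produced them (values are made up).
frames = {
    "rest":  [[10.0, 10.1, 9.9], [10.2, 9.8, 10.0]],
    "grasp": [[12.5, 8.0, 11.0], [12.3, 8.2, 11.1]],
}

# Per-gesture centroid: the channel-wise mean of that gesture's frames.
centroids = {g: [sum(col) / len(rows) for col in zip(*rows)]
             for g, rows in frames.items()}

def classify(frame):
    """Assign a new frame to the gesture with the nearest centroid."""
    return min(centroids, key=lambda g: math.dist(centroids[g], frame))

print(classify([12.4, 8.1, 11.05]))  # lies near the "grasp" centroid
```

A real system would replace the centroid rule with the trained network and feed it calibrated channel measurements, but the input/output shape (impedance vector in, gesture label out) is the same.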

    Novel Muscle Monitoring by Radiomyography (RMG) and Application to Hand Gesture Recognition

    Conventional electromyography (EMG) measures the continuous neural activity during muscle contraction but lacks explicit quantification of the actual contraction. Mechanomyography (MMG) and accelerometers only measure body surface motion, while ultrasound, CT scans and MRI are restricted to in-clinic snapshots. Here we propose radiomyography (RMG), a novel modality for continuous muscle actuation sensing that can be wearable and touchless, capturing both superficial and deep muscle groups. We verified RMG experimentally with a forearm-worn sensor for detailed hand gesture recognition. We first converted the radio sensing outputs to time-frequency spectrograms, and then employed the vision transformer (ViT) deep learning network as the classification model, which can recognize 23 gestures with an average accuracy of up to 99% on 8 subjects. By transfer learning, high adaptability to user differences and sensor variation was achieved, with an average accuracy of up to 97%. We further applied RMG to monitor eye and leg muscles and achieved high accuracy for eye movement and body posture tracking. RMG can be used with synchronous EMG to derive stimulation-actuation waveforms for many future applications in kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.
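The first processing step above (radio sensing output to time-frequency spectrogram) can be sketched with a plain short-time Fourier transform. The window and hop sizes, sampling rate, and the 50 Hz test tone below are illustrative choices, not the paper's settings.

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Hann-windowed STFT magnitude, shape (frequency bins, time frames)."""
    window = np.hanning(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

fs = 1000                       # illustrative sampling rate, Hz
t = np.arange(fs) / fs          # one second of samples
sig = np.sin(2 * np.pi * 50 * t)  # stand-in for a sensed Doppler tone

S = spectrogram(sig)
print(S.shape)  # (33, 30): 33 frequency bins, 30 time frames
```

The resulting 2-D array is what a vision model such as ViT would consume (after resizing/normalizing it like an image); the tone shows up as energy concentrated near the bin closest to 50 Hz.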

    Biosignal-based human–machine interfaces for assistance and rehabilitation: a survey

    By definition, a Human–Machine Interface (HMI) enables a person to interact with a device. Starting from elementary equipment, the recent development of novel techniques and unobtrusive devices for biosignal monitoring has paved the way for a new class of HMIs, which take such biosignals as inputs to control various applications. The current survey reviews the large literature of the last two decades on biosignal-based HMIs for assistance and rehabilitation, to outline the state of the art and identify emerging technologies and potential future research trends. PubMed and other databases were surveyed using specific keywords. The retrieved studies were further screened at three levels (title, abstract, full text), and eventually 144 journal papers and 37 conference papers were included. Four macrocategories were considered to classify the different biosignals used for HMI control: biopotentials, muscle mechanical motion, body motion, and their combinations (hybrid systems). The HMIs were also classified according to their target application, considering six categories: prosthetic control, robotic control, virtual reality control, gesture recognition, communication, and smart environment control. An ever-growing number of publications has been observed over recent years. Most of the studies (about 67%) pertain to the assistive field, while 20% relate to rehabilitation and 13% to both assistance and rehabilitation. A moderate increase can be observed in studies focusing on robotic control, prosthetic control, and gesture recognition in the last decade, whereas studies on the other targets experienced only a small increase. Biopotentials are no longer the leading control signals, and the use of muscle mechanical motion signals has risen considerably, especially in prosthetic control. Hybrid technologies are promising, as they could lead to higher performance. However, they also increase HMIs' complexity, so their usefulness should be carefully evaluated for the specific application.

    The "Federica" hand: a simple, very efficient prosthesis

    Hand prostheses partially restore hand appearance and functionality. Not everyone can afford expensive prostheses, and many low-cost prostheses have been proposed. In particular, 3D printers have provided great opportunities by simplifying the manufacturing process and reducing costs. Generally, active prostheses use multiple motors for finger movement and are controlled by electromyographic (EMG) signals. The "Federica" hand is a single-motor prosthesis, equipped with an adaptive grasp and controlled by a force-myographic signal. The "Federica" hand is 3D printed and has an anthropomorphic morphology with five fingers, each consisting of three phalanges. The movement generated by a single servomotor is transmitted to the fingers by inextensible tendons that form a closed chain; practically, no springs are used for passive hand opening. A differential mechanical system simultaneously distributes the motor force to each finger in predefined proportions, regardless of the fingers' actual positions. Proportional control of hand closure is achieved by measuring the contraction of residual limb muscles with a force sensor, replacing the EMG. The electrical current of the servomotor is monitored to provide the user with sensory feedback of the grip force through a small vibration motor. A simple Arduino board was adopted as the processing unit. The differential mechanism guarantees an efficient transfer of mechanical energy from the motor to the fingers and a secure grasp of any object, regardless of its shape and deformability. The force sensor, being extremely thin, can easily be embedded into the prosthesis socket and positioned over both muscles and tendons; it offers some advantages over EMG, as it requires no electrical contact or signal processing to extract information about the intensity of muscle contraction.
The grip speed is high enough to allow the user to grab objects on the fly: from the muscle trigger to complete hand closure, "Federica" takes about half a second. The cost of the device is about 100 US$. Preliminary tests carried out on a patient with a transcarpal amputation showed high performance in controlling the prosthesis after a very rapid training session. The "Federica" hand turned out to be a lightweight, low-cost and extremely efficient prosthesis. The project is intended to be open source: all the information needed to produce the prosthesis (e.g. CAD files, circuit schematics, software) can be downloaded from a public repository, allowing everyone to use the "Federica" hand and to customize or improve it.
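The proportional closure control described above can be illustrated with a simple mapping from the force-sensor reading to a servo command. The thresholds, force range, and angle span below are hypothetical values chosen for the sketch, not the published design parameters.

```python
def servo_angle(force_n, f_min=0.5, f_max=5.0, open_deg=0.0, closed_deg=120.0):
    """Map a muscle-force reading (newtons) to a servo angle (degrees).

    Below f_min the hand stays open (rejects sensor noise); above f_max the
    command saturates at full closure; in between, closure is proportional
    to the measured contraction force. All numeric bounds are illustrative.
    """
    if force_n <= f_min:
        return open_deg
    if force_n >= f_max:
        return closed_deg
    fraction = (force_n - f_min) / (f_max - f_min)
    return open_deg + fraction * (closed_deg - open_deg)

print(servo_angle(2.75))  # mid-range force -> half closure (60.0 degrees)
```

On the actual device this mapping would run in the Arduino loop, reading the force sensor and writing the angle to the servo each cycle, with the motor-current reading driving the vibration feedback separately.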

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through the continuous use of the device. To achieve such a result, different aspects must be considered for making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving amputees’ quality of life using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has been recently introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree of Freedom (DoF) system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. 
However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce a more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system implementing haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented object stiffness detection to restore a tactile sensation that connects the user with the external world. This closed loop between EMG control and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that strongly impacts amputees' daily lives. For each of these three activities, (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study in which data from healthy subjects and amputees were collected, in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1Score parameter (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results. Among the several pattern recognition methods tested, Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1Score (99%, robustness), and the offline analyses determined that a minimum of four electrodes is needed for its operation. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency).
Finally, the online implementation allowed the subject to simultaneously control the Hannes prosthesis DoFs in a bioinspired and human-like way. In addition, I performed further tests with the same NLR-based control by endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). These results demonstrated an improvement in the controllability of the system, with an impact on user experience. Significance. The obtained results confirmed the hypothesis that the implemented closed-loop approach improves the robustness and efficiency of prosthetic control. The bidirectional communication between the user and the prosthesis can restore the lost sensory functionality, with promising implications for direct translation into clinical practice.
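The offline metric used throughout the Hannes studies, the F1Score, combines precision and recall per class. A minimal computation is shown below; the gesture labels in the example are illustrative, not the thesis's actual classes.

```python
def f1_score(y_true, y_pred, positive):
    """F1 for one target class: harmonic mean of precision and recall."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy run: one "open" sample misclassified as "pinch".
y_true = ["open", "open", "pinch", "pinch", "pinch"]
y_pred = ["open", "pinch", "pinch", "pinch", "pinch"]
print(round(f1_score(y_true, y_pred, "pinch"), 3))  # 0.857
```

A multi-class score like the thesis's 99% figure would average this per-class value over all gesture classes (e.g. a macro average).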

    Wearable Smart Rings for Multi-Finger Gesture Recognition Using Supervised Learning

    This thesis presents a wearable smart ring with an integrated Bluetooth Low Energy (BLE) module. The system uses an accelerometer and a gyroscope to collect finger motion data. A prototype was manufactured, and its performance was tested. To detect complex finger movements, two rings are worn, on the index finger and the thumb, while performing the gestures. Nine pre-defined finger movements were introduced to verify the feasibility of the proposed method. Data pre-processing techniques, including normalization, statistical feature extraction, random forest recursive feature elimination (RF-RFE), and k-nearest neighbors sequential forward floating selection (KNN-SFFS), were applied to select well-distinguished feature vectors and enhance gesture recognition accuracy. Three supervised machine learning algorithms were used for gesture classification: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Naive Bayes (NB). We demonstrated that when the KNN-SFFS-recommended features are used as the machine learning input, our finger gesture recognition approach not only significantly decreases the dimension of the feature vector (resulting in faster response times and preventing model overfitting), but also provides prediction accuracy approximately similar to that obtained when all elements of the feature vectors are used. Using KNN as the primary classifier, the system can accurately recognize six one-finger and three two-finger gestures with 97.1% and 97.0% accuracy, respectively.
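The final classification stage can be sketched as a plain k-nearest-neighbors majority vote over selected feature vectors. The 2-D feature values and gesture labels below are toy data, not the thesis's RF-RFE/KNN-SFFS-selected features.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k training samples nearest to x (Euclidean)."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Toy selected features: two clusters of ring-motion feature vectors.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]
y = ["tap", "tap", "swipe", "swipe", "swipe"]

print(knn_predict(X, y, (0.15, 0.15)))  # 2 of 3 nearest are "tap" -> "tap"
```

In the full pipeline, normalization and the feature-selection step (RF-RFE or KNN-SFFS) would run before this vote, shrinking each sample to the recommended feature subset.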

    An augmented reality-based telerehabilitation architecture to support the training of powered wheelchair users

    Many people worldwide have been experiencing a decrease in mobility as a result of aging, accidents and degenerative diseases. In many cases, a powered wheelchair (PW) is a helpful alternative. Currently, in Brazil, patients can receive a PW from the Unified Health System, following prescription criteria. However, they do not receive appropriate prior training in driving the PW. Consequently, users might suffer accidents, since a customized training protocol is not available. Moreover, due to financial and/or health limitations, many users are unable to attend a rehabilitation center. To overcome these limitations, we developed an Augmented Reality (AR) telerehabilitation system architecture based on the Power Mobility Road Test (PMRT) to support PW users' training. In this system, therapists can remotely customize and evaluate training tasks, and the user can perform the training in safer conditions. Video streaming and data transfer between the environments are carried out over UDP (User Datagram Protocol). To evaluate and present the potential of the system architecture, a preliminary test was conducted with three participants with spinal cord injury. They performed three basic training protocols defined by a therapist. The following metrics were adopted for evaluation: number of control commands, elapsed time, number of collisions, and biosignals; a questionnaire was used for the participants to evaluate the system's features. The results demonstrate the specific needs of individuals using a PW, revealed by the adopted (qualitative and emotional) metrics. The results also show the potential of a training system with customizable protocols to fulfill these needs. The users' evaluation demonstrates that the combination of AR techniques with PMRT adaptations increases users' well-being after training sessions. Furthermore, the training experience helps users to overcome their displacement problems, as well as to identify challenges before large-scale use.
The proposed system architecture allows further studies on the telerehabilitation of PW users.
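The UDP data path between the therapist's and the user's environments can be illustrated with a minimal loopback exchange. The JSON payload fields echo the evaluation metrics named above but are invented here, and the loopback address/auto-assigned port stand in for the real deployment endpoints.

```python
import socket

# Receiver (e.g. the therapist's side): bind a UDP socket; the OS
# picks a free port so the sketch never collides with a real service.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender (e.g. the training environment): fire one datagram of
# session metrics; UDP delivers it whole, with no connection setup.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b'{"commands": 12, "collisions": 0, "elapsed_s": 42.5}'
sender.sendto(payload, ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
print(data == payload)  # True: the datagram arrived intact
sender.close()
receiver.close()
```

UDP's low overhead suits the architecture's video streaming and periodic metric updates, at the cost of no delivery guarantee, which is acceptable for data where the next sample supersedes a lost one.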