
    Optimization of Forcemyography Sensor Placement for Arm Movement Recognition

    How to design an optimal wearable device for human movement recognition is vital to reliable and accurate human-machine collaboration. Previous works mainly fabricate wearable devices heuristically. Instead, this paper raises an academic question: can we design an optimization algorithm to optimize the fabrication of wearable devices, such as figuring out the best sensor arrangement automatically? Specifically, this work focuses on optimizing the placement of Forcemyography (FMG) sensors for FMG armbands in the application of arm movement recognition. Firstly, based on graph theory, the armband is modeled considering the sensors' signals and connectivity. Then, a Graph-based Armband Modeling Network (GAM-Net) is introduced for arm movement recognition. Afterward, the sensor placement optimization for FMG armbands is formulated, and an optimization algorithm with greedy local search is proposed. To study the effectiveness of our optimization algorithm, a dataset for mechanical maintenance tasks using FMG armbands with 16 sensors is collected. Our experiments show that using only 4 sensors optimized with our algorithm can maintain a recognition accuracy comparable to using all sensors. Finally, the optimized sensor placement result is verified from a physiological view. This work aims to shed light on the automatic fabrication of wearable devices considering downstream tasks, such as human biological signal collection and movement recognition. Our code and dataset are available at https://github.com/JerryX1110/IROS22-FMG-Sensor-Optimization. Comment: 6 pages, 10 figures. Accepted by IROS22 (The 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems).
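    The abstract names greedy local search but does not spell it out; a minimal forward-selection sketch conveys the idea. The additive toy score and the per-sensor weights below are purely illustrative stand-ins for the paper's cross-validated recognition accuracy:

    ```python
    def greedy_sensor_selection(n_sensors, k, score_fn):
        """Greedily grow a sensor subset, at each step keeping the
        sensor whose addition maximizes the downstream score."""
        selected = []
        remaining = set(range(n_sensors))
        while len(selected) < k:
            best_s, best_score = None, float("-inf")
            for s in remaining:
                score = score_fn(selected + [s])
                if score > best_score:
                    best_s, best_score = s, score
            selected.append(best_s)
            remaining.remove(best_s)
        return selected, best_score

    # Toy additive "accuracy": a few sensors carry most of the
    # information (stand-in for training/evaluating GAM-Net).
    weights = [0.05, 0.4, 0.1, 0.3, 0.02, 0.2, 0.01, 0.15]
    score = lambda subset: sum(weights[i] for i in subset)

    subset, acc = greedy_sensor_selection(8, 4, score)
    print(sorted(subset))  # → [1, 3, 5, 7]
    ```

    In the paper the score would be the recognition accuracy of the model retrained on the candidate subset, which makes each greedy step far more expensive than this toy evaluation.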

    Data and Sensor Fusion Using FMG, sEMG and IMU Sensors for Upper Limb Prosthesis Control

    Whether someone is born with a missing limb or an amputation occurs later in life, living with this disability can be extremely challenging. The robotic prosthetic devices available today are capable of giving users more functionality, but the methods available to control these prostheses restrict their use to simple actions, and are part of the reason why users often reject prosthetic technologies. Using multiple myography modalities has been a promising approach to address these control limitations; however, only two myography modalities have been rigorously tested so far, and while the results have shown improvements, they have not been robust enough for out-of-lab use. In this work, a novel multi-modal device that allows data to be collected from three myography modalities was created. Force myography (FMG), surface electromyography (sEMG), and inertial measurement unit (IMU) sensors were integrated into a wearable armband and used to collect signal data while subjects performed gestures important for the activities of daily living. An established machine learning algorithm was used to decipher the signals to predict the user's intent/gesture being held, which could be used to control a prosthetic device. Using all three modalities provided statistically significant improvements over most other modality combinations, as it provided the most accurate and consistent classification results. This work provides justification for using three sensing modalities, and future work is suggested to explore this modality combination to decipher more complex actions and tasks with more sophisticated pattern recognition algorithms.
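    The fusion step can be illustrated at the feature level: one descriptor per time window, built by concatenating per-modality features. The nearest-centroid classifier, channel counts, and gesture values below are stand-ins for illustration only, not the paper's "established machine learning algorithm":

    ```python
    import numpy as np

    def fuse(fmg, semg, imu):
        # Feature-level fusion: concatenate per-window feature
        # vectors from the three modalities into one descriptor.
        return np.concatenate([fmg, semg, imu])

    def nearest_centroid(x, centroids):
        # Minimal stand-in classifier: assign the fused sample to
        # the closest per-gesture centroid.
        d = [np.linalg.norm(x - c) for c in centroids]
        return int(np.argmin(d))

    # Hypothetical channel counts: 8 FMG, 8 sEMG, 6 IMU.
    open_hand  = fuse(np.full(8, 0.2),  np.full(8, 0.1),  np.full(6, 0.0))
    power_grip = fuse(np.full(8, 0.8),  np.full(8, 0.6),  np.full(6, 0.1))
    sample     = fuse(np.full(8, 0.75), np.full(8, 0.55), np.full(6, 0.1))

    print(nearest_centroid(sample, [open_hand, power_grip]))  # → 1 (power grip)
    ```

    Concatenation is the simplest of the "modality combinations" the study compares; dropping one argument from `fuse` is exactly how a two-modality baseline would be formed.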

    Addressing the challenges posed by human machine interfaces based on force sensitive resistors for powered prostheses

    Despite the advancements in the mechatronics aspect of prosthetic devices, prostheses control still lacks an interface that satisfies the needs of the majority of users. The research community has put great effort into advancing prostheses control techniques to address users' needs. However, most of these efforts are focused on the development and assessment of technologies in the controlled environments of laboratories. Such findings do not fully transfer to the daily application of prosthetic systems. The objectives of this thesis focus on factors that affect the use of Force Myography (FMG) controlled prostheses in practical scenarios. The first objective of this thesis assessed the use of FMG as an alternative or synergistic Human Machine Interface (HMI) to the more traditional HMI, i.e., surface Electromyography (sEMG). The assessment for this study was conducted in conditions that are relatively close to the real use case of prosthetic applications. The HMI was embedded in the custom prosthetic prototype that was developed for the pilot participant of the study using an off-the-shelf prosthetic end effector. Moreover, prostheses control was assessed as the user moved their limb in a dynamic protocol. The results of the aforementioned study motivated the second objective of this thesis: to investigate the possibility of reducing the complexity of high density FMG systems without sacrificing classification accuracy. This was achieved through a design method that uses a high density FMG apparatus and feature selection to determine the number and location of sensors that can be eliminated without significantly sacrificing the system's performance. The third objective of this thesis investigated two of the factors that contribute to increased errors in force sensitive resistor (FSR) signals used in FMG controlled prostheses: bending of force sensors and variations in the volume of the residual limb.
Two studies were conducted that proposed solutions to mitigate the negative impact of these factors. The incorporation of these solutions into prosthetic devices is discussed in these studies. It was demonstrated that FMG is a promising HMI for prostheses control. The facilitation of pattern recognition with FMG showed potential for intuitive prosthetic control. Moreover, a method was proposed and tested for the design of a system that can determine the required number of sensors and their locations on each individual to achieve a simpler system with comparable performance to high density FMG systems. The effects of the two factors considered in the third objective were determined. It was also demonstrated that the solutions proposed in the studies conducted for this objective can be used to increase the accuracy of signals that are commonly used in FMG controlled prostheses.
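    The thesis does not specify the feature-selection procedure, but pruning a high-density array is commonly sketched as backward elimination: start from the full sensor set and repeatedly drop the sensor whose removal costs the least, stopping once the loss would become significant. The toy score below (a few informative sensors plus redundant ones) is an assumed stand-in for classification accuracy:

    ```python
    def backward_eliminate(sensors, score_fn, min_keep, tol=0.01):
        """Drop sensors one at a time, always removing the one whose
        absence hurts the score least, until the drop from the full-set
        baseline would exceed `tol` or only `min_keep` remain."""
        current = list(sensors)
        baseline = score_fn(current)
        while len(current) > min_keep:
            drops = [(score_fn([s for s in current if s != c]), c)
                     for c in current]
            best_score, worst_sensor = max(drops)
            if baseline - best_score > tol:
                break  # further pruning would cost real accuracy
            current = [s for s in current if s != worst_sensor]
        return current

    # Toy score: only sensors 0, 3, and 5 carry information, so the
    # other five can be removed with no loss in "accuracy".
    informative = {0: 0.5, 3: 0.3, 5: 0.2}
    score = lambda subset: sum(w for s, w in informative.items() if s in subset)

    print(backward_eliminate(range(8), score, min_keep=1))  # → [0, 3, 5]
    ```

    On a per-user basis, as the thesis proposes, `score_fn` would be that individual's classification accuracy, so the surviving sensor count and locations differ from person to person.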

    Improving Human-Robot Cooperation and Safety In The Shared Automated Workplace

    Modern industries take advantage of human-robot interaction to facilitate better manufacturing processes, particularly in applications where a human works in a shared workplace with robots. In manufacturing settings where separation barriers, such as fences, are not used to protect human workers, approaches should be implemented to guarantee human safety. Despite existing methods, which define specifications and scenarios for human-robot cooperation in industry, new approaches are needed to provide a safer workplace while enhancing productivity. This thesis provides collision-free techniques for safe human-robot collaboration in an industrial setting. Human-robot interaction in industry is studied to develop novel solutions and provide a secure and productive industrial environment. Providing a safe distance between a human worker and a manipulating robot, to prevent a collision, is an important subject of this work. This thesis presents a safe workplace by proposing an effective human-tracking method using a sensor network. The proposed technique utilizes a non-linear Kalman filter and Gaussian optimization to reduce the risk of collision between humans and robots. In this regard, selecting the most sensitive sensors to update the Kalman filter's gain in a noisy environment is crucial. To this end, reliable sensor selection schemes are investigated, and a strategy based on multi-objective optimization is implemented. Finally, safe human-robot cooperation is investigated where humans work close to the robot or directly manipulate it in a shared task. In this case, the human's hand is the most vulnerable limb and should be protected to achieve safe interaction. In this thesis, force myography (FMG) is used to detect human hand activities to recognize hand gestures, detect the force exerted by a worker's hand, and predict human intention. This information is then used to control the robot parameters, such as the gripper's force.
Furthermore, a human intention prediction scheme using FMG features and based on recurrent neural network (RNN) topology is proposed to ensure safety during several industrial collaboration scenarios. The validity of the proposed approaches and the performance of the suggested control techniques are demonstrated through extensive simulation and practical experimentation. The results show that the proposed approaches will reduce the collision risk in human-robot collaboration.
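    As a structural sketch only: the thesis uses a non-linear Kalman filter with Gaussian-optimized sensor selection, but the predict/update cycle it builds on is the same as in the plain linear 1-D constant-velocity filter below. The noise parameters and measurement sequence are assumed for illustration:

    ```python
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])       # the sensor measures position only
    Q = 1e-3 * np.eye(2)             # process noise (assumed)
    R = np.array([[0.05]])           # measurement noise (assumed)

    x = np.zeros((2, 1))             # state: [position, velocity]
    P = np.eye(2)                    # initial uncertainty

    def kalman_step(x, P, z):
        # Predict the worker's next position from the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the selected sensor's measurement z; the gain K
        # is what the thesis tunes via sensor selection.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        return x, P

    for z in [0.0, 0.11, 0.19, 0.32, 0.40]:  # noisy worker positions (m)
        x, P = kalman_step(x, P, z)
    print(round(float(x[0, 0]), 2))
    ```

    The filtered estimate tracks the noisy positions while also carrying an uncertainty `P`, which is what makes a principled safe-distance threshold between worker and robot possible.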

    FMG- and RNN-Based Estimation of Motor Intention of Upper-Limb Motion in Human-Robot Collaboration

    Research on human-robot interactions has been driven by the increasing employment of robotic manipulators in manufacturing and production. Toward developing more effective human-robot collaboration during shared tasks, this paper proposes an interaction scheme by employing machine learning algorithms to interpret biosignals acquired from the human user and accordingly planning the robot reaction. More specifically, a force myography (FMG) band was wrapped around the user's forearm and was used to collect information about muscle contractions during a set of collaborative tasks between the user and an industrial robot. A recurrent neural network model was trained to estimate the user's hand movement pattern based on the collected FMG data to determine whether the performed motion was random or intended as part of the predefined collaborative tasks. Experimental evaluation during two practical collaboration scenarios demonstrated that the trained model could successfully estimate the category of hand motion, i.e., intended or random, such that the robot either assisted with performing the task or changed its course of action to avoid collision. Furthermore, proximity sensors were mounted on the robotic arm to investigate if monitoring the distance between the user and the robot had an effect on the outcome of the collaborative effort. While further investigation is required to rigorously establish the safety of the human worker, this study demonstrates the potential of FMG-based wearable technologies to enhance human-robot collaboration in industrial settings.
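    The intended-vs-random decision can be pictured as a recurrent pass over one FMG window. The sketch below is illustrative only: the weights are random rather than trained, and the paper does not detail its architecture beyond being recurrent, so the layer sizes and single-layer tanh cell are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_ch, n_hidden = 8, 16            # assumed: 8 FMG channels, 16 hidden units
    Wx = 0.1 * rng.standard_normal((n_hidden, n_ch))
    Wh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
    w_out = rng.standard_normal(n_hidden)

    def classify_window(window, threshold=0.5):
        """Label an (T, n_ch) FMG window 'intended' or 'random'."""
        h = np.zeros(n_hidden)
        for x_t in window:
            h = np.tanh(Wx @ x_t + Wh @ h)    # recurrent state update
        p = 1.0 / (1.0 + np.exp(-(w_out @ h)))  # sigmoid over final state
        return "intended" if p > threshold else "random"

    window = rng.standard_normal((25, n_ch))  # one 25-sample FMG window
    print(classify_window(window))
    ```

    In the paper's setting the label would gate the robot's behavior: assist on "intended", replan to avoid collision on "random".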

    Design of a low-cost sensor matrix for use in human-machine interactions on the basis of myographic information

    Myographic sensor matrices in the field of human-machine interfaces are often poorly developed and do not push the limits in terms of high spatial resolution. Many studies use sensor matrices as a tool to access myographic data for intention prediction algorithms, regardless of the human anatomy and the sensor principles used. More sophisticated sensor matrices are essential in the field of myographic human-machine interfaces, and the community has already called out for new sensor solutions. This work follows the neuromechanics of the human body and designs customized sensor principles to acquire the occurring phenomena. Three low-cost sensor modalities (Electromyography, Mechanomyography, and Force Myography) were developed in a miniaturized size and tested in a pre-evaluation study. Each sensor captures the characteristic myographic information of its modality. Based on the pre-evaluated sensors, a sensor matrix with 32 exchangeable and high-density sensor modules was designed. The sensor matrix can be applied around the human limbs and takes the human anatomy into account. A data transmission protocol was customized for interfacing the sensor matrix to the periphery with reduced wiring. The designed sensor matrix offers high-density and multimodal myographic information for the field of human-machine interfaces. Especially the fields of prosthetics and telepresence can benefit from the higher spatial resolution of the sensor matrix.
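    The custom transmission protocol is not described beyond "reduced wiring", but the general shape of such a protocol can be sketched: each of the 32 modules reports its three modality readings in a small framed packet over a shared bus. The sync byte, field layout, and checksum below are entirely hypothetical:

    ```python
    import struct

    SYNC = 0xA5  # hypothetical frame-start marker

    def pack_frame(module_id, emg, mmg, fmg):
        """One frame per module: sync byte, module ID (0-31), three
        16-bit readings (EMG, MMG, FMG), and a 1-byte checksum."""
        body = struct.pack("<BB3H", SYNC, module_id, emg, mmg, fmg)
        checksum = sum(body) & 0xFF
        return body + bytes([checksum])

    def unpack_frame(frame):
        *payload, checksum = frame
        assert sum(payload) & 0xFF == checksum, "corrupted frame"
        sync, module_id, emg, mmg, fmg = struct.unpack("<BB3H", bytes(payload))
        assert sync == SYNC, "lost frame alignment"
        return module_id, (emg, mmg, fmg)

    frame = pack_frame(7, 512, 300, 1023)
    print(unpack_frame(frame))  # → (7, (512, 300, 1023))
    ```

    Framing all modules onto one serial bus like this is what lets the wiring stay constant as the module count grows, which appears to be the point of the customized protocol.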

    Novel Bidirectional Body-Machine Interface to Control Upper Limb Prosthesis

    Objective. The journey of a bionic prosthetic user is characterized by the opportunities and limitations involved in adopting a device (the prosthesis) that should enable activities of daily living (ADL). Within this context, experiencing a bionic hand as a functional (and, possibly, embodied) limb constitutes the premise for mitigating the risk of its abandonment through the continuous use of the device. To achieve such a result, different aspects must be considered for making the artificial limb an effective support for carrying out ADLs. Among them, intuitive and robust control is fundamental to improving amputees’ quality of life using upper limb prostheses. Still, as artificial proprioception is essential to perceive the prosthesis movement without constant visual attention, a good control framework may not be enough to restore practical functionality to the limb. To overcome this, bidirectional communication between the user and the prosthesis has been recently introduced and is a requirement of utmost importance in developing prosthetic hands. Indeed, closing the control loop between the user and a prosthesis by providing artificial sensory feedback is a fundamental step towards the complete restoration of the lost sensory-motor functions. Within my PhD work, I proposed the development of a more controllable and sensitive human-like hand prosthesis, i.e., the Hannes prosthetic hand, to improve its usability and effectiveness. Approach. To achieve the objectives of this thesis work, I developed a modular and scalable software and firmware architecture to control the Hannes prosthetic multi-Degree of Freedom (DoF) system and to fit all users’ needs (hand aperture, wrist rotation, and wrist flexion in different combinations). On top of this, I developed several Pattern Recognition (PR) algorithms to translate electromyographic (EMG) activity into complex movements. 
However, stability and repeatability were still unmet requirements in multi-DoF upper limb systems; hence, I started by investigating different strategies to produce a more robust control. To do this, EMG signals were collected from trans-radial amputees using an array of up to six sensors placed over the skin. Secondly, I developed a vibrotactile system to implement haptic feedback to restore proprioception and create a bidirectional connection between the user and the prosthesis. Similarly, I implemented an object stiffness detection to restore tactile sensation, connecting the user with the external world. This closed-loop control between EMG and vibration feedback is essential to implementing a Bidirectional Body-Machine Interface that strongly impacts amputees' daily lives. For each of these three activities: (i) implementation of robust pattern recognition control algorithms, (ii) restoration of proprioception, and (iii) restoration of the feeling of the grasped object's stiffness, I performed a study where data from healthy subjects and amputees was collected, in order to demonstrate the efficacy and usability of my implementations. In each study, I evaluated both the algorithms and the subjects' ability to use the prosthesis by means of the F1Score parameter (offline) and the Target Achievement Control (TAC) test (online). With this test, I analyzed the error rate, path efficiency, and time efficiency in completing different tasks. Main results. Among the several tested methods for Pattern Recognition, the Non-Linear Logistic Regression (NLR) proved to be the best algorithm in terms of F1Score (99%, robustness), whereas the minimum number of electrodes needed for its functioning was determined to be 4 in the conducted offline analyses. Further, I demonstrated that its low computational burden allowed its implementation and integration on a microcontroller running at a sampling frequency of 300 Hz (efficiency).
Finally, the online implementation allowed the subject to simultaneously control the Hannes prosthesis DoFs in a bioinspired and human-like way. In addition, I performed further tests with the same NLR-based control by endowing it with closed-loop proprioceptive feedback. In this scenario, the TAC test yielded an error rate of 15% and a path efficiency of 60% in experiments where no other sources of information were available (no visual and no audio feedback). Such results demonstrated an improvement in the controllability of the system with an impact on user experience. Significance. The obtained results confirmed the hypothesis of improving the robustness and efficiency of prosthetic control thanks to the implemented closed-loop approach. The bidirectional communication between the user and the prosthesis is capable of restoring the lost sensory functionality, with promising implications for direct translation into clinical practice.
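    "Non-Linear Logistic Regression" is not defined in the abstract; one common reading is logistic regression over a nonlinear expansion of the EMG features. The sketch below uses that reading on a toy linearly-inseparable (XOR-style) problem, with a quadratic-plus-interaction expansion as an assumed choice of nonlinearity:

    ```python
    import numpy as np

    def expand(X):
        # Nonlinear feature map: bias, raw features, interaction, squares.
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

    def train(X, y, lr=1.0, steps=4000):
        # Plain gradient descent on the logistic log-loss.
        Phi = expand(X)
        w = np.zeros(Phi.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-Phi @ w))
            w -= lr * Phi.T @ (p - y) / len(y)
        return w

    def predict(w, X):
        return (1 / (1 + np.exp(-expand(X) @ w)) > 0.5).astype(int)

    # XOR-style labels: impossible for linear logistic regression,
    # solvable once the x1*x2 interaction term is available.
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([0, 1, 1, 0])
    w = train(X, y)
    print(predict(w, X))  # → [0 1 1 0]
    ```

    The appeal noted in the thesis, a low computational burden, comes from the fact that inference is a single dot product over the expanded features, which is what makes a 300 Hz microcontroller implementation plausible.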