119 research outputs found

    Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace

    Full text link
    Human-in-the-loop manipulation is useful when autonomous grasping cannot deal sufficiently well with corner cases or cannot operate fast enough. Using the teleoperator's hand as an input device can provide an intuitive control method, but requires mapping between pose spaces which may not be similar. We propose a low-dimensional and continuous teleoperation subspace which can be used as an intermediary for mapping between different hand pose spaces. We present an algorithm to project between pose space and teleoperation subspace. We use a non-anthropomorphic robot to experimentally show that teleoperation subspaces can effectively and intuitively enable teleoperation. In experiments, novice users completed pick-and-place tasks significantly faster using teleoperation subspace mapping than they did using state-of-the-art teleoperation methods. Comment: ICRA 2018, 7 pages, 7 figures, 2 tables
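The abstract does not give the projection algorithm itself, but the core idea of routing control through a shared low-dimensional subspace can be sketched as two linear maps, one from the human pose space into the subspace and one from the subspace into the robot's joint space. All matrices below are made-up placeholders, not the paper's learned projections:

```python
import numpy as np

# Hypothetical illustration of mapping through a shared low-dimensional
# teleoperation subspace: project human joint angles into a 2-D subspace,
# then lift into the (non-anthropomorphic) robot's joint space.
HUMAN_DOF, ROBOT_DOF, SUBSPACE_DIM = 20, 4, 2

rng = np.random.default_rng(0)
P_human = rng.normal(size=(SUBSPACE_DIM, HUMAN_DOF))   # human pose -> subspace
P_robot = rng.normal(size=(ROBOT_DOF, SUBSPACE_DIM))   # subspace -> robot pose

def retarget(human_pose: np.ndarray) -> np.ndarray:
    """Map a human hand pose to a robot hand pose via the subspace."""
    z = P_human @ human_pose     # low-dimensional teleoperation coordinates
    return P_robot @ z           # robot joint command

robot_cmd = retarget(rng.normal(size=HUMAN_DOF))
print(robot_cmd.shape)  # (4,)
```

Because the subspace has the same meaning for both hands, the same low-dimensional coordinates can drive hands with very different kinematics.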

    A Study on the Extraction, Interpretation, and Application of Muscle Synergies for Proportional Myoelectric Control

    Get PDF
    Transfer of human intentions into myoelectric hand prostheses is generally achieved by learning a mapping directly from sEMG signals to kinematics using linear or nonlinear regression approaches. Due to the highly random and nonlinear nature of sEMG signals, such approaches are not able to fully exploit the capabilities of modern prostheses. Inspired by the muscle synergy hypothesis in the motor control community, some past studies have shown that better estimation accuracies can be achieved by learning a mapping to kinematics space from synergistic features extracted from sEMG. However, mainly linear algorithms such as Principal Component Analysis (PCA) and Non-negative Matrix Factorization (NNMF) were employed to extract synergistic features, separately, from EMG and kinematics data, and these did not consider the nonlinearity and the strong correlation that exist between finger kinematics and muscles. To exploit the relationship between EMG and finger kinematics for myoelectric control, we propose the use of the Manifold Relevance Determination (MRD) model (multi-view learning) to find the correspondence between muscular and kinematic data by learning a shared low-dimensional representation. In the first part of the study, we present the multi-view learning approach, the interpretation of nonlinear muscle synergies extracted from the joint study of sEMG and finger kinematics, and their use in estimating finger kinematics for an upper-limb prosthesis. The applicability of the proposed approach is then demonstrated by comparing the kinematics estimation accuracies against linear synergies and direct mapping. In the second part of the study, we propose a new approach to extract nonlinear muscle synergies from sEMG using multi-view learning, which addresses the two main drawbacks of established algorithms (1. inconsistent synergistic patterns upon addition of sEMG signals from more muscles; 2. weak metrics for assessing the quality and quantity of muscle synergies), and discuss the potential of the proposed approach for reducing the number of electrodes with negligible degradation in predicted kinematics. Kyushu Institute of Technology doctoral dissertation; diploma number: Seikohaku No. 372; degree conferred March 25, 2020. Contents: 1 Introduction | 2 Related Work | 3 Extraction of nonlinear synergies for proportional and simultaneous estimation of finger kinematics | 4 An Approach to Extract Nonlinear Muscle Synergies from sEMG through Multi-Model Learning | 5 Conclusion and Future Work. Kyushu Institute of Technology, 2019
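As a point of reference for the linear baseline discussed above, a minimal NNMF synergy extraction can be sketched with multiplicative updates on synthetic EMG envelopes. This is the established baseline the thesis compares against, not its MRD model:

```python
import numpy as np

# Minimal sketch of linear muscle-synergy extraction via non-negative
# matrix factorization (NNMF). EMG envelope data are factored as
# X ≈ W @ H, where columns of W are synergies and rows of H are their
# activations over time. Data here are synthetic.
def nnmf(X, n_synergies, n_iter=200, eps=1e-9):
    """Multiplicative-update NNMF: X (channels x samples) ≈ W @ H."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], n_synergies))
    H = rng.random((n_synergies, X.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update activations
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update synergies
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(8, 500)))  # 8 EMG channels
W, H = nnmf(X, n_synergies=3)
error = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(round(error, 3))
```

The thesis's criticism applies to exactly this formulation: the factorization is linear, and adding channels to `X` can reshuffle the extracted synergies.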

    Human to robot hand motion mapping methods: review and classification

    Get PDF
    In this article, the variety of approaches proposed in the literature to address the problem of mapping human to robot hand motions is summarized and discussed. We particularly attempt to organize under macro-categories the great quantity of presented methods, which are often difficult to view from a general standpoint due to different fields of application, specific uses of algorithms, terminology and declared goals of the mappings. Firstly, a brief historical overview is reported, in order to show the emergence of the human to robot hand mapping problem as both a conceptual and an analytical challenge that is still open today. Thereafter, the survey mainly focuses on a classification of modern mapping methods under six categories: direct joint, direct Cartesian, task-oriented, dimensionality-reduction-based, pose-recognition-based and hybrid mappings. For each of these categories, the general view that unites the related studies is provided, and representative references are highlighted. Finally, a concluding discussion along with the authors' points of view regarding desirable future trends is reported. This work was supported in part by the European Commission's Horizon 2020 Framework Programme with the project REMODEL under Grant 870133 and in part by the Spanish Government under Grant PID2020-114819GB-I00. Peer Reviewed. Postprint (published version)
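To make the first of the six categories concrete, a direct joint mapping can be as simple as scaling each human joint angle and clamping it to the robot's joint limits. The joint names, scales, and limits below are illustrative, not taken from the article:

```python
# Hypothetical direct joint mapping: each robot joint copies a scaled human
# joint angle (radians), clamped to that robot joint's limits.
ROBOT_LIMITS = {"thumb": (0.0, 1.2), "index": (0.0, 1.5), "middle": (0.0, 1.5)}
SCALE = {"thumb": 0.8, "index": 1.0, "middle": 1.0}

def direct_joint_map(human_angles: dict) -> dict:
    robot = {}
    for joint, angle in human_angles.items():
        lo, hi = ROBOT_LIMITS[joint]
        robot[joint] = min(max(angle * SCALE[joint], lo), hi)
    return robot

print(direct_joint_map({"thumb": 2.0, "index": 0.7, "middle": -0.1}))
# thumb clamps to 1.2, index passes through as 0.7, middle clamps to 0.0
```

Its simplicity is also its limitation: it assumes a roughly anthropomorphic robot hand, which is why the survey's other categories (Cartesian, task-oriented, dimensionality-reduction-based, ...) exist.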

    Synergy-Based Human Grasp Representations and Semi-Autonomous Control of Prosthetic Hands

    Get PDF
    Secure and stable grasping with humanoid robot hands poses a major challenge. This dissertation therefore addresses the derivation of grasping strategies for robot hands from the observation of human grasping. The focus is on the entire grasping process: on the one hand, the hand and finger trajectories during the grasp, and on the other, the contact points and the force profile between hand and object from first contact to a statically stable grip. Nonlinear postural synergies and force synergies of human grasps are presented, which allow the generation of human-like grasp poses and grasp forces. Furthermore, synergy primitives are developed as an adaptable representation of human grasping motions. The described grasping strategies learned from humans are applied to the control of robotic prosthetic hands. Within a semi-autonomous control scheme, human-like grasping motions are proposed according to the situation and supervised by the prosthesis user

    An In-Depth Investigation of the Effects of Work-Related Factors on the Development of Knee Musculoskeletal Disorders among Construction Roofers

    Get PDF
    Construction roofers have the highest likelihood of developing knee musculoskeletal disorders (MSDs). Roofers spend more than 75% of their total working time restricted to awkward kneeling postures and repetitive motions in a sloped-roof setting. However, the combined effect of knee-straining posture and roof slope, and their association with knee MSDs among roofers, are still unknown. This dissertation aimed to provide empirical evidence of the effects of two roofing work-related factors, namely roof slope and kneeling working posture, on the development of knee MSDs among construction roofers. These two factors were assessed as potential contributors to knee MSD risk in roofing by evaluating the awkward knee rotations and heightened activation of knee postural muscles that might occur in sloped-shingle installation. Moreover, a novel ranking-based ergonomic risk analysis method was developed to identify the riskiest working phase in the sloped-shingle installation operation. In addition, a data fusion method was developed for treating multiple incomplete experimental risk-related datasets that would otherwise reduce the accuracy of risk assessments due to human- and technology-induced errors during experimental data collection. The findings revealed that roof slope, working posture and their interaction have significant impacts on the development of knee MSDs among roofers. Knees are likely to have increased exposure to MSD risks while placing and nailing shingles on sloped roof surfaces. The established data fusion method has been proven feasible in handling up to 40% missing data in MSD risk-related datasets. The contributions lie in an enhanced understanding of the physical risk exposures underlying roofers' knee MSDs and in the creation of the ranking-based ergonomic analysis method and the fusion method, which will help improve MSD risk assessment in construction.
In the long run, these outcomes will help develop new knee joint biomechanical models, effective interventions, and education and training materials that will improve the workplace to promote the health and safety of roofers
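The abstract does not specify the fusion method itself; as a simple stand-in, the scale of the missing-data problem it handles (up to 40% missing entries in risk-related datasets) can be illustrated with naive column-mean imputation:

```python
import numpy as np

# Illustrative stand-in, not the dissertation's method: simulate 40%
# missing entries in a sensor matrix and impute them with column means.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, size=(200, 6))    # e.g. posture-sensor readings
mask = rng.random(data.shape) < 0.4          # 40% missing, as in the study
observed = np.where(mask, np.nan, data)

col_means = np.nanmean(observed, axis=0)     # mean of observed entries only
imputed = np.where(np.isnan(observed), col_means, observed)

print(imputed.shape)  # (200, 6)
```

A method that remains accurate at this missingness level must do better than per-column means, which is precisely what a fusion approach across multiple datasets targets.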

    A virtual hand assessment system for efficient outcome measures of hand rehabilitation

    Get PDF
    Previously held under moratorium from 1st December 2016 until 1st December 2021. Hand rehabilitation is an extremely complex and critical process in the medical rehabilitation field. This is mainly due to the highly articulated functionality of the hand. Recent research has focused on employing new technologies, such as robotics and system control, in order to improve the precision and efficiency of the standard clinical methods used in hand rehabilitation. However, the designs of these devices were either oriented toward a particular hand injury or heavily dependent on subjective assessment techniques to evaluate progress. These limitations reduce the efficiency of hand rehabilitation devices by providing less effective results for restoring the lost functionality of a dysfunctional hand. In this project, a novel technological solution and efficient hand assessment system is produced that can objectively measure the restoration outcome and dynamically evaluate its performance. The proposed system uses a data glove sensorial device to measure the multiple ranges of motion of the hand joints, and a Virtual Reality system to provide an illustrative and safe visual assistance environment that can self-adjust to the subject's performance. The system application implements an original finger performance measurement method for analysing the various hand functionalities. This is achieved by extracting multiple features of the hand digits' motions, such as speed, consistency of finger movements and stability during hold positions. Furthermore, an advanced data glove calibration method was developed and implemented in order to accurately manipulate the virtual hand model and calculate the hand kinematic movements in compliance with the biomechanical structure of the hand. The experimental studies were performed on a controlled group of 10 healthy subjects (aged 25 to 42 years).
The results showed intra-subject reliability between the trials (average cross-correlation ρ = 0.7) and inter-subject repeatability across the subjects' performance (p > 0.01 for the session with real objects, with few departures in some of the virtual reality sessions). In addition, the finger performance values were found to be very efficient in detecting the multiple elements of the fingers' performance, including the load effect on the forearm. Moreover, the electromyography measurements in the virtual reality sessions showed high sensitivity in detecting the tremor effect (the mean power frequency difference on the right extensor digitorum muscle is 176 Hz). Also, the finger performance values for the virtual reality sessions have the same average distance as the real-life sessions (RSQ = 0.07). The system, besides offering an efficient and quantitative evaluation of hand performance, was proven compatible with different hand rehabilitation techniques, where it can outline the primarily affected parts of the hand dysfunction. It can also be easily adjusted to comply with the subject's specifications and clinical hand assessment procedures to autonomously detect classification task events and analyse them with high reliability. The developed system is also adaptable to disciplines other than hand rehabilitation, such as ergonomic studies, hand robot control, brain-computer interfaces and various other fields involving hand control

    Real-time robustness evaluation of regression based myoelectric control against arm position change and donning/doffing

    Get PDF
    There are some practical factors, such as arm position change and donning/doffing, which prevent robust myoelectric control. The objective of this study is to precisely characterize the impacts of these two representative factors on myoelectric controllability in practical control situations, thereby providing useful references that can potentially be used to find better solutions for clinically reliable myoelectric control. To this end, a real-time target acquisition task was performed by fourteen subjects, including one individual with congenital upper-limb deficiency, in which the impacts of arm position change, donning/doffing and a combination of both factors on control performance were systematically evaluated. The changes in online performance were examined with seven different performance metrics to comprehensively evaluate various aspects of myoelectric controllability. As a result, arm position change significantly affects offline prediction accuracy, but not online control performance, owing to real-time feedback, and accordingly there is no significant correlation between offline and online performance. Donning/doffing remained problematic in online control conditions. It was further observed that no benefit was attained from a control model trained with multiple-position data with respect to arm position change, and that the degree of electrode shift caused by donning/doffing (around 1 cm) was not strongly associated with the degree of performance loss under practical conditions. Since this study is the first to concurrently investigate the impacts of arm position change and donning/doffing in practical myoelectric control situations, its findings provide new insights into robust myoelectric control with respect to arm position change and donning/doffing. DFG, 325093850, Open Access Publizieren 2017 - 2018 / Technische Universität Berlin
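The controllers evaluated in this study belong to the regression-based class, which maps EMG features to continuous control outputs. A hedged sketch of that class (ridge regression on synthetic features, not the study's exact model or data) looks like:

```python
import numpy as np

# Generic regression-based myoelectric control sketch: a ridge regression
# maps EMG feature vectors to continuous outputs (e.g. 2-DoF cursor or
# wrist velocities). All data and dimensions below are synthetic.
rng = np.random.default_rng(0)
n_samples, n_features, n_dof = 1000, 16, 2     # e.g. 8 channels x 2 features
X = rng.normal(size=(n_samples, n_features))   # EMG feature vectors
W_true = rng.normal(size=(n_features, n_dof))
Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, n_dof))

lam = 1e-2                                     # ridge penalty
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

residual = np.linalg.norm(Y - X @ W) / np.linalg.norm(Y)
print(round(residual, 3))
```

The study's point is that such a model fit on training data can look excellent offline yet behave differently online: arm position change degrades the offline fit but users compensate under real-time feedback, while donning/doffing shifts the feature distribution in ways feedback does not fully absorb.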

    Fused mechanomyography and inertial measurement for human-robot interface

    Get PDF
    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement).
As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time, and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved in 5-gesture classification. It has previously been noted that MMG sensors are susceptible to motion-induced interference. The thesis also established that arm pose changes the measured signal. This thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement in orientation estimation and a new orientation estimation algorithm are proposed.
These improvements to the robustness of the system provide the first solution that is able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; however, mechanomyography sensors are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others. Open Access
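The Linear Discriminant Analysis classifier named above for decoding MMG gesture features can be sketched with class means and a pooled covariance. The data here are synthetic Gaussian blobs, one per gesture class, not the thesis's sensor features:

```python
import numpy as np

# Minimal LDA gesture classifier: fit class means and a shared (pooled)
# covariance, then assign each feature vector to the class with the
# highest linear discriminant score (equal priors assumed).
rng = np.random.default_rng(0)
n_classes, n_features, n_per_class = 3, 6, 100
means = rng.normal(scale=3.0, size=(n_classes, n_features))
X = np.vstack([m + rng.normal(size=(n_per_class, n_features)) for m in means])
y = np.repeat(np.arange(n_classes), n_per_class)

mu = np.array([X[y == c].mean(axis=0) for c in range(n_classes)])
Sigma = sum(np.cov(X[y == c].T) for c in range(n_classes)) / n_classes
Sigma_inv = np.linalg.inv(Sigma)

def lda_predict(x: np.ndarray) -> int:
    # Discriminant: mu_c^T S^-1 x - 0.5 mu_c^T S^-1 mu_c
    scores = mu @ Sigma_inv @ x - 0.5 * np.einsum(
        "ij,jk,ik->i", mu, Sigma_inv, mu)
    return int(np.argmax(scores))

accuracy = np.mean([lda_predict(x) == c for x, c in zip(X, y)])
print(round(accuracy, 3))
```

LDA's appeal for wearable decoding is its low training cost and cheap prediction (a single matrix-vector product per class), which suits the real-time constraints reported above.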