
    Intuitive Human-Machine Interfaces for Non-Anthropomorphic Robotic Hands

    As robots become more prevalent in our everyday lives, both in our workplaces and in our homes, it becomes increasingly likely that people who are not experts in robotics will be asked to interface with robotic devices. It is therefore important to develop robot controls that are intuitive and easy for novices to use. Robotic hands in particular are very useful, but their high dimensionality makes creating intuitive human-machine interfaces for them complex. In this dissertation, we study the control of non-anthropomorphic robotic hands by non-roboticists in two contexts: collaborative manipulation and assistive robotics.

    In collaborative manipulation, the human and the robot work side by side as independent agents. Teleoperation allows the human to assist the robot when autonomous grasping cannot deal well enough with corner cases or cannot operate fast enough. Using the teleoperator's hand as an input device can provide an intuitive control method, but finding a mapping between a human hand and a non-anthropomorphic robot hand can be difficult due to the hands' dissimilar kinematics. We seek to create a mapping between the human hand and a fully actuated, non-anthropomorphic robot hand that is intuitive enough to enable effective real-time teleoperation, even for novice users. To this end, we propose a low-dimensional, continuous teleoperation subspace that can be used as an intermediary for mapping between different hand pose spaces. We first present the general concept of the subspace, its properties, and the variables needed to map from the human hand to a robot hand. We then propose three ways to populate the teleoperation subspace mapping. Two of our mappings use a dataglove to harvest information about the user's hand: an empirical definition, which requires a person to define hand motions in an intuitive, hand-specific way, and an algorithmic definition, which is kinematically independent and uses objects to define the subspace. Our third mapping uses forearm electromyography (EMG) as the control input.

    Assistive orthotics is another area of robotics where human-machine interfaces are critical, since here the robot is attached to the hand of the human user. The goal is for the robot to assist the human with movements they would not otherwise be able to achieve; such orthotics can improve the quality of life of people who do not have full use of their hands. Human-machine interfaces for assistive hand orthotics that use EMG signals from the affected forearm as input are intuitive, and repeated use can strengthen the muscles of the user's affected arm. In this dissertation, we seek to create an EMG-based control for an orthotic device used by people who have had a stroke; the control should enable functional motions when used in conjunction with an orthosis and be robust to changes in the input signal. We propose a control for a wearable hand orthosis that uses an easy-to-don, commodity forearm EMG band. We develop a supervised algorithm to detect a user's intent to open and close their hand, and we pair this algorithm with a training protocol that makes intent detection robust to changes in the input signal. We show that this algorithm, when used in conjunction with the orthosis over several weeks, can improve distal function in users. Additionally, we propose two semi-supervised intent detection algorithms designed to keep the control robust to changes in the input data while reducing the length and frequency of the training protocol.
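    To make the subspace idea concrete, here is a minimal sketch of what a linear version of such a mapping could look like. Everything in it is an illustrative assumption rather than the dissertation's formulation: a 2-D subspace whose axes loosely capture hand spread and closure, offset-free linear projections, and a handful of corresponding calibration poses per hand.

```python
# Hypothetical sketch of a subspace-based hand mapping (illustrative
# assumptions throughout; not the dissertation's exact method).
import numpy as np

class SubspaceMapping:
    def __init__(self, human_poses, robot_poses, subspace_coords):
        # Fit least-squares maps: human joints -> subspace -> robot joints.
        # human_poses:     (N, d_h) calibration joint angles, human hand,
        #                  assumed centered on a rest pose (offset-free model)
        # robot_poses:     (N, d_r) corresponding robot joint angles
        # subspace_coords: (N, 2) hand-agnostic coordinates, e.g. the
        #                  assumed "spread" and "closure" axes
        self.P_h = np.linalg.lstsq(human_poses, subspace_coords, rcond=None)[0]
        self.P_r = np.linalg.lstsq(subspace_coords, robot_poses, rcond=None)[0]

    def __call__(self, q_human):
        # Project the measured human pose into the shared subspace,
        # then lift the subspace point into the robot's joint space.
        t = q_human @ self.P_h   # (2,) point in the teleoperation subspace
        return t @ self.P_r      # (d_r,) robot joint command
```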
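    The supervised intent detection could likewise be sketched as a windowed-feature classifier over the band's channels. The per-channel RMS feature, the logistic-regression classifier, and the eight-channel band are illustrative stand-ins, not the algorithm or training protocol developed in the dissertation.

```python
# Hypothetical sketch of supervised open/close intent detection from a
# commodity 8-channel forearm EMG band (feature and classifier choices
# are assumptions for illustration).
import numpy as np
from sklearn.linear_model import LogisticRegression

def rms_features(window):
    # window: (samples, 8) raw EMG; returns one RMS value per channel.
    return np.sqrt(np.mean(window.astype(float) ** 2, axis=0))

def train_intent_detector(windows, labels):
    # windows: list of (samples, 8) arrays; labels: 1 = open, 0 = close.
    X = np.stack([rms_features(w) for w in windows])
    return LogisticRegression().fit(X, labels)

def detect_intent(model, window):
    # Classify a new window; the orthosis would act on this decision.
    p_open = model.predict_proba(rms_features(window)[None, :])[0, 1]
    return "open" if p_open > 0.5 else "close"
```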

    Bimanual Motor Strategies and Handedness Role During Human-Exoskeleton Haptic Interaction

    Bimanual object manipulation involves multiple visuo-haptic feedback signals arising from interaction with the environment, which are managed by the central nervous system and translated into motor commands. The kinematic strategies that arise during bimanually coupled tasks are still a matter of scientific debate, despite modern advances in haptics and robotics. Current technologies have the potential to provide realistic scenarios involving the entire upper limbs during multi-joint movements, but they are not yet exploited to their full potential. The present study explores how the hands dynamically interact when manipulating a shared object, using two impedance-controlled exoskeletons programmed to simulate bimanually coupled manipulation of virtual objects. We enrolled twenty-six participants (two groups: right-handed and left-handed) who were asked to use both hands to grab simulated objects across the robot workspace and place them in specific locations. The virtual objects were rendered with different dynamic properties and textures that influenced the manipulation strategies needed to complete the tasks. Results revealed that the roles of the hands are related to movement direction, haptic features, and handedness. The outcomes suggest that haptic feedback affects bimanual strategies depending on the movement direction; however, left-handers showed better control of the force applied between the two hands, probably due to environmental pressure toward right-handed manipulation.
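    As a rough illustration of how such a shared virtual object might be rendered, the sketch below couples each hand to its grasp point with a spring-damper and lets a point-mass object integrate the reaction forces, so one hand's motion is felt haptically by the other. The gains, the point-mass model, and the neglect of rotation are simplifying assumptions; the study's actual impedance controllers and virtual dynamics are described in the paper.

```python
# Hypothetical sketch of rendering a bimanually shared virtual object
# via impedance control (all parameters and the object model are
# illustrative assumptions).
import numpy as np

def render_coupled_object(x_l, v_l, x_r, v_r, obj, dt, K=300.0, D=15.0, m=1.0):
    # Spring-damper coupling from each hand (position x, velocity v)
    # to its grasp point on the object.
    f_l = K * (obj["p_l"] - x_l) + D * (obj["v"] - v_l)
    f_r = K * (obj["p_r"] - x_r) + D * (obj["v"] - v_r)
    # The object feels the reactions and translates (rotation ignored),
    # carrying both grasp points with it.
    obj["v"] += -(f_l + f_r) / m * dt
    obj["p_l"] += obj["v"] * dt
    obj["p_r"] += obj["v"] * dt
    return f_l, f_r  # forces commanded to the left and right exoskeleton
```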

    Congruency of eye movement metrics across motor simulation states: implications for motor (re)learning.

    This thesis contains a series of studies that report, for the first time, the congruence between physical, imagined, and observed movement through a range of eye movement markers, as a test of Jeannerod's Simulation Theory (Jeannerod, 1994, 2001).

    First, the eye gaze metrics of healthy young individuals across all the action-related processes in a single paradigm are reported. The findings suggested temporal and spatial similarity between action execution (AE) and action observation (AO), and spatial similarity between AE and motor imagery (MI); they indicate that AO could be used to simulate actions that involve a critical temporal element. Second, the influence of early ageing on gaze metrics was examined. While the profile of metrics for AE showed age-related decline, the decline was less evident in AO and MI, although there was evidence of some decline across all three processes. Third, the influence of visual perspective on eye movements during movement simulation is reported. The data analysis in this study was novel and allowed, for the first time, eye gaze to be used to quantify MI; it also highlighted the importance of social gaze in AO and its absence in MI.

    Taken together, the finding that some eye metrics are preserved in more covert behaviours supports the efficacy of (re)learning optimal eye gaze strategies through AO- and MI-supported movement-based interventions for older adults with movement dysfunction. Therefore, in the final study, the development of a fully integrated AE-AO-MI toolkit is reported, and a new app-based approach to integrating movement simulation into rehabilitation is described in detail. Twenty years after Jeannerod first proposed his Simulation Theory of MI, the novel findings from this programme of work provide substantial support for the concept. The thesis highlights the advantage of advanced eye gaze technology as an important marker to inform the ongoing debate on the extent of shared neural substrates, the central tenet of Simulation Theory. The findings of these studies will make an important impact on the use of simulation procedures for motor relearning.