
    Human to robot hand motion mapping methods: review and classification

    In this article, the variety of approaches proposed in the literature to address the problem of mapping human to robot hand motions is summarized and discussed. We organize the large number of proposed methods into macro-categories; these methods are often difficult to view from a general perspective due to their differing fields of application, specific algorithms, terminology, and declared mapping goals. First, a brief historical overview is given, tracing the emergence of the human-to-robot hand mapping problem as both a conceptual and an analytical challenge that remains open today. The survey then focuses on a classification of modern mapping methods into six categories: direct joint, direct Cartesian, task-oriented, dimensionality-reduction-based, pose-recognition-based, and hybrid mappings. For each category, the general view that connects the related studies is provided, and representative references are highlighted. Finally, a concluding discussion is given, along with the authors' views on desirable future trends. This work was supported in part by the European Commission's Horizon 2020 Framework Programme with the project REMODEL under Grant 870133 and in part by the Spanish Government under Grant PID2020-114819GB-I00.

    Principal components analysis based control of a multi-dof underactuated prosthetic hand

    Background: Functionality, controllability and cosmetics are the key issues to be addressed in order to accomplish a successful functional substitution of the human hand by means of a prosthesis. Not only should the prosthesis duplicate the human hand in shape, functionality, sensorization, perception and sense of body-belonging, but it should also be controlled like the natural one, in the most intuitive and undemanding way. At present, prosthetic hands are controlled by means of non-invasive interfaces based on electromyography (EMG). Driving a multi-degrees-of-freedom (DoF) hand to achieve hand dexterity requires selectively modulating many different EMG signals so that each joint moves independently, and this can demand significant cognitive effort from the user. Methods: A Principal Components Analysis (PCA) based algorithm is used to drive a 16-DoF underactuated prosthetic hand prototype (called CyberHand) with a two-dimensional control input, in order to perform the three prehensile forms most used in Activities of Daily Living (ADLs). The principal components were derived directly from the artificial hand by collecting its sensory data while performing 50 different grasps, and were subsequently used for control. Results: Trials have shown that two independent input signals can be successfully used to control the posture of a real robotic hand and that correct grasps (in terms of involved fingers, stability and posture) can be achieved. Conclusions: This work demonstrates the effectiveness of a bio-inspired system successfully conjugating the advantages of an underactuated, anthropomorphic hand with a PCA-based control strategy, and opens up promising possibilities for the development of an intuitively controllable hand prosthesis.
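    The PCA-based control idea described above can be sketched as follows: a posture dataset is reduced to its first two principal components, and a 2-D control input is then mapped back to a full joint configuration. This is a minimal illustration with randomly generated stand-in data (the dimensions, 50 grasps and 16 DoFs, follow the abstract; the data and the `posture_from_input` helper are hypothetical, not the paper's implementation).

    ```python
    import numpy as np

    # Hypothetical dataset: 50 recorded grasps x 16 joint angles.
    rng = np.random.default_rng(0)
    grasps = rng.standard_normal((50, 16))

    # PCA via SVD of the mean-centered data.
    mean = grasps.mean(axis=0)
    centered = grasps - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    pcs = vt[:2]  # first two principal components (the 2-D control space)

    def posture_from_input(u):
        """Map a 2-D control input back to a full 16-DoF hand posture."""
        return mean + u @ pcs

    posture = posture_from_input(np.array([0.5, -0.2]))
    assert posture.shape == (16,)
    ```

    The key property exploited is that most grasp variance lies along the first few components, so two inputs suffice to span the useful range of postures.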

    Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace

    Human-in-the-loop manipulation is useful when autonomous grasping cannot deal sufficiently well with corner cases or cannot operate fast enough. Using the teleoperator's hand as an input device can provide an intuitive control method but requires mapping between pose spaces which may not be similar. We propose a low-dimensional and continuous teleoperation subspace which can be used as an intermediary for mapping between different hand pose spaces. We present an algorithm to project between pose space and teleoperation subspace. We use a non-anthropomorphic robot to experimentally show that teleoperation subspaces can effectively and intuitively enable teleoperation. In experiments, novice users completed pick-and-place tasks significantly faster using teleoperation subspace mapping than they did using state-of-the-art teleoperation methods. Comment: ICRA 2018, 7 pages, 7 figures, 2 tables.
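    The subspace-as-intermediary idea can be sketched with linear projections: each hand has its own mapping into a shared low-dimensional space, and poses travel human → subspace → robot. This is an assumption-laden sketch (linear orthonormal bases, made-up DoF counts of 20 and 7, a 3-D subspace); the paper's actual subspace construction may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical orthonormal bases mapping each hand's pose space into a
    # shared 3-D teleoperation subspace (QR gives orthonormal columns).
    B_human = np.linalg.qr(rng.standard_normal((20, 3)))[0]  # 20-DoF human hand
    B_robot = np.linalg.qr(rng.standard_normal((7, 3)))[0]   # 7-DoF robot hand

    def human_to_robot(q_human):
        z = B_human.T @ q_human   # project human pose into the subspace
        return B_robot @ z        # reconstruct a robot pose from the subspace

    q_robot = human_to_robot(rng.standard_normal(20))
    assert q_robot.shape == (7,)
    ```

    Because both hands share the same intermediary coordinates, the mapping needs no pairwise calibration between every human-robot hand combination, only one projection per hand.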

    Sparse Eigenmotions derived from daily life kinematics implemented on a dextrous robotic hand

    Our hands are considered among the most complex actuated systems to control; thus, emulating the manipulative skills of real hands remains an open challenge, even for anthropomorphic robotic hands. While the action of the four long fingers and simple grasp motions using an opposable thumb have been successfully implemented in robotic designs, complex in-hand manipulation of objects has been difficult to achieve. We take an approach grounded in data-driven extraction of control primitives from natural human behaviour to develop novel ways to understand the dexterity of hands. We collected hand kinematics datasets from the natural, unconstrained daily-life behaviour of 8 healthy participants in a studio-flat environment. We then applied our Sparse Motion Decomposition approach to extract spatio-temporally localised modes of hand motion that are both time-scale and amplitude-scale invariant. These Sparse EigenMotions (SEMs) [1] form a sparse symbolic code that encodes continuous hand motions. We mechanically implemented the common SEMs on our novel dexterous robotic hand [2] in open-loop control. We report that, without processing any feedback during grasp control, several of the SEMs resulted in stable grasps of different daily-life objects. The finding that SEMs extracted from daily life produce stable grasps in open-loop control of dexterous hands lends further support to our hypothesis that the brain controls the hand using sparse control strategies.

    Dimensionality reduction for hand-independent dexterous robotic grasping

    In this paper, we build upon recent advances in neuroscience research showing that control of the human hand during grasping is dominated by movement in a configuration space of highly reduced dimensionality. We extend this concept to robotic hands and show how a similar dimensionality reduction can be defined for a number of different hand models. This framework can be used to derive planning algorithms that produce stable grasps even for highly complex hand designs. Furthermore, it offers a unified approach for controlling different hands, even if the kinematic structures of the models are significantly different. We illustrate these concepts by building a comprehensive grasp planner that can be used on a large variety of robotic hands under various constraints.
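    The hand-independent control idea can be illustrated by giving each hand model its own reduction basis while sharing one low-dimensional input across all of them. Everything below is a hypothetical sketch: the DoF counts, bases, and `posture` helper are stand-ins, not the paper's planner.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def make_hand(n_dof, dim=2):
        """Build a hypothetical hand model: a reduction basis plus a rest posture."""
        basis = np.linalg.qr(rng.standard_normal((n_dof, dim)))[0]
        rest = rng.standard_normal(n_dof) * 0.1
        return basis, rest

    simple_hand = make_hand(4)    # e.g. a 4-DoF gripper-style hand
    anthro_hand = make_hand(20)   # e.g. a 20-DoF anthropomorphic hand

    def posture(hand, u):
        basis, rest = hand
        return rest + basis @ u   # the same 2-D input drives either hand

    u = np.array([0.3, -0.7])
    assert posture(simple_hand, u).shape == (4,)
    assert posture(anthro_hand, u).shape == (20,)
    ```

    A grasp planner can then search only the shared low-dimensional input space, and the per-hand basis translates each candidate into that hand's full joint configuration.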