8 research outputs found

    Platform Portable Anthropomorphic Grasping with the Bielefeld 20-DOF Shadow and 9-DOF TUM Hand

    Röthling F, Haschke R, Steil JJ, Ritter H. Platform Portable Anthropomorphic Grasping with the Bielefeld 20-DOF Shadow and 9-DOF TUM Hand. In: Proc. Int. Conf. on Intelligent Robots and Systems (IROS). IEEE; 2007: 2951-2956

    Real robot hand grasping using simulation-based optimisation of portable strategies

    Röthling F. Real robot hand grasping using simulation-based optimisation of portable strategies. Bielefeld (Germany): Bielefeld University; 2007.
    This thesis presents a complete line of biologically motivated approaches for providing robot hands with grasping capabilities. These approaches comprise a model of robot grasping, a grasp synthesis, a grasp strategy, and a grasp taxonomy. The grasp types defined by the taxonomy are straightforward to realise in a robot hand setup when the development rules proposed in this thesis are followed. The approach to grasp synthesis stresses the target grasp posture, providing the opportunity to optimise the realised grasp types for finger closure trajectories. In the optimisation strategy presented, the pre-grasp is first optimised for contact simultaneity, a step substantiated by an experiment on human grasping; the target grasp is then optimised using an evolutionary algorithm. For optimisation, grasps are evaluated within a physics-based simulator using a grasp stability measure derived from a standard grasp quality measure. By implementing the grasp strategy and the optimisation strategy on one robot hand setup (with the three-fingered 9-DOF hydraulic TUM Hand) and porting them to a second setup (with the very dextrous 20-DOF pneumatic Shadow Hand), the thesis shows that these strategies are realisable on, and portable among, very different robot systems. The strategies are robust against limited positioning accuracy of the finger joints and uncertainty about object position and orientation. Grasping success is evaluated on the real hands in comparative experiments using a benchmark test on 21 everyday objects.
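
    The abstract sketches a two-stage optimisation: the pre-grasp is first tuned for contact simultaneity, and the target grasp posture is then refined with an evolutionary algorithm scored by a simulation-based stability measure. The minimal Python sketch below illustrates only that second, evolutionary stage; the stability function, postures, and parameters are invented stand-ins for demonstration, not the actual simulator or implementation used in the thesis.

    ```python
    # Minimal sketch (not the thesis' implementation) of the evolutionary stage:
    # refine the target grasp posture while keeping the pre-grasp fixed.
    # grasp_stability() is a toy stand-in so the loop runs as-is; the thesis
    # scores candidates in a physics-based simulator with a measure derived
    # from a standard grasp quality measure.
    import random

    N_JOINTS = 9              # e.g. the 9-DOF TUM Hand
    POP_SIZE = 20
    GENERATIONS = 50
    MUTATION_STD = 0.05       # radians

    IDEAL = [0.6] * N_JOINTS  # toy "stable" target posture, for demonstration only

    def grasp_stability(pre_grasp, target_grasp):
        """Stand-in for simulating finger closure from pre_grasp towards
        target_grasp and scoring the result; here simply closeness to IDEAL."""
        return -sum((q - i) ** 2 for q, i in zip(target_grasp, IDEAL))

    def mutate(posture):
        return [q + random.gauss(0.0, MUTATION_STD) for q in posture]

    def optimise_target_grasp(pre_grasp, initial_target):
        """Evolutionary refinement of the target grasp; the pre-grasp was
        already optimised for contact simultaneity in stage one."""
        population = [mutate(initial_target) for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            population.sort(key=lambda c: grasp_stability(pre_grasp, c), reverse=True)
            parents = population[: POP_SIZE // 4]   # truncation selection
            population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
        return max(population, key=lambda c: grasp_stability(pre_grasp, c))

    if __name__ == "__main__":
        best = optimise_target_grasp([0.2] * N_JOINTS, [0.4] * N_JOINTS)
        print("optimised target grasp:", [round(q, 3) for q in best])
    ```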

    Situated robot learning for multi-modal instruction and imitation of grasping

    Steil JJ, Röthling F, Haschke R, Ritter H. Situated robot learning for multi-modal instruction and imitation of grasping. Robotics and Autonomous Systems. 2004;47(2-3):129-141

    Platform Portable Anthropomorphic Grasping with the Bielefeld 20-DOF Shadow and 9-DOF TUM Hand

    Abstract — We present a strategy for grasping real-world objects with two anthropomorphic hands, the three-fingered 9-DOF hydraulic TUM Hand and the very dextrous 20-DOF pneumatic Bielefeld Shadow Hand. Our approach to grasping is based on a reach–pre-grasp–grasp scheme loosely motivated by human grasping. We comparatively describe the two robot setups, the control schemes, and the grasp type determination. We show that the grasp strategy can robustly cope with inaccurate control and object variation, and we demonstrate that it can be ported between the platforms with minor modifications. Grasping success is evaluated in comparative experiments using a benchmark test on 21 everyday objects.
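
    The reach–pre-grasp–grasp scheme mentioned in the abstract can be pictured as a short, fixed phase sequence behind a platform-independent interface. The sketch below is purely illustrative: the GraspPlan fields and the robot methods (move_arm_to, set_hand_posture, close_hand, grasp_is_stable) are hypothetical names, not the paper's actual control interface.

    ```python
    # Illustrative sketch of a reach–pre-grasp–grasp sequence. The 'robot'
    # interface is hypothetical; hiding platform specifics behind it is what
    # would make such a strategy portable between different hands.
    from dataclasses import dataclass

    @dataclass
    class GraspPlan:
        approach_pose: tuple        # Cartesian pose in front of the object
        pre_grasp_posture: list     # opened hand joint angles (grasp-type specific)
        target_posture: list        # closed hand joint angles of the target grasp

    def execute_grasp(robot, plan: GraspPlan) -> bool:
        robot.move_arm_to(plan.approach_pose)           # reach: move the hand towards the object
        robot.set_hand_posture(plan.pre_grasp_posture)  # pre-grasp: preshape the hand
        robot.close_hand(plan.target_posture)           # grasp: close the fingers onto the object
        return robot.grasp_is_stable()                  # e.g. a lift-and-hold check
    ```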

    Learning issues in a multi-modal robot-instruction scenario

    Steil JJ, Röthling F, Haschke R, Ritter H. Learning issues in a multi-modal robot-instruction scenario. In: Workshop on Imitation Learning. 2003

    Neural Architectures for Robotic Intelligence

    Ritter H, Steil JJ, Nölker C, Röthling F, McGuire PC. Neural Architectures for Robotic Intelligence. Reviews in the Neurosciences. 2003;14(1-2):121-143

    Manual Intelligence as a Rosetta Stone for Robot Cognition

    Ritter H, Haschke R, Röthling F, Steil JJ. Manual Intelligence as a Rosetta Stone for Robot Cognition. Presented at the International Symposium on Robotics Research (ISRR), Hiroshima.

    Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks

    McGuire PC, Fritsch J, Ritter H, et al. Multi-Modal Human-Machine Communication for Instructing Robot Grasping Tasks. In: Proc. Int. Conf. Intelligent Robots and Systems. IEEE; 2002: 1082-1089.
    A major challenge for the realization of intelligent robots is to supply them with cognitive abilities in order to allow ordinary users to program them easily and intuitively. One way of such programming is teaching work tasks by interactive demonstration. To make this effective and convenient for the user, the machine must be capable of establishing a common focus of attention and be able to use and integrate spoken instructions, visual perceptions, and non-verbal cues such as gestural commands. We report progress in building a hybrid architecture that combines statistical methods, neural networks, and finite state machines into an integrated system for instructing grasping tasks by man-machine interaction. The system combines the GRAVIS robot for visual attention and gestural instruction with an intelligent interface for speech recognition and linguistic interpretation, and a modality fusion module, to allow multi-modal, task-oriented man-machine communication with respect to dextrous robot manipulation of objects.
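
    As a rough illustration of the modality fusion step characterised in this abstract, the sketch below combines a spoken command with a gesture-driven focus of attention to resolve which object an instruction refers to. All class, field, and function names are invented for illustration and do not reflect the GRAVIS system's actual interfaces.

    ```python
    # Toy modality fusion: resolve which object a grasp instruction refers to
    # by combining the spoken object label with a gestural attention score.
    # Names and structure are illustrative only, not the GRAVIS architecture.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class SpokenCommand:
        action: str                  # e.g. "grasp"
        object_label: Optional[str]  # e.g. "cup", or None if not mentioned

    @dataclass
    class DetectedObject:
        label: str
        position: tuple              # (x, y, z) in the workspace
        attention_score: float       # how strongly gesture/vision highlights it

    def fuse(command: SpokenCommand, candidates: List[DetectedObject]) -> Optional[DetectedObject]:
        """Keep candidates matching the spoken label (if one was given),
        then pick the one most strongly indicated by the pointing gesture."""
        if command.object_label is not None:
            candidates = [c for c in candidates if c.label == command.object_label]
        if not candidates:
            return None              # nothing matches: ask the user to clarify
        return max(candidates, key=lambda c: c.attention_score)

    if __name__ == "__main__":
        scene = [DetectedObject("cup", (0.4, 0.1, 0.0), 0.8),
                 DetectedObject("ball", (0.5, -0.2, 0.0), 0.3)]
        print(fuse(SpokenCommand("grasp", "cup"), scene))
    ```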