20 research outputs found

    Active End-Effector Pose Selection for Tactile Object Recognition through Monte Carlo Tree Search

    Full text link
    This paper considers the problem of active object recognition using touch only. The focus is on adaptively selecting a sequence of wrist poses that achieves accurate recognition by enclosure grasps. It seeks to minimize the number of touches and maximize recognition confidence. The actions are formulated as wrist poses relative to each other, making the algorithm independent of absolute workspace coordinates. The optimal sequence is approximated by Monte Carlo tree search. We demonstrate results in a physics engine and on a real robot. In the physics engine, most object instances were recognized in at most 16 grasps. On a real robot, our method recognized objects in 2--9 grasps and outperformed a greedy baseline. Comment: Accepted to International Conference on Intelligent Robots and Systems (IROS) 201
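
    As a rough illustration of the selection step (not the authors' implementation), the sketch below runs a one-level Monte Carlo tree search, i.e. UCB over candidate relative wrist poses, against a hypothetical binary tactile observation model; the pose set, object classes, and likelihoods are made-up placeholders. The reward of a simulated grasp is the recognition confidence after a Bayes update of the class belief.

        import math, random

        # Hypothetical action and class sets; a real system would use learned
        # tactile likelihoods for enclosure grasps rather than random numbers.
        POSES = ["rotate_left", "rotate_right", "approach_top", "approach_side"]
        OBJECTS = ["box", "cylinder", "sphere"]
        LIKELIHOOD = {o: {p: random.random() for p in POSES} for o in OBJECTS}

        def update_belief(belief, pose, reading):
            # Bayes update of the class belief after one simulated enclosure grasp.
            post = {o: belief[o] * (LIKELIHOOD[o][pose] if reading else 1.0 - LIKELIHOOD[o][pose])
                    for o in belief}
            z = sum(post.values()) or 1e-9
            return {o: v / z for o, v in post.items()}

        def uct_select_pose(belief, n_sims=500, c=1.4):
            # One-level UCT: the value of a pose is the expected recognition
            # confidence (max posterior probability) after executing it.
            counts = {p: 0 for p in POSES}
            totals = {p: 0.0 for p in POSES}
            for i in range(1, n_sims + 1):
                pose = max(POSES, key=lambda p: float("inf") if counts[p] == 0
                           else totals[p] / counts[p] + c * math.sqrt(math.log(i) / counts[p]))
                obj = random.choices(OBJECTS, weights=[belief[o] for o in OBJECTS])[0]
                reading = random.random() < LIKELIHOOD[obj][pose]
                totals[pose] += max(update_belief(belief, pose, reading).values())
                counts[pose] += 1
            return max(POSES, key=lambda p: counts[p])

        belief = {o: 1.0 / len(OBJECTS) for o in OBJECTS}
        print("next relative wrist pose:", uct_select_pose(belief))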

    Learning epistemic actions in model-free memory-free reinforcement learning: experiments with a neuro-robotic model

    Get PDF
    Passive sensory processing is often insufficient to guide biological organisms in complex environments. Rather, behaviourally relevant information can be accessed by performing so-called epistemic actions that explicitly aim at unveiling hidden information. However, it is still unclear how an autonomous agent can learn epistemic actions and how it can use them adaptively. In this work, we propose a definition of epistemic actions for POMDPs that derives from their characterizations in the cognitive science and classical planning literature. We give theoretical insights into how partial observability and epistemic actions can affect the learning process and performance in the extreme conditions of model-free and memory-free reinforcement learning, where hidden information cannot be represented. We finally investigate these concepts using an integrated eye-arm neural architecture for robot control, which can use its effectors to execute epistemic actions and can exploit the actively gathered information to efficiently accomplish a seek-and-reach task.
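
    The role of an epistemic action in a memory-free setting can be made concrete with a toy example. The sketch below is a hypothetical stand-in, not the paper's neuro-robotic architecture: a tabular Q-learner whose policy depends only on the current observation learns that an explicit "peek" action, which reveals a hidden goal side at a small cost, is worth taking because it changes what the agent subsequently observes.

        import random

        ACTIONS = ["peek", "go_left", "go_right"]

        def episode(Q, eps=0.1, alpha=0.1, gamma=0.95):
            goal = random.choice(["L", "R"])   # hidden state the agent cannot observe directly
            obs = "unknown"                    # memory-free: the policy sees only this
            for _ in range(3):
                a = (random.choice(ACTIONS) if random.random() < eps
                     else max(ACTIONS, key=lambda x: Q[(obs, x)]))
                if a == "peek":                # epistemic action: small cost, reveals a cue
                    reward, next_obs, done = -0.05, "cue_" + goal, False
                else:                          # pragmatic action: succeeds only on the goal side
                    correct = (a == "go_left") == (goal == "L")
                    reward, next_obs, done = (1.0 if correct else -1.0), "terminal", True
                best_next = 0.0 if done else max(Q[(next_obs, x)] for x in ACTIONS)
                Q[(obs, a)] += alpha * (reward + gamma * best_next - Q[(obs, a)])
                obs = next_obs
                if done:
                    break

        Q = {(o, a): 0.0 for o in ["unknown", "cue_L", "cue_R"] for a in ACTIONS}
        for _ in range(5000):
            episode(Q)
        print("preferred first action:", max(ACTIONS, key=lambda a: Q[("unknown", a)]))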

    Sequential Trajectory Re-planning with Tactile Information Gain for Dexterous Grasping under Object-pose Uncertainty

    Get PDF
    Dexterous grasping of objects with uncertain pose is a hard, unsolved problem in robotics. This paper solves this problem using information-gain re-planning. First, we show how tactile information acquired during a failed attempt to grasp an object can be used to refine the estimate of that object’s pose. Second, we show how this information can be used to re-plan new reach-to-grasp trajectories for successive grasp attempts. Finally, we show how reach-to-grasp trajectories can be modified so that they maximise the expected tactile information gain while simultaneously delivering the hand to the grasp configuration that is most likely to succeed. Our main novel outcome is thus to enable tactile information-gain planning for dexterous, high degree-of-freedom (DoF) manipulators. We achieve this using a combination of information-gain planning, hierarchical probabilistic roadmap planning, and belief updating from tactile sensors for objects with non-Gaussian pose uncertainty in 6 dimensions. The method is demonstrated in trials with simulated robots. Sequential re-planning is shown to achieve a greater success rate than single grasp attempts, and trajectories that maximise information gain require fewer re-planning iterations than conventional planning methods before a grasp is achieved.
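
    A minimal sketch of the trajectory-scoring idea, under strong simplifying assumptions (a one-dimensional pose belief represented by weighted particles and a hypothetical binary contact model): each candidate reach-to-grasp offset is scored by the expected reduction in belief entropy that its tactile outcome would produce, and the most informative one is chosen.

        import math, random

        def entropy(weights):
            return -sum(w * math.log(w) for w in weights if w > 1e-12)

        def contact_prob(particle_x, trajectory_x, width=0.05):
            # Hypothetical sensor model: contact is likely if the trajectory passes
            # within `width` of the object position hypothesised by the particle.
            return 0.9 if abs(particle_x - trajectory_x) < width else 0.1

        def expected_information_gain(particles, weights, trajectory_x):
            h_prior = entropy(weights)
            gain = 0.0
            for z in (True, False):   # possible tactile outcomes: contact / no contact
                likes = [contact_prob(p, trajectory_x) if z else 1.0 - contact_prob(p, trajectory_x)
                         for p in particles]
                p_z = sum(w * l for w, l in zip(weights, likes))
                if p_z < 1e-12:
                    continue
                post = [w * l / p_z for w, l in zip(weights, likes)]   # Bayes update
                gain += p_z * (h_prior - entropy(post))                # expected entropy drop
            return gain

        # Belief over the object's x-position (e.g. after a failed grasp attempt).
        particles = [random.gauss(0.0, 0.03) for _ in range(200)]
        weights = [1.0 / len(particles)] * len(particles)

        candidates = [-0.06, -0.02, 0.0, 0.02, 0.06]   # candidate approach offsets (m)
        best = max(candidates, key=lambda x: expected_information_gain(particles, weights, x))
        print("most informative trajectory offset:", best)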

    Robust grasping under object pose uncertainty

    Get PDF
    This paper presents a decision-theoretic approach to problems that require accurate placement of a robot relative to an object of known shape, such as grasping for assembly or tool use. The decision process is applied to a robot hand with tactile sensors, to localize the object on a table and ultimately achieve a target placement by selecting among a parameterized set of grasping and information-gathering trajectories. The process is demonstrated in simulation and on a real robot. This work has been previously presented in Hsiao et al. (Workshop on Algorithmic Foundations of Robotics (WAFR), 2008; Robotics: Science and Systems (RSS), 2010) and Hsiao (Relatively robust grasping, Ph.D. thesis, Massachusetts Institute of Technology, 2009). National Science Foundation (U.S.) (Grant 0712012)
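
    The decision-theoretic loop can be illustrated with a toy one-dimensional version (not the implementation from Hsiao et al.): a grid belief over the object's position on the table is updated after each simulated tactile contact, and the robot commits to the grasp only once enough probability mass sits within the placement tolerance around the most likely position.

        import random

        CELLS = [i * 0.01 for i in range(-10, 11)]   # candidate object positions (m)
        belief = {x: 1.0 / len(CELLS) for x in CELLS}
        TRUE_X = 0.03                                # hidden true position (simulation only)

        def sense(touch_x, width=0.02):
            # Simulated tactile reading: does a touch at touch_x contact the object?
            return abs(TRUE_X - touch_x) < width

        def update(belief, touch_x, contact, width=0.02, noise=0.1):
            post = {}
            for x, w in belief.items():
                predicted = abs(x - touch_x) < width
                post[x] = w * (1.0 - noise if predicted == contact else noise)
            z = sum(post.values())
            return {x: w / z for x, w in post.items()}

        def ready_to_grasp(belief, tol=0.015, threshold=0.9):
            # Decision rule: grasp when the mass near the MAP estimate is high enough.
            x_map = max(belief, key=belief.get)
            return sum(w for x, w in belief.items() if abs(x - x_map) < tol) > threshold

        for step in range(20):
            if ready_to_grasp(belief):
                print("step", step, "- grasp at", max(belief, key=belief.get))
                break
            touch_x = random.choice(CELLS)           # naive information-gathering touch
            belief = update(belief, touch_x, sense(touch_x))
        else:
            print("belief never concentrated enough to grasp")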

    Dual execution of optimized contact interaction trajectories

    Full text link