953 research outputs found

    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics
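The sample-and-rank loop common to the data-driven grasp synthesis methods this survey covers can be sketched roughly as below. The grasp parameterization and the quality function here are illustrative assumptions, not any specific method from the survey; real systems score candidates with learned models or analytic metrics.

```python
import random

def sample_candidate_grasps(n, rng):
    """Sample n candidate grasps as (x, y, z, approach_angle) tuples.

    The 4-DOF parameterization is an illustrative assumption; real grasp
    samplers typically draw full 6-DOF poses plus gripper configuration.
    """
    return [(rng.uniform(-0.1, 0.1),   # x offset from object center (m)
             rng.uniform(-0.1, 0.1),   # y offset from object center (m)
             rng.uniform(0.0, 0.2),    # z offset from object center (m)
             rng.uniform(0.0, 3.14))   # approach angle (rad)
            for _ in range(n)]

def grasp_quality(grasp):
    """Stand-in quality score: prefer a near-vertical approach (angle ~ 0).

    In practice this is where a learned ranker or analytic metric plugs in.
    """
    _, _, _, angle = grasp
    return 1.0 - angle / 3.14

def rank_grasps(candidates):
    """Return candidates sorted from best to worst quality."""
    return sorted(candidates, key=grasp_quality, reverse=True)

rng = random.Random(0)
ranked = rank_grasps(sample_candidate_grasps(100, rng))
best = ranked[0]  # highest-scoring candidate, passed on for execution
```

The split into a sampler and a ranker mirrors the survey's framing: the object representation (known, familiar, or unknown) determines how candidates are generated, while the quality function encodes what "a good grasp" means.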

    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that can interact with the physical environment through their end effectors, understanding the surrounding scene is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene-functionality test bed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model generalizes to novel indoor scenes by cross-validating it with images from the two datasets.
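A generic two-stage detection pipeline of the kind the abstract describes can be sketched as follows: stage 1 proposes candidate regions, and stage 2 classifies each region into a functional category. The sliding-window proposals, the scoring interface, and the category names below are all illustrative assumptions, not the paper's actual model (which uses learned deep networks for both stages).

```python
def propose_regions(image_w, image_h, step=100, size=120):
    """Stage 1: class-agnostic region proposals as (x, y, w, h) boxes.

    A fixed sliding-window grid stands in for a learned proposal network.
    """
    boxes = []
    for y in range(0, image_h - size + 1, step):
        for x in range(0, image_w - size + 1, step):
            boxes.append((x, y, size, size))
    return boxes

def classify_region(box, scorer):
    """Stage 2: score a region against each functional category.

    The category names are hypothetical examples of functional labels.
    """
    categories = ["sittable", "openable", "graspable", "background"]
    scores = {c: scorer(box, c) for c in categories}
    return max(scores, key=scores.get)

def detect_functional_areas(image_w, image_h, scorer):
    """Run both stages and keep only non-background detections."""
    detections = []
    for box in propose_regions(image_w, image_h):
        label = classify_region(box, scorer)
        if label != "background":
            detections.append((box, label))
    return detections
```

The two-stage split keeps localization class-agnostic, so only the second-stage classifier has to know anything about functional categories; swapping in a different label set does not touch the proposal stage.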

    Affordance-based control of a variable-autonomy telerobot

    Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2012. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. "September 2012." Includes bibliographical references (pages 37-38).
    Most robot platforms operate in one of two modes: full autonomy, usually in the lab; or low-level teleoperation, usually in the field. Full autonomy is currently realizable only in narrow domains of robotics, such as mapping an environment. Tedious joystick teleoperation is typical in military applications, such as complex manipulation and navigation with bomb-disposal robots. This thesis describes a robot "surrogate" with an intermediate and variable level of autonomy. The surrogate accomplishes manipulation tasks by taking guidance and planning suggestions from a human "supervisor." The surrogate does not engage in high-level reasoning, only in intermediate-level planning and low-level control. The human supervisor supplies the high-level reasoning and some intermediate control, leaving execution details to the surrogate. The supervisor supplies world knowledge and planning suggestions by "drawing" on a 3D view of the world constructed from sensor data. The surrogate conveys its own model of the world to the supervisor, enabling mental-model sharing between supervisor and surrogate. The contributions of this thesis include: (1) a novel partitioning of the manipulation task load between supervisor and surrogate, which sidesteps problems in autonomous robotics by replacing them with problems in interfaces, perception, planning, control, and human-robot trust; and (2) the algorithms and software designed and built for mental-model sharing and supervisor-assisted manipulation. Using this system, we are able to command the PR2 to manipulate simple objects incorporating either a single revolute or prismatic joint.
    by Michael Fleder. M. Eng.

    Affordances in Psychology, Neuroscience, and Robotics: A Survey

    The concept of affordances appeared in psychology during the late 1960s as an alternative perspective on the visual perception of the environment. It was revolutionary in its intuition that the way living beings perceive the world is deeply influenced by the actions they are able to perform. Over the last 40 years, it has influenced many applied fields, e.g., design, human-computer interaction, computer vision, and robotics. In this paper, we offer a multidisciplinary perspective on the notion of affordances. We first discuss the main definitions and formalizations of affordance theory, then report the most significant evidence in psychology and neuroscience that supports it, and finally review the most relevant applications of this concept in robotics.