
    Data-Driven Grasp Synthesis - A Survey

    We review the work on data-driven grasp synthesis and the methodologies for sampling and ranking candidate grasps. We divide the approaches into three groups based on whether they synthesize grasps for known, familiar, or unknown objects. This structure allows us to identify common object representations and perceptual processes that facilitate the employed data-driven grasp synthesis technique. In the case of known objects, we concentrate on approaches that are based on object recognition and pose estimation. In the case of familiar objects, the techniques use some form of similarity matching to a set of previously encountered objects. Finally, for the approaches dealing with unknown objects, the core part is the extraction of specific features that are indicative of good grasps. Our survey provides an overview of the different methodologies and discusses open problems in the area of robot grasping. We also draw a parallel to the classical approaches that rely on analytic formulations.
    Comment: 20 pages, 30 figures, submitted to IEEE Transactions on Robotics

    A Continuous Grasp Representation for the Imitation Learning of Grasps on Humanoid Robots

    Models and methods are presented which enable a humanoid robot to learn reusable, adaptive grasping skills. Mechanisms and principles in human grasp behavior are studied. The findings are used to develop a grasp representation capable of retaining specific motion characteristics and of adapting to different objects and tasks. Based on this representation, a framework is proposed which enables the robot to observe human grasping, learn grasp representations, and infer executable grasping actions.

    Robotic Grasping of Large Objects for Collaborative Manipulation

    In the near future, robots are envisioned to work alongside humans in professional and domestic environments without significant restructuring of the workspace. Robotic systems in such setups must be adept at observation, analysis, and rational decision making. To coexist in an environment, humans and robots will need to interact and cooperate on multiple tasks. One such fundamental task is the manipulation of large objects in work environments, which requires cooperation between multiple manipulating agents for load sharing. Collaborative manipulation has been studied in the literature with a focus on multi-agent planning and control strategies. However, for a collaborative manipulation task, grasp planning also plays a pivotal role in cooperation and task completion. In this work, a novel approach is proposed for collaborative grasping and manipulation of large unknown objects. The manipulation task is defined as a sequence of poses and the expected external wrench acting on the target object. In a two-agent manipulation task, the proposed approach selects a grasp for the second agent after observing the grasp location of the first agent. The solution is computed so that it minimizes the grasp wrenches through load sharing between both agents. To verify the proposed methodology, an online system for human-robot manipulation of unknown objects was developed. The system uses depth information from a fixed Kinect sensor for perception and decision making during a human-robot collaborative lift-up. Experiments with multiple objects substantiated that the proposed method results in optimal load sharing despite limited information and partial observability.
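
    The abstract states the load-sharing criterion only at a high level. As a minimal sketch of the idea, not the paper's implementation, the snippet below splits a vertical load between two grasps along the object's long axis with a 1D lever rule and picks the second grasp that most evenly shares the load with the already-observed first grasp; the grasp positions, centre of mass, and weight are hypothetical placeholders.

```python
import numpy as np

def load_shares(x1, x2, x_com, weight):
    """Lever-rule split of a vertical load applied at x_com between two
    grasps at x1 and x2 along the object's long axis (static force and
    moment balance about the centre of mass)."""
    d1, d2 = x1 - x_com, x2 - x_com
    if np.isclose(d1, d2):          # grasps coincide: split evenly
        return weight / 2.0, weight / 2.0
    f1 = weight * d2 / (d2 - d1)    # from f1 + f2 = weight and f1*d1 + f2*d2 = 0
    return f1, weight - f1

def select_second_grasp(x_first, candidates, x_com, weight):
    """Pick the candidate grasp that most evenly shares the load with the
    already-observed first grasp (minimise the larger of the two shares)."""
    def cost(x):
        f_first, f_second = load_shares(x_first, x, x_com, weight)
        return max(abs(f_first), abs(f_second))
    return min(candidates, key=cost)

# Toy example: a 1 m long, 4 kg object with its centre of mass at 0;
# the first agent (the human) grasps at +0.45 m.
weight = 4.0 * 9.81
print(select_second_grasp(0.45, [-0.45, -0.2, 0.1, 0.3], 0.0, weight))
# -> -0.45: the opposite end gives a near-even split of the load
```

    In this toy model the grasp opposite the first agent balances the shares; the paper's method works with full 6D wrenches and partial depth observations rather than this 1D simplification.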

    Master of Science

    In this work, we consider task-based planning under uncertainty. To make progress on this problem, we propose an end-to-end method that moves toward the unification of perception and manipulation. Critical to this unification is the geometric primitive: a 3D geometry that can be fit to a single view from a 3D image. Geometric primitives are a consistent structure in many scenes, and by leveraging this, perceptual tasks such as segmentation, localization, and recognition can be solved. Sharing this information between these subroutines also makes the method computationally efficient.

    Geometric primitives can be used to define a set of actions the robot can use to influence the world. Leveraging the rich 3D information in geometric primitives allows the designer to develop actions with a high chance of success. In this work, we consider a pick-and-place action parameterized by the object and scene constraints. The design of the perceptual capabilities and actions is independent of the task given to the robot, giving the robot more versatility to complete a range of tasks.

    With a large number of available actions, the robot needs to select which action to perform. We propose a task-specific reward function to determine the next-best action for completing the task. A key insight that makes the action selection tractable is reasoning about the occluded regions of the scene: rather than reasoning about what could be in the occluded regions, we treat them as parts of the scene to explore. Defining reward functions that encourage this exploration while still balancing progress on the given task gives the robot more versatility to perform many different tasks. Reasoning about occlusion in this way also makes actions in the scene more robust to scene uncertainty and increases the computational efficiency of the method overall.

    In this work, we show results for segmentation of geometric primitives on real data and discuss problems with fitting their parameters. While positive segmentation results are shown, there are problems with fitting consistent parameters to the geometric primitives. We also present simulation results showing the action selection process solving a singulation task. We show that our method is able to perform this task in several scenes with varying levels of complexity. We compare against selecting actions at random and show that our method consistently takes fewer actions to solve the scene.
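
    The next-best-action idea in this abstract can be illustrated with a small, hedged sketch: candidate actions are scored by a task-specific reward plus a bonus for revealing occluded regions, discounted by an estimated success probability, and the best-scoring action is executed greedily. The Action fields, the exploration weight, and the example numbers below are invented for illustration and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    task_progress: float       # estimated progress toward the task goal, in [0, 1]
    occlusion_revealed: float  # fraction of currently occluded volume this exposes
    success_prob: float        # estimated chance the pick-and-place succeeds

def reward(a: Action, explore_weight: float = 0.5) -> float:
    """Expected reward: task progress plus a bonus for exposing occluded
    regions, discounted by the chance the action fails."""
    return a.success_prob * (a.task_progress + explore_weight * a.occlusion_revealed)

def next_best_action(actions):
    """Greedy next-best-action selection over the candidate set."""
    return max(actions, key=reward)

candidates = [
    Action("move box aside",       task_progress=0.1, occlusion_revealed=0.6, success_prob=0.9),
    Action("pick target cylinder", task_progress=0.8, occlusion_revealed=0.0, success_prob=0.4),
    Action("pick blocking sphere", task_progress=0.3, occlusion_revealed=0.4, success_prob=0.8),
]
print(next_best_action(candidates).name)   # -> "pick blocking sphere"
```

    With these example numbers, removing the blocking object, which both advances the task and reveals occluded space, scores above a risky direct attempt at the target, mirroring the abstract's point that treating occluded regions as something to explore can be the more robust choice.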