
    Robotic Grasping of Large Objects for Collaborative Manipulation

    In the near future, robots are envisioned to work alongside humans in professional and domestic environments without significant restructuring of the workspace. Robotic systems in such setups must be adept at observation, analysis, and rational decision making. To coexist in an environment, humans and robots will need to interact and cooperate on multiple tasks. One such fundamental task is the manipulation of large objects in work environments, which requires cooperation between multiple manipulating agents for load sharing. Collaborative manipulation has been studied in the literature with a focus on multi-agent planning and control strategies. However, in a collaborative manipulation task, grasp planning also plays a pivotal role in cooperation and task completion. In this work, a novel approach is proposed for collaborative grasping and manipulation of large unknown objects. The manipulation task is defined as a sequence of poses and the expected external wrench acting on the target object. In a two-agent manipulation task, the proposed approach selects a grasp for the second agent after observing the grasp location of the first agent. The solution is computed so as to minimize the grasp wrenches through load sharing between the two agents. To verify the proposed methodology, an online system for human-robot manipulation of unknown objects was developed. The system uses depth information from a fixed Kinect sensor for perception and decision making in a human-robot collaborative lift-up task. Experiments with multiple objects substantiate that the proposed method achieves optimal load sharing despite limited information and partial observability.
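
    As a rough illustration of the load-sharing idea described above, the sketch below scores candidate grasp points for a second agent against a fixed first grasp and an expected external wrench. The function names, the equal-split force model, and the random candidate sampling are simplifications made here for illustration, not the paper's formulation.

        # Illustrative sketch: choose a second grasp point that balances an
        # expected external wrench between two agents (hypothetical setup).
        import numpy as np

        def grasp_wrench(p, f, com):
            """6D wrench of force f applied at point p, about the object COM."""
            return np.concatenate([f, np.cross(p - com, f)])

        def select_second_grasp(candidates, first_grasp, w_ext, com):
            """Pick the candidate whose pairwise load sharing with the first
            agent's grasp minimises the larger of the two grasp wrenches."""
            best, best_cost = None, np.inf
            for p in candidates:
                # Split the external force equally as a crude load-sharing
                # model; the paper optimises this split, which is not
                # reproduced here.
                f_share = -0.5 * w_ext[:3]
                w1 = grasp_wrench(first_grasp, f_share, com)
                w2 = grasp_wrench(p, f_share, com)
                cost = max(np.linalg.norm(w1), np.linalg.norm(w2))
                if cost < best_cost:
                    best, best_cost = p, cost
            return best

        # Example: candidate points sampled from a (fake) object point cloud.
        com = np.zeros(3)
        candidates = np.random.default_rng(0).uniform(-0.3, 0.3, size=(50, 3))
        p2 = select_second_grasp(candidates, np.array([0.3, 0.0, 0.0]),
                                 np.array([0, 0, -9.8, 0, 0, 0]), com)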

    Instance-wise Grasp Synthesis for Robotic Grasping

    Generating high-quality instance-wise grasp configurations provides critical information about how to grasp specific objects in a multi-object environment and is highly important for robot manipulation tasks. This work proposes a novel Single-Stage Grasp (SSG) synthesis network, which performs high-quality instance-wise grasp synthesis in a single stage: instance masks and grasp configurations are generated for each object simultaneously. Our method outperforms the state of the art in robotic grasp prediction on the OCID-Grasp dataset and performs competitively on the JACQUARD dataset. The benchmarking results show significant improvements over the baseline in the accuracy of generated grasp configurations. The performance of the proposed method has been validated through both extensive simulations and real robot experiments on three tasks: single-object pick-and-place, grasp synthesis in cluttered environments, and table cleaning.
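
    To make the single-stage idea concrete, here is a minimal two-head network sketch in PyTorch: one backbone feeds parallel heads that emit instance masks and per-pixel grasp parameters in the same forward pass. The layer sizes and the (quality, angle, width) grasp encoding are illustrative assumptions, not the published SSG architecture.

        # Minimal sketch of a single-stage, two-head design in the spirit
        # of SSG (architecture details are illustrative placeholders).
        import torch
        import torch.nn as nn

        class SingleStageGraspNet(nn.Module):
            def __init__(self, n_instances=8):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                )
                # Instance masks and grasp parameters come from parallel
                # heads, so both are produced in one forward pass (one stage).
                self.mask_head = nn.Conv2d(64, n_instances, 1)
                # Per-pixel grasp map: quality, cos/sin of twice the grasp
                # angle, and gripper width (an assumed encoding).
                self.grasp_head = nn.Conv2d(64, 4, 1)

            def forward(self, rgb):
                feat = self.backbone(rgb)
                return self.mask_head(feat), self.grasp_head(feat)

        net = SingleStageGraspNet()
        masks, grasps = net(torch.randn(1, 3, 224, 224))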

    Adaptive Motion Planning for Multi-fingered Functional Grasp via Force Feedback

    Enabling multi-fingered robots to grasp and manipulate objects with human-like dexterity is especially challenging during dynamic, continuous hand-object interactions. Closed-loop feedback control is essential for dexterous hands to dynamically fine-tune hand poses when performing precise functional grasps. This work proposes an adaptive motion planning method based on deep reinforcement learning that adjusts grasping poses according to real-time joint-torque feedback, from pre-grasp to goal grasp. We find that the multi-joint torques of the dexterous hand can sense object positions through contacts and collisions, enabling real-time adjustment of grasps and generating varying grasping trajectories for objects in different positions. In our experiments, the performance gap with and without force feedback reveals the important role of force feedback in adaptive manipulation. Our approach, which utilizes force feedback, preliminarily exhibits human-like flexibility, adaptability, and precision.
    Comment: 8 pages, 7 figures
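
    The closed-loop idea, torques entering the observation and driving pose corrections, can be sketched as below. Both the policy and the contact model here are stand-ins; in the paper the policy is learned with deep reinforcement learning.

        # Illustrative closed-loop adjustment: joint torques are part of the
        # observation so the grasp pose can be corrected online. The policy
        # and environment are toy stand-ins, not the trained system.
        import numpy as np

        def policy(obs):
            # Stand-in for a learned policy: back joints off torque spikes.
            return -0.05 * np.tanh(obs["torques"])

        def step(joint_pos, action):
            joint_pos = joint_pos + action
            # Fake contact model: torque grows as joints push past a limit.
            torques = np.clip(joint_pos - 1.0, 0.0, None) * 10.0
            return joint_pos, {"torques": torques, "q": joint_pos}

        q = np.zeros(16)                      # 16-DoF hand (assumed)
        obs = {"torques": np.zeros(16), "q": q}
        for _ in range(100):                  # pre-grasp -> goal grasp rollout
            q, obs = step(q, policy(obs))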

    DVGG: Deep Variational Grasp Generation for Dextrous Manipulation

    Grasping with anthropomorphic robotic hands involves far more hand-object interaction than parallel-jaw grippers. Modeling hand-object interactions is essential to the study of dextrous manipulation with multi-finger hands. This work presents DVGG, an efficient grasp generation network that takes a single-view observation as input and predicts high-quality grasp configurations for unknown objects. Our generative model consists of three components: 1) point cloud completion for the target object based on the partial observation; 2) generation of diverse sets of grasps given the complete point cloud; and 3) iterative grasp pose refinement for physically plausible grasp optimization. To train our model, we build a large-scale grasping dataset that contains about 300 common object models with 1.5M annotated grasps in simulation. Experiments in simulation show that our model can predict robust grasp poses with wide variety and a high success rate. Real robot platform experiments demonstrate that the model trained on our dataset performs well in the real world. Remarkably, our method achieves a grasp success rate of 70.7% for novel objects on the real robot platform, a significant improvement over the baseline methods.
    Comment: Accepted by Robotics and Automation Letters (RA-L, 2021)
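
    The three-component pipeline maps naturally onto function composition. In the sketch below each stage is a placeholder for a learned module; the point-cloud sizes and the nearest-point refinement rule are assumptions for illustration only.

        # Pipeline sketch mirroring the three components the abstract names:
        # completion -> diverse grasp generation -> iterative refinement.
        # All three functions are placeholders for learned modules.
        import numpy as np

        def complete_point_cloud(partial):
            return partial  # placeholder: a completion net would densify this

        def sample_grasps(cloud, n=32):
            rng = np.random.default_rng(0)
            idx = rng.choice(len(cloud), size=n)
            return cloud[idx]  # placeholder: a variational sampler in DVGG

        def refine(grasps, cloud, iters=10, lr=0.1):
            for _ in range(iters):
                # Move each grasp centre toward its nearest surface point as
                # a crude stand-in for physically-plausible refinement.
                d = np.linalg.norm(grasps[:, None] - cloud[None], axis=-1)
                nearest = cloud[d.argmin(axis=1)]
                grasps = grasps + lr * (nearest - grasps)
            return grasps

        partial = np.random.rand(2048, 3)   # single-view points (assumed size)
        grasps = refine(sample_grasps(complete_point_cloud(partial)), partial)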

    Robotic object manipulation via hierarchical and affordance learning

    With the rise of computational power and machine learning techniques, a shift of research interest is happening among roboticists. Against this background, this thesis seeks to develop or enhance learning-based grasping and manipulation systems. The thesis first proposes a method, named A2, to improve the sample efficiency of end-to-end deep reinforcement learning algorithms for long-horizon, multi-step, sparse-reward manipulation. The name A2 comes from the fact that it uses Abstract demonstrations to guide the learning process and Adaptively adjusts exploration according to online performance. Experiments in a series of multi-step grid-world tasks and manipulation tasks demonstrate significant performance gains over baselines. Then, the thesis develops a hierarchical reinforcement learning approach to solving long-horizon manipulation tasks. Specifically, the proposed universal option framework integrates the knowledge-sharing advantage of goal-conditioned reinforcement learning into hierarchical reinforcement learning. An analysis of the parallel-training non-stationarity problem is also conducted, and the A2 method is employed to address the issue. Experiments in a series of continuous multi-step, multi-outcome block-stacking tasks demonstrate significant performance gains as well as reductions in memory use and repeated computation compared to baselines. Finally, the thesis studies the interplay between grasp generation and manipulation motion generation, arguing that selecting a good grasp before manipulation is essential for contact-rich manipulation tasks. A theory of general affordances based on the reinforcement learning paradigm is developed and used to represent the relationship between grasp generation and manipulation performance. This leads to the general affordance-aware manipulation framework, which selects task-agnostic grasps for downstream manipulation based on predicted manipulation performance. Experiments on a series of contact-rich hook-separation tasks prove the effectiveness of the proposed framework and showcase significant performance gains from filtering away unsatisfactory grasps.
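
    The affordance-aware selection step, scoring task-agnostic grasp candidates by predicted downstream manipulation performance and filtering the rest, can be sketched as follows. The scoring function here is a stand-in for the learned predictor described in the thesis, and the threshold and fallback rule are assumptions.

        # Sketch: keep only grasps whose predicted manipulation performance
        # clears a threshold; fall back to the single best grasp otherwise.
        import numpy as np

        def predicted_performance(grasp):
            # Stand-in for a learned affordance model; in the thesis this is
            # tied to the RL value of the downstream manipulation task.
            return float(np.exp(-np.linalg.norm(grasp)))

        def filter_grasps(candidates, threshold=0.5):
            scored = [(g, predicted_performance(g)) for g in candidates]
            kept = [g for g, s in scored if s >= threshold]
            return kept or [max(scored, key=lambda gs: gs[1])[0]]

        candidates = [np.random.default_rng(i).normal(size=6)
                      for i in range(20)]
        usable = filter_grasps(candidates)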

    Multi-sensor based segmentation of human manipulation tasks

    Proceedings of: 2010 IEEE Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), September 5-7, 2010, Salt Lake City, USA.
    In this paper we present an overview of a multisensor setup designed to record and analyse human in-hand manipulation: tasks consisting of several phases of finger motions following the initial grasp. During the experiments all of the hand, finger, and object positions are recorded, as are the contact forces applied to the manipulated objects. The use of instrumented sensing objects complements the data. The goal is to understand and extract a basic set of finger and hand movement patterns, which can then be combined to perform a complete manipulation task and which can be transferred to control robotic hands. The segmentation of whole manipulation traces into phases corresponding to individual basic patterns is the first step towards this goal. Initial analysis and segmentation of two typical manipulation tasks are presented, showing the advantages of the multi-modal analysis. This work is partially supported by the European Community's Seventh Framework Programme through the European project HANDLE (ICT-236410), www.handleproject.eu.
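
    A toy version of the segmentation step, cutting a multimodal trace into phases where the fused signal statistics shift, might look like the following. The sliding-window mean test is a stand-in chosen for illustration, not necessarily the paper's segmentation method.

        # Toy segmentation of a manipulation trace into phases: find change
        # points where fused finger-motion and contact-force channels shift.
        import numpy as np

        def segment(signal, window=20, thresh=1.0):
            """Indices where the windowed mean jumps by more than thresh."""
            cuts = []
            for t in range(window, len(signal) - window):
                left = signal[t - window:t].mean(axis=0)
                right = signal[t:t + window].mean(axis=0)
                if np.linalg.norm(right - left) > thresh:
                    if not cuts or t - cuts[-1] > window:  # de-duplicate
                        cuts.append(t)
            return cuts

        # Fake trace: three phases with different statistics, 4 channels.
        rng = np.random.default_rng(1)
        trace = np.concatenate([rng.normal(0.0, 0.1, (100, 4)),
                                rng.normal(2.0, 0.1, (100, 4)),
                                rng.normal(0.5, 0.1, (100, 4))])
        phases = segment(trace)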