
    Conditional Task and Motion Planning through an Effort-based Approach

    This paper presents preliminary work on a Conditional Task and Motion Planning algorithm that finds a plan minimizing robot effort while solving assigned tasks. Unlike most existing approaches, which replan a path only when it becomes unfeasible (e.g., no collision-free paths exist), the proposed algorithm triggers a replanning procedure whenever an effort saving is possible. Effort is here measured as execution time, but the notion extends to robot energy consumption. The computed plan is both conditional and dynamically adaptable to unexpected environmental changes. Based on a theoretical analysis of the algorithm, the authors expect their proposal to be complete and scalable; experiments in progress aim to confirm this analysis.
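    The core idea lends itself to a short illustration. Below is a minimal Python sketch of the effort-triggered replanning rule: rather than replanning only on infeasibility, a candidate plan is recomputed and adopted whenever it saves enough effort (execution time). All names (`Plan`, `plan_from`, the saving threshold) are hypothetical illustrations, not the authors' actual algorithm.

```python
# Hypothetical sketch of effort-triggered replanning; not the paper's API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Plan:
    actions: List[str]
    effort: float  # estimated execution time in seconds


def maybe_replan(current: Plan,
                 plan_from: Callable[[], Plan],
                 saving_threshold: float = 0.1) -> Plan:
    """Replan whenever an effort saving is possible, not only on infeasibility.

    `plan_from` recomputes a plan from the current world state; the new plan
    is kept only if it saves more than `saving_threshold` of the remaining
    effort of the current plan.
    """
    candidate = plan_from()
    if candidate.effort < current.effort * (1.0 - saving_threshold):
        return candidate  # effort saving found: switch plans
    return current  # keep executing the current plan


if __name__ == "__main__":
    current = Plan(["move_to_A", "pick", "move_to_B", "place"], effort=42.0)
    # Hypothetical replanner that found a shortcut after the scene changed.
    shorter = Plan(["move_to_B_direct", "place"], effort=30.0)
    print(maybe_replan(current, lambda: shorter).actions)  # shorter plan wins
```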

    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation using haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres all along the manipulator for guidance. Based on the haptic forces, each individual sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another trained on large forces between the sphere and the cloth, but without early termination.
    Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
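    As a rough illustration of the per-sphere behaviour, the following Python sketch shows one haptic-sensing sphere updating its target under a sensed contact force, with the early-termination rule used during training. The gains, force threshold, and deflection rule are illustrative assumptions, not the paper's controller; in the paper, conflicts among per-sphere targets are further resolved by constrained inverse kinematics.

```python
# Hypothetical per-sphere haptic update; gains and thresholds are assumed.
import numpy as np

FORCE_LIMIT = 5.0    # terminate (and penalize) beyond this contact force
DEFLECT_GAIN = 0.05  # how strongly the haptic force deflects the target


def update_sphere_target(position: np.ndarray,
                         goal: np.ndarray,
                         contact_force: np.ndarray,
                         step: float = 0.01) -> np.ndarray:
    """Move the sphere's target toward the goal, deflected by haptic force.

    Raises when the force indicates the sphere is tearing or punching
    through the cloth (early termination during training).
    """
    if np.linalg.norm(contact_force) > FORCE_LIMIT:
        raise RuntimeError("large contact force: episode terminated and penalized")
    to_goal = goal - position
    direction = to_goal / (np.linalg.norm(to_goal) + 1e-9)
    # Haptic deflection: yield to the cloth by sliding along the sensed force.
    return position + step * direction + DEFLECT_GAIN * contact_force


if __name__ == "__main__":
    pos = np.array([0.0, 0.0, 0.0])
    goal = np.array([0.0, 0.0, 1.0])    # e.g., the far end of a sleeve
    force = np.array([0.5, 0.0, -0.2])  # simulated cloth contact force
    print(update_sphere_target(pos, goal, force))
```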

    More than a Million Ways to Be Pushed: A High-Fidelity Experimental Dataset of Planar Pushing

    Pushing is a motion primitive useful for handling objects that are too large, too heavy, or too cluttered to be grasped. It is at the core of much of robotic manipulation, in particular when physical interaction is involved. It seems reasonable, then, to wish for robots to understand how pushed objects move. In reality, however, robots often rely on approximations that yield models which are computable, but also restricted and inaccurate. Just how close are those models? How reasonable are the assumptions they are based on? To help answer these questions, and to get a better experimental understanding of pushing, we present a comprehensive, high-fidelity dataset of planar pushing experiments. The dataset contains timestamped poses of a circular pusher and a pushed object, as well as forces at the interaction. We vary the push interaction along 6 dimensions: surface material, shape of the pushed object, contact position, pushing direction, pushing speed, and pushing acceleration. An industrial robot automates the data capture along precisely controlled position-velocity-acceleration trajectories of the pusher, which give dense samples of positions and forces of uniform quality. We finish the paper by characterizing the variability of friction and evaluating the most common assumptions and simplifications made by models of frictional pushing in robotics.
    Comment: 8 pages, 10 figures
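    For a concrete picture of the data, here is a hypothetical Python record for one push experiment, covering the six varied dimensions and the timestamped pose and force streams. Field names, types, and units are illustrative assumptions; the released dataset defines its own schema.

```python
# Hypothetical record layout for one planar-push experiment.
from dataclasses import dataclass
from typing import List, Tuple

Pose2D = Tuple[float, float, float]  # (x, y, theta) in the plane


@dataclass
class PushSample:
    surface_material: str       # e.g., "plywood"
    object_shape: str           # e.g., "rect1"
    contact_position: float     # position along the object edge
    push_direction: float       # radians, relative to the edge normal
    push_speed: float           # m/s
    push_acceleration: float    # m/s^2
    timestamps: List[float]     # seconds, shared by the streams below
    pusher_poses: List[Pose2D]
    object_poses: List[Pose2D]
    contact_forces: List[Tuple[float, float]]  # planar force at the contact
```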

    Learning to Singulate Objects using a Push Proposal Network

    Learning to act in unstructured environments, such as cluttered piles of objects, poses a substantial challenge for manipulation robots. We present a novel neural network-based approach that separates unknown objects in clutter by selecting favourable push actions. Our network is trained on data collected through autonomous interaction of a PR2 robot with randomly organized tabletop scenes. The model is designed to propose meaningful push actions based on over-segmented RGB-D images. We evaluate our approach by singulating up to 8 unknown objects in clutter. We demonstrate that our method enables the robot to perform the task with a high success rate and a low number of required push actions. Our results from real-world experiments show that our network generalizes to novel objects of various sizes and shapes, as well as to arbitrary object configurations. Videos of our experiments can be viewed at http://robotpush.cs.uni-freiburg.de
    Comment: International Symposium on Robotics Research (ISRR) 2017, videos: http://robotpush.cs.uni-freiburg.de
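    To make the pipeline concrete, the sketch below scores a batch of candidate pushes with a small convolutional network and selects the best one. The architecture, the 4-channel RGB-D crop encoding, and the candidate sampling are illustrative assumptions, not the authors' network.

```python
# Hypothetical push-scoring network; architecture and shapes are assumed.
import torch
import torch.nn as nn


class PushProposalNet(nn.Module):
    """Scores a candidate push given a local RGB-D crop around its start."""

    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),  # RGB-D = 4 channels
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score = nn.Linear(32, 1)  # higher = more likely to singulate

    def forward(self, crop: torch.Tensor) -> torch.Tensor:
        return self.score(self.features(crop))


if __name__ == "__main__":
    net = PushProposalNet()
    crops = torch.randn(10, 4, 64, 64)  # 10 candidates from segment edges
    best = net(crops).argmax().item()   # execute the highest-scoring push
    print("selected candidate:", best)
```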

    Asymmetric Actor Critic for Image-Based Robot Learning

    Deep reinforcement learning (RL) has proven a powerful technique in many sequential decision making domains. However, robotics poses many challenges for RL; most notably, training on a physical system can be expensive and dangerous, which has sparked significant interest in learning control policies using a physics simulator. While several recent works have shown promising results in transferring policies trained in simulation to the real world, they often do not fully utilize the advantage of working with a simulator. In this work, we exploit the full state observability in the simulator to train better policies that take only partial observations (RGBD images) as input. We do this by employing an actor-critic training algorithm in which the critic is trained on full states while the actor (or policy) receives rendered images as input. We show experimentally, on a range of simulated tasks, that using these asymmetric inputs significantly improves performance. Finally, we combine this method with domain randomization and show real-robot experiments for several tasks, such as picking, pushing, and moving a block. We achieve this simulation-to-real-world transfer without training on any real-world data.
    Comment: Videos of experiments can be found at http://www.goo.gl/b57WT
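    The asymmetry is easy to see in code. The following PyTorch sketch gives the critic the simulator's full low-dimensional state while the actor only sees rendered images; network sizes and observation shapes are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical asymmetric actor-critic input split; shapes are assumed.
import torch
import torch.nn as nn


class Actor(nn.Module):
    """Policy: maps an RGB-D observation to an action."""

    def __init__(self, action_dim: int) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, action_dim), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)


class Critic(nn.Module):
    """Value function: sees the full simulator state plus the action."""

    def __init__(self, state_dim: int, action_dim: int) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([state, action], dim=-1))


if __name__ == "__main__":
    actor, critic = Actor(action_dim=4), Critic(state_dim=20, action_dim=4)
    image = torch.randn(1, 4, 64, 64)  # partial observation (rendered RGBD)
    state = torch.randn(1, 20)         # full state, available only in sim
    action = actor(image)
    value = critic(state, action)      # critic exploits privileged state
    print(action.shape, value.shape)
```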

    Planning clearing actions in cluttered scenes by phasing in geometrical constraints

    Manipulation planning for cluttered objects involves a mixture of symbolic and geometric constraints, which makes such planning very time-consuming and often unsuitable for real applications. We propose to divide the geometric restrictions into two groups. Constraints in the first group are used to generate the set of symbolic states used for planning. Evaluation of those in the second group is delayed until after planning, and only the relevant ones are evaluated when necessary. We demonstrate our proposal in a simple but effective implementation using pushing and grasping actions.
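    The two-phase idea can be sketched compactly: an eager group of geometric checks is evaluated up front to build the symbolic states, while a lazy group is attached to actions and evaluated only when a candidate plan actually uses them. The Python below is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical eager/lazy constraint split for manipulation planning.
from typing import Callable, Dict, List

Constraint = Callable[[dict], bool]


def build_symbolic_states(scene: dict, eager: List[Constraint]) -> bool:
    """Phase 1: evaluate only the eager geometric constraints up front."""
    return all(check(scene) for check in eager)


def validate_plan(scene: dict, plan: List[str],
                  lazy: Dict[str, Constraint]) -> bool:
    """Phase 2: evaluate deferred constraints only for actions in the plan."""
    for action in plan:
        check = lazy.get(action)
        if check is not None and not check(scene):
            return False  # a relevant deferred constraint failed; replan
    return True


if __name__ == "__main__":
    scene = {"clutter": 3}
    eager = [lambda s: s["clutter"] < 10]             # e.g., reachability bound
    lazy = {"grasp_cup": lambda s: s["clutter"] < 5}  # e.g., collision check
    if build_symbolic_states(scene, eager):
        print(validate_plan(scene, ["push_box", "grasp_cup"], lazy))
```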