
    AO-Grasp: Articulated Object Grasp Generation

    We introduce AO-Grasp, a grasp proposal method that generates stable and actionable 6-degree-of-freedom grasps for articulated objects. Our generated grasps enable robots to interact with articulated objects, such as opening and closing cabinets and appliances. Given a segmented partial point cloud of a single articulated object, AO-Grasp predicts the best grasp points on the object with a novel Actionable Grasp Point Predictor model and then finds corresponding grasp orientations for each point by leveraging a state-of-the-art rigid object grasping method. We train AO-Grasp on our new AO-Grasp Dataset, which contains 48K actionable parallel-jaw grasps on synthetic articulated objects. In simulation, AO-Grasp achieves higher grasp success rates than existing rigid object grasping and articulated object interaction baselines on both train and test categories. Additionally, we evaluate AO-Grasp on 120 real-world scenes of objects with varied geometries, articulation axes, and joint states, where AO-Grasp produces successful grasps on 67.5% of scenes, while the baseline only produces successful grasps on 33.3% of scenes.
    Comment: Project website: https://stanford-iprl-lab.github.io/ao-gras
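
    The two-stage pipeline described in the abstract (score points for actionability, then attach a grasp orientation to each selected point) can be sketched roughly as below. This is a minimal illustration under assumed interfaces: the names `predictor.score_points` and `rigid_grasper.orientation_at` are hypothetical placeholders, not the authors' released API.

    ```python
    # Minimal sketch of a two-stage grasp-proposal pipeline in the spirit of AO-Grasp.
    # All object/method names here are hypothetical placeholders.
    import numpy as np

    def propose_grasps(partial_point_cloud: np.ndarray, predictor, rigid_grasper, top_k: int = 10):
        """Return up to top_k 6-DoF grasp proposals for a segmented articulated object.

        partial_point_cloud: (N, 3) points of a single articulated object.
        predictor:     scores how "actionable" a grasp at each point would be.
        rigid_grasper: an off-the-shelf rigid-object grasp method that supplies
                       an orientation (3x3 rotation) for a given contact point.
        """
        # Stage 1: score every point for actionability and keep the best candidates.
        scores = predictor.score_points(partial_point_cloud)          # shape (N,)
        best_idx = np.argsort(scores)[::-1][:top_k]

        # Stage 2: attach an orientation to each selected point via the rigid grasper.
        proposals = []
        for i in best_idx:
            p = partial_point_cloud[i]
            R = rigid_grasper.orientation_at(partial_point_cloud, p)  # shape (3, 3)
            proposals.append({"position": p, "rotation": R, "score": float(scores[i])})
        return proposals
    ```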

    Effects of obstacle avoidance to LQG-based motion planners


    Learning deep policies for physics-based robotic manipulation in cluttered real-world environments

    This thesis presents a series of planners and learning algorithms for real-world manipulation in clutter. The focus is on interleaving real-world execution with look-ahead planning in simulation as an effective way to address the uncertainty arising from complex physics interactions and occlusions. We introduce VisualRHP, a receding horizon planner in the image space guided by a learned heuristic. VisualRHP generates, in closed loop, prehensile and non-prehensile manipulation actions to manipulate a desired object in clutter while avoiding dropping obstacle objects off the edge of the manipulation surface. To acquire the heuristic of VisualRHP, we develop deep imitation learning and deep reinforcement learning algorithms specifically tailored for environments with complex dynamics that require long-term sequential decision making. The learned heuristic ensures generalization across different environment settings and transferability of manipulation skills to different desired objects in the real world. In the second part of this thesis, we integrate VisualRHP with a learnable object pose estimator to guide the search for an occluded desired object. This hybrid approach harnesses neural networks with convolutional and recurrent structures to capture relevant information from the history of partial observations and guide VisualRHP's future actions. We run an ablation study over the different components of VisualRHP and compare it with model-free and model-based alternatives. We run experiments in different simulation environments and real-world settings. The results show that, by trading a small amount of computation time for heuristic-guided look-ahead planning, VisualRHP delivers more robust and efficient behaviour than alternative state-of-the-art approaches while still operating in near real-time.
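
    The closed-loop interleaving of real-world execution with heuristic-guided look-ahead planning in simulation can be sketched roughly as below. All interfaces (`env`, `simulator`, `learned_heuristic`, `sample_action_sequences`) are hypothetical stand-ins, not the thesis implementation.

    ```python
    # Minimal sketch of a heuristic-guided receding horizon control loop.
    # Interfaces are assumed placeholders, not the VisualRHP code.
    def receding_horizon_control(env, simulator, learned_heuristic,
                                 sample_action_sequences, horizon=5, n_candidates=32):
        obs = env.reset()
        done = False
        while not done:
            best_value, best_first_action = float("-inf"), None
            # Roll out candidate action sequences in simulation from the current observation.
            for seq in sample_action_sequences(obs, horizon, n_candidates):
                sim_state = simulator.set_from_observation(obs)
                for action in seq:
                    sim_state = simulator.step(sim_state, action)
                # Score the simulated state at the horizon with the learned heuristic.
                value = learned_heuristic(sim_state)
                if value > best_value:
                    best_value, best_first_action = value, seq[0]
            # Execute only the first action in the real world, then replan (closed loop).
            obs, done = env.step(best_first_action)
        return obs
    ```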

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch suffers from a vicious cycle of immature sensor technology: because the hardware is immature, industry demand stays low, and there is then even less incentive to make the sensors built in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment: contact is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities’ attention towards deliberate physical interaction with the environment prior to, during, and after a task. We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater settings, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact. As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
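
    The cost-aware active exploration idea (predicting the most cost-effective next tactile move) can be illustrated with a generic information-gain criterion. This is a sketch under assumed interfaces (`predict_touch_likelihoods`, `movement_cost`); it is not the descriptor or planner developed in the thesis, which may use a different selection criterion.

    ```python
    # Minimal sketch of cost-aware active tactile exploration: pick the next probe
    # that best trades expected reduction in class uncertainty against movement cost.
    # The probability model and cost function are hypothetical stand-ins.
    import numpy as np

    def entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return -np.sum(p * np.log(p))

    def select_next_probe(candidate_poses, class_posterior, predict_touch_likelihoods,
                          movement_cost, cost_weight=0.1):
        """Return the candidate probe pose with the best expected utility.

        class_posterior: (C,) current belief over object classes.
        predict_touch_likelihoods(pose): (M, C) likelihood of M possible touch
            outcomes at `pose` under each class.
        """
        best_pose, best_utility = None, float("-inf")
        current_h = entropy(class_posterior)
        for pose in candidate_poses:
            likelihoods = predict_touch_likelihoods(pose)           # (M, C)
            outcome_probs = likelihoods @ class_posterior           # (M,)
            # Expected posterior entropy after observing a touch outcome at this pose.
            expected_h = 0.0
            for m, p_o in enumerate(outcome_probs):
                if p_o < 1e-12:
                    continue
                posterior = likelihoods[m] * class_posterior / p_o  # Bayes update
                expected_h += p_o * entropy(posterior)
            utility = (current_h - expected_h) - cost_weight * movement_cost(pose)
            if utility > best_utility:
                best_pose, best_utility = pose, utility
        return best_pose
    ```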