
    Deep Network Uncertainty Maps for Indoor Navigation

    Most mobile robots for indoor use rely on 2D laser scanners for localization, mapping, and navigation. These sensors, however, cannot detect transparent surfaces or measure the full occupancy of complex objects such as tables. Deep Neural Networks have recently been proposed to overcome this limitation by learning to estimate object occupancy. These estimates are nevertheless subject to uncertainty, so evaluating their confidence is essential for them to be useful in autonomous navigation and mapping. In this work we approach the problem from two sides. First, we discuss uncertainty estimation in deep models, proposing a solution based on a fully convolutional neural network. Unlike many popular solutions for deep model uncertainty estimation, such as Monte Carlo Dropout, the proposed architecture is not restricted by the assumption that the uncertainty follows a Gaussian model. We present results showing that uncertainty over obstacle distances is in fact better modeled with a Laplace distribution. Second, we propose a novel approach to building maps based on Deep Neural Network uncertainty models. In particular, we present an algorithm that builds a map incorporating obstacle distance estimates together with the level of uncertainty in each estimate. We show how the constructed map can be used to increase global navigation safety by planning trajectories that avoid areas of high uncertainty, enabling higher autonomy for mobile robots in indoor settings.
    Comment: Accepted for publication in the 2019 IEEE-RAS International Conference on Humanoid Robots (Humanoids).
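Training a network under a Laplace rather than a Gaussian error model amounts to minimizing the Laplace negative log-likelihood, which yields an absolute-error term weighted by a learned scale. A minimal numpy sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def laplace_nll(y_true, mu, b):
    """Negative log-likelihood of targets under Laplace(mu, b).

    A network predicting both mu (obstacle distance) and b (scale) per
    output can be trained by minimizing this loss; larger errors push b
    up, so b acts as a per-output uncertainty estimate.
    """
    return np.mean(np.log(2.0 * b) + np.abs(y_true - mu) / b)

# With an exact prediction the loss reduces to log(2b); here log(1) = 0:
loss = laplace_nll(np.array([1.0]), np.array([1.0]), np.array([0.5]))
```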

    Hallucinating robots: Inferring Obstacle Distances from Partial Laser Measurements

    Many mobile robots rely on 2D laser scanners for localization, mapping, and navigation. These sensors, however, cannot correctly measure the distance to obstacles, such as glass panels and tables, whose actual occupancy is invisible at the height the sensor is measuring. In this work, instead of estimating the distance to obstacles from richer sensor readings such as 3D lasers or RGBD sensors, we present a method to estimate the distance directly from raw 2D laser data. To learn a mapping from raw 2D laser distances to obstacle distances, we frame the problem as a learning task and train a neural network structured as an autoencoder. A novel configuration of network hyperparameters is proposed for the task at hand and is quantitatively validated on a test set. Finally, we qualitatively demonstrate in real time on a Care-O-bot 4 that the trained network can successfully infer obstacle distances from partial 2D laser readings.
    Comment: In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
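The mapping from raw scans to "hallucinated" obstacle distances can be pictured as an encoder-decoder over the scan vector. A toy numpy sketch with illustrative layer sizes and untrained weights (the paper's architecture and hyperparameters are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class LaserAutoencoder:
    """Toy encoder-decoder mapping a raw 2D scan to corrected distances."""

    def __init__(self, n_beams=360, n_latent=64):
        # small random weights; in practice these are learned by
        # regressing ground-truth obstacle distances from raw scans
        self.W1 = rng.normal(0.0, 0.01, (n_beams, n_latent))
        self.b1 = np.zeros(n_latent)
        self.W2 = rng.normal(0.0, 0.01, (n_latent, n_beams))
        self.b2 = np.zeros(n_beams)

    def forward(self, scan):
        z = relu(scan @ self.W1 + self.b1)   # compressed representation
        return z @ self.W2 + self.b2         # estimated obstacle distances

model = LaserAutoencoder()
estimate = model.forward(np.full(360, 2.0))  # a flat 2 m scan
```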

    Dynamic Movement Primitives and Reinforcement Learning for Adapting a Learned Skill

    Traditionally, robots have been preprogrammed to execute specific tasks. This approach works well in industrial settings where robots have to execute highly accurate movements, such as when welding. However, preprogramming a robot is also expensive, error prone, and time consuming, because every feature of the task has to be considered. In some cases, where a robot has to execute complex tasks such as playing the ball-in-a-cup game, preprogramming it might even be impossible due to unknown features of the task. With all this in mind, this thesis examines the possibility of combining a modern learning framework, known as Learning from Demonstration (LfD), to first teach a robot how to play the ball-in-a-cup game by demonstrating the movement, and then have the robot improve this skill by itself with subsequent Reinforcement Learning (RL). The skill the robot has to learn is demonstrated with kinesthetic teaching, modeled as a dynamic movement primitive, and subsequently improved with the RL algorithm Policy Learning by Weighted Exploration with the Returns. Experiments performed on the industrial robot KUKA LWR4+ showed that robots are capable of successfully learning a complex skill such as playing the ball-in-a-cup game.
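The dynamic movement primitive mentioned above is, at its core, a critically damped second-order system pulled toward the goal by a learned forcing term. A one-dimensional sketch with illustrative gains and the forcing term left open (the RL step would perturb the forcing weights to improve the skill):

```python
def dmp_rollout(y0, g, n_steps=200, alpha=25.0, beta=6.25, forcing=None):
    """Integrate a 1-D dynamic movement primitive from y0 toward goal g.

    With no forcing term the system is a critically damped spring toward
    the goal; a forcing term fitted to a kinesthetic demonstration shapes
    the path in between, which is what RL later fine-tunes.
    """
    dt = 1.0 / n_steps
    y, z = y0, 0.0          # position and scaled velocity
    for i in range(n_steps):
        f = forcing(i * dt) if forcing is not None else 0.0
        dz = alpha * (beta * (g - y) - z) + f
        y = y + z * dt      # explicit Euler integration
        z = z + dz * dt
    return y

# With zero forcing the primitive converges smoothly to the goal:
final = dmp_rollout(0.0, 1.0)
```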

    Safe Grasping with a Force Controlled Soft Robotic Hand

    Safe yet stable grasping requires a robotic hand to apply sufficient force to immobilize an object while keeping it from getting damaged. Soft robotic hands have been proposed for safe grasping due to their passive compliance, but even such a hand can crush objects if the applied force is too high. Thus, for safe grasping, regulating the grasping force is of utmost importance even with soft hands. In this work, we present a force controlled soft hand and use it to achieve safe grasping. To this end, resistive force and bend sensors are integrated into a soft hand, and a data-driven calibration method is proposed to estimate contact interaction forces. Given the force readings, the pneumatic pressures are regulated with a proportional-integral controller to achieve the desired force. The controller is experimentally evaluated and benchmarked by grasping easily deformable objects, such as plastic and paper cups, without dropping or deforming them. Together, the results demonstrate that our force controlled soft hand can grasp deformable objects in a safe yet stable manner.
    Comment: Accepted to 2020 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2020).
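The pressure-regulation step described above can be sketched as a discrete proportional-integral loop; the gains and the linear pressure-to-force plant below are illustrative stand-ins for the calibrated soft hand:

```python
class PIForceController:
    """Discrete proportional-integral regulator: force error -> pressure."""

    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, f_desired, f_measured):
        error = f_desired - f_measured
        self.integral += error * self.dt
        return self.kp * error + self.ki * self.integral  # pressure command

# Close the loop on a toy linear plant (force = 2 * pressure):
ctrl = PIForceController(kp=0.1, ki=1.0, dt=0.01)
force = 0.0
for _ in range(2000):
    pressure = ctrl.update(1.0, force)  # regulate toward 1.0 N
    force = 2.0 * pressure
```

The integral term is what removes the steady-state error; a pure proportional controller would settle below the desired force on this plant.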

    CAPGrasp: An ℝ³×SO(2)-equivariant Continuous Approach-Constrained Generative Grasp Sampler

    We propose CAPGrasp, an ℝ³×SO(2)-equivariant 6-DoF continuous approach-constrained generative grasp sampler. It includes a novel learning strategy for training CAPGrasp that eliminates the need to curate massive conditionally labeled datasets, and a constrained grasp refinement technique that improves grasp poses while respecting the grasp approach directional constraints. The experimental results demonstrate that CAPGrasp is more than three times as sample efficient as unconstrained grasp samplers while achieving up to a 38% improvement in grasp success rate. CAPGrasp also achieves 4-10% higher grasp success rates than constrained but noncontinuous grasp samplers. Overall, CAPGrasp is a sample-efficient solution when grasps must originate from specific directions, such as grasping in confined spaces.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
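One common way to refine a grasp while respecting an approach-direction constraint is to clamp the refined approach axis back into a cone around the desired direction. The sketch below uses Rodrigues' rotation formula and is an illustrative stand-in for the paper's refinement technique, not a reproduction of it:

```python
import numpy as np

def clamp_to_cone(approach, constraint, max_angle):
    """Project a grasp approach direction back into the allowed cone.

    If `approach` deviates from `constraint` by more than `max_angle`
    (radians), rotate `constraint` toward `approach` by exactly
    `max_angle` (Rodrigues' formula), landing on the cone boundary.
    """
    a = approach / np.linalg.norm(approach)
    c = constraint / np.linalg.norm(constraint)
    if a @ c >= np.cos(max_angle):
        return a  # already satisfies the constraint
    k = np.cross(c, a)
    k /= np.linalg.norm(k)  # rotation axis in the plane spanned by c and a
    return (c * np.cos(max_angle)
            + np.cross(k, c) * np.sin(max_angle)
            + k * (k @ c) * (1.0 - np.cos(max_angle)))

# An approach 90 degrees off the constraint is clamped to the 30-degree cone:
fixed = clamp_to_cone(np.array([1.0, 0.0, 0.0]),
                      np.array([0.0, 0.0, 1.0]),
                      np.pi / 6)
```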

    GoNet: An Approach-Constrained Generative Grasp Sampling Network

    Constraining the approach direction of grasps is important when picking objects in confined spaces, such as when emptying a shelf. Yet, such capabilities are not available in state-of-the-art data-driven grasp sampling methods, which sample grasps all around the object. In this work, we address the specific problem of training approach-constrained data-driven grasp samplers and of generating good grasping directions automatically. Our solution is GoNet: a generative grasp sampler that can constrain the grasp approach direction to lie close to a specified direction. This is achieved by discretizing SO(3) into bins and training GoNet to generate grasps from those bins. At run time, the bin aligning with the second largest principal component of the observed point cloud is selected. GoNet is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in an unconfined grasping experiment in simulation and in unconfined and confined grasping experiments in the real world. The results demonstrate that GoNet achieves higher success-over-coverage in simulation and a 12-18% higher success rate in real-world table-picking and shelf-picking tasks than the baseline.
    Comment: IROS 2023 submission.
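The run-time bin selection described above reduces to a small eigen-decomposition of the point cloud covariance. A numpy sketch with a toy bin set (GoNet's actual SO(3) discretization is finer; names here are illustrative):

```python
import numpy as np

def second_principal_axis(points):
    """Second-largest principal component of an (N, 3) point cloud."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    _, vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return vecs[:, -2]              # eigenvector of the 2nd largest eigenvalue

def select_bin(direction, bin_centers):
    """Index of the bin whose center direction best aligns with `direction`."""
    return int(np.argmax(np.abs(bin_centers @ direction)))

# Cloud spread widely along x, moderately along y, barely along z:
cloud = np.array([[3, 0, 0], [-3, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 0.1], [0, 0, -0.1]], dtype=float)
axis = second_principal_axis(cloud)   # ~ +/- y
bins = np.eye(3)                      # 3 toy bin centers along x, y, z
chosen = select_bin(axis, bins)
```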

    Impaired transmission in the corticospinal tract and gait disability in spinal cord injured persons

    Rehabilitation following spinal cord injury is likely to depend on recovery of corticospinal systems. Here we investigate whether transmission in the corticospinal tract may explain foot drop (inability to dorsiflex the ankle) in persons with spinal cord lesion. The study was performed on 24 persons with incomplete spinal cord lesion (C1 to L1) and 15 healthy controls. Coherence in the 10- to 20-Hz frequency band between paired tibialis anterior (TA) electromyographic recordings obtained in the swing phase of walking was taken as a measure of motor unit synchronization. It was significantly correlated with the degree of foot drop, as measured by toe elevation and ankle angle excursion in the first part of swing. Transcranial magnetic stimulation was used to elicit motor-evoked potentials (MEPs) in the TA. The amplitude of the MEPs at rest and their latency during contraction were correlated with the degree of foot drop. Spinal cord injured participants who exhibited a large foot drop had little or no MEP at rest in the TA muscle and little or no coherence in the same muscle during walking. Gait speed was correlated with foot drop and was lowest in participants with no MEP at rest. The data confirm that transmission in the corticospinal tract is important for lifting the foot during the swing phase of human gait.
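The synchronization measure used here, a segment-averaged magnitude-squared coherence restricted to a frequency band, can be sketched in numpy (segmenting scheme, lengths, and the synthetic signals are illustrative, not the study's recordings):

```python
import numpy as np

def band_coherence(x, y, fs, band=(10.0, 20.0), nseg=16):
    """Magnitude-squared coherence averaged over a frequency band,
    estimated from non-overlapping segment periodograms (Welch-style)."""
    n = len(x) // nseg
    def segments(s):
        return np.array([np.fft.rfft(s[i * n:(i + 1) * n]) for i in range(nseg)])
    X, Y = segments(x), segments(y)
    Sxy = (X * np.conj(Y)).mean(axis=0)       # cross-spectrum
    Sxx = (np.abs(X) ** 2).mean(axis=0)       # auto-spectra
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(coh[mask].mean())

# Two "EMG" channels sharing a common 15 Hz drive are coherent in-band;
# two independent noise channels are not.
fs = 200.0
t = np.arange(3200) / fs
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 15.0 * t)
emg_a = shared + 0.3 * rng.standard_normal(3200)
emg_b = shared + 0.3 * rng.standard_normal(3200)
c_shared = band_coherence(emg_a, emg_b, fs)
c_noise = band_coherence(rng.standard_normal(3200), rng.standard_normal(3200), fs)
```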

    Enabling Robot Manipulation of Soft and Rigid Objects with Vision-based Tactile Sensors

    Endowing robots with tactile capabilities opens up new possibilities for their interaction with the environment, including the ability to handle fragile and/or soft objects. In this work, we equip a robot gripper with low-cost vision-based tactile sensors and propose a manipulation algorithm that adapts to both rigid and soft objects without requiring any knowledge of their properties. The algorithm relies on a touch and slip detection method, which considers the variation in the tactile images with respect to reference ones. We validate the approach on seven objects with different rigidity and fragility properties, performing unplugging and lifting tasks. Furthermore, to enhance applicability, we combine the manipulation algorithm with a grasp sampler for the task of finding and picking a grape from a bunch without damaging it.
    Comment: Published in the IEEE International Conference on Automation Science and Engineering (CASE 2023).
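Touch and slip detection from tactile image variation can be sketched as simple image-difference tests against reference frames; the thresholds and criteria below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def touched(img, ref, thresh=5.0):
    """Contact if the mean absolute deviation from the no-contact
    reference tactile image exceeds a threshold."""
    return float(np.abs(img.astype(float) - ref.astype(float)).mean()) > thresh

def slipping(img, prev, thresh=3.0):
    """Slip if consecutive in-contact frames differ too much, i.e. the
    imprint is moving while the gripper is holding still."""
    return float(np.abs(img.astype(float) - prev.astype(float)).mean()) > thresh

ref = np.zeros((8, 8))        # no-contact reference frame
contact = ref.copy()
contact[2:6, 2:6] = 80.0      # a pressed region in the tactile image
```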

    Constrained Generative Sampling of 6-DoF Grasps

    Most state-of-the-art data-driven grasp sampling methods propose stable and collision-free grasps uniformly over the target object. For bin-picking, executing any of those reachable grasps is sufficient. However, for completing specific tasks, such as squeezing liquid out of a bottle, we want the grasp to be on a specific part of the object's body while avoiding other locations, such as the cap. This work presents a generative grasp sampling network, VCGS, capable of constrained 6 Degrees of Freedom (DoF) grasp sampling. In addition, we curate a new dataset designed to train and evaluate methods for constrained grasping. The new dataset, called CONG, consists of over 14 million training samples of synthetically rendered point clouds and grasps at random target areas on 2889 objects. VCGS is benchmarked against GraspNet, a state-of-the-art unconstrained grasp sampler, in simulation and on a real robot. The results demonstrate that VCGS achieves a 10-15% higher grasp success rate than the baseline while being 2-3 times as sample efficient. Supplementary material is available on our project website.
    Comment: Accepted at the International Conference on Intelligent Robots and Systems (IROS 2023).