15 research outputs found

    Probabilistic Combination of Noisy Points and Planes for RGB-D Odometry

    This work proposes a visual odometry method that combines point and plane primitives extracted from a noisy depth camera. Depth measurement uncertainty is modelled and propagated from the extraction of geometric primitives through to the frame-to-frame motion estimation, where the pose is optimized by weighting the residuals of 3D point and plane matches according to their uncertainties. Results on an RGB-D dataset show that combining points and planes through the proposed method performs well in poorly textured environments, where point-based odometry is bound to fail. Comment: Accepted to TAROS 201
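
    As a rough illustration of the uncertainty-weighted optimization described above, the sketch below solves for a 6-DoF frame-to-frame pose by whitening 3D point and plane residuals with their variances. It is a minimal reading of the idea, not the paper's implementation: the function name, the isotropic per-feature variances, and the (n, d) plane parameterization are assumptions.

```python
# Sketch: pose estimation weighting point and plane residuals by uncertainty.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def estimate_pose(pts_src, pts_dst, pt_var, planes_src, planes_dst, pl_var):
    """pts_*: (N,3) matched 3D points; pt_var: (N,) isotropic point variances.
    planes_*: (M,4) matched planes as (nx, ny, nz, d); pl_var: (M,) plane variances.
    All shapes and the isotropic-noise shortcut are illustrative assumptions."""

    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        # Point residuals, whitened by the (assumed isotropic) point variance.
        r_pts = ((pts_src @ R.T + t) - pts_dst) / np.sqrt(pt_var)[:, None]
        # Plane residuals: transform each plane (n, d) by the pose and compare.
        n_pred = planes_src[:, :3] @ R.T          # rotated normals
        d_pred = planes_src[:, 3] - n_pred @ t    # updated offsets
        pred = np.column_stack([n_pred, d_pred])
        r_pl = (pred - planes_dst) / np.sqrt(pl_var)[:, None]
        return np.concatenate([r_pts.ravel(), r_pl.ravel()])

    sol = least_squares(residuals, np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```

    Because both residual types are whitened before stacking, planes naturally dominate the estimate when point depth is noisy, which is the behaviour the abstract credits for robustness in poorly textured scenes.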

    Probabilistic RGB-D Odometry based on Points, Lines and Planes Under Depth Uncertainty

    This work proposes a robust visual odometry method for structured environments that combines point features with line and plane segments extracted through an RGB-D camera. Noisy depth maps are processed by a probabilistic depth fusion framework based on Mixtures of Gaussians to denoise them and derive the depth uncertainty, which is then propagated throughout the visual odometry pipeline. Probabilistic 3D plane and line fitting solutions are used to model the uncertainties of the feature parameters, and the pose is estimated by combining the three types of primitives based on their uncertainties. Performance evaluation on RGB-D sequences collected in this work and on two public RGB-D datasets (TUM and ICL-NUIM) shows the benefit of using the proposed depth fusion framework and of combining the three feature types, particularly in scenes with low-textured surfaces, dynamic objects, and missing depth measurements. Comment: Major update: more results, depth filter released as open source, 34 pages
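
    The probabilistic plane fitting mentioned above can be illustrated with a short sketch: fit a plane to 3D points weighted by their (propagated) depth variances and derive a first-order covariance for the plane parameters. This is an assumption-laden simplification (isotropic point variances, pseudo-inverse of the information matrix), not the paper's actual solution.

```python
# Sketch: weighted plane fit with a first-order parameter covariance.
import numpy as np


def fit_plane_with_uncertainty(points, variances):
    """points: (N,3) 3D points; variances: (N,) isotropic point variances
    (assumed to come from a depth-uncertainty model)."""
    w = 1.0 / variances
    centroid = np.average(points, axis=0, weights=w)
    q = points - centroid
    # Weighted scatter matrix; the smallest eigenvector is the plane normal.
    S = (q * w[:, None]).T @ q
    _, eigvecs = np.linalg.eigh(S)
    n = eigvecs[:, 0]
    d = -n @ centroid
    # First-order covariance of (n, d): inverse of the information matrix
    # J^T W J, with J the Jacobian of the point-to-plane residuals n.x + d.
    J = np.hstack([points, np.ones((len(points), 1))])
    info = (J * w[:, None]).T @ J
    cov = np.linalg.pinv(info)  # pinv: the ||n|| = 1 constraint leaves a gauge
    return n, d, cov
```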

    Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints.

    This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently occur in outdoor environments due to the short detection range and sunlight interference. Under depth-dropout conditions, only partial 5-degree-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solution, the scale-ambiguous position is cast as a directional constraint on the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solution without delay and even under small-parallax motion. When a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with the inertial outputs in an extended Kalman filter framework. Flight results from indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach.
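
    A hedged sketch of how a scale-ambiguous position can be turned into a directional constraint and fused as an EKF measurement, in the spirit of the approach above. The state layout, the orthogonal-projection residual, and the noise level are illustrative assumptions, not the paper's filter.

```python
# Sketch: EKF update from a direction-only (epipolar-style) constraint.
import numpy as np


def directional_update(x, P, p_prev, u_dir, sigma=0.05):
    """x: state vector with position stored in x[0:3]; P: state covariance.
    p_prev: position at the previous camera frame; u_dir: unit bearing of the
    camera-reported displacement (scale unknown). sigma is an assumed noise level."""
    dp = x[0:3] - p_prev
    # Residual: component of the predicted displacement orthogonal to u_dir.
    Pi = np.eye(3) - np.outer(u_dir, u_dir)   # projector onto u_dir's orthogonal complement
    z = np.zeros(3)                           # the constraint says this component is zero
    h = Pi @ dp
    H = np.zeros((3, len(x)))
    H[:, 0:3] = Pi                            # d h / d position
    R = (sigma ** 2) * np.eye(3)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.solve(S, np.eye(3))
    x_new = x + K @ (z - h)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

    The projector removes the unknown scale: only the direction of the displacement is constrained, so the update corrects lateral drift while leaving the along-track distance to the inertial propagation.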

    DELIBOT WITH SLAM IMPLEMENTATION

    This paper describes and discusses research on "DeliBOT – A Mobile Robot with Implementation of SLAM utilizing Computer Vision/Machine Learning Techniques". The main objective is to study the use of the Kinect in mobile robotics and to build an integrated system capable of constructing a map of the environment and localizing the mobile robot with respect to that map using visual cues. The work comprised four main stages. The first was studying and testing existing solutions for mapping and navigation with an RGB-D sensor, the Kinect. The second was implementing a system capable of identifying and localizing objects from the point cloud provided by the Kinect, while keeping the computational load low enough to allow further tasks to run on the system. The third was identifying landmarks and the improvement they can bring to the framework. Finally, the previous modules were integrated and the complete system was experimentally evaluated and validated. The replacement of humans by robots is becoming increasingly likely because robots are expected to make fewer mistakes. Over the past few years the technology has become more accurate, producing reliable results with fewer errors, and researchers have begun to combine more sensors. Using the available sensors, the robot perceives and identifies the environment it is in and builds a map; it can also locate itself within that environment. The robot's fundamental operations are object identification and localization for carrying out its services, and it performs path planning and obstacle avoidance by setting a target or determining a goal [1]. Given the growth of robotics research and applications in almost every area of human life, from space surveillance to health care, solutions have been developed for autonomous mobile robots that carry out tasks without human intervention in indoor environments [2], in applications such as cleaning and transportation. Safe, high-performing robot navigation requires a map of the environment; since in most real-life applications a map is not given, an exploration algorithm is used.

    Robots that can adapt like animals

    As robots leave the controlled environments of factories to autonomously function in more complex, natural environments, they will have to respond to the inevitable fact that they will become damaged. However, while animals can quickly adapt to a wide variety of injuries, current robots cannot "think outside the box" to find a compensatory behavior when damaged: they are limited to their pre-specified self-sensing abilities, can diagnose only anticipated failure modes, and require a pre-programmed contingency plan for every type of potential damage, an impracticality for complex robots. Here we introduce an intelligent trial-and-error algorithm that allows robots to adapt to damage in less than two minutes, without requiring self-diagnosis or pre-specified contingency plans. Before deployment, a robot exploits a novel algorithm to create a detailed map of the space of high-performing behaviors: This map represents the robot's intuitions about what behaviors it can perform and their value. If the robot is damaged, it uses these intuitions to guide a trial-and-error learning algorithm that conducts intelligent experiments to rapidly discover a compensatory behavior that works in spite of the damage. Experiments reveal successful adaptations for a legged robot injured in five different ways, including damaged, broken, and missing legs, and for a robotic arm with joints broken in 14 different ways. This new technique will enable more robust, effective, autonomous robots, and suggests principles that animals may use to adapt to injury.
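
    The map-then-adapt loop described above can be sketched as follows, under strong simplifications: the behavior-performance map is a plain dictionary, trials greedily pick the best remaining prediction, and the Gaussian-process update used by the actual algorithm is omitted. The names adapt, evaluate, and the stopping rule are illustrative assumptions.

```python
# Sketch: adapt to damage by testing behaviors from a pre-computed map.
def adapt(behavior_map, evaluate, stop_ratio=0.9, max_trials=20):
    """behavior_map: {behavior: predicted_performance} built before deployment.
    evaluate(behavior): measured performance on the (possibly damaged) robot."""
    tried = {}                                   # behavior -> measured performance
    best_behavior, best_measured = None, float("-inf")
    for _ in range(max_trials):
        # Pick the untried behavior with the highest remaining predicted value.
        candidates = {b: p for b, p in behavior_map.items() if b not in tried}
        if not candidates:
            break
        behavior = max(candidates, key=candidates.get)
        measured = evaluate(behavior)
        tried[behavior] = measured
        if measured > best_measured:
            best_behavior, best_measured = behavior, measured
        # Stop once a trial performs close to the best remaining prediction.
        if best_measured >= stop_ratio * max(candidates.values()):
            break
    return best_behavior, best_measured
```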