
    A sensor fusion layer to cope with reduced visibility in SLAM

    Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. Despite the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in those environments because they usually rely on optical sensors providing dense range data, e.g. laser range finders, stereo vision, LIDARs, RGB-D cameras, etc., whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced visibility by proposing a sensor fusion layer that takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to remain resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely arrays of sonars, which, being usually available in robotic platforms, raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy logic-based method, are presented and discussed, corresponding to different stages of the research work conducted. Experimental validation of both methods with two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for coping with reduced visibility in the SLAM process, thus significantly decreasing its impact on the mapping and localization results obtained.
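    The heuristic fusion rule is only described qualitatively in the abstract; a minimal Python sketch of that idea (hypothetical thresholds and names, not the authors' implementation) might prefer the sonar whenever the laser return is implausibly short, as happens when the beam hits smoke:

```python
def fuse_beam(laser_range, sonar_range, max_range=5.0, smoke_gap=0.5):
    """Fuse one laser beam with the overlapping sonar cone (heuristic sketch).

    Intuition from the abstract: smoke particles make the LRF report a range
    much shorter than the true obstacle distance, while the sonar is largely
    unaffected. If the two sensors disagree by more than `smoke_gap` metres,
    trust the sonar; otherwise keep the (more precise) laser value.
    """
    if laser_range >= max_range:                 # no laser return at all
        return sonar_range
    if sonar_range - laser_range > smoke_gap:    # laser likely hit smoke
        return sonar_range
    return laser_range                           # normal visibility: laser wins


# hypothetical usage with three co-oriented beam pairs
fused = [fuse_beam(l, s) for l, s in zip([0.8, 2.1, 4.9], [2.9, 2.2, 4.8])]
print(fused)  # -> [2.9, 2.1, 4.9]
```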

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet has been largely overlooked in the past: detecting obstacles with very thin structures, such as wires, cables, and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. (Comment: appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.)
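    The core representation here is edges rather than dense depth. As a rough illustration only (the paper's pipeline is edge-based visual odometry; the correspondence search and the `matches` input below are assumed), matched edge pixels from a rectified stereo pair could be triangulated with OpenCV:

```python
import numpy as np
import cv2

def thin_obstacle_points(img_left, img_right, P_left, P_right, matches):
    """Reconstruct edge pixels from a rectified stereo pair in 3D (sketch).

    `matches` is assumed to be an (N, 4) array of corresponding edge-pixel
    coordinates (uL, vL, uR, vR); establishing those correspondences along
    epipolar lines is the hard part and is omitted here.
    """
    # Thin structures (wires, branches) show up as edges rather than blobs,
    # so the obstacle representation is an edge map, not a dense depth map.
    edge_mask = cv2.Canny(img_left, 50, 150) > 0

    pts_l = matches[:, :2].T.astype(np.float64)   # 2xN left pixels
    pts_r = matches[:, 2:].T.astype(np.float64)   # 2xN right pixels
    pts_h = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)  # 4xN homogeneous
    pts_3d = (pts_h[:3] / pts_h[3]).T                             # Nx3 metric points
    return edge_mask, pts_3d
```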

    Toward autonomous underwater mapping in partially structured 3D environments

    Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2014. Motivated by inspection of complex underwater environments, we have developed a system for multi-sensor SLAM utilizing both structured and unstructured environmental features. We present a system for deriving planar constraints from sonar data and jointly optimizing the vehicle and plane positions as nodes in a factor graph. We also present a system for outlier rejection and smoothing of 3D sonar data, and for generating loop closure constraints based on the alignment of smoothed submaps. Our factor graph SLAM backend combines loop closure constraints from sonar data with detections of visual fiducial markers from camera imagery, and produces an online estimate of the full vehicle trajectory and landmark positions. We evaluate our technique on an inspection of a decommissioned aircraft carrier, as well as on synthetic data and controlled indoor experiments, demonstrating improved trajectory estimates and reduced reprojection error in the final 3D map.
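    The backend described here is a factor graph over vehicle poses, plane landmarks, and loop closure constraints. A minimal sketch of such a pose-graph backend, assuming GTSAM 4.x Python bindings and omitting the plane factors and sonar processing, could look like this (keys, poses, and noise values are illustrative, not the thesis's configuration):

```python
import gtsam
import numpy as np

# Minimal 2D pose-graph sketch (the thesis works in 3D with plane landmarks;
# this only illustrates the factor-graph backend idea).
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
# Odometry factors between consecutive vehicle poses.
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2, 0, 0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2, 0, np.pi / 2), odom_noise))
# A loop-closure constraint, e.g. obtained by aligning two smoothed submaps.
graph.add(gtsam.BetweenFactorPose2(2, 0, gtsam.Pose2(0, 4, -np.pi / 2), odom_noise))

# Initial guesses, deliberately perturbed from the true trajectory.
initial = gtsam.Values()
for i, (x, y, th) in enumerate([(0, 0, 0), (2.1, 0.1, 0.0), (4.0, 0.2, 1.5)]):
    initial.insert(i, gtsam.Pose2(x, y, th))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(2))   # optimized estimate of the last pose
```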

    A hybrid approach to simultaneous localization and mapping in indoors environment

    This thesis reviews SLAM in the current literature and then presents the results of an investigation into a hybrid approach in which different algorithms using laser, sonar, and camera sensors were tested and compared. The contribution of this thesis is the development of a hybrid approach to SLAM that uses different sensors and takes different factors into consideration, such as dynamic objects, together with the development of a scalable grid map model with new sensor models for real-time update of the map. The thesis shows the successes found, the difficulties faced, and the limitations of the developed algorithms, which were simulated and experimentally tested in an indoor environment.
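    The abstract mentions a scalable grid map with per-sensor models updated in real time, but does not give those models. The sketch below shows the standard log-odds occupancy update, with per-sensor hit/miss probabilities as one simple way several sensors could feed a shared grid; all parameter values are hypothetical and this is not the thesis's model:

```python
import numpy as np

class LogOddsGrid:
    """Generic occupancy grid with per-sensor log-odds updates (sketch)."""

    def __init__(self, width, height):
        self.logodds = np.zeros((height, width))   # 0 == unknown (p = 0.5)

    def update_cell(self, row, col, hit, p_hit=0.7, p_miss=0.4):
        # Each sensor contributes through its own hit/miss probabilities,
        # so laser, sonar, and camera evidence can share one grid.
        p = p_hit if hit else p_miss
        self.logodds[row, col] += np.log(p / (1.0 - p))

    def probability(self):
        # Convert log-odds back to occupancy probability (logistic function).
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))


grid = LogOddsGrid(100, 100)
grid.update_cell(10, 20, hit=True)                  # e.g. a laser endpoint
grid.update_cell(10, 20, hit=True, p_hit=0.6)       # weaker sonar confirmation
print(round(grid.probability()[10, 20], 3))         # ~0.778, two agreeing hits
```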

    Toward lifelong visual localization and mapping

    Thesis (Ph.D.), Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2013. Includes bibliographical references (p. 171-181). Mobile robotic systems operating over long durations require algorithms that are robust and scale efficiently over time as sensor information is continually collected. For mobile robots, one of the fundamental problems is navigation, which requires the robot to have a map of its environment so that it can plan and execute its path. Having the robot use its perception sensors to perform simultaneous localization and mapping (SLAM) is beneficial for a fully autonomous system. Extending the time horizon of operations poses problems for current SLAM algorithms, both in terms of robustness and temporal scalability. To address this problem we propose a reduced pose graph model that significantly reduces the complexity of the full pose graph model. Additionally, we develop a SLAM system using two different sensor modalities: imaging sonars for underwater navigation and vision-based SLAM for terrestrial applications. Underwater navigation is one application domain that benefits from SLAM, since access to a global positioning system (GPS) is not possible. In this thesis we present SLAM systems for two underwater applications. First, we describe our implementation of real-time imaging-sonar aided navigation applied to in-situ autonomous ship hull inspection using the hovering autonomous underwater vehicle (HAUV). In addition, we present an architecture that enables the fusion of information from both a sonar and a camera system. The system is evaluated using data collected during experiments on SS Curtiss and USCGC Seneca. Second, we develop a feature-based navigation system supporting multi-session mapping, and provide an algorithm for re-localizing the vehicle between missions. In addition, we present a method for managing the complexity of the estimation problem as new information is received. The system is demonstrated using data collected with a REMUS vehicle equipped with a BlueView forward-looking sonar. The model we use for mapping builds on the pose graph representation, which has been shown to be an efficient and accurate approach to SLAM. One of the problems with the pose graph formulation is that the state space grows continuously as more information is acquired. To address this problem we propose the reduced pose graph (RPG) model, which partitions the space to be mapped and uses the partitions to reduce the number of poses used for estimation. To evaluate our approach, we present results using an online binocular and RGB-Depth visual SLAM system that uses place recognition both for robustness and for multi-session operation. Additionally, to enable large-scale indoor mapping, our system automatically detects elevator rides based on accelerometer data. We demonstrate long-term mapping using approximately nine hours of data collected in the MIT Stata Center over the course of six months. Ground truth, derived by aligning laser scans to existing floor plans, is used to evaluate the global accuracy of the system. Our results illustrate the capability of our visual SLAM system to map a large-scale environment over an extended period of time. (by Hordur Johannsson, Ph.D.)
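    The key idea of the reduced pose graph is to partition the space to be mapped and reuse existing poses instead of adding new ones, so the graph grows with the explored area rather than with time. The following sketch only illustrates that partition-and-reuse decision with a hypothetical grid partition and anchor bookkeeping; it is not the thesis's algorithm:

```python
import math

def choose_reference_pose(x, y, anchors, cell_size=2.0):
    """Reduced-pose-graph style decision (illustrative, hypothetical names).

    The mapped area is partitioned into cells; the first pose that lands in a
    cell becomes its anchor. Later measurements taken in the same cell are
    expressed relative to the existing anchor instead of creating a new pose.
    Returns (anchor_id, created_new_anchor).
    """
    cell = (math.floor(x / cell_size), math.floor(y / cell_size))
    if cell in anchors:
        return anchors[cell], False      # reuse the existing anchor pose id
    anchors[cell] = len(anchors)         # register a new anchor for this cell
    return anchors[cell], True


anchors = {}
print(choose_reference_pose(0.3, 0.4, anchors))   # (0, True)  new anchor
print(choose_reference_pose(0.9, 1.1, anchors))   # (0, False) same cell, reused
print(choose_reference_pose(4.2, 0.1, anchors))   # (1, True)  new cell
```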

    Simultaneous localization and mapping with limited sensing using Extended Kalman Filter and Hough transform

    The problem of a robot building a map of an unknown environment while correcting its own position based on that map and sensor data is called the Simultaneous Localization and Mapping (SLAM) problem. As the accuracy and precision of the sensors play an important role in this problem, most of the proposed systems rely on high-cost laser range sensors, or on the relatively newer and cheaper RGB-D cameras. Laser range sensors are too expensive for some applications, and RGB-D cameras impose high power, CPU, or communication requirements to process data on-board or on a PC. In order to build a low-cost robot, it is more appropriate to use low-cost sensors (such as infrared and sonar). This study aims to create a map of an unknown environment using a low-cost robot, an Extended Kalman Filter, and linear features such as walls and furniture. A loop-closing approach is also proposed. Experiments were performed in the Webots simulation environment.
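    The approach extracts linear features such as walls with a Hough transform and uses them as EKF landmarks. A small NumPy sketch of a (theta, rho) Hough vote over sparse range points is given below; the resolutions, thresholds, and usage are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def hough_lines(points, rho_res=0.05, theta_res=np.deg2rad(2), min_votes=8):
    """Vote sparse 2D range points into a (theta, rho) accumulator (sketch).

    Returns the (theta, rho) parameters of accumulator cells that collect at
    least `min_votes` points; in an EKF-SLAM front end these lines would be
    matched against existing wall landmarks or added as new ones.
    """
    thetas = np.arange(0.0, np.pi, theta_res)
    max_rho = np.max(np.hypot(points[:, 0], points[:, 1])) + rho_res
    rhos = np.arange(-max_rho, max_rho, rho_res)
    acc = np.zeros((len(thetas), len(rhos)), dtype=int)

    for x, y in points:
        rho_vals = x * np.cos(thetas) + y * np.sin(thetas)  # rho for each theta
        rho_idx = np.digitize(rho_vals, rhos) - 1            # bin per theta
        acc[np.arange(len(thetas)), rho_idx] += 1

    peaks = np.argwhere(acc >= min_votes)
    return [(thetas[i], rhos[j]) for i, j in peaks]


# hypothetical usage: ten collinear points along the wall x = 1.0
pts = np.array([[1.0, 0.1 * k] for k in range(10)])
print(hough_lines(pts))   # contains a line near (theta=0, rho=1.0)
```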