528 research outputs found

    Fusing sonars and LRF data to perform SLAM in reduced visibility scenarios

    Simultaneous Localization and Mapping (SLAM) approaches have evolved considerably in recent years. However, many situations remain difficult to handle, such as smoky, dusty, or foggy environments, where the range sensors commonly used for SLAM are heavily disturbed by noise induced in the measurement process by particles of smoke, dust, or steam. This work presents a sensor fusion method for range sensing in SLAM under reduced visibility conditions. The proposed method exploits the complementary characteristics of a Laser Range Finder (LRF) and an array of sonars in order to map smoky environments. The method was validated through experiments in a smoky indoor scenario, and the results showed that it adequately copes with the induced disturbances, thus decreasing the impact of smoke particles on the mapping task.
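
    As an illustration of the kind of LRF/sonar fusion described above, the sketch below shows a simple heuristic, not the authors' exact method: the threshold, sensor geometry, and function names are assumptions for illustration. When the LRF returns inside one sonar cone are much shorter than the sonar reading, the laser is assumed to have been reflected by airborne particles and the sonar measurement is preferred.

        import numpy as np

        # Assumed, illustrative value: the range gap that is taken as evidence
        # of a smoke-induced (too short) LRF return inside a sonar cone.
        SMOKE_GAP_M = 0.5

        def fuse_sector(lrf_ranges_m, sonar_range_m):
            """Fuse the LRF beams falling inside one sonar cone with that sonar reading.

            Heuristic: if the median LRF range is much shorter than the sonar range,
            the laser probably hit smoke or dust, so trust the sonar instead.
            """
            lrf_med = float(np.median(lrf_ranges_m))
            if sonar_range_m - lrf_med > SMOKE_GAP_M:
                return sonar_range_m      # LRF likely corrupted by particles
            return lrf_med                # visibility OK, keep the denser LRF data

        # Example: LRF beams report ~0.8 m (smoke) while the sonar sees a wall at 3.2 m.
        print(fuse_sector(np.array([0.78, 0.81, 0.80]), 3.2))   # -> 3.2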

    A review of sensor technology and sensor fusion methods for map-based localization of service robot

    Service robots are currently gaining traction, particularly in the hospitality, geriatric care, and healthcare industries. The navigation of service robots requires high adaptability, flexibility, and reliability. Map-based navigation is therefore well suited to service robots because of the ease of updating changes in the environment and the flexibility of determining a new optimal path. For map-based navigation to be robust, an accurate and precise localization method is necessary. The localization problem can be defined as a robot recognizing its own position in a given environment, and it is a crucial step in any navigation process. Major difficulties of localization include dynamic changes in the real world, uncertainties, and limited sensor information. This paper presents a comparative review of sensor technologies and sensor fusion methods suitable for map-based localization, focusing on service robot applications.

    Towards autonomous localization and mapping of AUVs: a survey

    Purpose: The main purpose of this paper is to investigate two key elements of localization and mapping for Autonomous Underwater Vehicles (AUVs), i.e. to review the various sensors and algorithms used for underwater localization and mapping, and to make suggestions for future research. Design/methodology/approach: The authors first review the various sensors and algorithms used on AUVs in terms of basic working principles, characteristics, advantages, and disadvantages. A statistical analysis is carried out on 35 AUV platforms according to the application circumstances of their sensors and algorithms. Findings: As real-world applications have different requirements and specifications, it is necessary to select the most appropriate solution by balancing factors such as accuracy, cost, and size. Although highly accurate localization and mapping in an underwater environment is very difficult, increasingly accurate and robust navigation solutions will be achieved as both sensors and algorithms develop. Research limitations/implications: This paper provides an overview of state-of-the-art underwater localization and mapping algorithms and systems; no experiments are conducted for verification. Practical implications: The paper gives readers a clear guideline for finding underwater localization and mapping algorithms and systems suitable for their practical applications. Social implications: A wide range of audiences will benefit from reading this comprehensive survey of autonomous localization and mapping of AUVs. Originality/value: The paper provides useful information and suggestions to research students, engineers, and scientists who work in the field of autonomous underwater vehicles.

    A sensor fusion layer to cope with reduced visibility in SLAM

    Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. In spite of the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in those environments because they usually rely on optical sensors providing dense range data, e.g. laser range finders, stereo vision, LIDARs, RGB-D cameras, etc., whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced visibility conditions by proposing a sensor fusion layer that takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is then used with a state-of-the-art SLAM technique to remain resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely the sonar arrays usually available on robotic platforms, which raise technical issues investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy logic-based method, are presented and discussed, corresponding to different stages of the research. The experimental validation of both methods with two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for adequately coping with reduced visibility in the SLAM process, thus significantly decreasing its impact on the mapping and localization results.
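
    The abstract above mentions a fuzzy logic-based fusion method alongside the heuristic one. The fragment below is a minimal sketch of how such a rule base could look, using hand-rolled triangular membership functions rather than any particular fuzzy library; the membership ranges, inputs, and rule set are assumptions for illustration, not the article's actual design.

        def tri(x, a, b, c):
            """Triangular membership function peaking at b, zero outside (a, c)."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_lrf_confidence(discrepancy_m, lrf_intensity):
            """Confidence weight in [0, 1] for the LRF reading (illustrative inputs:
            the sonar/LRF range discrepancy and a normalized LRF return intensity).

            Rules (illustrative):
              IF discrepancy is SMALL OR  intensity is HIGH THEN trust the LRF
              IF discrepancy is LARGE AND intensity is LOW  THEN trust the sonar
            """
            small = tri(discrepancy_m, -0.01, 0.0, 0.4)
            large = tri(discrepancy_m, 0.3, 1.0, 10.0)
            high = tri(lrf_intensity, 0.4, 1.0, 1.01)
            low = tri(lrf_intensity, -0.01, 0.0, 0.6)
            trust_lrf = max(small, high)      # fuzzy OR  -> max
            trust_sonar = min(large, low)     # fuzzy AND -> min
            total = trust_lrf + trust_sonar   # simple weighted-average defuzzification
            return trust_lrf / total if total > 0 else 0.5

        def fuse(lrf_range_m, sonar_range_m, lrf_intensity):
            w = fuzzy_lrf_confidence(abs(sonar_range_m - lrf_range_m), lrf_intensity)
            return w * lrf_range_m + (1.0 - w) * sonar_range_m

        # Large discrepancy and a weak LRF return: the fused range leans toward the sonar.
        print(round(fuse(0.8, 3.2, 0.1), 2))   # -> 3.2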

    Online Mapping-Based Navigation System for Wheeled Mobile Robot in Road Following and Roundabout

    A road mapping and feature extraction approach for mobile robot navigation in road-roundabout and road-following environments is presented in this chapter. In this work, online mapping of the mobile robot based on sensor fusion is used to extract the road characteristics, such as road curbs, road borders, and roundabouts, which are then used with a path planning algorithm to enable the robot to move from a given start position to a predetermined goal. The sensor fusion combines several sensors, namely a laser range finder, a camera, and odometry, on a new wheeled mobile robot prototype in order to determine the best path for the robot and localize it within its environment. The local maps are built using image preprocessing and processing algorithms together with an artificial threshold on the LRF signal to recognize road environment parameters such as curbs, width, and roundabouts. Path planning in the road environments is accomplished using a novel approach, the so-called Laser Simulator, to find the trajectory in the local maps built by sensor fusion. Results show the capability of the wheeled mobile robot to effectively recognize the road environment, build a local map, and find the path in both road-following and roundabout scenarios.
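
    As a rough illustration of the LRF-thresholding step mentioned above (the threshold value, scan layout, and function name are assumptions for illustration, not the chapter's actual parameters), the sketch below flags road-curb candidates wherever consecutive LRF range readings jump by more than a fixed amount.

        import numpy as np

        CURB_JUMP_M = 0.15   # assumed range discontinuity that marks a curb edge

        def curb_candidates(ranges_m):
            """Return indices of LRF beams where the range jumps enough to suggest
            a road curb or border (simple discontinuity threshold, illustrative)."""
            ranges = np.asarray(ranges_m, dtype=float)
            jumps = np.abs(np.diff(ranges))
            return np.where(jumps > CURB_JUMP_M)[0]

        # Example scan: flat road (~4.0 m) with a curb appearing after beam 3.
        scan = [4.00, 4.01, 3.99, 4.00, 3.70, 3.68, 3.69]
        print(curb_candidates(scan))   # -> [3]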