624 research outputs found

    Design and development of an autonomous duct inspection and mapping robot

    Just a few years ago, the idea of having robots in factories and households was science fiction. But, as robotic technology develops, this is becoming reality. Nowadays, robots not only perform simple household chores, but are used in most production lines and are even employed by the army. Visual inspection robots are very common and are used in many industries, including for inspecting the interior of duct systems. Duct systems are in place in almost all large buildings and require ongoing maintenance and cleaning. Systems that are not properly maintained can pose a health risk as dust and mold form and are then blown throughout the building. In some cases, access holes have to be cut to allow inspection to occur. A robotic system small enough to enter a duct through any existing access panel would be advantageous. An autonomous robot would be even more useful, as no operator would be needed, thus reducing operating costs. To this end, a robot was developed that could autonomously navigate through a duct system, recording video images and mapping the internal profile. Its development, discussed in this thesis, included the design of the robotic platform, the selection of appropriate sensors and accompanying circuitry, the generation of a simulation to test the control algorithm, and the implementation of embedded software to control the robot. From the testing of the entire system, the following conclusions were drawn. The robot as a whole performed well and navigated autonomously through the duct with a success rate of 90%. The system tests were repeatable, and the odometry data closely matched the actual paths for straight-line travel. The sonar data closely corresponded to the duct walls but was hard to interpret when the odometry and actual paths diverged. These paths diverged from each other due to wheel slip caused as the robot turned.
    The simulation developed showed that the control algorithm would ensure that the robot recursively inspected any duct system and provided information about the system as a whole. Further work should concentrate on improving the correlation between the odometry path and the actual path, perhaps by adding a bearing measurement system. Sensors with greater range and accuracy should be implemented and the entire system re-tested. The embedded controller allowed for expansion should additional requirements arise, and was more than adequate for the task.
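    The thesis does not reproduce its odometry equations here, but the divergence between odometry and actual paths during turns can be illustrated with the standard differential-drive dead-reckoning update, which integrates encoder-measured wheel travel and is blind to wheel slip. A minimal sketch, with an assumed wheel base, follows.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning pose update for a differential-drive robot.
    d_left and d_right are wheel travel distances derived from encoder
    ticks; slip makes them overestimate true motion during turns."""
    d = (d_left + d_right) / 2.0               # distance moved by robot centre
    dtheta = (d_right - d_left) / wheel_base   # change in heading
    # Integrate along the mean heading over the step.
    x += d * math.cos(theta + dtheta / 2.0)
    y += d * math.sin(theta + dtheta / 2.0)
    theta = (theta + dtheta) % (2.0 * math.pi)
    return x, y, theta

# Straight-line travel: both wheels advance 1 m, heading unchanged,
# which is the case where the thesis found odometry closely matched reality.
x, y, th = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, wheel_base=0.2)
```

Because heading error from slip accumulates into every subsequent position update, an independent bearing measurement, as suggested above, would bound the drift.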

    A hybrid approach to simultaneous localization and mapping in indoors environment

    This thesis reviews SLAM in the current literature and then presents the results of an investigation into a hybrid approach, in which different algorithms using laser, sonar, and camera sensors were tested and compared. The contribution of this thesis is the development of a hybrid approach to SLAM that uses different sensors and takes different factors into consideration, such as dynamic objects, together with the development of a scalable grid map model with new sensor models for real-time updating of the map. The thesis shows the successes found, the difficulties faced, and the limitations of the algorithms developed, which were simulated and experimentally tested in an indoor environment.
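    The thesis's own scalable grid map model is not detailed in this abstract; a common way to make an occupancy grid scalable is to store cells sparsely and grow the map on demand, updating each cell's log-odds per sensor observation. The sketch below assumes that general scheme (class name, resolution, and thresholds are illustrative, not the thesis's).

```python
import math
from collections import defaultdict

class SparseOccupancyGrid:
    """Minimal sketch of a scalable occupancy grid: cells live in a dict
    keyed by (ix, iy), so the map has no fixed bounds and grows as the
    robot explores. Cell state is a log-odds value (0.0 = unknown)."""
    def __init__(self, resolution=0.1):
        self.resolution = resolution
        self.cells = defaultdict(float)

    def _index(self, x, y):
        # World coordinates -> integer cell indices.
        return (int(x // self.resolution), int(y // self.resolution))

    def update(self, x, y, p):
        """Fold one observation (probability p that the cell containing
        world point (x, y) is occupied) into the map."""
        self.cells[self._index(x, y)] += math.log(p / (1.0 - p))

    def occupied(self, x, y, threshold=0.0):
        return self.cells[self._index(x, y)] > threshold

grid = SparseOccupancyGrid(resolution=0.1)
for _ in range(3):
    grid.update(1.05, 2.33, 0.8)   # three consistent "occupied" readings
```

Because updates are additive in log-odds, repeated consistent readings harden a cell's state while a dynamic object that moves away can be eroded by subsequent "free" readings, one motivation for grid maps when dynamic objects must be handled.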

    A sensor fusion layer to cope with reduced visibility in SLAM

    Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, is still a big challenge nowadays. In spite of the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in those environments because they usually rely on optical sensors providing dense range data (e.g. laser range finders, stereo vision, LIDAR, RGB-D cameras), whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced-visibility conditions by proposing a sensor fusion layer that takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to be resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely arrays of sonars, which, being usually available on robotic platforms, raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy-logic-based method, are presented and discussed, corresponding to different stages of the research work conducted. The experimental validation of both methods with two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for adequately coping with reduced visibility in the SLAM process, significantly decreasing its impact on the mapping and localization results obtained.
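    The article's exact fusion rules are not given in this abstract. One plausible heuristic in its spirit, shown below purely as a sketch, exploits the stated complementarity: smoke particles produce spuriously short LRF returns, while sonar pulses pass through them, so when the two sensors disagree strongly and the LRF reading is the shorter one, the sonar reading is preferred (the disagreement threshold and list-based interface are assumptions).

```python
def fuse_ranges(lrf, sonar, disagreement=0.5):
    """Heuristic LRF/sonar fusion sketch (not the article's exact rules).
    lrf and sonar are per-direction range readings in metres; None marks
    an invalid or missing return."""
    fused = []
    for l, s in zip(lrf, sonar):
        if l is not None and s is not None and (s - l) > disagreement:
            fused.append(s)   # LRF likely blinded by airborne particles
        else:
            # Prefer the (usually more accurate) LRF when it is credible.
            fused.append(l if l is not None else s)
    return fused

# First beam: LRF sees smoke at 0.3 m while sonar sees the wall at 2.1 m.
ranges = fuse_ranges([0.3, 2.0, None], [2.1, 2.05, 1.5])
```

A fuzzy-logic variant, as the article's second method suggests, would replace the hard threshold with graded membership in "LRF credible" and blend the two readings instead of switching.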

    Collective cluster-based map merging in multi robot SLAM

    New challenges arise with multi-robot systems, and information integration is among the most important problems to be solved in this field. For mobile robots, information integration usually refers to map merging: the process of combining partial maps constructed by individual robots in order to build a global map of the environment. Different approaches have been taken toward solving the map merging problem. Our method is based on a transformational approach, in which the idea is to find regions of overlap between local maps and fuse them together using a set of transformations and similarity-heuristic algorithms. The contribution of this work is an improvement to the search space of candidate transformations, achieved by enforcing a pair-wise partial localization technique over the local maps prior to any attempt to transform them. The experimental results show a noticeable improvement (15-20%) in overall mapping time using our technique.
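    The kernel of the transformational approach can be sketched as follows: apply each candidate rigid transformation to one robot's occupied cells and score it with an overlap similarity heuristic. This is only an illustration of the search being pruned, with translations only and an invented similarity measure; the paper's contribution is shrinking the candidate set via pair-wise partial localization before this search runs.

```python
import math

def transform_cells(cells, dx, dy, dtheta):
    """Rigidly transform a set of occupied-cell coordinates."""
    c, s = math.cos(dtheta), math.sin(dtheta)
    return {(round(c * x - s * y + dx), round(s * x + c * y + dy))
            for x, y in cells}

def similarity(map_a, map_b):
    """Overlap heuristic: fraction of map_b cells also occupied in map_a."""
    return len(map_a & map_b) / max(len(map_b), 1)

def best_transform(map_a, map_b, candidates):
    """Exhaustively score candidate transformations (translations here,
    for brevity) and keep the one with the best overlap."""
    return max(candidates,
               key=lambda t: similarity(map_a,
                                        transform_cells(map_b, t[0], t[1], 0.0)))

# Two local maps of the same wall segment, offset by (5, 3) cells.
a = {(0, 0), (1, 0), (2, 0)}
b = {(5, 3), (6, 3), (7, 3)}
dx, dy = best_transform(a, b, [(-5, -3), (0, 0), (2, 2)])
```

Since the cost of this search grows with the number of candidates, any technique that discards implausible transformations up front, as the partial localization step does, translates directly into the reported mapping-time savings.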

    Robot Mapping and Navigation by Fusing Sensory Information


    Simultaneous Localization and Mapping Technologies

    The SLAM (Simultaneous Localization And Mapping) problem consists of mapping an unknown environment by means of a device moving within it, while simultaneously localizing that device. This thesis analyses the SLAM problem and the differences that distinguish it from the mapping and localization problems treated separately. It then analyses the main algorithms employed today for its solution, namely extended Kalman filters and particle filters. The various existing implementation technologies are then examined, including SONAR, LASER, vision, and RADAR systems; the latter, at the state of the art, employ millimetre-wave (mmW) and ultra-wideband (UWB) signals, but also well-established radio technologies such as Wi-Fi. Finally, simulations of vision-based and LASER-based technologies are carried out with the aid of two open-source MATLAB packages. The package designed for LASER systems was then modified in order to simulate a SLAM technology based on Wi-Fi signals. The use of low-cost, widely deployed technologies such as Wi-Fi opens the possibility, in the near future, of performing low-cost indoor localization with a simple smartphone, exploiting existing infrastructure. Looking further ahead, the advent of millimetre-wave (5G) technology will allow higher performance to be reached.
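    The extended Kalman filter the thesis reviews reduces, in its scalar essence, to a predict/update cycle on a Gaussian belief; the sketch below shows that 1-D core (the full EKF-SLAM state also stacks robot pose and landmark positions, and linearizes nonlinear motion and sensor models, which this toy omits).

```python
def kalman_1d(mu, var, motion, motion_var, z, z_var):
    """One predict/update cycle of a 1-D Kalman filter on a Gaussian
    belief N(mu, var): motion inflates uncertainty, a range-like
    measurement z shrinks it via the Kalman gain."""
    # Predict: apply the motion and add its noise.
    mu, var = mu + motion, var + motion_var
    # Update: gain k weights the measurement against the prediction.
    k = var / (var + z_var)
    mu = mu + k * (z - mu)
    var = (1.0 - k) * var
    return mu, var

# Start at 0 with unit variance, move 1 m, then observe position 1.2 m.
mu, var = kalman_1d(mu=0.0, var=1.0, motion=1.0, motion_var=0.5,
                    z=1.2, z_var=0.5)
```

Particle filters, the thesis's other main algorithm, drop the Gaussian assumption and represent the same belief with weighted samples, which is what makes them usable with the highly multimodal likelihoods of Wi-Fi signal strength.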