
    Evolutionary Optimization Techniques for 3D Simultaneous Localization and Mapping

    Mención Internacional en el título de doctor
Mobile robots are increasingly used in both indoor and outdoor environments, moving from teleoperated applications to autonomous ones such as exploration and navigation. For a robot to move through a particular location, it needs to gather information about the scenario using sensors. What these sensors allow the robot to observe depends on the type of data they produce. Cameras mostly give two-dimensional information, with colors and pixels representing an image. Range sensors give distances from the robot to obstacles. Depth cameras combine both technologies to extend this information to three dimensions. Light Detection and Ranging (LiDAR) sensors also measure distance, but extend their coverage to planes and three dimensions with high precision. Mobile robots use these sensors to scan the scenario while moving. If the robot already has a map, it matches the features it senses against features on the map to localize itself. Humans have used maps as a specialized form of representing the environment for more than 5000 years, and they remain an important part of daily life: maps are used to navigate from one place to another, to localize something within some boundaries, or to document essential features. So an intuitive way to build an autonomous mobile robot is to represent the environment with maps of geometric information. If, on the other hand, the robot does not have a previous map, it must build one while moving around. To do so, the robot combines range-sensor information with odometry. However, sensors have their own flaws in precision, calibration, and accuracy.
Furthermore, moving a robot has physical constraints, and faults may occur randomly, such as wheel drift or mechanical miscalibration, which can make the odometers fail and cause misalignment during map building. A novel technique was presented in the mid-90s to solve this problem and overcome sensor uncertainty while the robot builds the map: the Simultaneous Localization and Mapping (SLAM) algorithm. Its goal is to build a map while correcting the robot's position based on the information of two or more consecutive scans matched together, i.e., by finding the rigid registration vector between them. This algorithm has been broadly studied and developed for almost 25 years. Nonetheless, it remains highly relevant for innovation, modification, and adaptation due to advances in new sensors and the complexity of the scenarios in emerging mobile robotics applications. The scan matching algorithm aims to find a pose vector representing the transformation, or movement, between two robot observations by finding the best possible value of an equation that scores a candidate transformation; in other words, it searches for a solution in an optimal way. Typically this optimization has been solved with classical algorithms, such as Newton's method or gradient and second-derivative formulations, but these require an initial guess that points the algorithm in the right direction, usually obtained from odometers or inertial sensors. However, it is not always possible to have or trust this information, as some scenarios are complex and sensors fail. To address this problem, this research presents the use of evolutionary optimization algorithms: meta-heuristics based on iterative evolution that mimic natural optimization processes and need no prior information to search a bounded range of solutions against a fitness function.
The main goal of this dissertation is to study, develop, and prove the benefits of evolutionary optimization algorithms for simultaneous localization and mapping with mobile robots in six-degrees-of-freedom scenarios using LiDAR sensor information. This work introduces several evolutionary algorithms for scan matching, presents a mixed fitness function for registration, solves simultaneous localization and mapping in different scenarios, implements loop closure and error relaxation, and proves its performance in indoor, outdoor, and underground mapping applications.
Programa de Doctorado en Ingeniería Eléctrica, Electrónica y Automática por la Universidad Carlos III de Madrid
Presidente: Gerardo Fernández López.- Secretario: María Dolores Blanco Rojas.- Vocal: David Álvarez Sánche
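The evolutionary scan-matching approach described above can be sketched, in heavily simplified form, as a differential-evolution search over a pose vector that minimizes a point-to-nearest-point registration error. The sketch below is a 2D, three-degrees-of-freedom, pure-Python illustration with made-up scans and search bounds, not the dissertation's actual 6-DoF implementation:

```python
import math
import random

def transform(points, pose):
    # apply a 2D rigid transform (x, y, theta) to a list of points
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return [(c * px - s * py + x, s * px + c * py + y) for px, py in points]

def fitness(pose, scan, ref):
    # mean squared distance from each transformed scan point to its
    # nearest reference point (brute-force nearest neighbour)
    total = 0.0
    moved = transform(scan, pose)
    for mx, my in moved:
        total += min((mx - rx) ** 2 + (my - ry) ** 2 for rx, ry in ref)
    return total / len(moved)

def de_scan_match(scan, ref, bounds, pop_size=40, gens=150, f=0.7, cr=0.9, seed=0):
    # classic DE/rand/1/bin; no initial pose guess is needed, only bounds
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [fitness(p, scan, ref) for p in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for d in range(dim):
                if d == jrand or rng.random() < cr:
                    v = a[d] + f * (b[d] - c[d])
                else:
                    v = pop[i][d]
                lo, hi = bounds[d]
                trial.append(min(max(v, lo), hi))
            tf = fitness(trial, scan, ref)
            if tf < fit[i]:        # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

Given two scans related by a rigid transform, the search needs no odometry seed; it only assumes the true pose lies inside the stated bounds, which is the property that motivates evolutionary scan matching in the first place.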

    A hybrid approach to simultaneous localization and mapping in indoors environment

    This thesis reviews SLAM in the current literature and then presents the results of an investigation of a hybrid approach, in which different algorithms using laser, sonar, and camera sensors were tested and compared. The contribution of this thesis is the development of a hybrid approach to SLAM that uses different sensors and takes different factors into consideration, such as dynamic objects, together with the development of a scalable grid-map model with new sensor models for real-time updating of the map. The thesis presents the successes, difficulties, and limitations of the developed algorithms, which were simulated and experimentally tested in an indoor environment

    Methods and Devices for Mobile Robot Navigation and Mapping in Unstructured Environments

    2006/2007
The work described in this thesis has been carried out in the context of the exploration of an unknown environment by an autonomous mobile robot. It is rather difficult to imagine a robot that is truly autonomous without being capable of acquiring a model of its environment. This model can be built by the robot exploring the environment and registering the data collected with its sensors over time. In recent decades a lot of progress has been made on techniques for environments which possess a lot of structure. This thesis contributes to the goal of extending existing techniques to unstructured environments by proposing new methods and devices for mapping in real time. The first part of the thesis addresses some of the problems of ultrasonic sensors, which are widely used in mobile robotics for mapping and obstacle detection during exploration. Ultrasonic sensors have two main shortcomings leading to disappointing performance: uncertainty in target location and multiple reflections. The former is caused by the wide beam width, and the latter gives erroneous distance measurements because of the insertion of spikes not directly connected to the target. With the aim of registering a detailed contour of the environment surrounding the robot, a sensing device was developed by focusing the ultrasonic beam of the most common ultrasonic sensor to extend its range and improve its spatial resolution. The extended range makes this sensor much more suitable for mapping outdoor environments, which are typically larger. The improved spatial resolution enables the use of recent laser scan matching techniques on sonar scans of the environment collected with the sensor. Furthermore, an algorithm is proposed to mitigate some undesirable effects and problems of the ultrasonic sensor. The method registers the acquired raw ultrasonic signal in order to obtain a reliable mapping of the environment.
A single sonar measurement consists of a number of pulses reflected by an obstacle. Across a series of sensor readings at different sonar angles, the sequence of pulses reflected by the environment changes according to the distance between the sensor and the environment. This results in an image of sonar reflections, built by representing the reading angle on the horizontal axis and the echoes acquired by the sensor on the vertical one. The characteristics of a sonar emission result in a texture embedded in the image. The algorithm performs a 2D texture analysis of the sonar reflection image in such a way that the texture continuity is analyzed at the overall image scale, thus enabling the correction of the texture continuity by restoring weak or missing reflections. The first part of the algorithm extracts geometric semantic attributes from the image in order to enhance and correct the signal. The second part applies heuristic rules to find the leading pulse of the echo and to estimate the obstacle location at points where otherwise it would not be possible due to noise or lack of signal. The method overcomes inherent problems of ultrasonic sensing in cases of high irregularity and missing reflections. It is suitable for map building during mobile robot exploration missions. Its main limitation is a small coverage area; this area, however, increases during exploration as more scans are processed from different positions. Localization and mapping problems were addressed in the second part of the thesis. The main issue in robot self-localization is how to match sensed data, acquired with devices such as laser range finders or ultrasonic range sensors, against reference map information. In particular, scan matching techniques are used to correct the positional error accumulated by dead-reckoning sensors such as odometry and inertial sensors, and thus cancel out the effects of noise on localization and mapping.
Given a reference scan from a known position and a new scan from an unknown or approximately known position, the scan matching algorithm should provide a position estimate which is close to the true robot position from which the new scan was acquired. A genetic optimization algorithm that solves this problem, called GLASM, is proposed. It uses a novel fitness function based on a lookup table requiring little memory to speed up the search. Instead of searching for corresponding point pairs and then computing the mean of the distances between them, as in other algorithms, the fitness is directly evaluated on the points which, after projection onto the same coordinate frame, fall in the search window around the previous scan. It has linear computational complexity, whereas algorithms based on correspondences have quadratic cost. The GLASM algorithm has been compared to its closest rivals. The results of the comparison are reported in the thesis and show, in summary, that GLASM outperforms them both in speed and in matching success ratio. GLASM is suitable for implementation in feature-poor environments and is robust to high sensor noise, as is the case with the sonar readings used in this thesis, which are much noisier than laser scans. The algorithm does not place a high computational burden on the processor, which is important for real-world applications where power consumption is a concern, and it scales easily on multiprocessor systems. The algorithm does not require an initial position estimate and is suitable for unstructured environments. In mobile robotics it is critical to evaluate the above-mentioned methods and devices in real-world applications on systems with limited power and computational resources. In the third part of the thesis some new theoretical results are derived concerning open problems in non-preemptive scheduling of periodic tasks on a uniprocessor.
These results are then used to propose a design methodology, which is applied on a mobile robot. The mobile robot is equipped with an embedded system running a new real-time kernel called Yartek with a non-preemptive scheduler of periodic tasks. The application is described and some preliminary mapping results are presented. The real-time operating system has been developed in collaborative work for an embedded platform based on a Coldfire microcontroller. The operating system allows the creation and running of tasks and offers dynamic management of contiguous memory using a first-fit criterion. Tasks can be real-time periodic, scheduled with non-preemptive EDF, or non-real-time. In order to improve the usability of the system, a RAM disk is included: it is actually an array defined in main memory and managed using pointers, so its operation is very fast. The goal was to realize a small autonomous embedded system for implementing real-time algorithms for non-visual robotic sensors, such as infrared, tactile, inertial devices or ultrasonic proximity sensors. The system provides the processing requested by non-visual sensors without imposing a computational burden on the main processor of the robot. In particular, the embedded system described in this thesis provides the robot with the environmental map acquired with the ultrasonic sensors. Yartek has a low footprint and low overhead. In order to compare Yartek with another operating system, a port of RTAI for Linux was made to the Avnet M5282EVB board and testing procedures were implemented. Tests of context switch time, jitter and interrupt latency are reported to describe the performance of Yartek. The contributions of this thesis include the presentation of new algorithms and devices, their applications, and some theoretical results. They are briefly summarized as: A focused ultrasonic sensing device is developed and used in mapping applications.
An algorithm that processes the ultrasonic readings in order to build a reliable map of the environment is presented. A new genetic algorithm for scan matching called GLASM is proposed. Schedulability conditions for non-preemptive scheduling in a hard real-time operating system are introduced and a design methodology is proposed. A real-time kernel for embedded systems in mobile robotics is presented. A practical robotic application is described, and implementation details and trade-offs are explained.
XIX Ciclo
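The lookup-table fitness idea behind GLASM can be illustrated as follows (a simplified 2D sketch; the grid resolution, the cell-membership test, and the data are assumptions here, and the real algorithm wraps such a fitness inside a genetic search):

```python
import math

CELL = 0.1  # grid resolution in metres (an assumed value)

def build_lookup(ref_scan, cell=CELL):
    # discretize the reference scan into occupied cells; set membership
    # later gives O(1) lookups with very little memory
    return {(int(x // cell), int(y // cell)) for x, y in ref_scan}

def glasm_style_fitness(pose, new_scan, lookup, cell=CELL):
    # count new-scan points that, once projected by the candidate pose
    # (x, y, theta), land in an occupied cell -- linear in the number of
    # scan points, with no point-to-point correspondence search at all
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    hits = 0
    for px, py in new_scan:
        cx = int((c * px - s * py + x) // cell)
        cy = int((s * px + c * py + y) // cell)
        if (cx, cy) in lookup:
            hits += 1
    return hits
```

A genetic search then simply maximizes this hit count over candidate poses, which is why each fitness evaluation stays linear in the scan size rather than quadratic as in correspondence-based matchers.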

    Advances in Sonar Technology

    The demand to explore the largest and also one of the richest parts of our planet, the advances in signal processing promoted by an exponential growth in computation power, and a thorough study of sound propagation in the underwater realm have led to remarkable advances in sonar technology in recent years. The work at hand is a sum of the knowledge of several authors who contributed to various aspects of sonar technology. This book intends to give a broad overview of the advances in sonar technology in recent years that resulted from the research effort of the authors in both sonar systems and their applications. It is intended for scientists and engineers from a variety of backgrounds, and even those who have never had contact with sonar technology before will find an easy introduction to the topics and principles exposed here

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques, since it can support more reliable and robust localization, planning, and control to meet some key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM to autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative means of evaluating the characteristics and performance of SLAM systems and of monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS, and that an online localization solution with four to five cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching with a tightly fused inertial system

    Mapping of complex marine environments using an unmanned surface craft

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 185-199).
Recent technology has combined accurate GPS localization with mapping to build 3D maps in a diverse range of terrestrial environments, but the mapping of marine environments lags behind. This is particularly true in shallow water and coastal areas with man-made structures such as bridges, piers, and marinas, which can pose formidable challenges to autonomous underwater vehicle (AUV) operations. In this thesis, we propose a new approach for mapping shallow water marine environments, combining data from both above and below the water in a robust probabilistic state estimation framework. The ability to rapidly acquire detailed maps of these environments would have many applications, including surveillance, environmental monitoring, forensic search, and disaster recovery. Whereas most recent AUV mapping research has been limited to open waters, far from man-made surface structures, in our work we focus on complex shallow water environments, such as rivers and harbors, where man-made structures block GPS signals and pose hazards to navigation. Our goal is to enable an autonomous surface craft to combine data from the heterogeneous environments above and below the water surface - as if the water were drained, and we had a complete integrated model of the marine environment, with full visibility. To tackle this problem, we propose a new framework for 3D SLAM in marine environments that combines data obtained concurrently from above and below the water in a robust probabilistic state estimation framework. Our work makes systems, algorithmic, and experimental contributions in perceptual robotics for the marine environment.
We have created a novel Autonomous Surface Vehicle (ASV), equipped with substantial onboard computation and an extensive sensor suite that includes three SICK lidars, a Blueview MB2250 imaging sonar, a Doppler Velocity Log, and an integrated global positioning system/inertial measurement unit (GPS/IMU) device. The data from these sensors is processed in a hybrid metric/topological SLAM state estimation framework. A key challenge to mapping is extracting effective constraints from 3D lidar data despite GPS loss and reacquisition. This was achieved by developing a GPS trust engine that uses a semi-supervised learning classifier to ascertain the validity of GPS information for different segments of the vehicle trajectory. This eliminates the troublesome effects of multipath on the vehicle trajectory estimate, and provides cues for submap decomposition. Localization from lidar point clouds is performed using octrees combined with Iterative Closest Point (ICP) matching, which provides constraints between submaps both within and across different mapping sessions. Submap positions are optimized via least squares optimization of the graph of constraints, to achieve global alignment. The global vehicle trajectory is used for subsea sonar bathymetric map generation and for mesh reconstruction from lidar data for 3D visualization of above-water structures. We present experimental results in the vicinity of several structures spanning or along the Charles River between Boston and Cambridge, MA. The Harvard and Longfellow Bridges, three sailing pavilions and a yacht club provide structures of interest, having both extensive superstructure and subsurface foundations. To quantitatively assess the mapping error, we compare against a georeferenced model of the Harvard Bridge using blueprints from the Library of Congress. Our results demonstrate the potential of this new approach to achieve robust and efficient model capture for complex shallow-water marine environments. 
Future work aims to incorporate autonomy for path planning over a region of interest while performing collision avoidance, to enable fully autonomous surveys that achieve full sensor coverage of a complete marine environment.
by Jacques Chadwick Leedekerken. Ph.D.
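The submap alignment step above, least-squares optimization of a graph of constraints, can be illustrated in one dimension (a toy sketch with invented odometry and loop-closure measurements; the thesis optimizes full submap poses, not scalars):

```python
def solve_linear(A, b):
    # Gaussian elimination with partial pivoting for a small dense system
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def optimize_1d_pose_graph(n_poses, constraints):
    # anchor x0 = 0 and solve the normal equations H x = b for x1..x_{n-1};
    # each constraint (i, j, z) encodes the measurement x_j - x_i = z
    n = n_poses - 1
    H = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i, j, z in constraints:
        # residual x_j - x_i - z; Jacobian is +1 at j and -1 at i
        entries = [(k - 1, s) for k, s in ((j, 1.0), (i, -1.0)) if k > 0]
        for a, sa in entries:
            b[a] += sa * z
            for c, sc in entries:
                H[a][c] += sa * sc
    return [0.0] + solve_linear(H, b)
```

With three unit odometry steps and a direct loop-closure-style measurement of 3.3 between the first and last pose, the optimizer spreads the 0.3 metre disagreement evenly along the chain, which is exactly the global-alignment behaviour the constraint graph is meant to produce.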

    Optimal Wheelchair Multi-LiDAR Placement for Indoor SLAM

    One of the most prevalent technologies used in modern robotics is Simultaneous Localization and Mapping, or SLAM. Modern SLAM systems usually employ a number of different probabilistic methods to enable robots not only to map an environment but also to concurrently localize themselves within it. Existing open-source SLAM technologies vary not only in the probabilistic methods they employ but also in how well they achieve the task and in their computational requirements. Additionally, the positioning of the sensors on the robot has a substantial effect on how well these technologies work. This dissertation is therefore dedicated to comparing existing open-source ROS-implemented 2D SLAM technologies and to maximizing their performance by researching optimal sensor placement in an Intelligent Wheelchair context, using SLAM performance as a benchmark

    Recent Developments in Monocular SLAM within the HRI Framework

    This chapter describes an approach to improve the feature initialization process in the delayed inverse-depth feature initialization monocular Simultaneous Localisation and Mapping (SLAM), using data provided by a robot’s camera plus an additional monocular sensor deployed in the headwear of the human component in a human-robot collaborative exploratory team. The robot and the human deploy a set of sensors that once combined provides the data required to localize the secondary camera worn by the human. The approach and its implementation are described along with experimental results demonstrating its performance. A discussion on the usual sensors within the robotics field, especially in SLAM, provides background to the advantages and capabilities of the system implemented in this research

    3D reconstruction and motion estimation using forward looking sonar

    Autonomous Underwater Vehicles (AUVs) are increasingly used in different domains including archaeology, the oil and gas industry, coral reef monitoring, harbour security, and mine countermeasure missions. As electromagnetic signals do not penetrate the underwater environment, GPS signals cannot be used for AUV navigation, and optical cameras have a very short range underwater, which limits their use in most underwater environments. Motion estimation for AUVs is a critical requirement for successful vehicle recovery and meaningful data collection. Classical inertial sensors, usually used for AUV motion estimation, suffer from large drift error; accurate inertial sensors, on the other hand, are very expensive, which limits their deployment to costly AUVs. Furthermore, acoustic positioning systems (APS) used for AUV navigation require costly installation and calibration, and they have poor performance in terms of the inferred resolution. Underwater 3D imaging is another challenge in the AUV industry, as 3D information is increasingly demanded to accomplish different AUV missions. Different systems have been proposed for underwater 3D imaging, such as planar-array sonar and T-configured 3D sonar. While the former generally features good resolution, it is very expensive and requires huge computational power; the latter is cheaper to implement but requires a long time for a full 3D scan even at short ranges. In this thesis, we aim to tackle AUV motion estimation and underwater 3D imaging by proposing relatively affordable methodologies, and we study the different parameters affecting their performance. We introduce a new motion estimation framework for AUVs which relies on successive acoustic images to infer AUV ego-motion. We also propose an Acoustic Stereo Imaging (ASI) system for underwater 3D reconstruction based on forward-looking sonars; the proposed system is cheaper to implement than planar-array sonars and solves the delay problem of T-configured 3D sonars
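The core idea of inferring ego-motion from successive acoustic images can be reduced to a brute-force search for the image shift that maximizes correlation between two frames (a toy integer-pixel stand-in for the thesis's actual framework; real sonar frames would also require rotation handling and noise filtering):

```python
def estimate_shift(img_a, img_b, max_shift=3):
    # brute-force search for the integer (dx, dy) that best aligns img_b
    # onto img_a by maximizing their raw cross-correlation score
    h, w = len(img_a), len(img_a[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        score += img_a[y][x] * img_b[yy][xx]
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best
```

The recovered per-frame shift, scaled by the sonar's metres-per-pixel resolution, gives a displacement estimate between consecutive frames; chaining such estimates over time is one way to build up an ego-motion trajectory.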