3 research outputs found

    3D mapping and path planning from range data

    This thesis reports research on mapping, terrain classification, and path planning. These are classical problems in robotics, typically studied independently, and here we link them by framing them within a common sensing modality: three-dimensional laser range scanning. The ultimate goal is to deliver navigation paths for challenging mobile robotics scenarios; to this end we also deliver safe traversable regions extracted from a previously computed, globally consistent map. We first examine the problem of registering dense point clouds acquired at different instances in time. We contribute a novel range registration mechanism for pairs of 3D range scans that uses point-to-point and point-to-line correspondences in a hierarchical correspondence search strategy. For the minimization we adopt a metric that takes into account not only the distance between corresponding points, but also the orientation of their relative reference frames. We also propose FaMSA, a fast technique for multi-scan point cloud alignment that takes advantage of the point correspondences asserted during sequential scan matching, using the point match history to speed up the computation of new scan matches. To properly propagate the sensor noise model through scan matching, we employ first-order error propagation, and to correct the error accumulated by local data alignment, we consider the probabilistic alignment of 3D point clouds using a delayed-state Extended Information Filter (EIF). In this thesis we adapt the Pose SLAM algorithm to the case of 3D range mapping. Pose SLAM is the variant of SLAM in which only the robot trajectory is estimated and sensor data is used solely to produce relative constraints between robot poses. These dense mapping techniques are tested in several scenarios acquired with our 3D sensors, producing impressively rich 3D environment models.
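The combined point-to-point and point-to-line residual that such a registration step minimizes can be sketched in a few lines of plain Python. This is only an illustration under our own naming; the thesis's actual metric additionally weighs the orientation of the relative reference frames, which this sketch omits:

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def point_to_point_error(p, q):
    """Squared Euclidean distance between corresponding points p and q."""
    d = sub(p, q)
    return dot(d, d)

def point_to_line_error(p, a, b):
    """Squared distance from p to the line through points a and b."""
    d = sub(b, a)
    c = cross(sub(p, a), d)
    return dot(c, c) / dot(d, d)

def combined_error(point_pairs, line_triples, w_pt=1.0, w_ln=1.0):
    """Weighted sum of both residual types: the quantity a registration
    step would minimize over candidate rigid-body transforms."""
    e_pt = sum(point_to_point_error(p, q) for p, q in point_pairs)
    e_ln = sum(point_to_line_error(p, a, b) for p, a, b in line_triples)
    return w_pt * e_pt + w_ln * e_ln
```

In a full scan matcher, the correspondences themselves would come from the hierarchical search described above, and the error would be re-minimized over the rigid-body transform at each iteration.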
The computed maps are then processed to identify traversable regions and to plan navigation sequences. In this thesis we present a pair of methods to attain high-level off-line classification of traversable areas, in which training data is acquired automatically from navigation sequences. Traversable features come from robot footprint samples collected during manual robot motion, allowing us to capture terrain constraints that are not easy to model. Using only some of the traversed areas as positive training samples, our algorithms are tested in real scenarios to find the rest of the traversable terrain, and are compared with a naive parametric classifier and several variants of the Support Vector Machine. Later, we contribute a path planner that guarantees reachability at a desired robot pose with significantly lower computation time than competing alternatives. To search for the best path, our planner incrementally builds a tree using the A* algorithm. It includes a hybrid cost policy to efficiently expand the search tree, combining random sampling from the continuous space of kinematically feasible motion commands with a cost-to-goal metric that also takes into account the vehicle's nonholonomic constraints. The planner also allows for node rewiring, and, to speed up node search, our method includes heuristics that penalize node expansion near obstacles and limit the number of explored nodes. The method book-keeps visited cells in the configuration space and disallows node expansion at those configurations in the first full iteration of the algorithm. We validate the proposed methods with extensive experiments in real scenarios from several very complex outdoor 3D environments, and compare them with other techniques such as the A*, RRT, and RRT* algorithms.
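The planner's hybrid expansion policy, growing an A*-style tree by sampling continuous motion commands and ranking nodes with a cost-to-goal heuristic while book-keeping visited cells, can be sketched roughly as below. This is a simplified, obstacle-free illustration with hypothetical parameter values; it omits the rewiring and obstacle-penalty heuristics described above:

```python
import heapq
import math
import random

def step(pose, v, w, dt):
    """Apply a unicycle motion command (v, w) to pose = (x, y, theta)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt, y + v * math.sin(th) * dt, th + w * dt)

def cost_to_goal(pose, goal):
    """Heuristic: distance to goal plus a heading-misalignment penalty,
    a crude stand-in for a nonholonomic-aware cost-to-goal metric."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    d = math.hypot(dx, dy)
    if d < 1e-9:
        return 0.0
    return d + 0.5 * abs(math.atan2(dy, dx) - pose[2])

def plan(start, goal, n_samples=8, dt=0.5, tol=0.5, max_pops=20000, seed=0):
    """Best-first tree search over sampled motion commands (no obstacles)."""
    rng = random.Random(seed)
    frontier = [(cost_to_goal(start, goal), 0.0, start, [start])]
    visited = set()
    for _ in range(max_pops):
        if not frontier:
            break
        f, g, pose, path = heapq.heappop(frontier)
        cell = (round(pose[0], 1), round(pose[1], 1))  # book-keep visited cells
        if cell in visited:
            continue  # disallow re-expansion of an already-visited cell
        visited.add(cell)
        if math.hypot(goal[0] - pose[0], goal[1] - pose[1]) < tol:
            return path
        for _ in range(n_samples):  # sample kinematically feasible commands
            v = rng.uniform(0.2, 1.0)   # forward velocity
            w = rng.uniform(-0.8, 0.8)  # turn rate
            nxt = step(pose, v, w, dt)
            ng = g + v * dt  # accumulated path length as cost so far
            heapq.heappush(frontier,
                           (ng + cost_to_goal(nxt, goal), ng, nxt, path + [nxt]))
    return None
```

Because commands are sampled from a continuous space, every edge of the tree is kinematically feasible by construction, while the A*-style priority queue keeps the expansion focused toward the goal.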

    Efficient Use of 3D Environment Models for Mobile Robot Simulation and Localization

    Paper presented at the 2nd SIMPAR, held in Germany, 15-18 November 2010. This paper provides a detailed description of a set of algorithms to efficiently manipulate 3D geometric models in order to compute physical constraints and range observation models, data that is usually required in real-time mobile robotics or simulation. Our approach uses a standard file format to describe the environment and processes the model using the OpenGL library, a widely used programming interface for 3D scene manipulation. The paper also presents results on a test model used for benchmarking, and on a model of a real urban environment where the algorithms have been effectively used for real-time localization in a large urban setting. Peer reviewed.
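The paper computes its range observation models on the GPU via OpenGL; as a rough CPU-side illustration of what such a model produces, the sketch below casts rays against scene triangles (Moeller-Trumbore intersection) and reports the nearest hit per bearing, much like reading ranges back from a rendered depth buffer. All names and values here are illustrative assumptions, not taken from the paper:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(o, d, v0, v1, v2, eps=1e-9):
    """Moeller-Trumbore ray/triangle test; returns hit distance or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:  # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(o, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None

def simulate_scan(origin, bearings, triangles, max_range=20.0):
    """One simulated range per bearing: distance to the nearest triangle,
    analogous to reading ranges back from a rendered depth buffer."""
    ranges = []
    for th in bearings:
        d = (math.cos(th), math.sin(th), 0.0)
        hits = [t for tri in triangles
                if (t := ray_triangle(origin, d, *tri)) is not None]
        ranges.append(min(hits) if hits else max_range)
    return ranges
```

The GPU approach in the paper amortizes exactly this per-ray work by rasterizing the whole model once per viewpoint, which is what makes real-time localization feasible on large urban models.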

    Pose estimation and data fusion algorithms for an autonomous mobile robot based on vision and IMU in an indoor environment

    Thesis (PhD (Computer Engineering))--University of Pretoria, 2021. Autonomous mobile robots have become an active research direction in recent years, and they are emerging in sectors such as industry, hospitals, institutions, agriculture, and homes to improve services and daily activities. Owing to technological advances, demand for mobile robots has grown because of the tasks they perform and the services they render, such as carrying heavy objects, monitoring, delivering goods, search-and-rescue missions, and performing dangerous work in places like underground mines. Instead of workers being exposed to hazardous chemicals or environments that could affect their health and put lives at risk, humans are being replaced by mobile robot services. It is with these concerns that enhancing mobile robot operation is necessary, and the process is assisted through sensors. Sensors are instruments used to collect the data that aid the robot in navigating and localising in its environment. Each sensor type has inherent strengths and weaknesses, so an inappropriate combination of sensors can result in a high cost of sensor deployment with low performance. Despite their potential and prospects, autonomous mobile robots have yet to attain optimal performance because of the integral challenges they face, most especially localisation. Localisation is one of the fundamental issues encountered in mobile robotics; the challenging part is estimating the robot's position and orientation, information that can be acquired from sensors and other relevant systems. To tackle the issue of localisation, a good technique should be proposed to deal with errors, degrading factors, and improper measurements and estimations. Different approaches have been recommended for estimating the position of a mobile robot.
Some studies estimated the trajectory of the mobile robot and reconstructed the indoor scene using monocular visual odometry; this approach is not feasible for large zones and complex environments. Radio frequency identification (RFID) technology, on the other hand, provides accuracy and robustness, but the method depends on the distance between the tags, and on the distance between the tags and the reader. To increase localisation accuracy, the number of RFID tags per unit area has to be increased, so this technique may not yield an economical and easily scalable solution because of the growing number of required tags and the associated deployment cost. The Global Positioning System (GPS) is another approach that offers proven results in most scenarios; however, indoor localisation is one of the settings in which GPS cannot be used, because the signal strength is not reliable inside a building. Most approaches are not able to precisely localise an autonomous mobile robot even with high equipment cost and complex implementation, and most of the devices and sensors either require additional infrastructure or are not suitable for use in an indoor environment. Therefore, this study proposes using data from vision and inertial sensors, the latter comprising a 3-axis accelerometer and a 3-axis gyroscope, also known as a 6-degree-of-freedom (6-DOF) configuration, to determine the pose estimate of the mobile robot. Inertial measurement unit (IMU) based tracking provides a fast response, so it can be considered to assist vision whenever vision fails due to the loss of visual features. The use of a vision sensor helps to overcome the characteristic limitation of acoustic sensors for simultaneous multiple-object tracking; with this merit, vision is capable of estimating pose with respect to the object of interest.
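The fast but drift-prone character of IMU-based tracking mentioned above can be seen in a tiny dead-reckoning sketch (illustrative values only; a real system would estimate and correct the gyro bias using vision or another aiding source):

```python
def integrate_yaw(yaw0, gyro_z, dt, bias=0.0):
    """Dead-reckon heading by integrating (bias-corrected) z-gyro rates.
    Fast and always available, but any uncorrected bias drifts linearly."""
    yaw = yaw0
    for w in gyro_z:
        yaw += (w - bias) * dt
    return yaw
```

Integrating an uncorrected constant bias of 0.01 rad/s for 100 s already accumulates a 1 rad heading error, which is why the IMU is fused with vision-based corrections rather than used alone.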
A single sensor or system is not reliable enough to estimate the pose of a mobile robot due to its limitations; therefore, data acquired from several sensors and sources are combined using a data fusion algorithm to estimate position and orientation within a specific environment. The resulting model is more accurate because it balances the strengths of the different sensors, and the information provided through sensor or data fusion can be used to support more intelligent actions. The proposed algorithms combine data from each of the sensor types to provide the most comprehensive and accurate environmental model possible. The algorithms use a set of mathematical equations that provides an efficient computational means to estimate the state of a process. This study investigates state estimation methods to determine the state of a desired system that is continuously changing, given some observations or measurements. From the performance evaluation of the system, it can be observed that integrating sources of information and sensors is necessary. This thesis has provided viable solutions to the challenging problem of localisation in autonomous mobile robots through its adaptability, accuracy, robustness and effectiveness.
NRF. University of Pretoria. Electrical, Electronic and Computer Engineering. PhD (Computer Engineering). Unrestricted.
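The "set of mathematical equations" referred to here is characteristic of a Kalman filter. A minimal one-dimensional sketch of the predict/update cycle, assuming an IMU-derived increment drives the prediction and a vision-derived measurement drives the update (all values illustrative):

```python
def predict(x, P, u, Q):
    """Time update: propagate the estimate with an IMU-derived increment u
    and inflate the variance by the process noise Q."""
    return x + u, P + Q

def update(x, P, z, R):
    """Measurement update: fuse a vision-derived measurement z (variance R)."""
    K = P / (P + R)               # Kalman gain
    return x + K * (z - x), (1.0 - K) * P
```

Each cycle the IMU pushes the prediction forward and vision pulls the estimate toward its measurement; the posterior variance ends up below both the prior's and the measurement's, which is precisely the balancing of sensor strengths described above.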