
    Outdoor navigation of mobile robots

AGVs in the manufacturing industry currently constitute the largest application area for mobile robots. Other applications have been gradually emerging, including various transport tasks in demanding environments such as mines or harbours. Most of the new potential applications require a free-ranging navigation system, meaning that the path of a robot is no longer bound to follow a buried inductive cable. Moreover, changing the route of a robot or taking a new working area into use must be as efficient as possible. These requirements set new challenges for the navigation systems of mobile robots. One of the basic methods of building a free-ranging navigation system is to combine dead-reckoning navigation with the detection of beacons at known locations. This approach is the backbone of the navigation systems in this study. The study describes research and development work in the area of mobile robotics, including applications in forestry, agriculture, mining, and transportation in a factory yard. The focus is on describing navigation sensors and methods for position and heading estimation that fuse dead-reckoning and beacon-detection information; a Kalman filter is typically used for this sensor fusion. Both artificial and natural beacons are covered. Artificial beacons used in the research and development projects include specially designed flat objects detected with a camera, the GPS satellite positioning system, and passive transponders buried in the ground along the route of a robot. The walls of a mine tunnel have been used as natural beacons; in this case, special attention has been paid to map building and to using the map for positioning. The main contribution of the study is in describing the structure of a working navigation system, including positioning and position control. The navigation system for the mining application, in particular, contains some unique features that provide an easy-to-use procedure for taking new production areas into use and make it possible to drive a heavy mining machine autonomously at a speed comparable to that of an experienced human driver.
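The recipe at the heart of this abstract, dead reckoning corrected by beacons at known locations via a Kalman filter, can be sketched as a planar extended Kalman filter. The following is a minimal illustration only, not the study's implementation: the [x, y, theta] state layout, the noise matrices, and a range-bearing beacon sensor are all assumptions made for the example.

```python
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Dead-reckoning step: propagate the pose [x, y, theta] with speed v
    and turn rate w, and grow the covariance P by the motion noise Q."""
    theta = x[2]
    x = x + np.array([v * dt * np.cos(theta),
                      v * dt * np.sin(theta),
                      w * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(theta)],   # motion Jacobian
                  [0.0, 1.0,  v * dt * np.cos(theta)],
                  [0.0, 0.0,  1.0]])
    return x, F @ P @ F.T + Q

def update_beacon(x, P, z, beacon, R):
    """Correction step: fuse a range-bearing measurement z = [r, phi] of a
    beacon whose map position is known (an assumed sensor model)."""
    dx, dy = beacon[0] - x[0], beacon[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

Between beacon sightings the filter runs on predict alone, so the covariance grows; each detected beacon, artificial or natural, shrinks it again.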

    Grid-based localization and local mapping with moving object detection and tracking

We present a real-time algorithm for simultaneous localization and local mapping (local SLAM) with detection and tracking of moving objects (DATMO) in dynamic outdoor environments, from a moving vehicle equipped with a laser scanner, short-range radars, and odometry. To correct the vehicle odometry we introduce a new, fast implementation of an incremental scan-matching method that works reliably in dynamic outdoor environments. After a good vehicle localization is obtained, the map of the vehicle's surroundings is updated incrementally and moving objects are detected without a priori knowledge of the targets. Detected moving objects are then tracked by a Multiple Hypothesis Tracker (MHT) coupled with an adaptive Interacting Multiple Model (IMM) filter. Experimental results on datasets collected from different scenarios, such as urban streets, country roads, and highways, demonstrate the efficiency of the proposed algorithm.
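The odometry-correcting scan matching can be illustrated with a generic point-to-point ICP loop. The paper's own fast incremental method is different, so treat the sketch below as a baseline only; SciPy's KD-tree stands in for whatever correspondence search the authors use.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(scan, tree, ref):
    """One alignment step: pair each scan point with its nearest reference
    point, then solve the rigid 2D transform in closed form via SVD."""
    _, idx = tree.query(scan)
    matched = ref[idx]
    mu_s, mu_r = scan.mean(axis=0), matched.mean(axis=0)
    U, _, Vt = np.linalg.svd((scan - mu_s).T @ (matched - mu_r))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_r - R @ mu_s

def match_scans(scan, ref, iters=20):
    """Align `scan` (Nx2) to `ref` (Mx2); the accumulated rigid transform
    approximates the vehicle motion between the two laser scans."""
    tree = cKDTree(ref)
    cur = scan.copy()
    R_tot, t_tot = np.eye(2), np.zeros(2)
    for _ in range(iters):
        R, t = icp_step(cur, tree, ref)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t   # compose transforms
    return R_tot, t_tot
```

In a dynamic scene, points on moving objects violate the rigid-world assumption, which is why the authors emphasise a matcher that stays reliable in dynamic outdoor environments.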

Multi-sensor data fusion for the detection and tracking of moving objects from an autonomous vehicle

Perception is one of the key steps in the operation of an autonomous vehicle, or even of a vehicle providing only driver-assistance functions. The vehicle observes the external world using its sensors and builds an internal model of the outer environment, which it keeps updating with the latest sensor data. In this setting, perception can be divided into two parts: the first, called SLAM (Simultaneous Localization And Mapping), is concerned with building an online map of the external environment and localizing the host vehicle in this map; the second, called DATMO (Detection And Tracking of Moving Objects), deals with finding moving objects in the environment and tracking them over time.
Using high-resolution, accurate laser scanners, many researchers have made successful efforts to solve these problems. However, with low-resolution or noisy laser scanners these problems, especially DATMO, remain a challenge, producing many false alarms, missed detections, or both. In this thesis we propose that by using a vision sensor (mono or stereo) along with the laser sensor, and by developing an effective fusion scheme at an appropriate level, these problems can be greatly reduced. The main contribution of this research is the identification of three fusion levels and the development of fusion techniques for each level within a SLAM- and DATMO-based perception architecture for autonomous vehicles. Depending on the amount of preprocessing required before fusion, we call them low-level, object-detection-level, and track-level fusion. At the low level we use a grid-based fusion technique: by giving each sensor's grid an appropriate weight (depending on the sensor properties), a fused grid is obtained that gives a better view of the external environment. For object-detection-level fusion, the lists of objects detected by each sensor are fused into a list of fused objects that carry more information than their individual versions; we use a Bayesian fusion technique at this level. Track-level fusion requires tracking moving objects for each sensor separately and then fusing the tracks; fusion at this level helps remove false tracks. The second contribution of this research is a fast technique for finding road borders from noisy laser data and then using this border information to remove false moving objects; we have observed that many false moving objects appear near road borders due to sensor noise, and if they are not filtered out they result in many false tracks close to the vehicle, falsely causing it to brake or to issue warning messages to the driver. The third contribution is the development of a complete perception solution for lidar and stereo-vision sensors and its integration on a real vehicle demonstrator used for the European Union project INTERSAFE-2. This project is concerned with safety at intersections and aims at reducing injuries and fatal accidents there. In this project we worked in collaboration with Volkswagen, the Technical University of Cluj-Napoca, Romania, and INRIA Paris to provide a complete perception and risk-assessment solution for the Volkswagen demonstrator.
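Of the three levels, the low-level (grid) fusion is the easiest to make concrete. The sketch below blends two occupancy grids in log-odds space with fixed per-sensor weights; the 0.7/0.3 weights and the blending rule are illustrative assumptions, not the thesis's actual scheme.

```python
import numpy as np

def fuse_grids(p_laser, p_vision, w_laser=0.7, w_vision=0.3):
    """Low-level fusion sketch: combine two occupancy-probability grids
    cell by cell in log-odds space, weighting each sensor by an assumed
    fixed reliability factor."""
    eps = 1e-6  # keep log() away from 0 and 1
    log_odds = lambda p: np.log((p + eps) / (1.0 - p + eps))
    l = w_laser * log_odds(p_laser) + w_vision * log_odds(p_vision)
    return 1.0 / (1.0 + np.exp(-l))   # back to probabilities
```

Detection-level fusion would instead merge per-sensor object lists (here, via Bayesian fusion), and track-level fusion merges already-tracked objects, trading richer per-sensor processing against a later, sparser fusion point.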

    Laser-Based Detection and Tracking of Moving Obstacles to Improve Perception of Unmanned Ground Vehicles

The objective of this thesis is to develop a system that improves the perception stage of heterogeneous unmanned ground vehicles (UGVs), thereby achieving navigation that is robust in terms of safety and energy saving in different real environments, both indoors and outdoors. Perception must handle static and dynamic obstacles using heterogeneous sensors, such as odometry, a laser range finder (LIDAR), an inertial measurement unit (IMU), and a global positioning system (GPS), in order to obtain the most accurate information about the environment and thus improve the planning and obstacle-avoidance stages. To achieve this objective, a dynamic-obstacle mapping stage (DOMap) is proposed that contains the information on both static and dynamic obstacles. The proposal is based on an extension of the Bayesian Occupancy Filter (BOF) that includes non-discretized velocities. Velocities are detected with optical flow over a grid of discretized LIDAR measurements. In addition, occlusions between obstacles are handled and a multi-hypothesis tracking stage is added, improving the robustness of the proposal (iDOMap). The proposal has been tested in simulated and real environments with different robotic platforms, including commercial platforms and the platform (PROPINA) developed in this thesis to improve collaboration between teams of humans and robots within the ABSYNTHE project. Finally, methods to calibrate the LIDAR position and to improve odometry with an IMU have been proposed.
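The velocity-detection idea, optical flow over a grid of discretized LIDAR measurements, can be caricatured with block matching between two consecutive occupancy grids. Everything below (window sizes, the SSD matching cost, the occupancy threshold) is an illustrative assumption; the thesis feeds properly estimated, non-discretized velocities into its BOF extension.

```python
import numpy as np

def grid_flow(prev, curr, win=2, search=3):
    """Block-matching 'optical flow' between consecutive occupancy grids:
    for each occupied cell of `curr`, find the offset into `prev` whose
    patch matches best; the negated offset approximates cell motion."""
    h, w = curr.shape
    flow = np.zeros((h, w, 2))
    for i in range(search + win, h - search - win):
        for j in range(search + win, w - search - win):
            if curr[i, j] < 0.5:          # assumed occupancy threshold
                continue
            patch = curr[i - win:i + win + 1, j - win:j + win + 1]
            best, best_d = np.inf, (0, 0)
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ref = prev[i + di - win:i + di + win + 1,
                               j + dj - win:j + dj + win + 1]
                    err = np.sum((patch - ref) ** 2)
                    if err < best:
                        best, best_d = err, (-di, -dj)
            flow[i, j] = best_d
    return flow   # in cells per frame; scale by cell size / dt for m/s
```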

    Active Mapping and Robot Exploration: A Survey

Simultaneous localization and mapping responds to the problem of building a map of the environment, without any prior information, from the data obtained by one or more sensors. In most situations the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics. This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.
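A concrete example of the exploration primitives such surveys cover is frontier detection (Yamauchi's classic approach): candidate goals are free cells that border unknown space, so driving to them is guaranteed to reveal new map area. The sketch assumes a simple grid encoding (-1 unknown, 0 free, 1 occupied) chosen for the example.

```python
import numpy as np

def find_frontiers(grid):
    """Frontier detection on an occupancy grid encoded as
    -1 = unknown, 0 = free, 1 = occupied (assumed convention):
    a frontier cell is a free cell with at least one unknown neighbour."""
    frontiers = []
    h, w = grid.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if grid[i, j] != 0:
                continue
            if (grid[i - 1:i + 2, j - 1:j + 2] == -1).any():
                frontiers.append((i, j))
    return frontiers
```

Active-SLAM systems then score such candidates by expected information gain against travel cost, which is where the minimum-error objective in the abstract enters.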

Automated 3D model generation for urban environments

In this thesis, we present a fast approach to the automated generation of textured 3D city models with both high detail at ground level and complete coverage for a bird's-eye view. A ground-based facade model is acquired by driving a vehicle equipped with two 2D laser scanners and a digital camera under normal traffic conditions on public roads. One scanner is mounted horizontally and is used to determine, via scan matching, the approximate component of relative motion along the direction of travel of the acquisition vehicle; the obtained relative motion estimates are concatenated to form an initial path. Assuming that features such as buildings are visible from both the ground-based and the airborne view, this initial path is globally corrected with Monte Carlo Localization techniques, using an aerial photograph or a Digital Surface Model as a global map. The second scanner is mounted vertically and captures the 3D shape of the building facades. Applying a series of automated processing steps, a texture-mapped 3D facade model is reconstructed from the vertical laser scans and the camera images. In order to obtain an airborne model containing the roof and terrain shape complementary to the facade model, a Digital Surface Model is created from airborne laser scans, then triangulated, and finally texture-mapped with aerial imagery. Finally, the facade model and the airborne model are fused into a single model usable for both walk-throughs and fly-throughs. The developed algorithms are evaluated on a large data set acquired in downtown Berkeley, and the results are shown and discussed.
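The global correction step, Monte Carlo Localization of the scan-matched path against an aerial map, is a particle filter over vehicle poses. A minimal cycle is sketched below; score_fn is a placeholder for the map-agreement likelihood (e.g. how well predicted building edges match the aerial photograph or DSM), and the additive world-frame motion model and scalar noise are simplifying assumptions.

```python
import numpy as np

def mcl_step(particles, weights, motion, score_fn, motion_noise=0.1):
    """One Monte Carlo Localization cycle over pose particles [x, y, theta]:
    propagate by the scan-matching motion estimate plus noise, reweight by
    agreement with the global map, then resample."""
    n = len(particles)
    particles = particles + motion + \
        np.random.normal(0.0, motion_noise, particles.shape)
    weights = weights * np.array([score_fn(p) for p in particles])
    weights = weights / weights.sum()
    idx = np.random.choice(n, size=n, p=weights)   # multinomial resampling
    return particles[idx], np.full(n, 1.0 / n)
```

Because the horizontal-scanner odometry drifts only slowly while the aerial map is globally consistent, a modest particle cloud around the current path estimate suffices to pin down the pose.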

    Semantic Mapping of Road Scenes

The problem of understanding road scenes has been at the forefront of the computer vision community for the last couple of years. It enables autonomous systems to navigate and understand the surroundings in which they operate. It involves reconstructing the scene and estimating the objects present in it, such as ‘vehicles’, ‘road’, ‘pavements’ and ‘buildings’. This thesis focuses on these aspects and proposes solutions to address them. First, we propose a solution to generate a dense semantic map from multiple street-level images. This map can be imagined as the bird’s-eye view of the region, with associated semantic labels, for tens of kilometres of street-level data. We generate the overhead semantic view from street-level images, in contrast to existing approaches that use satellite or overhead imagery for the classification of urban regions, allowing us to produce a detailed semantic map for a large-scale urban area. Then we describe a method to perform large-scale dense 3D reconstruction of road scenes with associated semantic labels. Our method fuses the depth maps generated from stereo pairs across time into a global 3D volume in an online fashion, in order to accommodate arbitrarily long image sequences. The object-class labels estimated from the street-level stereo image sequence are used to annotate the reconstructed volume. Then we exploit the scene structure in object-class labelling by performing inference over the meshed representation of the scene. By labelling over the mesh we solve two issues. Firstly, images often contain redundant information, with multiple images describing the same scene; solving these images separately is slow, whereas our method is approximately an order of magnitude faster in the inference stage than normal inference in the image domain. Secondly, multiple images, even though they describe the same scene, often result in inconsistent labelling; by solving a single mesh, we remove this inconsistency across the images. Our mesh-based labelling also takes into account the object layout in the scene, which is often ambiguous in the image domain, thereby increasing the accuracy of object labelling. Finally, we perform labelling and structure computation through a hierarchical robust P^N Markov Random Field defined on voxels and super-voxels given by an octree. This allows us to infer the 3D structure and the object-class labels in a principled manner, through bounded approximate minimisation of a well-defined and well-studied energy functional. In this thesis, we also introduce two object-labelled datasets created from real-world data. The 15-kilometre Yotta Labelled dataset consists of 8,000 images per camera view of the roadways of the United Kingdom, with a subset annotated with object-class labels, and the second dataset comprises ground-truth object labels for the publicly available KITTI dataset. Both datasets are publicly available and, we hope, will be helpful to the vision research community.
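The online fusion of per-frame depth maps into one global 3D volume is in the family of truncated signed-distance (TSDF) averaging. The sketch below shows the standard running-average update under assumed conventions (pinhole intrinsics K, pose as a camera-to-world matrix, a dense voxel block); it is a generic illustration, not the thesis's implementation.

```python
import numpy as np

def fuse_depth(tsdf, weight, depth, K, pose, origin, voxel_size, trunc=0.3):
    """Blend one depth map into a TSDF volume: project every voxel centre
    into the camera, truncate its signed distance along the ray, and take
    a running weighted average with the stored value."""
    idx = np.indices(tsdf.shape).reshape(3, -1).T
    pts = origin + (idx + 0.5) * voxel_size     # voxel centres, world frame
    R, t = pose[:3, :3], pose[:3, 3]            # x_world = R x_cam + t
    cam = (pts - t) @ R                         # world -> camera
    z = cam[:, 2]
    h, w = depth.shape
    valid = z > 1e-6
    uv = cam @ K.T
    u = np.zeros(len(z), dtype=int)
    v = np.zeros(len(z), dtype=int)
    u[valid] = np.round(uv[valid, 0] / z[valid]).astype(int)
    v[valid] = np.round(uv[valid, 1] / z[valid]).astype(int)
    valid &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros(len(z))
    d[valid] = depth[v[valid], u[valid]]
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)
    upd = valid & (d > 0) & (sdf > -1.0)  # skip depth holes, occluded voxels
    ft, fw = tsdf.reshape(-1), weight.reshape(-1)  # views into the volumes
    ft[upd] = (ft[upd] * fw[upd] + sdf[upd]) / (fw[upd] + 1.0)
    fw[upd] += 1.0
    return tsdf, weight
```

Labelling then happens on the mesh extracted from this volume (e.g. by marching cubes), which is what makes the inference both faster and more consistent than per-image labelling.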