559 research outputs found
Contribution à la localisation de véhicules intelligents à partir de marquage routier (Contribution to the localization of intelligent vehicles using road markings)
Autonomous Vehicle (AV) applications and Advanced Driving Assistance Systems (ADAS) rely on scene understanding processes that allow high-level systems to carry out decision making. For such systems, the localization of a vehicle evolving in a structured dynamic environment is a complex problem of crucial importance. Our research addresses scene structure detection, localization and error modeling. Taking into account the large functional spectrum of vision systems, the accessibility of open Geographical Information Systems (GIS) and the widespread presence of Global Positioning Systems (GPS) onboard vehicles, we study the performance and reliability of a vehicle localization method combining these information sources. Monocular vision-based lane marking detection provides key information about the scene structure. Using an enhanced multi-kernel framework with hierarchical weights, the proposed parametric method performs real-time detection and tracking of the ego-lane marking. A self-assessment indicator quantifies the confidence of this information source. We conduct our investigations in a localization system that tightly couples GPS, GIS and lane markings in the probabilistic framework of a Particle Filter (PF). To this end, we propose using lane markings not only during the map-matching process but also to model the expected ego-vehicle motion. The reliability of the localization system in the presence of unusual errors from the different information sources is enhanced by taking into account several confidence indicators. This mechanism is later employed to identify error sources. The research concludes with an experimental validation of the proposed methods in real driving situations.
The methods were tested and their performance quantified using an experimental vehicle and publicly available datasets.
Benchmarking Particle Filter Algorithms for Efficient Velodyne-Based Vehicle Localization
Keeping a vehicle well localized within a prebuilt map is at the core of any autonomous vehicle navigation system. In this work, we show that both standard SIR sampling and rejection-based optimal sampling are suitable for efficient (10 to 20 ms) real-time pose tracking without feature detection, using raw point clouds from a 3D LiDAR. Motivated by the large amount of information captured by these sensors, we perform a systematic statistical analysis of how many points are actually required to reach an optimal trade-off between efficiency and positioning accuracy. For initialization under adverse conditions, e.g., poor GPS signal in urban canyons, we also identify the optimal particle filter settings required to ensure convergence. Our findings include that a decimation factor between 100 and 200 on incoming point clouds provides large savings in computational cost with a negligible loss in localization accuracy for a VLP-16 scanner. Furthermore, an initial density of ~2 particles/m² is required to achieve 100% convergence success for large-scale (~100,000 m²) outdoor global localization without any additional hint from GPS or magnetic field sensors. All implementations have been released as open-source software.
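The decimation step the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the function name, the fixed-stride strategy, and the synthetic scan size are all assumptions:

```python
import numpy as np

def decimate(points: np.ndarray, factor: int) -> np.ndarray:
    """Keep every `factor`-th point of an (N, 3) point cloud.

    Per the abstract, a factor between 100 and 200 cuts the
    per-particle likelihood cost dramatically with negligible
    loss of localization accuracy for a VLP-16.
    """
    return points[::factor]

# Synthetic VLP-16-like scan: roughly 28,800 returns per revolution.
scan = np.random.default_rng(0).normal(size=(28800, 3))
reduced = decimate(scan, 150)
```

Each particle's weight is then computed against only the retained points, which is what makes 10 to 20 ms update cycles feasible.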
Development and evaluation of low cost 2-d lidar based traffic data collection methods
Traffic data collection is one of the essential components of a transportation planning exercise. Granular traffic data, such as volume counts, vehicle classification, speed measurements, and occupancy, allow transportation systems to be managed more effectively. For effective traffic operation and management, authorities need to deploy many sensors across the network. Moreover, the growing push toward smart transportation puts immense pressure on planning authorities to deploy even more sensors to cover an extensive network. This research focuses on the development and evaluation of an inexpensive data collection methodology based on two-dimensional (2-D) Light Detection and Ranging (LiDAR) technology. LiDAR is adopted since it is an economical and easily accessible technology. Moreover, its 360-degree visibility and accurate distance information make it more reliable.
To collect traffic count data, the proposed method integrates a Continuous Wavelet Transform (CWT) and a Support Vector Machine (SVM) into a single framework. A Proof-of-Concept (POC) test is conducted at three different locations in Newark, New Jersey to examine the performance of the proposed method. The POC test results demonstrate that the proposed method achieves acceptable performance, with 83% to 94% accuracy. It is discovered that the method's accuracy is affected by the color of the exterior surface of a vehicle, since some colored surfaces do not produce enough reflected rays. Blue and black surfaces are observed to be less reflective, while white surfaces produce strong returns.
A methodology is proposed that combines K-means clustering, an inverse sensor model, and a Kalman filter to obtain the trajectories of vehicles at intersections. The primary purpose of vehicle detection and tracking is to obtain turning movement counts at an intersection. K-means clustering is an unsupervised machine learning technique that groups data by assigning each point to the cluster with the nearest centroid; here it is applied to distinguish pedestrians from vehicles. The inverse sensor model is the measurement model of occupancy grid mapping and localizes the detected vehicles on the grid map. A constant-velocity-model-based Kalman filter is defined to track the trajectories of the vehicles. Data are collected from two intersections located in Newark, New Jersey, to study the accuracy of the proposed method. The results show that the proposed method has an average accuracy of 83.75%. Furthermore, the obtained R-squared value for localization of the vehicles on the grid map ranges between 0.87 and 0.89.
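A constant-velocity Kalman filter of the kind named above can be sketched as below. This is a generic textbook formulation, not the thesis code; the state layout, time step, and noise covariances are all assumed values for illustration:

```python
import numpy as np

dt = 0.1  # scan interval in seconds (assumed)

# State [x, y, vx, vy] with a constant-velocity transition model.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # position-only measurement
Q = 0.01 * np.eye(4)   # process noise (assumed)
R = 0.25 * np.eye(2)   # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for a position measurement z = [x, y]."""
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured vehicle position.
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.0, 0.5]), np.array([2.0, 1.0]), np.array([3.0, 1.5])]:
    x, P = kf_step(x, P, z)
```

Chaining the updated positions over successive scans yields the per-vehicle trajectory from which turning movements can be counted.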
Furthermore, a preliminary cost comparison is made to study the cost efficiency of the developed methodology. It shows that the proposed 2-D LiDAR-based methodology achieves acceptable accuracy at a low price and can be considered for smart-city deployments requiring large-scale data collection.
Perception and intelligent localization for autonomous driving
Mestrado em Engenharia de Computadores e Telemática (Master's in Computer Engineering and Telematics). Computer vision and sensor fusion are relatively recent subjects, yet widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the scope of autonomous driving. The use of cameras to achieve this goal is a rather complex subject. Unlike classic sensing devices, which always provide the same type of precise information obtained in a deterministic way, the successive images acquired by a camera are replete with the most varied information, all of it ambiguous and extremely difficult to extract. The use of cameras for robotic sensing is the closest we get to what matters most in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology and physics.
The support platform on which this study was developed is ROTA (RObô Triciclo Autónomo), together with all the elements comprising its environment. In this context, the thesis describes the approaches introduced in the platform to address the challenges the robot faces in its environment: detection of lane markings and their consequent perception, and detection of obstacles, traffic lights, crosswalks and roadwork zones. It also describes a calibration system and an implementation of image perspective removal, developed to map the perceived elements to real-world distances. Building on the perception system, self-localization is addressed, integrated in a distributed architecture that allows navigation with long-term planning. All the work developed in the course of this thesis is essentially centered on robotic perception in the context of autonomous driving.
Vision-based localization methods under GPS-denied conditions
This paper reviews vision-based localization methods in GPS-denied
environments and classifies the mainstream methods into Relative Vision
Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss
the broad application of optical flow in feature extraction-based Visual
Odometry (VO) solutions and introduce advanced optical flow estimation methods.
For AVL, we review recent advances in Visual Simultaneous Localization and
Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman
Filter (EKF) based methods. We also introduce the application of offline map
registration and lane vision detection schemes to achieve Absolute Visual
Localization. This paper compares the performance and applications of
mainstream methods for visual localization and provides suggestions for future
studies.
Comment: 32 pages, 15 figures
Overview of Environment Perception for Intelligent Vehicles
This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The
state-of-the-art algorithms and modeling methods for intelligent
vehicles are given, with a summary of their pros and cons.
Special attention is paid to methods for lane and road detection,
traffic sign recognition, vehicle tracking, behavior analysis, and
scene understanding. In addition, we provide information about
datasets, common performance analysis, and perspectives on
future research directions in this area.
Vehicle Localization Based on Visual Lane Marking and Topological Map Matching
Accurate and reliable localization is crucial to autonomous vehicle navigation and driver assistance systems. This paper presents a novel approach for online vehicle localization in a digital map. Two distinct map matching algorithms are proposed: i) Iterative Closest Point (ICP) based lane-level map matching, performed with a visual lane tracker and a grid map; ii) a decision-rule based approach for topological map matching. The results of both map matching algorithms are fused together with GPS and dead reckoning using an Extended Kalman Filter to estimate the vehicle's pose relative to the map. The proposed approach has been validated in real-life conditions on an equipped vehicle. Detailed analysis of the experimental results shows improved localization using the two aforementioned map matching algorithms.
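The ICP-based map matching mentioned above can be illustrated with a minimal 2-D point-to-point ICP. This is a generic textbook sketch under simplifying assumptions (brute-force nearest neighbours, noise-free toy clouds), not the paper's implementation:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Minimal point-to-point ICP aligning 2-D cloud src onto dst.

    Returns rotation R (2x2) and translation t (2,) such that
    R @ p + t maps src points onto dst. Brute-force nearest
    neighbours, so only suitable for small clouds.
    """
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # Match each current point to its nearest neighbour in dst.
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Best rigid transform for these matches (Kabsch via SVD).
        mu_c, mu_m = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_c).T @ (matched - mu_m))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:   # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = mu_m - dR @ mu_c
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt  # accumulate the transform
    return R, t

# Toy check: recover a known small rotation and translation.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(50, 2))
ang = 0.1
R_true = np.array([[np.cos(ang), -np.sin(ang)],
                   [np.sin(ang),  np.cos(ang)]])
t_true = np.array([0.2, -0.1])
dst = pts @ R_true.T + t_true
R_est, t_est = icp_2d(pts, dst)
```

In a lane-level setting the roles would be played by detected lane-marking points (source) and map or grid-map points (target), with the estimated transform correcting the vehicle's pose prior.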