
    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. State-of-the-art algorithms and modeling methods for intelligent vehicles are surveyed, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.

    Teaching a Robot to Drive - A Skill Learning Inspired Approach

    Robots can make our lives easier by taking over unpleasant or even dangerous tasks for us. To be used efficiently, they should be autonomous, adaptive, and easy to instruct. Traditional 'white-box' approaches in robotics rely on the engineer's understanding of the underlying physical structure of the given problem. Based on this understanding, the engineer can devise a possible solution and implement it in the system. This approach is very powerful but nevertheless limited. Its most important drawback is that systems built this way depend on predefined knowledge, so every new behavior requires the same expensive development cycle. In contrast, humans and some other animals are not restricted to their innate behaviors but can acquire numerous additional skills over their lifetime. Moreover, they do not appear to need detailed knowledge of the (physical) workings of a given task in order to do so. These properties are also desirable for artificial systems. This dissertation therefore investigates the hypothesis that principles of human skill learning can lead to alternative methods for adaptive system control. We examine this hypothesis on the task of autonomous driving, which is a classic system-control problem and offers opportunities for a wide range of applications. The concrete task is to learn a basic, anticipatory driving behavior from a human teacher. After highlighting relevant aspects of human skill learning and introducing the concepts of 'internal models' and 'chunking', we describe their application to the given task.
We realize chunking by means of a database in which examples of human driving behavior are stored and linked to descriptions of the visually perceived road trajectory. This is first implemented in a laboratory environment with a robot and later, in the course of the European DRIVSCO project, transferred to a real car. We also investigate the learning of visual 'forward models', which belong to the internal models, and their effect on the robot's control performance. The main result of this interdisciplinary and application-oriented work is a system that can generate appropriate action plans in response to the visually perceived road trajectory without requiring metric information. In the laboratory environment the predicted actions are steering and velocity; for the real car they are steering and acceleration, although the system's predictive capacity for the latter is limited. In other words, the robot learns autonomous driving from a human teacher, and the car learns to predict human driving behavior. The latter was successfully demonstrated during the project review by an international team of experts. The results of this work are relevant for applications in robot control, particularly in the field of intelligent driver assistance systems.
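The database-driven 'chunking' described above can be sketched as a nearest-neighbour lookup. Everything below is illustrative rather than taken from the dissertation (the class name, the descriptor format, and the action encoding are assumptions): stored human driving examples are indexed by a descriptor of the visually perceived road trajectory, and the closest stored example supplies the action plan.

```python
import numpy as np

class DrivingChunkDB:
    """Hypothetical sketch: a database linking road-trajectory descriptors
    to demonstrated action plans (steering mode, velocity)."""

    def __init__(self):
        self.descriptors = []  # road-trajectory descriptors (1-D arrays)
        self.actions = []      # associated (steering, velocity) plans

    def store(self, descriptor, action):
        self.descriptors.append(np.asarray(descriptor, dtype=float))
        self.actions.append(action)

    def recall(self, descriptor):
        # Nearest-neighbour lookup: return the action plan whose stored
        # road descriptor is closest in Euclidean distance.
        d = np.asarray(descriptor, dtype=float)
        dists = [np.linalg.norm(d - s) for s in self.descriptors]
        return self.actions[int(np.argmin(dists))]

db = DrivingChunkDB()
db.store([0.0, 0.0, 0.0], ("straight", 1.0))  # straight road -> full speed
db.store([0.4, 0.5, 0.6], ("left", 0.5))      # left curve -> slow down
print(db.recall([0.05, 0.0, 0.1]))            # closest to the straight example
```

The point of the sketch is that no metric road model is needed: recall works purely by similarity to previously stored experience.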

    Road Map Generation and Localization for Autonomous Driving

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016 (advisor: Seung-Woo Seo). This dissertation aims to present precise and cost-efficient mapping and localization algorithms for autonomous vehicles. Mapping and localization are key components of autonomous vehicles, and the major concern in this research is maximizing the accuracy and precision of the system while minimizing its cost. To this end, this dissertation proposes a road map generation system that creates a precise and efficient lane-level road map, and a localization system based on the proposed road map and affordable sensors. In chapter 2, the road map generation system is presented. It integrates 3D LIDAR data and a high-precision vehicle positioning system to acquire accurate road geometry data, which is represented as sets of piecewise polynomial curves in order to increase storage efficiency and usability. Extensive experiments using real urban and highway road data verify that the proposed system generates a road map that is accurate and more efficient than previous road maps in terms of storage efficiency and usability. In chapter 3, the localization system is presented. It targets environments where localization is difficult due to a lack of feature information. The proposed system integrates the lane-level road map presented in chapter 2 with various low-cost sensors for accurate and cost-effective vehicle localization. A measurement ambiguity problem arising from the use of low-cost sensors and poor feature information is identified, and a probabilistic measurement association-based particle filter is proposed to resolve it. Experimental results using real highway road data are presented to verify the accuracy and reliability of the localization system.
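The piecewise-polynomial road representation can be illustrated with a minimal sketch (the dissertation's sequential approximation algorithm is more sophisticated; the segment length, polynomial order, and synthetic centerline below are all assumptions): dense centerline samples are compressed into a few polynomial coefficients per segment.

```python
import numpy as np

def fit_piecewise(xs, ys, seg_len, order=3):
    """Fit one polynomial of the given order per segment of seg_len points,
    overlapping one point so consecutive pieces share an endpoint."""
    pieces = []
    for start in range(0, len(xs) - 1, seg_len):
        sl = slice(start, min(start + seg_len + 1, len(xs)))
        coeffs = np.polyfit(xs[sl], ys[sl], order)
        pieces.append((xs[sl][0], xs[sl][-1], coeffs))
    return pieces

# Dense "measured" centerline: 200 points of a gentle synthetic curve.
xs = np.linspace(0.0, 100.0, 200)
ys = 0.001 * xs**2 + 0.1 * np.sin(0.05 * xs)
pieces = fit_piecewise(xs, ys, seg_len=50)

# Evaluate the compact representation against the raw points.
max_err = 0.0
for x0, x1, c in pieces:
    mask = (xs >= x0) & (xs <= x1)
    max_err = max(max_err, float(np.abs(np.polyval(c, xs[mask]) - ys[mask]).max()))
print(len(pieces), max_err)  # 4 pieces; error far below the raw sampling noise
```

Here 200 points shrink to 4 coefficient sets, which is the storage-efficiency argument the abstract makes.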
In chapter 4, an application of the accurate vehicle localization system is presented. It is demonstrated that sharing accurate position information among vehicles can improve traffic flow and effectively suppress traffic jams. The effect of position-information sharing is evaluated through numerical experiments; for this, a traffic model is proposed by extending the conventional SOV traffic model. The numerical experiments show that traffic flow increases with accurate vehicle localization and information sharing among vehicles.
Table of contents:
Chapter 1 Introduction
    1.1 Background and Motivations
    1.2 Contributions and Outline of the Dissertation
        1.2.1 Generation of a Precise and Efficient Lane-Level Road Map
        1.2.2 Accurate and Cost-Effective Vehicle Localization in Featureless Environments
        1.2.3 An Application of Precise Vehicle Localization: Traffic Flow Enhancement Through the Sharing of Accurate Position Information Among Vehicles
Chapter 2 Generation of a Precise and Efficient Lane-Level Road Map
    2.1 Related Works
        2.1.1 Acquisition of Road Geometry
        2.1.2 Modeling of Road Geometry
    2.2 Overall System Architecture
    2.3 Road Geometry Data Acquisition and Processing
        2.3.1 Data Acquisition
        2.3.2 Data Processing
        2.3.3 Outlier Problem
    2.4 Road Modeling
        2.4.1 Overview of the Sequential Approximation Algorithm
        2.4.2 Approximation Process
        2.4.3 Curve Transition
        2.4.4 Arc Length Parameterization
    2.5 Experimental Validation
        2.5.1 Experimental Setup
        2.5.2 Data Acquisition and Processing
        2.5.3 Road Modeling
    2.6 Summary
Chapter 3 Accurate and Cost-Effective Vehicle Localization in Featureless Environments
    3.1 Related Works
    3.2 System Overview
        3.2.1 Test Vehicle and Sensor Configuration
        3.2.2 Augmented Road Map Data
        3.2.3 Vehicle Localization System Architecture
        3.2.4 Problem Statement
    3.3 Particle Filter-Based Vehicle Localization Algorithm
        3.3.1 Initialization
        3.3.2 Time Update
        3.3.3 Measurement Update
        3.3.4 Integration
        3.3.5 State Estimation
        3.3.6 Resampling
    3.4 Map-Image Measurement Update with Probabilistic Data Association
        3.4.1 Lane Marking Extraction and Measurement Error Model
    3.5 Experimental Validation
        3.5.1 Experimental Environments
        3.5.2 Localization Accuracy
        3.5.3 Effect of the Probabilistic Measurement Association
        3.5.4 Effect of the Measurement Error Model
    3.6 Summary
Chapter 4 An Application of Precise Vehicle Localization: Traffic Flow Enhancement Through the Sharing of Accurate Position Information Among Vehicles
    4.1 Extended SOV Model
        4.1.1 SOV Model
        4.1.2 Extended SOV Model
    4.2 Results and Discussions
    4.3 Summary
Chapter 5 Conclusion
Bibliography
Korean Abstract
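The abstract does not define the SOV model it extends; as background, here is a minimal sketch of the optimal-velocity car-following family such traffic models belong to, simulated on a ring road. All parameter values and the desired-speed function are illustrative, not taken from the dissertation.

```python
import numpy as np

# Optimal-velocity car following: each car relaxes toward a desired speed
# V(h) that depends on its headway h to the car in front.

def simulate(n_cars=20, road_len=50.0, a=1.0, dt=0.1, steps=2000):
    x = np.linspace(0.0, road_len, n_cars, endpoint=False)  # positions
    v = np.zeros(n_cars)                                    # speeds
    V = lambda h: np.tanh(h - 2.0) + np.tanh(2.0)  # desired speed vs headway
    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % road_len  # distances on the ring
        v += a * (V(headway) - v) * dt             # relax toward V(headway)
        x = (x + v * dt) % road_len
    return v

v = simulate()
# With uniform spacing (headway 2.5) the flow stays homogeneous and every
# car converges to V(2.5) = tanh(0.5) + tanh(2).
print(round(float(v.mean()), 3))  # → 1.426
```

In models of this family, jams appear when the homogeneous flow becomes unstable; the chapter's argument is that shared accurate position information shifts the system toward the stable, high-flow regime.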

    Vision-Based Ego-Lane Analysis System: Dataset and Algorithms

    Lane detection and analysis are important and challenging tasks in advanced driver assistance systems and autonomous driving. These tasks are required to help autonomous and semi-autonomous vehicles operate safely. Decreasing costs of vision sensors and advances in embedded hardware have boosted lane-related research (detection, estimation, tracking, etc.) over the past two decades. Interest in this topic has increased even more with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although extensively studied independently, there is still a need for studies that propose a combined solution for the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road marking detection and classification, and detection of adjacent lane presence. This work proposes a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW, and detecting lane change events.
The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted in both perspective and Inverse Perspective Mapping (IPM) images and combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines, Kalman filter, and particle filter). Based on the estimated lane, all other events are detected. Moreover, the proposed system was integrated for experimentation into an autonomous car being developed by the High Performance Computing Laboratory (LCAD) of the Universidade Federal do Espírito Santo (UFES). To validate the proposed algorithms and address the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (more than 15,000 frames) covering a variety of scenarios (urban roads, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events of interest to the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks; and adjacent lanes). Furthermore, the system was also validated qualitatively through its integration with the autonomous vehicle. ELAS achieved high detection rates in all real-world events and proved to be ready for real-time applications.
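One ingredient ELAS combines is Kalman filtering. A minimal sketch, assuming a hypothetical one-dimensional state (the ego-lane's lateral offset in metres) rather than the system's full spline model; the noise levels and synthetic detections are illustrative:

```python
import numpy as np

def kalman_1d(measurements, q=1e-3, r=0.04):
    """Scalar Kalman filter: smooth noisy per-frame lane-offset detections.
    q = process noise variance (offset drifts slowly), r = measurement
    noise variance of the per-frame detector."""
    x, p = measurements[0], 1.0  # state estimate and its variance
    out = []
    for z in measurements:
        p += q                   # predict: offset roughly constant
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct with the new detection
        p *= (1.0 - k)
        out.append(x)
    return out

rng = np.random.default_rng(0)
true_offset = 0.3
zs = true_offset + rng.normal(0.0, 0.2, size=200)  # noisy detections
est = kalman_1d(list(zs))
smoother = float(np.std(est[20:])) < float(np.std(zs[20:]))
print(smoother)  # the filtered track varies less than the raw detections
```

The same predict/correct structure applies per spline control point in a full lane tracker; the particle filter mentioned in the abstract handles the multi-modal cases a Kalman filter cannot.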

    Lane estimation for autonomous vehicles using vision and LIDAR

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (p. 109-114). Autonomous ground vehicles, or self-driving cars, require a high level of situational awareness in order to operate safely and efficiently in real-world conditions. A system able to quickly and reliably estimate the location of the roadway and its lanes based upon local sensor data would be a valuable asset both to fully autonomous vehicles and to driver assistance technologies. To be most useful, the system must accommodate a variety of roadways, a range of weather and lighting conditions, and highly dynamic scenes with other vehicles and moving objects. Lane estimation can be modeled as a curve estimation problem, where sensor data provides partial and noisy observations of curves. The number of curves to estimate may be initially unknown, and many of the observations may be outliers and false detections (e.g., due to tree shadows or lens flare). The challenge is to detect lanes when and where they exist, and to update the lane estimates as new observations are received. This thesis describes algorithms for feature detection and curve estimation, as well as a novel curve representation that permits fast and efficient estimation while rejecting outliers. Locally observed road paint and curb features are fused together in a lane estimation framework that detects and estimates all nearby travel lanes. The system handles roads with complex geometries and makes no assumptions about the position and orientation of the vehicle with respect to the roadway.
Early versions of these algorithms successfully guided a fully autonomous Land Rover LR3 through the 2007 DARPA Urban Challenge, a 90 km urban race course, at speeds up to 40 km/h amidst moving traffic. We evaluate these and subsequent versions with a ground truth dataset containing manually labeled lane geometries for every moment of vehicle travel in two large and diverse datasets that include more than 300,000 images and 44 km of roadway. The results illustrate the capabilities of our algorithms for robust lane estimation in the face of challenging conditions and unknown roadways. By Albert S. Huang. Ph.D.
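The outlier-rejection problem described above can be illustrated with a RANSAC-style line fit. This is a generic sketch, not the thesis's actual curve estimator: lane-paint detections are contaminated by false positives (tree shadows, lens flare), so we repeatedly fit a line to a random minimal sample and keep the model supported by the most inliers.

```python
import numpy as np

def ransac_line(pts, iters=200, tol=0.2, rng=None):
    """Fit y = m*x + b to pts (N x 2) robustly: sample 2 points, fit, count
    points within tol of the line, keep the best-supported model."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue  # skip vertical sample pairs in this simple sketch
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        resid = np.abs(pts[:, 1] - (m * pts[:, 0] + b))
        n_in = int((resid < tol).sum())
        if n_in > best_inliers:
            best, best_inliers = (m, b), n_in
    return best, best_inliers

rng = np.random.default_rng(1)
xs = np.linspace(0, 10, 60)
inliers = np.c_[xs, 0.5 * xs + 1.0 + rng.normal(0, 0.05, 60)]  # true lane line
outliers = rng.uniform([0, -5], [10, 5], size=(20, 2))          # false detections
pts = np.vstack([inliers, outliers])
(m, b), n = ransac_line(pts)
print(n)  # inlier count of the best model, dominated by the true line
```

A least-squares fit to the same points would be dragged off the lane by the 20 outliers; the consensus step is what makes the estimate robust.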

    Steering Angle Prediction Techniques for Autonomous Ground Vehicles: A Review

    Unintentional lane departure accidents are one of the biggest causes of casualties due to human error. By incorporating lane-keeping features in vehicles, many accidents can be avoided. A lane-keeping system operates by auto-steering the vehicle to keep it within the desired lane, despite changes in road conditions and other interference. Accurate steering angle prediction is crucial to keep the vehicle within the road boundaries, which is a challenging task. The main difficulty is identifying the drivable road area on heterogeneous road types varying in color, texture, illumination conditions, and lane marking types. This problem can be addressed by two approaches, namely the 'computer-vision-based approach' and the 'imitation-learning-based approach'. To the best of our knowledge, there is at present no detailed review covering both approaches and their related optimization techniques. This comprehensive review attempts to provide a clear picture of both approaches to steering angle prediction in the form of step-by-step procedures. A taxonomy of steering angle prediction is presented for a better comprehension of the problem. We also discuss open research problems at the end of the paper to help researchers in this area discover new research horizons.
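The imitation-learning-based approach can be illustrated in its simplest form as behaviour cloning by least squares. Everything here is illustrative (the features, the synthetic 'expert' policy, and the linear model are assumptions; the surveyed systems typically use neural networks over images): demonstrated steering angles are regressed onto observed road features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-frame features: [lane-center offset, heading error].
X = rng.uniform(-1, 1, size=(500, 2))
w_true = np.array([-0.8, -1.2])                # hypothetical expert policy
y = X @ w_true + rng.normal(0, 0.02, 500)      # demonstrated steering angles

Xb = np.c_[X, np.ones(len(X))]                 # add a bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)     # behaviour cloning = regression

pred = Xb @ w
print(float(np.abs(pred - y).mean()))  # residual is on the order of the noise
```

The same recipe scales up to the deep end-to-end systems the review covers: replace the two hand-picked features with a camera image and the linear map with a convolutional network, and minimize the same imitation loss.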

    Ground Vehicle Platooning Control and Sensing in an Adversarial Environment

    The highways of the world are growing more congested. People are inherently bad drivers from a safety and system reliability perspective. Self-driving cars are one solution to this problem, as automation can remove human error and react consistently to unexpected events. Automated vehicles have been touted as a potential solution to improving highway utilization and increasing the safety of people on the roads. Automated vehicles have proven to be capable of interacting safely with human drivers, but the technology is still new. This means that there are points of failure that have not been discovered yet. The focus of this work is to provide a platform to evaluate the security and reliability of automated ground vehicles in an adversarial environment. An existing system was already in place, but it was limited to longitudinal control, relying on a steel cable to keep the vehicle on track. The upgraded platform was developed with computer vision to drive the vehicle around a track in order to facilitate an extended attack. Sensing and control methods for the platform are proposed to provide a baseline for the experimental platform. Vehicle control depends on extensive sensor systems to determine the vehicle position relative to its surroundings. A potential attack on a vehicle could be performed by jamming the sensors necessary to reliably control the vehicle. A method to extend the sensing utility of a camera is proposed as a countermeasure against a sensor jamming attack. A monocular camera can be used to determine the bearing to a target, and this work extends the sensor capabilities to estimate the distance to the target. This provides a redundant sensor if the standard distance sensor of a vehicle is compromised by a malicious agent. For a 320×200 pixel camera, the distance estimation is accurate between 0.5 and 3 m. 
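Monocular range estimation of the kind described can be sketched with the pinhole camera model. The thesis's exact method is not given here, and the focal length and target width below are assumed values for illustration: if the target's true width W is known, its apparent width w in pixels and the focal length f in pixels give the range Z = f * W / w.

```python
def mono_distance(f_px, true_width_m, apparent_width_px):
    """Pinhole-model range to a target of known physical width:
    Z = f * W / w, with f and w in pixels and W in metres."""
    return f_px * true_width_m / apparent_width_px

# Illustrative numbers for a low-resolution camera like the 320x200 sensor
# mentioned above: assume f = 250 px and a lead vehicle 0.3 m wide
# spanning 50 px in the image.
print(mono_distance(250.0, 0.3, 50.0))  # → 1.5 (metres)
```

The accuracy limits quoted above (0.5 to 3 m) follow naturally from this model: at long range the target spans only a few pixels, so a one-pixel error in apparent width produces a large error in Z.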
One previously discovered vulnerability of automated highway systems is that vehicles can coordinate an attack to induce traffic jams and collisions. The effects of this attack on a system with mixed human and automated vehicles are analyzed. The insertion of human drivers into the system stabilizes the traffic jam at the cost of highway utilization.