
    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods used in intelligent vehicles are surveyed, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance-analysis methods, and perspectives on future research directions in this area.

    Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on ACP-based parallel vision

    Lane detection is a fundamental aspect of most current advanced driver assistance systems (ADASs). A large number of existing results focus on vision-based lane detection methods due to the extensive existing knowledge base and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Considering the inevitable limitations of camera-based lane detection systems, system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are divided into three levels: algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, the system level integrates other object detection systems to comprehensively determine lane positions, and the sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating detection systems, and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of lane detection systems. A comparison of representative studies is then performed. Finally, the limitations of current lane detection systems are discussed, and future development trends toward an ACP-based (artificial societies, computational experiments, and parallel execution) parallel lane detection framework are proposed.
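    The algorithm-level integration described above can be illustrated with a minimal, hypothetical sketch: several lane detectors each report an estimate of a lane boundary's lateral position together with a confidence score, and the estimates are fused by a confidence-weighted average. The function name, the (position, confidence) interface, and the fusion rule are illustrative assumptions, not the surveyed systems' actual APIs.

    ```python
    # Hypothetical "algorithm-level" fusion: combine lateral lane-boundary
    # position estimates (in metres) from several detectors, weighting each
    # by its reported confidence.

    def fuse_lane_estimates(estimates):
        """estimates: list of (position_m, confidence) pairs, confidence >= 0."""
        total_weight = sum(conf for _, conf in estimates)
        if total_weight == 0:
            raise ValueError("no confident estimate available")
        return sum(pos * conf for pos, conf in estimates) / total_weight

    # e.g. a Hough-based detector, a learning-based detector, and a
    # model-fitting detector voting on the same boundary:
    fused = fuse_lane_estimates([(1.70, 0.9), (1.78, 0.6), (1.40, 0.1)])
    ```

    A low-confidence outlier (here the 1.40 m estimate) then contributes little to the fused position, which is the basic robustness argument for combining detectors.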

    Road Detection and Recognition from Monocular Images Using Neural Networks

    Road recognition is an important component of autonomous navigation systems, which help autonomous vehicles and robots move on the ground. Road detection also serves related sub-tasks, such as finding valid paths along which a robot or vehicle can travel, preventing collisions with obstacles, and detecting objects on the road. The goal of this thesis is to examine existing road detection and recognition techniques and to propose an alternative solution for the road classification and detection task. Our contribution consists of several parts. First, we release a dataset of approximately 5,300 unlabeled road images. Second, we summarize the existing road image datasets. Third, we propose a convolutional neural network based on LeNet-5 for road image classification across various environments. Finally, we present an FCN-8-based model for pixel-wise image recognition.
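    The LeNet-5- and FCN-8-style networks mentioned in the thesis are built from 2-D convolutional layers. As a minimal sketch of the core operation (not the thesis's actual architecture), here is a valid-mode 2-D convolution (really cross-correlation, as implemented in most deep-learning frameworks) in pure Python:

    ```python
    # Minimal valid-mode 2-D cross-correlation, the building block of
    # convolutional layers such as those in LeNet-5 and FCN-8.

    def conv2d_valid(image, kernel):
        """image, kernel: lists of lists of numbers; returns the valid-mode
        cross-correlation of size (H - kh + 1) x (W - kw + 1)."""
        kh, kw = len(kernel), len(kernel[0])
        out_h = len(image) - kh + 1
        out_w = len(image[0]) - kw + 1
        out = [[0.0] * out_w for _ in range(out_h)]
        for i in range(out_h):
            for j in range(out_w):
                out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                                for di in range(kh) for dj in range(kw))
        return out

    # A vertical-gradient kernel responding to a horizontal intensity edge,
    # e.g. the boundary between dark asphalt and a bright marking:
    edge_map = conv2d_valid([[0, 0, 0], [0, 0, 0], [1, 1, 1]],
                            [[-1], [0], [1]])
    ```

    A trained network learns many such kernels per layer; the sketch only shows what a single filter computes over one channel.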

    Road terrain detection for Advanced Driver Assistance Systems

    Kühnl T. Road terrain detection for Advanced Driver Assistance Systems. Bielefeld: Bielefeld University; 2013

    A new real-time method for finding temporary and permanent road marking and its applications

    In this paper, a new real-time method for finding temporary and permanent road markings is proposed. The method is based on the geometrized-histograms method for segmenting and describing color images. It is able to handle both rectilinear and curvilinear markings, as well as colored temporary and permanent road markings. The method is stable under varying illumination and works even for partially worn-away road markings, typical of late winter and early spring. The proposed method has been implemented as a C++ program running under Windows and Linux, and has been tested on video recorded on typical Russian roads. The processing speed is about 20 fps on a standard modern computer. The results of the program's operation are presented and discussed. The program is part of the video system of the AvtoNiva pilotless vehicle. This work was partially supported by the Russian Foundation for Basic Research, projects no. 16-08-00880, 18-07-00127, and 19-08-01159.
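    The abstract does not spell out the geometrized-histograms method itself, so the following is only a loosely related, hypothetical illustration of histogram-driven marking segmentation: painted markings are typically brighter than asphalt, so a threshold placed from the intensity distribution of a scanline separates candidate marking pixels from the road surface. The function, its parameter, and the sample values are all illustrative assumptions.

    ```python
    # Hypothetical histogram-based thresholding of one image scanline:
    # label roughly the brightest `marking_fraction` of pixels as
    # road-marking candidates.

    def histogram_threshold(pixels, marking_fraction=0.1):
        """Return an intensity threshold from the sorted pixel distribution."""
        ordered = sorted(pixels)
        cut = int(len(ordered) * (1.0 - marking_fraction))
        return ordered[min(cut, len(ordered) - 1)]

    # One scanline crossing asphalt (~40-46) and a painted line (~228-235):
    row = [40, 42, 45, 41, 230, 235, 44, 43, 228, 46]
    t = histogram_threshold(row, marking_fraction=0.3)
    marking = [p >= t for p in row]
    ```

    A real system would of course work on color histograms over regions rather than a single grayscale scanline, and would add the geometric description the paper's method provides.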

    Real-time lane detection on suburban streets using visual cue integration

    Detecting lane boundaries on suburban streets from video images is a challenging task, mainly because of the difficulty of estimating the complex geometric structure of lane boundaries, the degraded quality of lane markings due to wear, occlusions by traffic, and shadows cast by road-side trees and structures. Most existing techniques for lane boundary detection employ a single visual cue, work only under certain conditions with clear lane markings, and achieve better results when no other on-road objects are present. This paper extends our previous work and presents a novel lane boundary detection algorithm that addresses these issues through the integration of two visual cues. The first visual cue is based on stripe-like features found on lane lines, extracted using a two-dimensional symmetric Gabor filter. The second visual cue is based on a texture characteristic determined using the entropy measure of a predefined neighbourhood around a lane boundary line. The visual cues are then integrated using a rule-based classifier that incorporates a modified sequential covering algorithm to improve robustness. To separate lane boundary lines from other similar features, a road mask is generated using road chromaticity values estimated from a CIE L*a*b* colour transformation. Extraneous points around lane boundary lines are then removed by an outlier-removal procedure based on studentized residuals. The lane boundary lines are then modelled with Bezier spline curves. To validate the algorithm, extensive experimental evaluation was carried out on suburban streets, and the results are presented.
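    The second cue above rests on the entropy of a neighbourhood around a candidate boundary line. A minimal sketch of that measure, assuming grayscale patches given as flat lists (the patch values and sizes below are illustrative, not the paper's parameters):

    ```python
    # Shannon entropy of the intensity distribution in a grayscale patch:
    # a textureless asphalt patch has low entropy, while a patch straddling
    # a painted marking edge mixes intensities and scores higher.
    import math
    from collections import Counter

    def patch_entropy(patch):
        """Shannon entropy (bits) of a patch given as a flat list of values."""
        counts = Counter(patch)
        n = len(patch)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    uniform_road = [120] * 16               # textureless asphalt patch
    marking_edge = [120] * 8 + [250] * 8    # half asphalt, half painted line
    ```

    Thresholding this per-neighbourhood entropy is one simple way such a texture characteristic can be turned into a boundary cue.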

    Vision-based ego-lane analysis system: dataset and algorithms

    Lane detection and analysis are important and challenging tasks in advanced driver assistance systems and autonomous driving. These tasks are required to help autonomous and semi-autonomous vehicles operate safely. Decreasing costs of vision sensors and advances in embedded hardware have boosted lane-related research (detection, estimation, tracking, etc.) over the past two decades. Interest in this topic has increased even further with the demand for advanced driver assistance systems (ADAS) and self-driving cars. Although these problems have been extensively studied independently, there is still a need for studies that propose a combined solution to the multiple problems related to the ego-lane, such as lane departure warning (LDW), lane change detection, lane marking type (LMT) classification, road marking detection and classification, and detection of adjacent lanes. This work proposes a real-time Ego-Lane Analysis System (ELAS) capable of estimating ego-lane position, classifying LMTs and road markings, performing LDW, and detecting lane change events.
    The proposed vision-based system works on a temporal sequence of images. Lane marking features are extracted from both the perspective and Inverse Perspective Mapping (IPM) images and combined to increase robustness. The final estimated lane is modeled as a spline using a combination of methods (Hough lines, Kalman filter, and particle filter). Based on the estimated lane, all other events are detected. Moreover, the proposed system was integrated for experimentation into an autonomous car being developed by the High Performance Computing Laboratory (LCAD) of the Universidade Federal do Espírito Santo (UFES). To validate the proposed algorithms and address the lack of lane datasets in the literature, a new dataset with more than 20 different scenes (more than 15,000 frames) covering a variety of scenarios (urban roads, highways, traffic, shadows, etc.) was created. The dataset was manually annotated and made publicly available to enable evaluation of several events of interest to the research community (i.e., lane estimation, change, and centering; road markings; intersections; LMTs; crosswalks; and adjacent lanes). Furthermore, the system was also validated qualitatively through its integration with the autonomous vehicle. ELAS achieved high detection rates for all real-world events and proved to be ready for real-time applications.