
    Lane detection by orientation and length discrimination

    This paper describes a novel lane detection algorithm for visual traffic surveillance applications under the auspices of intelligent transportation systems. Traditional lane detection methods for vehicle navigation typically use spatial masks to isolate instantaneous lane information from on-vehicle camera images. Where surveillance is concerned, complete and multiple-lane information is essential for tracking vehicles and monitoring lane-change frequency from overhead cameras, and traditional methods become inadequate. The algorithm presented in this paper extracts complete multiple-lane information by exploiting the prominent orientation and length features of lane markings and curb structures to discriminate against other, minor features. Essentially, edges are first extracted from the background of a traffic sequence, then thinned and approximated by straight lines. From the resulting set of straight lines, orientation and length discrimination is carried out three-dimensionally with the aid of a two-dimensional (2-D) to three-dimensional (3-D) coordinate transformation and K-means clustering. In this way, edges with strong orientation and length affinity are retained and clustered, while short and isolated edges are eliminated. Overall, the merits of this algorithm are as follows. First, it works well under practical visual surveillance conditions. Second, using K-means for clustering offers a robust approach. Third, the algorithm is efficient, as it requires only one image frame to determine the road center lines. Fourth, it computes multiple-lane information simultaneously. Fifth, the center lines determined are accurate enough for the intended application.
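    The orientation and length discrimination step lends itself to a compact sketch. The following Python fragment is an illustration only, not the paper's implementation: it clusters already-extracted line segments by orientation and length with K-means and keeps the clusters whose members are long and mutually consistent. The feature scaling, cluster count and length threshold are assumed values.

import numpy as np
from sklearn.cluster import KMeans

def discriminate_lane_lines(segments, n_clusters=4, min_mean_length=2.0):
    """segments: (N, 4) array of (x1, y1, x2, y2) endpoints, ideally expressed
    in road-plane coordinates after the 2-D to 3-D transformation.
    min_mean_length is an assumed threshold in the units of that frame."""
    segments = np.asarray(segments, dtype=float)
    dx = segments[:, 2] - segments[:, 0]
    dy = segments[:, 3] - segments[:, 1]
    lengths = np.hypot(dx, dy)
    # Fold orientation into [0, pi) so opposite directions coincide; the
    # wrap-around at 0/pi is ignored here for simplicity.
    angles = np.mod(np.arctan2(dy, dx), np.pi)

    # Cluster in a normalized (orientation, length) feature space.
    features = np.column_stack([angles / np.pi, lengths / lengths.max()])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

    kept = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        # Keep clusters of long, mutually consistent segments; drop clusters
        # dominated by short or isolated edges.
        if len(idx) >= 2 and lengths[idx].mean() >= min_mean_length:
            kept.append(segments[idx])
    return kept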

    The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review

    Perception is a vital part of driving. Every year, the loss of visibility due to snow, fog, and rain causes serious accidents worldwide. It is therefore important to understand the impact of weather conditions on perception performance when driving on highways and in urban traffic. The goal of this paper is to provide a survey of the sensing technologies used to detect the surrounding environment and obstacles during driving maneuvers in different weather conditions. Firstly, some important historical milestones are presented. Secondly, state-of-the-art automated driving applications (adaptive cruise control, pedestrian collision avoidance, etc.) are introduced, with a focus on all-weather operation. Thirdly, the sensor technologies most commonly employed by automated driving applications (radar, lidar, ultrasonic, camera, and far-infrared) are studied. Furthermore, the difference between the current and expected states of performance is examined with the aid of spider charts. As a result, a fusion perspective is proposed that can fill the identified gaps and increase the robustness of the perception system.

    Driver Behavior Analysis Based on Real On-Road Driving Data in the Design of Advanced Driving Assistance Systems

    The number of vehicles on the roads increases every day. According to the National Highway Traffic Safety Administration (NHTSA), the overwhelming majority of serious crashes (over 94 percent) are caused by human error. The broad aim of this research is to develop a driver behavior model using real on-road data for the design of Advanced Driving Assistance Systems (ADASs). For several decades, these systems have been a focus of many researchers and vehicle manufacturers seeking to increase vehicle and road safety and assist drivers in different driving situations. Some studies have concentrated on drivers as the main actor in most driving circumstances. The way a driver monitors the traffic environment partially indicates the level of driver awareness. As an objective, we carry out a quantitative and qualitative analysis of driver behavior to identify the relationship between a driver's intention and his/her actions. The RoadLAB project developed an instrumented vehicle equipped with an On-Board Diagnostic system (OBD-II), a stereo imaging system, and a non-contact eye tracker to record synchronized driving data on the driver's cephalo-ocular behavior, the vehicle itself, and the traffic environment. We analyze several behavioral features of the drivers to identify potentially relevant relationships between driver behavior and the anticipation of the next driver maneuver, as well as to reach a better understanding of driver behavior while driving. Moreover, we detect and classify road lanes in urban and suburban areas, as they provide contextual information. Our experimental results show that our proposed models achieved an F1 score of 84% for driver maneuver prediction and an accuracy of 94% for lane type classification.
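    For reference, the F1 score and accuracy quoted above are standard classification metrics. The short sketch below only shows how such figures are computed from predicted and ground-truth labels; the label values and class encoding are made up and are not from the RoadLAB data.

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground truth and predictions for driver-maneuver prediction
# (e.g., 0 = keep lane, 1 = change left, 2 = change right).
y_true = [0, 0, 1, 2, 1, 0, 2, 2]
y_pred = [0, 0, 1, 2, 0, 0, 2, 1]

print("accuracy:", accuracy_score(y_true, y_pred))
# Macro-averaged F1 treats each maneuver class equally; the paper does not
# state which averaging was used, so this choice is an assumption.
print("F1 (macro):", f1_score(y_true, y_pred, average="macro"))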

    Bimodal sound source tracking applied to road traffic monitoring

    The constant increase in road traffic requires ever-closer road network monitoring. Awareness of traffic characteristics in real time, as well as of their historical trends, facilitates decision-making for flow regulation, triggering relief operations, ensuring motorists' safety, and optimizing transport infrastructures. Today, the heterogeneity of the available data makes their processing complex and expensive (multiple sensors with different technologies, placed in different locations, each with its own data format, unsynchronized, etc.). This leads metrologists to develop "smarter" monitoring devices, i.e. devices capable of providing all the necessary data, synchronized, from a single measurement point, with no impact on the traffic flow itself and ideally without complex installation. This work contributes to that objective through the development of a passive, compact, non-intrusive, acoustic-based system composed of a microphone array with a small number of elements placed on the roadside. The proposed signal processing techniques enable vehicle detection and the estimation of vehicle speed and wheelbase length as vehicles pass by. The sound sources produced by tyre/road interactions are localized using generalized cross-correlation functions between sensor pairs. These successive correlation measurements are filtered using a sequential Monte Carlo method (particle filter), enabling, on the one hand, the simultaneous tracking of multiple vehicles (that follow or pass each other) and, on the other hand, discrimination between useful sound sources and interfering noises. This document focuses on two-axle road vehicles only. The two tyre/road interactions (front and rear) observed by a microphone array on the roadside are modeled as two stochastic, zero-mean and uncorrelated processes, spatially separated by the wheelbase length. This bimodal sound source model defines a specific particle filter, called the bimodal particle filter, which is presented here. Compared to the classical (unimodal) particle filter, it achieves better robustness for speed estimation, especially under harsh observation conditions. Moreover, the proposed algorithm enables wheelbase length estimation through purely passive acoustic measurement. An innovative microphone array design methodology, based on a mathematical expression of the observation and on the tracking methodology itself, is also presented. The developed algorithms are validated and assessed through in-situ measurements. Estimates provided by the acoustic signal processing are compared with standard radar measurements and against video monitoring images. Although presented in a purely road-related applied context, the developed methodologies can, at least in part, be applied to rail, aerial, underwater or industrial metrology.
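    The localization step described above relies on generalized cross-correlation between microphone pairs. The following sketch assumes a PHAT weighting and illustrative variable names rather than reproducing the thesis' processing chain; it estimates the time difference of arrival between two microphone signals, and the bimodal particle filter that consumes these estimates is not shown.

import numpy as np

def gcc_phat(sig_a, sig_b, fs, max_tau=None):
    """Estimate the time difference of arrival (TDOA) between two microphones."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12      # PHAT weighting: keep phase information only
    cc = np.fft.irfft(cross, n=n)
    cc = np.roll(cc, n // 2)            # put zero lag in the middle of the array
    lags = (np.arange(n) - n // 2) / fs
    if max_tau is not None:             # restrict to physically plausible delays
        keep = np.abs(lags) <= max_tau
        cc, lags = cc[keep], lags[keep]
    return lags[np.argmax(np.abs(cc))]  # positive tau: sig_a is delayed w.r.t. sig_b

# Successive TDOA estimates over short frames would feed the tracking filter.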

    Advances in Automated Driving Systems

    Electrification, automation of vehicle control, digitalization and new mobility are the megatrends in automotive engineering, and they are strongly connected. While many demonstrations of highly automated vehicles have been made worldwide, many challenges remain in bringing automated vehicles to the market for private and commercial use. The main challenges are: reliable machine perception; accepted standards for vehicle type approval and homologation; verification and validation of functional safety, especially for SAE level 3+ systems; legal and ethical implications; acceptance of vehicle automation by occupants and society; interaction between automated and human-controlled vehicles in mixed traffic; human–machine interaction and usability; manipulation, misuse and cyber-security; and the system costs of hardware, software and development effort. This Special Issue was prepared in the years 2021 and 2022 and includes 15 papers with original research related to recent advances in the aforementioned challenges. The topics of this Special Issue cover: machine perception for SAE L3+ driving automation; trajectory planning and decision-making in complex traffic situations; X-by-Wire system components; verification and validation of SAE L3+ systems; misuse, manipulation and cybersecurity; human–machine interactions, driver monitoring and driver-intention recognition; road infrastructure measures for the introduction of SAE L3+ systems; and solutions for interactions between human- and machine-controlled vehicles in mixed traffic.

    Intelligent Transportation Related Complex Systems and Sensors

    Building around innovative services related to different modes of transport and traffic management, intelligent transport systems (ITS) are being widely adopted worldwide to improve the efficiency and safety of transportation. They enable users to be better informed and to make safer, more coordinated, and smarter decisions about the use of transport networks. Current ITSs are complex systems made up of several components/sub-systems characterized by time-dependent interactions among themselves. Examples of these transportation-related complex systems include road traffic sensors, autonomous/automated cars, smart cities, smart sensors, virtual sensors, traffic control systems, smart roads, logistics systems, smart mobility systems, and many others emerging from niche areas. The efficient operation of these complex systems requires: (i) efficient solutions to the issues of the sensors/actuators used to capture and control the physical parameters of these systems, as well as to the quality of the data collected from them; (ii) tackling complexity using simulation and analytical modelling techniques; and (iii) applying optimization techniques to improve the performance of these systems. It includes twenty-four papers, which cover scientific concepts, frameworks, architectures and various other ideas on the analytics, trends and applications of transportation-related data.

    Teaching a Robot to Drive - A Skill Learning Inspired Approach

    Robots can make our lives easier by taking over tasks that are unpleasant or even dangerous for us. To be deployed efficiently, they should be autonomous, adaptive, and easy to instruct. Traditional 'white-box' approaches in robotics rely on the engineer's understanding of the underlying physical structure of the given problem. Based on this understanding, the engineer can devise a possible solution and implement it in the system. This approach is very powerful but nonetheless limited. Its most important drawback is that systems built in this way depend on predefined knowledge, so every new behavior requires the same costly development cycle. In contrast, humans and some other animals are not restricted to their innate behaviors but can acquire numerous additional skills over their lifetime. Moreover, they do not appear to need detailed knowledge of the (physical) workings of a given task to do so. These properties are also desirable for artificial systems. In this dissertation we therefore investigate the hypothesis that principles of human skill learning can lead to alternative methods for adaptive system control. We examine this hypothesis on the task of autonomous driving, which is a classic system control problem and offers the potential for a wide range of applications. The specific task is to learn a basic, anticipatory driving behavior from a human teacher. After highlighting relevant aspects of human skill learning and introducing the concepts of 'internal models' and 'chunking', we describe how these are applied to the given task. We realize chunking by means of a database in which examples of human driving behavior are stored and linked to descriptions of the visually perceived road trajectory. This is first implemented in a laboratory environment using a robot and later, in the course of the European DRIVSCO project, transferred to a real car. We also investigate the learning of visual 'forward models', which belong to the internal models, and their effect on the robot's control performance. The main result of this interdisciplinary and application-oriented work is a system that is able to generate appropriate action plans in response to the visually perceived road trajectory without requiring metric information. The predicted actions in the laboratory environment are steering and speed; for the real car they are steering and acceleration, although the system's predictive capacity for the latter is limited. In other words, the robot learns autonomous driving from a human teacher, and the car learns to predict human driving behavior. The latter was successfully demonstrated during the project review by an international team of experts. The results of this work are relevant for applications in robot control, particularly in the field of intelligent driver assistance systems.
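    The chunking mechanism described above amounts to an associative memory that maps descriptions of the perceived road trajectory to stored human action plans. The sketch below illustrates that idea only; the descriptor (curvature samples of the lane ahead), the action encoding (steering, speed) and the nearest-neighbour lookup are hypothetical simplifications, not the thesis' actual representation.

import numpy as np

class DrivingChunkDB:
    """Associative store of (road description, human action plan) examples."""

    def __init__(self):
        self._descriptors = []   # descriptions of the perceived road trajectory
        self._actions = []       # action plans demonstrated by the human teacher

    def add_example(self, road_descriptor, action_plan):
        self._descriptors.append(np.asarray(road_descriptor, dtype=float))
        self._actions.append(np.asarray(action_plan, dtype=float))

    def plan(self, road_descriptor):
        """Return the action plan of the most similar stored road description."""
        query = np.asarray(road_descriptor, dtype=float)
        dists = [np.linalg.norm(query - d) for d in self._descriptors]
        return self._actions[int(np.argmin(dists))]

# Usage with made-up numbers: descriptors are curvature samples of the lane
# ahead, action plans are (steering, speed) pairs taught by the human driver.
db = DrivingChunkDB()
db.add_example([0.00, 0.00, 0.01], action_plan=[0.02, 1.0])   # near-straight road
db.add_example([0.05, 0.08, 0.10], action_plan=[0.35, 0.6])   # right curve, slow down
print(db.plan([0.04, 0.07, 0.09]))                            # -> plan stored for the curve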

    Real-time lane marking detection system for intelligent vehicles using image processing

    Mobility is an imprint of our civilization. Both freight and passenger transport share a huge infrastructure of connecting links operated with the support of a sophisticated logistic system. As an optimized symbiosis of mechanical and electrical modules, vehicles evolve continuously with the integration of technological advances and are engineered to offer the best in comfort, safety, speed and economy. Regulations organize the flow of road transportation and its interactions, stipulating rules to avoid conflicts. But driving can become stressful under different conditions, leaving human drivers prone to misjudgments and creating accident conditions. Efforts to reduce traffic accidents, which may cause injuries and even deaths, range from re-education campaigns to new technologies. These topics have increasingly attracted the attention of researchers and industries towards image-based Intelligent Transportation Systems that aim to prevent accidents and help the driver interpret urban road markings and signage. This work presents a study of real-time lane marking detection techniques for urban and intercity environments, with the goal of highlighting the road's lane markings for the driver or for an autonomous vehicle, providing better control over the traffic area assigned to the vehicle and alerts about potential risk situations. The main contribution of this work is to optimize how image processing techniques are used to extract the lane markings, in order to reduce the computational cost of the system. To achieve this optimization, small search areas of fixed size and dynamic positioning were defined. These search areas isolate the regions of the image where the lane markings are contained, reducing by up to 75% the total area to which the lane extraction techniques are applied. The experimental results show that the algorithm is robust to wide variations in ambient lighting, shadows, and pavements of different colors, both in urban environments and on highways and motorways. The results show an average correct detection rate of 98.1%, with an average processing time of 13.3 ms.
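    The fixed-size, dynamically positioned search areas can be sketched as follows. This is an illustration under assumed values (window size, brightness threshold, re-centering rule), not the dissertation's implementation: a small window per marking is re-centered each frame on the previously detected marking, so the heavier extraction steps run only inside it.

import numpy as np

WIN_W, WIN_H = 80, 60  # fixed search-window size in pixels (assumed values)

def extract_marking_in_window(gray_frame, center_x, row_y, thresh=200):
    """Return the refined x-position of a bright lane marking inside a small
    window of the grayscale frame, or None if nothing bright enough is found."""
    h, w = gray_frame.shape
    x0 = int(np.clip(center_x - WIN_W // 2, 0, w - WIN_W))
    y0 = int(np.clip(row_y - WIN_H // 2, 0, h - WIN_H))
    window = gray_frame[y0:y0 + WIN_H, x0:x0 + WIN_W]

    ys, xs = np.nonzero(window >= thresh)    # bright pixels = marking candidates
    if xs.size == 0:
        return None
    return x0 + xs.mean()                    # window coordinates -> frame coordinates

def track_marking(frames, start_x, row_y):
    """Follow one marking across frames, re-centering the window each time."""
    x = start_x
    for frame in frames:
        found = extract_marking_in_window(frame, x, row_y)
        if found is not None:
            x = found                        # dynamic repositioning of the search area
        yield x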

    Vehicle Tracking and Motion Estimation Based on Stereo Vision Sequences

    In this dissertation, a novel approach for estimating the trajectories of road vehicles such as cars, vans, or motorbikes, based on stereo image sequences, is presented. Moving objects are detected and reliably tracked in real time from within a moving car. The resulting information on the pose and motion state of other moving objects with respect to the ego-vehicle is an essential basis for future driver assistance and safety systems, e.g., for collision prediction. The focus of this contribution is on oncoming traffic, while most existing work in the literature addresses tracking the lead vehicle. The overall approach is generic and scalable to a variety of traffic scenes, including inner-city, country road, and highway scenarios. A considerable part of this thesis addresses oncoming traffic at urban intersections. The parameters to be estimated include the 3D position and orientation of an object relative to the ego-vehicle, as well as the object's shape, dimensions, velocity, acceleration, and rotational velocity (yaw rate). The key idea is to derive these parameters from a set of tracked 3D points on the object's surface, which are registered to a time-consistent object coordinate system, by means of an extended Kalman filter. Combining the rigid 3D point cloud model with the dynamic model of a vehicle is one main contribution of this thesis. Vehicle tracking at intersections requires covering a wide range of object dynamics, since vehicles can turn quickly. Three different approaches for tracking objects during highly dynamic turn maneuvers, up to extreme maneuvers such as skidding, are presented and compared. These approaches allow for an online adaptation of the filter parameter values, overcoming manual parameter tuning that depends on the dynamics of the tracked object in the scene. This is the second main contribution. Further contributions include two initialization methods, a robust outlier handling scheme, a probabilistic approach for assigning new points to a tracked object, and a mid-level fusion of the vision-based approach with a radar sensor. The overall system is systematically evaluated on both simulated and real-world data. The experimental results show that the proposed system is able to accurately estimate the object pose and motion parameters in a variety of challenging situations, including night scenes, quick turn maneuvers, and partial occlusions. The limits of the system are also carefully investigated.
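    The combination of a vehicle motion model with a Kalman filter can be illustrated with a minimal prediction step under a constant turn rate and velocity (CTRV) model. The state layout, noise values and the numerical Jacobian below are assumptions made for this sketch; the thesis' filter additionally registers the tracked 3D points and fuses a radar sensor in its measurement update, which is omitted here.

import numpy as np

def ctrv_f(x, dt):
    """Constant turn rate and velocity motion model; x = [px, py, yaw, v, yaw_rate]."""
    px, py, yaw, v, w = x
    if abs(w) > 1e-6:
        px = px + v / w * (np.sin(yaw + w * dt) - np.sin(yaw))
        py = py + v / w * (np.cos(yaw) - np.cos(yaw + w * dt))
    else:                                # straight-line limit for near-zero yaw rate
        px = px + v * np.cos(yaw) * dt
        py = py + v * np.sin(yaw) * dt
    return np.array([px, py, yaw + w * dt, v, w])

def ekf_predict(x, P, dt, Q):
    """One extended Kalman filter prediction step with a numerical Jacobian."""
    F = np.zeros((5, 5))
    eps = 1e-5
    for i in range(5):
        d = np.zeros(5)
        d[i] = eps
        F[:, i] = (ctrv_f(x + d, dt) - ctrv_f(x - d, dt)) / (2 * eps)
    return ctrv_f(x, dt), F @ P @ F.T + Q

# Example: a vehicle driving at 10 m/s while turning at 0.3 rad/s.
x = np.array([0.0, 0.0, 0.0, 10.0, 0.3])
P = np.eye(5) * 0.1
Q = np.diag([0.05, 0.05, 0.01, 0.5, 0.05])   # assumed process noise
x, P = ekf_predict(x, P, dt=0.1, Q=Q)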