
    Towards Full Automated Drive in Urban Environments: A Demonstration in GoMentum Station, California

    Each year, millions of motor vehicle traffic accidents around the world cause a large number of fatalities, injuries and significant material loss. Automated Driving (AD) has the potential to drastically reduce such accidents. In this work, we focus on the technical challenges that arise from AD in urban environments. We present the overall architecture of an AD system and describe in detail the perception and planning modules. The AD system, built on a modified Acura RLX, was demonstrated on a course at GoMentum Station in California. We demonstrated autonomous handling of 4 scenarios: traffic lights, cross-traffic at intersections, construction zones and pedestrians. The AD vehicle displayed safe behavior and performed consistently in repeated demonstrations with slight variations in conditions. Overall, we completed 44 runs, encompassing 110 km of automated driving, with only 3 cases where the driver intervened to take control of the vehicle, mostly due to errors in GPS positioning. Our demonstration showed that robust and consistent behavior in urban scenarios is possible, yet more investigation is necessary for a full-scale roll-out on public roads. Comment: Accepted to Intelligent Vehicles Conference (IV 2017).
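
    The abstract describes a modular architecture with separate perception and planning stages feeding vehicle control. The Python sketch below is only a schematic of such a loop under assumed interfaces; the class and method names (Obstacle, Perception, Planner, drive_loop) are illustrative and not taken from the paper.

    # Minimal sketch of a modular AD control loop with separate perception and
    # planning stages. All names are illustrative assumptions, not the authors'
    # actual interfaces.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Obstacle:
        x: float      # longitudinal position (m) in the vehicle frame
        y: float      # lateral position (m) in the vehicle frame
        kind: str     # e.g. "pedestrian", "vehicle", "traffic_light"

    class Perception:
        def detect(self, sensor_frame) -> List[Obstacle]:
            """Fuse sensor data into a list of tracked obstacles."""
            raise NotImplementedError

    class Planner:
        def plan(self, obstacles: List[Obstacle], route):
            """Return a short-horizon trajectory that respects the obstacles."""
            raise NotImplementedError

    def drive_loop(perception: Perception, planner: Planner, vehicle, route):
        """One perception-plan-act cycle per sensor frame."""
        while vehicle.is_engaged():
            frame = vehicle.read_sensors()
            obstacles = perception.detect(frame)
            trajectory = planner.plan(obstacles, route)
            vehicle.execute(trajectory)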

    Video analysis based vehicle detection and tracking using an MCMC sampling framework

    This article presents a probabilistic method for vehicle detection and tracking through the analysis of monocular images obtained from a vehicle-mounted camera. The method is designed to address the main shortcomings of traditional particle filtering approaches, namely Bayesian methods based on importance sampling, for use in traffic environments. These methods do not scale well when the dimensionality of the feature space grows, which creates significant limitations when tracking multiple objects. Alternatively, the proposed method is based on a Markov chain Monte Carlo (MCMC) approach, which allows efficient sampling of the feature space. The method involves important contributions in both the motion and the observation models of the tracker. Indeed, as opposed to particle filter-based tracking methods in the literature, which typically resort to observation models based on appearance or template matching, in this study a likelihood model that combines appearance analysis with information from motion parallax is introduced. Regarding the motion model, a new interaction treatment is defined based on Markov random fields (MRF) that allows for the handling of possible inter-dependencies in vehicle trajectories. As for vehicle detection, the method relies on a supervised classification stage using support vector machines (SVM). The contribution in this field is twofold. First, a new descriptor based on the analysis of gradient orientations in concentric rectangles is defined. This descriptor involves a much smaller feature space compared to traditional descriptors, which are too costly for real-time applications. Second, a new vehicle image database is generated to train the SVM and made public. The proposed vehicle detection and tracking method is proven to outperform existing methods and to successfully handle challenging situations in the test sequences.
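
    The tracker described above combines an MCMC sampler with an MRF interaction model over vehicle trajectories. A schematic Metropolis-Hastings update in Python, under assumed state and likelihood representations, could look like the following; the interaction penalty, proposal and step size are placeholders, not the paper's actual models.

    # Schematic Metropolis-Hastings update for a multi-vehicle tracking state,
    # in the spirit of the MCMC approach summarized above. Each vehicle
    # hypothesis is assumed to be a dict with at least "x" and "width" keys.
    import math
    import random

    def log_mrf_interaction(state):
        """Log of a pairwise MRF-style term penalizing overlapping hypotheses."""
        penalty = 0.0
        for i, a in enumerate(state):
            for b in state[i + 1:]:
                overlap = max(0.0, 1.0 - abs(a["x"] - b["x"]) / a["width"])
                penalty += overlap
        return -5.0 * penalty

    def mcmc_step(state, log_likelihood, step_size=0.5):
        """Propose a move for one randomly chosen vehicle and accept or reject it."""
        proposal = [dict(v) for v in state]
        target = random.randrange(len(proposal))
        proposal[target]["x"] += random.gauss(0.0, step_size)

        current = log_likelihood(state) + log_mrf_interaction(state)
        candidate = log_likelihood(proposal) + log_mrf_interaction(proposal)

        if math.log(random.random() + 1e-12) < candidate - current:
            return proposal  # accept the proposed move
        return state         # reject and keep the current state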

    Developing Predictive Models of Driver Behaviour for the Design of Advanced Driving Assistance Systems

    Worldwide, injuries in vehicle accidents have been on the rise in recent years, mainly due to driver error. The main objective of this research is to develop a predictive system for driving maneuvers by analyzing the cognitive behavior (cephalo-ocular) and the driving behavior of the driver (how the vehicle is being driven). Advanced Driving Assistance Systems (ADAS) include different driving functions, such as vehicle parking, lane departure warning, blind spot detection, and so on. While much research has been performed on developing automated co-driver systems, little attention has been paid to the fact that the driver plays an important role in driving events. Therefore, it is crucial to monitor events and factors that directly concern the driver. To this end, we perform a quantitative and qualitative analysis of driver behavior to find its relationship with driver intentionality and driving-related actions. We have designed and developed an instrumented vehicle (RoadLAB) that is able to record several synchronized streams of data, including the surrounding environment of the driver, vehicle functions and driver cephalo-ocular behavior, such as gaze/head information. We subsequently analyze and study the behavior of several drivers to determine whether there is a meaningful relation between driver behavior and the next driving maneuver.
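
    The abstract frames maneuver prediction as learning a mapping from combined cephalo-ocular and vehicle signals to the next maneuver. The sketch below shows one simple way such a predictor could be set up as a supervised classifier; the feature set, toy values, and classifier choice are assumptions for illustration, not taken from the thesis.

    # Minimal sketch: predicting the next driving maneuver from gaze/head and
    # vehicle features with a generic supervised classifier. Feature names and
    # data values are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: [gaze_yaw, gaze_pitch, head_yaw, speed_mps, steering_angle]
    X = np.array([
        [0.30, -0.05, 0.25, 13.9, 0.02],    # glancing left before a left turn
        [-0.28, -0.04, -0.22, 12.1, -0.01],  # glancing right before a right turn
        [0.01, 0.00, 0.02, 22.4, 0.00],      # looking ahead, lane keeping
    ])
    y = np.array(["turn_left", "turn_right", "lane_keep"])

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print(clf.predict([[0.27, -0.03, 0.20, 14.5, 0.01]]))  # -> ["turn_left"]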

    Integration of ADAS algorithm in a Vehicle Prototype

    For several years, INRIA and Toyota Europe have been working together on the development of algorithms for ADAS. This paper describes the main results of this successful joint project, applied to a prototype vehicle equipped with several sensors. It details the framework, the steps taken and the motivation behind the developed technologies, and addresses the requirements of the automotive industry.

    Provident vehicle detection at night for advanced driver assistance systems

    In recent years, computer vision algorithms have become more powerful, which has enabled technologies such as autonomous driving to evolve rapidly. However, current algorithms mainly share one limitation: they rely on directly visible objects. This is a significant drawback compared to human behavior, where visual cues caused by objects (e.g., shadows) are already used intuitively to retrieve information or anticipate occurring objects. While driving at night, this performance deficit becomes even more obvious: humans already process the light artifacts caused by the headlamps of oncoming vehicles to anticipate where they will appear, whereas current object detection systems require that the oncoming vehicle is directly visible before it can be detected. Based on previous work on this subject, in this paper we present a complete system that detects the light artifacts caused by the headlights of oncoming vehicles, and thus detects that a vehicle is approaching before it is directly visible (denoted as provident vehicle detection). To this end, an entire algorithm architecture is investigated, including the detection in the image space, the three-dimensional localization, and the tracking of light artifacts. To demonstrate the usefulness of such an algorithm, the proposed algorithm is deployed in a test vehicle to use the detected light artifacts to control the glare-free high beam system proactively (reacting before the oncoming vehicle is directly visible). Using this experimental setting, the provident vehicle detection system's time benefit compared to an in-production computer vision system is quantified. Additionally, the glare-free high beam use case provides a real-time and real-world visualization interface of the detection results by considering the adaptive headlamps as projectors. With this investigation of provident vehicle detection, we want to draw attention to the unconventional sensing task of detecting objects providently (detection based on observable visual cues the objects cause before they are visible) and further close the performance gap between human behavior and computer vision algorithms to bring autonomous and automated driving a step forward.
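
    The first stage of the pipeline described above is detecting bright light artifacts in the image space. The toy sketch below shows one simple way such a stage could be approximated with plain thresholding in OpenCV; the thresholds and blob sizes are assumptions, and the paper's actual detector, 3D localization, tracking, and headlamp control are not reproduced here.

    # Toy sketch: find bright blobs (possible headlight glow on the road or
    # surroundings) in a night-time camera frame via simple thresholding.
    # Parameter values are illustrative assumptions.
    import cv2

    def detect_light_artifacts(frame_bgr, threshold=200, min_area=20):
        """Return bounding boxes of bright blobs that may precede a visible vehicle."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)
        _, mask = cv2.threshold(blurred, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]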