
    Detection of Parked Vehicles using Spatio-temporal Maps

    This paper presents a video-based approach to detect the presence of parked vehicles in street lanes. Potential applications include the detection of illegally and double-parked vehicles in urban scenarios and incident detection on roads. The technique extracts information from low-level feature points (Harris corners) to create spatio-temporal maps that describe what is happening in the scene. The method neither relies on background subtraction nor performs any form of object tracking. The system has been evaluated using private and public data sets and has proven to be robust against common difficulties found in closed-circuit television video, such as varying illumination, camera vibration, momentary occlusion by other vehicles, and high noise levels. © 2011 IEEE.

    This work was supported by the Spanish Government project Movilidad y Automocion en Redes de Transporte Avanzadas (MARTA) under the Consorcios Estrategicos Nacionales de Investigacion Tecnologica (CENIT) program and by the Comision Interministerial de Ciencia y Tecnologia (CICYT) under Contract TEC2009-09146. The Associate Editor for this paper was R. W. Goudy.

    Albiol Colomer, A. J.; Sanchis Pastor, L.; Albiol Colomer, A.; Mossi García, J. M. (2011). Detection of Parked Vehicles using Spatio-temporal Maps. IEEE Transactions on Intelligent Transportation Systems, 12(4), 1277-1291. https://doi.org/10.1109/TITS.2011.2156791
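    The abstract does not spell out how the spatio-temporal maps are built, but the core idea of projecting per-frame Harris-corner activity along a lane axis over time can be sketched as below. This is a minimal illustration in Python with OpenCV, assuming a fixed rectangular lane ROI, a corner-density threshold, and a simple persistence rule; the ROI, bin count, and all thresholds are placeholder assumptions, not the authors' published method or parameters.

```python
# Hedged sketch: corner-density spatio-temporal map for a single lane ROI.
# All thresholds and the persistence rule are illustrative assumptions.
import cv2
import numpy as np

def corner_profile(gray_roi, bins):
    """Project Harris-corner hits onto the lane axis as a 1-D profile."""
    response = cv2.cornerHarris(np.float32(gray_roi), blockSize=2, ksize=3, k=0.04)
    corners = (response > 0.01 * response.max()).astype(np.float32)
    profile = corners.sum(axis=1)  # sum across lane width, one value per row
    # Pool rows into a fixed number of bins (assumes ROI height >= bins).
    starts = np.linspace(0, len(profile), bins, endpoint=False).astype(int)
    return np.add.reduceat(profile, starts)

def build_st_map(frames, roi, bins=64):
    """Stack per-frame lane profiles into a (time x position) map."""
    x, y, w, h = roi
    rows = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
        rows.append(corner_profile(gray, bins))
    return np.vstack(rows)

def parked_positions(st_map, density_thresh=5.0, min_frames=250):
    """Flag lane bins whose corner density stayed high for ~min_frames."""
    active = st_map > density_thresh
    window = active[-min_frames:] if len(active) > min_frames else active
    return np.where(window.mean(axis=0) >= 0.9)[0]
```

    In a map built this way, a stationary vehicle shows up as a horizontal band of persistent corner density, while moving traffic produces short transient streaks, which is what makes a simple persistence test plausible.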

    DeepSignals: Predicting Intent of Drivers Through Visual Signals

    Detecting the intention of drivers is an essential task in self-driving, necessary to anticipate sudden events like lane changes and stops. Turn signals and emergency flashers communicate such intentions, providing seconds of potentially critical reaction time. In this paper, we propose to detect these signals in video sequences by using a deep neural network that reasons about both spatial and temporal information. Our experiments on more than a million frames show high per-frame accuracy in very challenging scenarios.

    Comment: To be presented at the IEEE International Conference on Robotics and Automation (ICRA), 2019
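    As a rough illustration of the combined spatial and temporal reasoning the abstract describes, one common pattern is a convolutional backbone feeding a recurrent layer that emits per-frame logits over signal states. The sketch below assumes a ResNet-18 backbone, an LSTM, and a four-way label set (no signal / left / right / emergency flashers); these are illustrative stand-ins, not the paper's actual architecture.

```python
# Hedged sketch: CNN features + LSTM for per-frame turn-signal classification.
# Backbone, hidden size, and label set are assumptions, not the paper's design.
import torch
import torch.nn as nn
from torchvision import models

class SignalNet(nn.Module):
    def __init__(self, num_classes=4, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # spatial features
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):
        # clips: (batch, time, 3, H, W) crops of the observed vehicle
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)  # (b*t, 512)
        temporal, _ = self.lstm(feats.view(b, t, -1))     # temporal reasoning
        return self.head(temporal)                        # per-frame logits
```

    The recurrence is what lets such a model separate a blinking turn signal from a steadily lit brake light, a distinction a single-frame classifier cannot make.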

    Egocentric Vision-based Future Vehicle Localization for Intelligent Driving Assistance Systems

    Predicting the future location of vehicles is essential for safety-critical applications such as advanced driver assistance systems (ADAS) and autonomous driving. This paper introduces a novel approach to simultaneously predict both the location and scale of target vehicles in the first-person (egocentric) view of an ego-vehicle. We present a multi-stream recurrent neural network (RNN) encoder-decoder model that separately captures object location and scale as well as pixel-level observations for future vehicle localization. We show that incorporating dense optical flow improves prediction results significantly, since it captures information about motion as well as appearance change. We also find that explicitly modeling the future motion of the ego-vehicle improves prediction accuracy, which could be especially beneficial for intelligent and automated vehicles with motion planning capability. To evaluate the performance of our approach, we present a new dataset of first-person videos collected from a variety of scenarios at road intersections, which are particularly challenging moments for prediction because vehicle trajectories are diverse and dynamic.

    Comment: To appear at ICRA 2019
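    A hedged sketch of a multi-stream recurrent encoder-decoder in the spirit of the abstract: one stream encodes past bounding boxes, a second encodes pooled optical-flow features, and planned ego-motion is fed to the decoder at each future step. All dimensions, the residual-box parameterization, and the fusion scheme are illustrative assumptions, not the authors' model.

```python
# Hedged sketch: two-stream GRU encoder-decoder for future bounding boxes.
# Sizes, fusion, and the ego-motion interface are assumptions for illustration.
import torch
import torch.nn as nn

class FutureBoxRNN(nn.Module):
    def __init__(self, flow_dim=128, ego_dim=6, hidden=128, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.box_enc = nn.GRU(4, hidden, batch_first=True)          # (cx, cy, w, h)
        self.flow_enc = nn.GRU(flow_dim, hidden, batch_first=True)  # pooled flow
        self.decoder = nn.GRUCell(2 * hidden + ego_dim, hidden)
        self.out = nn.Linear(hidden, 4)  # per-step box residual

    def forward(self, boxes, flow, ego_motion):
        # boxes: (B, T, 4); flow: (B, T, flow_dim); ego_motion: (B, horizon, ego_dim)
        _, h_box = self.box_enc(boxes)
        _, h_flow = self.flow_enc(flow)
        fused = torch.cat([h_box[-1], h_flow[-1]], dim=-1)  # join both streams
        h, cur, preds = h_box[-1], boxes[:, -1], []
        for t in range(self.horizon):
            h = self.decoder(torch.cat([fused, ego_motion[:, t]], dim=-1), h)
            cur = cur + self.out(h)  # accumulate predicted offsets
            preds.append(cur)
        return torch.stack(preds, dim=1)  # (B, horizon, 4) future boxes
```

    Predicting per-step residuals rather than absolute coordinates keeps the decoder's output small and well-conditioned, a common choice in trajectory forecasting.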