
    Review of Environment Perception for Intelligent Vehicles

    An overview of environment perception for intelligent vehicles is given, surveying state-of-the-art algorithms and modeling methods and summarizing their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. An integrated lane and vehicle tracking system for driver assistance improves the performance of both the lane-tracking and vehicle-tracking modules; without specific hardware or software optimizations, the fully implemented system runs at a near-real-time speed of 11 frames per second. Vision-based vehicle detection, tracking, and behavior understanding are reviewed in the context of sensor-based on-road surround analysis, detailing advances in monocular, stereo-vision, and active sensor–vision fusion approaches to on-road vehicle detection. Finally, traffic sign detection for driver assistance is covered, breaking traffic sign recognition (TSR) down into its main stages: segmentation, feature extraction, and final sign detection, as sketched below.
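    To make the three TSR stages named above concrete, here is a minimal sketch of a classic detection pipeline (segmentation, then feature extraction, then final sign detection). It is an illustration, not any surveyed system's method: it assumes red, roughly circular signs, and the HSV thresholds, area cutoff, and circularity bound are hypothetical, untuned values.

```python
# Minimal three-stage TSR detection sketch: segmentation -> feature
# extraction -> final sign detection. Thresholds are illustrative only.
import cv2
import numpy as np

def detect_red_circular_signs(frame: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of red, circular sign candidates."""
    # 1) Segmentation: isolate red pixels in HSV (red wraps around hue 0).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_red = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    upper_red = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    mask = lower_red | upper_red
    # 2) Feature extraction: contours and a simple shape descriptor.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 200:  # discard small noise blobs (hypothetical cutoff)
            continue
        perimeter = cv2.arcLength(c, True)
        circularity = 4 * np.pi * area / (perimeter ** 2)  # 1.0 for a circle
        # 3) Final detection: keep only roughly circular candidates.
        if circularity > 0.7:
            boxes.append(cv2.boundingRect(c))
    return boxes
```

    Real TSR systems would follow such a detector with a classification stage (e.g. template matching or a learned classifier) to identify which sign was found.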

    The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping

    Many tasks performed by autonomous vehicles, such as road marking detection, object tracking, and path planning, are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at farther distances, due to the finite resolution of the camera, limiting its applicability. In this paper, we present an adversarial learning approach for generating a significantly improved IPM from a single camera image in real time. The generated bird's-eye-view images contain sharper features (e.g. road markings) and a more homogeneous illumination, while (dynamic) objects are automatically removed from the scene, thus revealing the underlying road layout in an improved fashion. We demonstrate our framework using real-world data from the Oxford RobotCar Dataset and show that scene understanding tasks directly benefit from our boosted IPM approach.
    Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 2019
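    For context, the conventional IPM that this paper improves on can be computed as a planar homography. The sketch below, using OpenCV, assumes a flat road and hand-picked point correspondences (the src/dst coordinates are hypothetical placeholders); it is not the paper's learned adversarial approach, and its output exhibits exactly the stretching and blurring at distance that the boosted IPM addresses.

```python
# Minimal sketch of classical homography-based IPM, assuming a planar road.
# The point correspondences below are hypothetical and would normally come
# from camera calibration. Not the paper's learned (adversarial) method.
import cv2
import numpy as np

def ipm_warp(frame: np.ndarray) -> np.ndarray:
    """Remap a front-facing camera image to a top-down (bird's-eye) view."""
    h, w = frame.shape[:2]
    # Four points on the road plane in the source image (a trapezoid around
    # the lane), and where they should land in the top-down output.
    src = np.float32([[w * 0.45, h * 0.60], [w * 0.55, h * 0.60],
                      [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
    dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                      [w * 0.75, h], [w * 0.25, h]])
    H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography
    return cv2.warpPerspective(frame, H, (w, h))
```

    Because each distant image row is stretched over many output pixels, far-away regions of the warped view appear smeared, which is the artifact the paper's generative approach is designed to remove.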