The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles such as road marking detection,
object tracking, and path planning are simpler in bird's-eye view. Hence,
Inverse Perspective Mapping (IPM) is often applied to remove the perspective
effect from a vehicle's front-facing camera and to remap its images into a 2D
domain, resulting in a top-down view. However, this leads to unnatural
blurring and stretching of objects at greater distances, owing to the limited
resolution of the camera, which restricts applicability. In this paper, we present an
adversarial learning approach for generating a significantly improved IPM from
a single camera image in real time. The generated bird's-eye-view images
contain sharper features (e.g. road markings) and a more homogeneous
illumination, while (dynamic) objects are automatically removed from the scene,
thus revealing the underlying road layout in an improved fashion. We
demonstrate our framework using real-world data from the Oxford RobotCar
Dataset and show that scene understanding tasks directly benefit from our
boosted IPM approach.

Comment: equal contribution of first two authors, 8 full pages, 6 figures, accepted at IV 2019
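The classical IPM that this work boosts reduces, under a flat-ground assumption, to applying a single homography to the image. As a rough sketch (the intrinsics, 10-degree pitch, and 1.5 m mounting height below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def ipm_homography(K, pitch, height):
    """Homography taking image pixels to metric (x, y) coordinates on a
    flat road plane z = 0, for a pinhole camera pitched towards the road."""
    c, s = np.cos(pitch), np.sin(pitch)
    # World-to-camera rotation: pure pitch about the x-axis.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0, c, -s],
                  [0.0, s, c]])
    # Camera centre sits `height` metres above the road plane.
    t = -R @ np.array([0.0, 0.0, height])
    # For ground-plane points (z = 0) the projection collapses to a 3x3
    # homography built from the first two rotation columns and t.
    H_ground_to_image = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return np.linalg.inv(H_ground_to_image)

# Illustrative camera: 800 px focal length, principal point at the centre
# of a 1280x720 image, pitched 10 degrees, mounted 1.5 m above the road.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
H = ipm_homography(K, pitch=np.deg2rad(10.0), height=1.5)

# Map one pixel near the bottom of the image to road-plane coordinates.
uvw = H @ np.array([640.0, 700.0, 1.0])
x_road, y_road = uvw[0] / uvw[2], uvw[1] / uvw[2]
```

This per-pixel remapping is exactly what produces the blurring and stretching at range that the paper's adversarial approach is designed to overcome.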
Online Inference and Detection of Curbs in Partially Occluded Scenes with Sparse LIDAR
Road boundaries, or curbs, provide autonomous vehicles with essential
information when interpreting road scenes and generating behaviour plans.
Although curbs convey important information, they are difficult to detect in
complex urban environments (in particular in comparison to other elements of
the road such as traffic signs and road markings). These difficulties arise
from occlusions by other traffic participants as well as changing lighting
and/or weather conditions. Moreover, road boundaries have various shapes,
colours and structures while motion planning algorithms require accurate and
precise metric information in real time to generate their plans.
In this paper, we present a real-time LIDAR-based approach for accurate curb
detection around the vehicle (360 degrees). Our approach deals with both
occlusions from traffic and changing environmental conditions. To this end, we
project 3D LIDAR pointcloud data into 2D bird's-eye view images (akin to
Inverse Perspective Mapping). These images are then processed by trained deep
networks to infer both visible and occluded road boundaries. Finally, a
post-processing step filters detected curb segments and tracks them over time.
Experimental results demonstrate the effectiveness of the proposed approach on
real-world driving data. Hence, we believe that our LIDAR-based approach
provides an efficient and effective way to detect visible and occluded curbs
around the vehicle in challenging driving scenarios.

Comment: Accepted at the 22nd IEEE Intelligent Transportation Systems Conference (ITSC19), October 2019, Auckland, New Zealand
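The projection of 3D LIDAR data into 2D bird's-eye-view images described above can be sketched as a simple height-map rasterisation; the grid ranges, resolution, and toy curb below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.2):
    """Rasterise an (N, 3) LIDAR point cloud into a top-down height image,
    akin to the bird's-eye-view input of the curb-detection networks."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y, z = x[keep], y[keep], z[keep]
    rows = ((x - x_range[0]) / res).astype(int)
    cols = ((y - y_range[0]) / res).astype(int)
    h = int(round((x_range[1] - x_range[0]) / res))
    w = int(round((y_range[1] - y_range[0]) / res))
    bev = np.full((h, w), -np.inf, dtype=np.float32)
    # Keep the highest point per cell; unobserved cells fall back to 0.
    np.maximum.at(bev, (rows, cols), z)
    bev[np.isinf(bev)] = 0.0
    return bev

# A toy scan: flat ground plus a 0.15 m curb edge running along x at y = 5 m.
rng = np.random.default_rng(1)
ground = rng.uniform([0, -20, -0.02], [40, 20, 0.02], (5000, 3))
curb = rng.uniform([0, 4.9, 0.13], [40, 5.1, 0.17], (500, 3))
bev = pointcloud_to_bev(np.vstack((ground, curb)))
```

In such a height map the curb shows up as a thin ridge of raised cells, which is the kind of structure the trained networks can complete across occlusions.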
Robust lane detection in urban environments
Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in urban areas of Sydney with a vehicle-mounted laser range scanner and a CCD camera. Through these experiments, we have shown that a clustered particle filter can be used to efficiently extract lane markings. ©2007 IEEE
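The weak-model particle-filter idea above can be illustrated with a minimal single-cluster sketch (the paper clusters particles to keep multiple lane hypotheses alive; the state, noise levels, and simulated detections below are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Each particle hypothesises a lane marking's lateral offset (m) from the
# vehicle -- deliberately a "weak model" with no assumed road geometry.
n = 500
particles = rng.uniform(-3.0, 3.0, n)

def pf_update(particles, measured_offset, sigma=0.3):
    """Weight particles by a Gaussian likelihood of one observed marking
    offset, resample, and add process noise against particle depletion."""
    w = np.exp(-0.5 * ((particles - measured_offset) / sigma) ** 2)
    w /= w.sum()
    resampled = particles[rng.choice(len(particles), len(particles), p=w)]
    return resampled + rng.normal(0.0, 0.05, len(particles))

# Feed a few noisy detections of a marking sitting about 1.5 m to one side.
for z in (1.4, 1.55, 1.45, 1.5):
    particles = pf_update(particles, z)

estimate = particles.mean()
spread = particles.std()  # small spread ~ high-confidence detection
```

The particle spread doubles as a natural reliability measure of the kind the abstract mentions: a tight cluster indicates a confident detection, a diffuse one a weak or ambiguous marking.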
Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
Extensive, precise, and robust recognition and modeling of the environment is
a key factor for the next generation of Advanced Driver Assistance Systems and
for the development of autonomous vehicles. In this paper, a real-time approach for the
perception of multiple lanes on highways is proposed. Lane markings detected by
camera systems and observations of other traffic participants provide the input
data for the algorithm. The information is accumulated and fused using
GraphSLAM and the result constitutes the basis for a multilane clothoid model.
To allow incorporation of additional information sources, input data is
processed in a generic format. Evaluation of the method is performed by
comparing real data, collected with an experimental vehicle on highways, to a
ground truth map. The results show that ego and adjacent lanes are robustly
detected with high quality up to a distance of 120 m. In comparison to
series-production lane detection systems, an increase in the detection range of
the ego lane and a continuous perception of neighboring lanes are achieved. The
method can potentially be utilized for the longitudinal and lateral control of
self-driving vehicles.
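A clothoid, the lane-model primitive named above, is a curve whose curvature varies linearly with arc length, c(s) = c0 + c1·s. A minimal sketch of sampling one (the curvature values are illustrative, not fitted parameters from the paper):

```python
import numpy as np

def clothoid_points(c0, c1, heading0=0.0, length=120.0, n=240):
    """Sample a clothoid -- curvature linear in arc length,
    c(s) = c0 + c1 * s -- by numerically integrating the heading."""
    s = np.linspace(0.0, length, n)
    ds = s[1] - s[0]
    theta = heading0 + c0 * s + 0.5 * c1 * s ** 2  # integral of c(s)
    x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * ds)))
    y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * ds)))
    return x, y

# A gently curving highway lane centreline out to the 120 m range reported
# above: initial curvature 1/1000 m^-1 with a small positive curvature rate.
x, y = clothoid_points(c0=1e-3, c1=1e-6)
```

Clothoids are a standard choice for highway lane models because road construction itself uses them for transitions, so curvature changes smoothly rather than jumping between arcs and straights.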