Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
An extensive, precise and robust recognition and modeling of the environment
is a key factor for next generations of Advanced Driver Assistance Systems and
development of autonomous vehicles. In this paper, a real-time approach for the
perception of multiple lanes on highways is proposed. Lane markings detected by
camera systems and observations of other traffic participants provide the input
data for the algorithm. The information is accumulated and fused using
GraphSLAM, and the result constitutes the basis for a multi-lane clothoid model.
To allow incorporation of additional information sources, input data is
processed in a generic format. Evaluation of the method is performed by
comparing real data, collected with an experimental vehicle on highways, to a
ground truth map. The results show that ego and adjacent lanes are robustly
detected with high quality up to a distance of 120 m. Compared to
series-production lane detection, an increased detection range for the ego lane
and a continuous perception of neighboring lanes are achieved. The method can
potentially be utilized for the longitudinal and lateral control of
self-driving vehicles.
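The clothoid model mentioned above describes a lane whose curvature changes linearly with distance. As a minimal illustration (not the authors' implementation), the lateral offset of a lane can be sketched with the common third-order Taylor approximation of a clothoid; the parameter values below are hypothetical:

```python
import numpy as np

def clothoid_lane(y0, heading, c0, c1, x):
    """Third-order clothoid approximation of a lane's lateral offset.

    y0      : lateral offset at x = 0 [m]
    heading : relative heading angle [rad]
    c0      : curvature at x = 0 [1/m]
    c1      : curvature rate (change of curvature per metre) [1/m^2]
    x       : longitudinal distance(s) ahead of the vehicle [m]
    """
    return y0 + heading * x + 0.5 * c0 * x**2 + (1.0 / 6.0) * c1 * x**3

# Sample a lane centre line up to the 120 m range reported in the abstract,
# using made-up parameters for a gentle highway curve.
x = np.linspace(0.0, 120.0, 25)
y = clothoid_lane(y0=0.2, heading=0.01, c0=1e-3, c1=-1e-5, x=x)
```

In such a model, one parameter set per lane (or shared curvature terms across parallel lanes) is estimated from the fused feature map.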
A Flexible Modeling Approach for Robust Multi-Lane Road Estimation
A robust estimation of road course and traffic lanes is an essential part of
environment perception for next generations of Advanced Driver Assistance
Systems and development of self-driving vehicles. In this paper, a flexible
method for real-time, in-vehicle modeling of multiple traffic lanes is presented.
Information about traffic lanes, derived from cameras and other environmental
sensors and represented as features, serves as input for an iterative
expectation-maximization method that estimates a lane model. The generic and
modular concept of the approach allows the mathematical functions for the
geometrical description of lanes to be chosen freely. In addition to the
current measurement data, the optimization process considers the previously
estimated result as well as additional constraints that reflect the parallelism
and continuity of traffic lanes. The performance of the lane estimation method
is showcased using cubic splines for the geometric representation of lanes,
both in simulated scenarios and on measurements recorded with a development
vehicle. A comparison to ground-truth data demonstrates the robustness and
precision of the estimated lanes up to a distance of 120 m. As part of the
environmental modeling, the presented method can be utilized for the
longitudinal and lateral control of autonomous vehicles.
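The core fitting step described in this abstract, estimating a smooth lateral-offset curve from noisy lane-marking features, can be sketched as follows. This is a simplified stand-in (a single least-squares cubic fit rather than the paper's full iterative EM with spline and parallelism constraints), and the feature data are synthetic:

```python
import numpy as np

# Hypothetical lane-marking feature points: x is longitudinal distance [m],
# y is lateral offset [m], perturbed with noise to mimic camera-derived
# measurements.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 120.0, 40)
y_true = 0.3 + 0.005 * x + 2e-4 * x**2
y_obs = y_true + rng.normal(0.0, 0.05, x.shape)

# One least-squares cubic fit, standing in for a single maximization step of
# the iterative estimation described in the abstract.
coeffs = np.polyfit(x, y_obs, deg=3)
y_fit = np.polyval(coeffs, x)
rmse = float(np.sqrt(np.mean((y_fit - y_true) ** 2)))
```

In the full method, this fit would be repeated per lane with soft constraints coupling neighboring lanes (parallelism) and successive frames (continuity).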
The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles such as road marking detection,
object tracking, and path planning are simpler in bird's-eye view. Hence,
Inverse Perspective Mapping (IPM) is often applied to remove the perspective
effect from a vehicle's front-facing camera and to remap its images into a 2D
domain, resulting in a top-down view. However, this leads to unnatural blurring
and stretching of objects at greater distances, owing to the camera's limited
resolution, which restricts the applicability of IPM. In this paper, we present an
adversarial learning approach for generating a significantly improved IPM from
a single camera image in real time. The generated bird's-eye-view images
contain sharper features (e.g. road markings) and a more homogeneous
illumination, while (dynamic) objects are automatically removed from the scene,
thus revealing the underlying road layout in an improved fashion. We
demonstrate our framework using real-world data from the Oxford RobotCar
Dataset and show that scene understanding tasks directly benefit from our
boosted IPM approach.

Comment: equal contribution of first two authors, 8 full pages, 6 figures,
accepted at IV 201
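Classical IPM, the baseline this paper improves on, amounts to warping the image with a planar homography between the image and the ground plane. A minimal sketch of that baseline (not the paper's adversarial network) using a direct linear transform with four hypothetical image-to-ground correspondences:

```python
import numpy as np

def ipm_homography(src, dst):
    """Estimate the 3x3 homography mapping src image points (pixels) to dst
    ground-plane points (metres) via the direct linear transform, using
    exactly four correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the 8x9 system.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Apply the homography to a single image point."""
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

# Hypothetical correspondences: four pixels of a road rectangle in a
# front-facing camera image, and their metric bird's-eye-view coordinates.
src = [(400, 600), (880, 600), (700, 400), (580, 400)]
dst = [(-1.8, 5.0), (1.8, 5.0), (1.8, 30.0), (-1.8, 30.0)]
H = ipm_homography(src, dst)
```

In practice the same operation is available as OpenCV's `cv2.getPerspectiveTransform` plus `cv2.warpPerspective`; the blurring and stretching of distant pixels that this warp produces is exactly the artefact the boosted IPM approach addresses.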