Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene
© 2016 IEEE. Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
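To make the multitask idea above concrete, here is a minimal sketch of a shared convolutional backbone feeding a presence classifier and a geometry regressor trained with a joint loss. It assumes PyTorch; the layer sizes, head dimensions, and loss weighting are illustrative assumptions, not the paper's architecture.

```python
# Minimal multitask sketch: shared convolutional features feed (a) a presence
# classifier and (b) a geometry regressor (e.g. location offset, orientation).
# All layer sizes here are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

class MultiTaskLaneNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.presence_head = nn.Linear(32, 1)   # is a lane mark present in the ROI?
        self.geometry_head = nn.Linear(32, 2)   # hypothetical: location + orientation

    def forward(self, x):
        feat = self.backbone(x)
        return torch.sigmoid(self.presence_head(feat)), self.geometry_head(feat)

def multitask_loss(pred_presence, pred_geom, gt_presence, gt_geom, lam=0.5):
    # Joint objective: classification loss plus a weighted regression term.
    cls = nn.functional.binary_cross_entropy(pred_presence, gt_presence)
    reg = nn.functional.mse_loss(pred_geom, gt_geom)
    return cls + lam * reg
```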
The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles such as road marking detection,
object tracking, and path planning are simpler in bird's-eye view. Hence,
Inverse Perspective Mapping (IPM) is often applied to remove the perspective
effect from a vehicle's front-facing camera and to remap its images into a 2D
domain, resulting in a top-down view. Unfortunately, however, this leads to
unnatural blurring and stretching of objects at further distance, due to the
resolution of the camera, limiting applicability. In this paper, we present an
adversarial learning approach for generating a significantly improved IPM from
a single camera image in real time. The generated bird's-eye-view images
contain sharper features (e.g. road markings) and a more homogeneous
illumination, while (dynamic) objects are automatically removed from the scene,
thus revealing the underlying road layout in an improved fashion. We
demonstrate our framework using real-world data from the Oxford RobotCar
Dataset and show that scene understanding tasks directly benefit from our
boosted IPM approach.
Comment: equal contribution of first two authors, 8 full pages, 6 figures,
accepted at IV 201
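For context, the classical fixed-homography IPM that this work improves upon can be sketched as follows, assuming OpenCV; the source and destination points are placeholder values that would normally come from camera calibration, not the paper's setup.

```python
# Classical Inverse Perspective Mapping with a fixed homography (the baseline
# that learned approaches aim to improve). Point correspondences below are
# placeholders; in practice they come from camera calibration.
import cv2
import numpy as np

def ipm(image):
    h, w = image.shape[:2]
    # Four points on the road plane in the camera image (assumed trapezoid)...
    src = np.float32([[w * 0.45, h * 0.60], [w * 0.55, h * 0.60],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    # ...and where they should land in the bird's-eye view (a rectangle).
    dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                      [w * 0.75, h], [w * 0.25, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (w, h))

# Example: birdseye = ipm(cv2.imread("frame.png"))
```

The far-field blurring and stretching mentioned in the abstract comes from this warp spreading a few distant pixels over a large top-down area, which is exactly what the adversarial refinement targets.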
Towards End-to-End Lane Detection: an Instance Segmentation Approach
Modern cars are incorporating an increasing number of driver assist features,
among which automatic lane keeping. The latter allows the car to properly
position itself within the road lanes, which is also crucial for any subsequent
lane departure or trajectory planning decision in fully autonomous cars.
Traditional lane detection methods rely on a combination of highly specialized,
hand-crafted features and heuristics, usually followed by post-processing
techniques, that are computationally expensive and do not scale well to road
scene variations. More recent approaches leverage deep learning models trained
for pixel-wise lane segmentation, which can detect lanes even when no markings
are present in the image thanks to their large receptive field. Despite their
advantages, these methods are limited to detecting a pre-defined, fixed number
of lanes, e.g. ego-lanes, and cannot cope with lane changes. In this paper, we go beyond the
aforementioned limitations and propose to cast the lane detection problem as an
instance segmentation problem - in which each lane forms its own instance -
that can be trained end-to-end. To parametrize the segmented lane instances
before fitting the lane, we further propose to apply a learned perspective
transformation, conditioned on the image, in contrast to a fixed "bird's-eye
view" transformation. By doing so, we ensure a lane fitting which is robust
against road plane changes, unlike existing approaches that rely on a fixed,
pre-defined transformation. In summary, we propose a fast lane detection
algorithm, running at 50 fps, which can handle a variable number of lanes and
cope with lane changes. We verify our method on the tuSimple dataset and
achieve competitive results.
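To illustrate the instance-segmentation formulation, the following is a hedged post-processing sketch: it assumes the network outputs a binary lane mask plus per-pixel embeddings, and uses DBSCAN and a polynomial fit as generic stand-ins for the paper's clustering and lane-fitting steps.

```python
# Sketch of instance-style lane post-processing: cluster per-pixel embeddings
# into lane instances, then fit each instance with a low-order polynomial.
# Thresholds and the choice of DBSCAN/polyfit are assumptions, not the
# paper's implementation.
import numpy as np
from sklearn.cluster import DBSCAN

def lanes_from_embeddings(binary_mask, embeddings, eps=0.5):
    """binary_mask: (H, W) bool lane/background; embeddings: (H, W, D) per-pixel vectors."""
    ys, xs = np.nonzero(binary_mask)
    feats = embeddings[ys, xs]                      # embeddings of lane pixels only
    labels = DBSCAN(eps=eps, min_samples=50).fit_predict(feats)
    lanes = []
    for lane_id in set(labels) - {-1}:              # -1 is DBSCAN noise
        sel = labels == lane_id
        # Fit x = f(y) with a 2nd-order polynomial per lane instance,
        # ideally in a (learned or fixed) top-down view.
        lanes.append(np.polyfit(ys[sel], xs[sel], deg=2))
    return lanes
```

Because each lane is its own cluster rather than a fixed output channel, this formulation naturally handles a variable number of lanes, which is the key point of the abstract above.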
- …