4 research outputs found
Online Inference and Detection of Curbs in Partially Occluded Scenes with Sparse LIDAR
Road boundaries, or curbs, provide autonomous vehicles with essential
information when interpreting road scenes and generating behaviour plans.
Although curbs convey important information, they are difficult to detect in
complex urban environments (in particular in comparison to other elements of
the road such as traffic signs and road markings). These difficulties arise
from occlusions by other traffic participants as well as changing lighting
and/or weather conditions. Moreover, road boundaries vary in shape, colour, and
structure, while motion-planning algorithms require accurate and precise metric
information in real time to generate their plans.
In this paper, we present a real-time LIDAR-based approach for accurate curb
detection around the vehicle (360 degrees). Our approach deals with both
occlusions from traffic and changing environmental conditions. To this end, we
project 3D LIDAR point-cloud data into 2D bird's-eye-view images (akin to
Inverse Perspective Mapping). These images are then processed by trained deep
networks to infer both visible and occluded road boundaries. Finally, a
post-processing step filters detected curb segments and tracks them over time.
Experimental results demonstrate the effectiveness of the proposed approach on
real-world driving data. Hence, we believe that our LIDAR-based approach
provides an efficient and effective way to detect visible and occluded curbs
around the vehicle in challenging driving scenarios.
Comment: Accepted at the 22nd IEEE Intelligent Transportation Systems Conference (ITSC19), October 2019, Auckland, New Zealand
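The bird's-eye-view projection step described above can be sketched as follows. This is a minimal illustration only: the function name `lidar_to_bev`, the metric window, the grid resolution, and the max-height cell encoding are all assumptions for the sketch, not parameters taken from the paper.

```python
import numpy as np

def lidar_to_bev(points, x_range=(-20.0, 20.0), y_range=(-20.0, 20.0), res=0.1):
    """Rasterise an (N, 3) LiDAR point cloud into a 2D bird's-eye-view height image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Keep only points inside the metric window around the vehicle.
    mask = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
    x, y, z = x[mask], y[mask], z[mask]
    # Discretise metric coordinates into pixel indices.
    cols = ((x - x_range[0]) / res).astype(np.int32)
    rows = ((y - y_range[0]) / res).astype(np.int32)
    h = int((y_range[1] - y_range[0]) / res)
    w = int((x_range[1] - x_range[0]) / res)
    bev = np.full((h, w), -np.inf, dtype=np.float32)
    # Keep the maximum height seen in each cell (unbuffered in-place max).
    np.maximum.at(bev, (rows, cols), z)
    bev[np.isneginf(bev)] = 0.0  # empty cells default to ground level
    return bev
```

An image like this can then be fed to a 2D convolutional network, which is what makes standard image-segmentation architectures applicable to LiDAR data.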
LiDAR Lateral Localisation Despite Challenging Occlusion from Traffic
This paper presents a system for improving the robustness of LiDAR lateral
localisation systems. This is made possible by including detections of road
boundaries which are invisible to the sensor (due to occlusion, e.g. traffic)
but can be located by our Occluded Road Boundary Inference Deep Neural Network.
We show an example application in which fusion of a camera stream is used to
initialise the lateral localisation. Over four driven forays through central
Oxford, totalling 40 km of driving, we demonstrate the performance gain that
inferring occluded road boundaries brings.
Comment: Accepted for publication at the IEEE/ION Position, Location and Navigation Symposium (PLANS) 2020
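One simple way to see how boundary detections (visible or inferred) can correct lateral pose is a weighted residual between measured and map-predicted distances to the boundary. The sketch below is a hedged illustration, not the paper's method: the function name `lateral_offset` and the confidence-weighting scheme (e.g. down-weighting inferred, occluded boundaries) are assumptions.

```python
import numpy as np

def lateral_offset(measured, predicted, weights):
    """Weighted lateral correction from road-boundary detections.

    measured:  distances from the vehicle to detected boundaries (metres)
    predicted: distances the map predicts at the current pose estimate (metres)
    weights:   per-detection confidence, e.g. lower for inferred (occluded)
               boundaries than for directly observed ones
    """
    residuals = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    w = np.asarray(weights, dtype=float)
    # Confidence-weighted mean residual = estimated lateral pose error.
    return float(np.sum(w * residuals) / np.sum(w))
```

The value returned would be applied as a lateral correction to the pose estimate; including inferred boundaries keeps the correction available even when traffic occludes the kerb.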
The Oxford Road Boundaries Dataset
In this paper we present the Oxford Road Boundaries Dataset, designed for
training and testing machine-learning-based road-boundary detection and
inference approaches. We have hand-annotated two of the 10 km-long forays from
the Oxford RobotCar Dataset and, from other forays, generated several thousand
further examples with semi-annotated road-boundary masks. To boost the number
of training samples, we used a vision-based localiser to project labels from
the annotated forays onto other traversals captured at different times of day
and under different weather conditions. As a result, we release 62,605 labelled
samples, of which 47,639 are curated. Each sample contains both raw and
classified masks for the left and right lenses. Our data contains images from a
diverse set of scenarios such as straight roads, parked cars, junctions, etc.
Files for download and tools for manipulating the labelled data are available
at: oxford-robotics-institute.github.io/road-boundaries-dataset
Comment: Accepted for publication at the workshop "3D-DLAD: 3D-Deep Learning for Autonomous Driving" (WS15), Intelligent Vehicles Symposium (IV 2021)
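The label-projection idea behind the semi-annotation can be sketched with a standard pinhole model: given a relative pose between two traversals from a localiser, labelled 3D boundary points from one traversal are reprojected into the other's image. This is a generic illustration under assumed conventions; the function name `project_labels` and the argument layout are hypothetical, not the dataset's tooling.

```python
import numpy as np

def project_labels(points_src, T_dst_src, K):
    """Reproject 3D road-boundary label points from a source traversal
    into the image of a destination traversal.

    points_src: (N, 3) labelled points in the source camera frame
    T_dst_src:  (4, 4) relative pose from a vision-based localiser
    K:          (3, 3) camera intrinsic matrix
    """
    pts_h = np.hstack([points_src, np.ones((len(points_src), 1))])
    pts_dst = (T_dst_src @ pts_h.T).T[:, :3]  # transform into destination frame
    uv = (K @ pts_dst.T).T                    # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]               # normalise by depth
    return uv
```

Rendering the reprojected points as a mask in the destination image is what turns one hand-annotated foray into thousands of semi-annotated samples across times and weather conditions.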
Route boundary inference with vision and LiDAR
The purpose of roads is to carry vehicles. Human drivers can easily distinguish roads and their components (e.g., surfaces, boundaries) using direct and indirect (contextual) clues, as roads are designed with driving in mind. The colour of the tarmac, the shape of turns, the smoothness of road surfaces, traffic signs, road boundaries, buildings, and even other vehicles all provide clues about roads. For autonomous vehicles to navigate safely to a desired location in complex driving scenarios, they must perceive their surrounding environment even in the presence of occlusions. This requires using contextual information in a similar fashion to human perception.
In this thesis, we focus primarily on road boundary detection and present a deep-learning-based approach that captures contextual information to deal with occlusions. Many scenes exhibit large-scale occlusion by other road users, preventing direct approaches from fully detecting road boundaries, and conventional neural network architectures fail to infer the exact location of an occluded, narrow, continuous curve running through the image. We tackle this problem with a coupled approach that generates multi-scale parameterised outputs in a discrete-continuous form.
Taking inspiration from human perception, which uses contextual information to reason beyond what is directly visible, we combine the power of deep learning with data obtained from our novel annotation framework to detect and infer road boundaries irrespective of whether they are visible. Our semi-supervised data annotation framework leverages visual localisation and facilitates the use of deep networks by providing an efficient way to generate thousands of training samples. We present two road boundary detection approaches, camera-based and LiDAR-based, that capture scene context and achieve accurate results. We also demonstrate that the presented approaches have utility in scene understanding and localisation.