Real-time Dynamic Object Detection for Autonomous Driving using Prior 3D-Maps
Lidar has become an essential sensor for autonomous driving as it provides
reliable depth estimation. Lidar is also the primary sensor used in building 3D
maps which can be used even in the case of low-cost systems which do not use
Lidar. Computation on Lidar point clouds is intensive as it requires processing
of millions of points per second. Additionally, there are many subsequent
tasks such as clustering, detection, tracking, and classification, which make
real-time execution challenging. In this paper, we discuss a real-time dynamic
object detection algorithm that leverages previously mapped Lidar point
clouds to reduce processing. The prior 3D maps provide a static background
model and we formulate dynamic object detection as a background subtraction
problem. Computation and modeling challenges in the mapping and online
execution pipeline are described. We propose a rejection cascade architecture
to subtract road regions and other 3D regions separately. We implemented an
initial version of our proposed algorithm and evaluated its accuracy in the
CARLA simulator.
Comment: Preprint submission to ECCVW AutoNUE 2018 - v2 author name accent correction
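The background-subtraction formulation above can be sketched in a few lines. This is a minimal, illustrative version: a brute-force nearest-neighbour test against a toy prior map, with an assumed 0.3 m distance threshold. A real-time pipeline would use a spatial index such as a k-d tree, and the paper's rejection cascade is not modelled here.

```python
import math

def subtract_background(scan_points, prior_map, threshold=0.3):
    """Label a scan point as dynamic when no prior-map point lies within
    `threshold` metres of it (brute-force nearest neighbour)."""
    dynamic = []
    for p in scan_points:
        nearest = min(math.dist(p, q) for q in prior_map)
        if nearest > threshold:
            dynamic.append(p)
    return dynamic

# Static wall along x = 0 in the prior map, plus one object 2 m off it.
prior_map = [(0.0, float(y), 0.0) for y in range(10)]
scan = [(0.05, 3.0, 0.0), (2.0, 4.0, 0.0)]
print(subtract_background(scan, prior_map))  # → [(2.0, 4.0, 0.0)]
```

The point near the mapped wall is explained by the static background and rejected; only the point far from every prior-map point survives as a dynamic-object candidate.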
Automatic Vector-based Road Structure Mapping Using Multi-beam LiDAR
In this paper, we studied a SLAM method for vector-based road structure
mapping using multi-beam LiDAR. We propose to use the polyline as the primary
mapping element instead of grid cell or point cloud, because the vector-based
representation is precise and lightweight, and it can directly generate
vector-based High-Definition (HD) driving map as demanded by autonomous driving
systems. We explored: 1) the extraction and vectorization of road structures
based on local probabilistic fusion. 2) the efficient vector-based matching
between frames of road structures. 3) the loop closure and optimization based
on the pose-graph. In this study, we took a specific road structure, the road
boundary, as an example. We applied the proposed matching method in three
different scenes and achieved an average absolute matching error of 0.07. We
further applied the mapping system to an urban road 860 meters in length and
achieved an average global accuracy of 0.466 m without the help of
high-precision GPS.
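The polyline map element and its matching error can be illustrated with a small sketch; the function names and the point-to-segment distance metric used here are assumptions for illustration, not the paper's exact formulation:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from 2-D point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.dist(p, a)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.dist(p, (ax + t * dx, ay + t * dy))

def mean_polyline_error(points, polyline):
    """Average absolute distance from observed boundary points to the
    mapped polyline (the kind of matching error the abstract reports)."""
    return sum(
        min(point_segment_dist(p, polyline[i], polyline[i + 1])
            for i in range(len(polyline) - 1))
        for p in points) / len(points)

boundary = [(0, 0), (10, 0)]           # mapped road-boundary polyline
obs = [(2, 0.1), (5, -0.1), (8, 0.0)]  # boundary points from the current frame
print(round(mean_polyline_error(obs, boundary), 3))  # → 0.067
```

The polyline stores two vertices where a grid or point-cloud map would store thousands of cells or points, which is the lightweight-representation argument in the abstract.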
Structured Hough Voting for Vision-based Highway Border Detection
We propose a vision-based highway border detection algorithm using structured
Hough voting. Our approach takes advantage of the geometric relationship
between highway road borders and highway lane markings. It uses a strategy
where a number of trained road border and lane marking detectors are triggered,
followed by Hough voting to generate corresponding detection of the border and
lane marking. Since the initially triggered detectors usually produce a large
number of positives, conventional frame-wise Hough voting cannot always
generate robust border and lane marking results. Therefore, we formulate this
problem as a joint detection-and-tracking problem under the structured Hough
voting model, where tracking refers to exploiting inter-frame structural
information to stabilize the detection results. Both qualitative and
quantitative evaluations show the superiority of the proposed structured Hough
voting model over a number of baseline methods.
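Conventional frame-wise Hough voting, the baseline the structured model improves on, can be sketched as follows; the bin sizes and the toy input are assumptions for illustration:

```python
import math

def hough_vote(points, rho_step=1.0, theta_steps=180):
    """Accumulate Hough votes over (rho, theta) bins for 2-D edge points;
    the winning bin gives the dominant line of the frame."""
    acc = {}
    for x, y in points:
        for k in range(theta_steps):
            theta = math.pi * k / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            bin_key = (round(rho / rho_step), k)
            acc[bin_key] = acc.get(bin_key, 0) + 1
    return max(acc, key=acc.get)

# Ten collinear points on the vertical line x = 5.
rho_bin, theta_bin = hough_vote([(5, y) for y in range(10)])
print(rho_bin, theta_bin)  # → 5 0 (the line x = 5 at theta = 0)
```

With many spurious positives, several bins collect comparable vote counts, which is exactly the instability the abstract addresses by adding inter-frame structural information.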
Road Detection through Supervised Classification
Autonomous driving is a rapidly evolving technology. Autonomous vehicles are
capable of sensing their environment and navigating without human input through
sensory information such as radar, lidar, GNSS, vehicle odometry, and computer
vision. This sensory input provides a rich dataset that can be used in
combination with machine learning models to tackle multiple problems in
supervised settings. In this paper we focus on road detection through
gray-scale images as the sole sensory input. Our contributions are twofold:
first, we introduce an annotated dataset of urban roads for machine learning
tasks; second, we introduce a road detection framework on this dataset through
supervised classification and hand-crafted feature vectors.
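A hand-crafted-feature road classifier of this kind might look like the following sketch; the two features, the nearest-centroid rule, and the centroid values are illustrative assumptions, not the paper's exact design:

```python
def patch_features(patch):
    """Hand-crafted features for a gray-scale patch (list of rows):
    mean intensity and mean absolute horizontal gradient."""
    n = sum(len(r) for r in patch)
    mean = sum(sum(r) for r in patch) / n
    grad = sum(abs(r[i + 1] - r[i]) for r in patch for i in range(len(r) - 1))
    return (mean, grad / max(1, sum(len(r) - 1 for r in patch)))

def nearest_centroid(feats, centroids):
    """Assign a feature vector to the class with the closest centroid."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(feats, centroids[c])))

# Illustrative centroids: road patches tend to be darker and smoother.
centroids = {"road": (90.0, 2.0), "non-road": (160.0, 25.0)}
smooth_dark = [[88, 90, 91], [89, 90, 92]]
print(nearest_centroid(patch_features(smooth_dark), centroids))  # → road
```

In a supervised setting the centroids (or any other classifier) would be fitted on the annotated dataset the paper introduces, rather than set by hand.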
Vehicle Local Position Estimation System
In this paper, a robust vehicle local position estimation using a single
camera sensor and GPS is presented. A modified Inverse Perspective Mapping,
illuminant-invariant techniques, and an object-detection-based approach are
used to localize the vehicle on the road. The vehicle's current lane, its
position relative to the road boundary, and other cars are used to define its
local position. For this purpose, lane markings are detected using a Laplacian
edge feature that is robust to shadowing. The effects of shadowing and extra
sunlight are removed using the Lab color space and illuminant-invariant
techniques. Lanes are assumed to follow a parabolic model and are fitted using
robust RANSAC. This method can reliably detect all lanes of the road and
estimate the lane departure angle and the local position of the vehicle
relative to lanes, the road boundary, and other cars. Different types of
obstacles, such as pedestrians and vehicles, are detected using a
HOG-feature-based deformable part model.
Comment: Accepted in ICVES-2014, Hyderabad, India
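The parabolic lane fit with RANSAC can be sketched in a few lines; the sampling scheme, the 0.2 inlier tolerance, and the helper names are illustrative assumptions:

```python
import random

def parabola_through(p1, p2, p3):
    """Coefficients (a, b, c) of y = a*x^2 + b*x + c through three points
    (the minimal sample a RANSAC iteration fits exactly)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / d
    b = (x3 * x3 * (y1 - y2) + x2 * x2 * (y3 - y1) + x1 * x1 * (y2 - y3)) / d
    c = (x2 * x3 * (x2 - x3) * y1 + x3 * x1 * (x3 - x1) * y2
         + x1 * x2 * (x1 - x2) * y3) / d
    return a, b, c

def ransac_parabola(points, iters=200, tol=0.2, seed=0):
    """Robust parabolic lane fit: repeatedly fit a random minimal sample
    and keep the parabola with the most inliers."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        s = rng.sample(points, 3)
        if len({x for x, _ in s}) < 3:   # degenerate sample, skip
            continue
        a, b, c = parabola_through(*s)
        inliers = sum(abs(a * x * x + b * x + c - y) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (a, b, c), inliers
    return best

# Lane points on y = 0.1*x^2 plus two gross outliers.
pts = [(x, 0.1 * x * x) for x in range(10)] + [(2, 9.0), (7, -5.0)]
a, b, c = ransac_parabola(pts)
print(round(a, 3))  # → 0.1 (outliers rejected)
```

A least-squares fit over all points would be dragged toward the two outliers; the consensus step is what makes the lane model robust to spurious edge responses.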
Road Detection Technique Using Filters with Application to Autonomous Driving System
Autonomous driving systems are widely used in industry and in our daily
lives: they assist in production, but are chiefly used for exploration in
dangerous or unfamiliar locations. For a successful exploration, navigation
plays a significant role, and road detection is an essential factor in
helping autonomous robots achieve reliable navigation. Various techniques
using camera sensors have been proposed by numerous scholars with inspiring
results, but these techniques remain vulnerable to environmental noise: rain,
snow, light intensity, and shadow. To address these problems, this paper
proposes to enhance the road detection system with filtering algorithms to
overcome these limitations. The Normalized Differences Index (NDI) and
morphological operations are the filtering algorithms used to address the
effect of shadow; guidance and re-guidance image filtering algorithms are
used to address the effect of rain and/or snow; and dark-channel image and
specular-to-diffuse filters are used to address light-intensity effects. The
performance of the road detection system with filtering algorithms was tested
qualitatively and quantitatively using the following evaluation schemes:
False Negative Rate (FNR) and False Positive Rate (FPR). Comparing the road
detection system with and without the filtering algorithms shows the
filtering algorithms' capability to suppress the effect of environmental
noise, as better road/non-road classification is achieved by the road
detection system with the filtering algorithms. This achievement further
improves path planning/region classification for autonomous driving systems.
Comment: 7 pages, 7 figures, International Journal of Computing,
Communications & Instrumentation Engg. (IJCCIE) 201
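A normalized difference index is commonly computed per pixel as (c1 - c2) / (c1 + c2); because a shadow scales both channels by roughly the same illumination factor, the ratio stays nearly unchanged. A minimal sketch (the channel choice here is illustrative, not necessarily the paper's):

```python
def ndi(ch1, ch2):
    """Per-pixel normalized difference index (c1 - c2) / (c1 + c2).
    The ratio cancels a common illumination factor, so a shadowed pixel
    keeps roughly the same index as its sunlit neighbour."""
    return [(a - b) / (a + b) if a + b else 0.0 for a, b in zip(ch1, ch2)]

green = [200, 100]  # same surface: sunlit pixel, then in shadow (half intensity)
red   = [100, 50]
print(ndi(green, red))  # both pixels give the same index, 1/3
```

Thresholding such an index instead of raw intensity is what lets the filtering stage suppress shadow boundaries that would otherwise be classified as road edges.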
A Robust Lane Detection and Departure Warning System
In this work, we have developed a robust lane detection and departure warning
technique. Our system is based on a single camera sensor. For lane detection,
a modified Inverse Perspective Mapping using only a few extrinsic camera
parameters and illuminant-invariant techniques are used. Lane markings are
represented using a combination of 2nd- and 4th-order steerable filters,
robust to shadowing. The effects of shadowing and extra sunlight are removed
using the Lab color space and an illuminant-invariant representation. Lanes
are assumed to be cubic curves and are fitted using robust RANSAC. This
method can reliably detect lanes of the road and their boundaries. It has
been tested in Indian road conditions under different challenging situations,
and the results obtained were very good. For the lane departure angle, an
optical-flow-based method was used.
Comment: The Intelligent Vehicles Symposium (IV2015). arXiv admin note: text
overlap with arXiv:1503.0664
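Once a homography from the image plane to the road plane has been derived from the extrinsic camera parameters, applying it per pixel is the core of Inverse Perspective Mapping. A minimal sketch, where the toy homography values are assumptions rather than calibrated extrinsics:

```python
def ipm_point(H, u, v):
    """Map an image pixel (u, v) to road-plane coordinates with a 3x3
    homography H (row-major nested lists), dividing out the projective
    scale factor w."""
    x, y, w = (H[i][0] * u + H[i][1] * v + H[i][2] for i in range(3))
    return x / w, y / w

# Toy homography: a pure scale, so pixel (400, 300) maps to (4.0, 3.0) m.
H = [[0.01, 0, 0], [0, 0.01, 0], [0, 0, 1]]
print(ipm_point(H, 400, 300))  # → (4.0, 3.0)
```

In the bird's-eye view produced by mapping every pixel this way, lanes become near-parallel curves, which is what makes the subsequent cubic-curve RANSAC fit well-posed.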
Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review
Recently, the advancement of deep learning in discriminative feature learning
from 3D LiDAR data has led to rapid development in the field of autonomous
driving. However, the automated processing of uneven, unstructured, noisy,
and massive 3D point clouds is a challenging and tedious task. In this paper,
we provide a systematic review of existing compelling deep learning
architectures applied to LiDAR point clouds, detailing specific tasks in
autonomous driving such as
segmentation, detection, and classification. Although several published
research papers focus on specific topics in computer vision for autonomous
vehicles, to date, no general survey on deep learning applied in LiDAR point
clouds for autonomous vehicles exists. Thus, the goal of this paper is to
narrow the gap in this topic. More than 140 key contributions from the recent
five years are summarized in this survey, including milestone 3D deep
architectures; remarkable deep learning applications in 3D semantic
segmentation, object detection, and classification; specific datasets;
evaluation metrics; and state-of-the-art performance. Finally, we summarize
the remaining challenges and future research directions.
Comment: 21 pages, submitted to IEEE Transactions on Neural Networks and
Learning Systems
A state of the art of urban reconstruction: street, street network, vegetation, urban feature
World population is rising, especially the share of people living in cities.
With increased population and complex roles regarding their inhabitants and
their surroundings, cities concentrate difficulties for design, planning and
analysis. These tasks require a way to reconstruct/model a city.
Traditionally, much attention has been given to building reconstruction, yet
an essential part of the city has been neglected: streets. Street
reconstruction has seldom been researched. Streets are also complex
compositions of urban features, and have a unique role for transportation (as
they comprise roads). We aim at completing
the recent state of the art for building reconstruction (Musialski2012) by
considering all other aspect of urban reconstruction. We introduce the need for
city models. Because reconstruction always necessitates data, we first analyse
which data are available. We then expose a state of the art of street
reconstruction, street network reconstruction, urban features
reconstruction/modelling, vegetation, and urban objects
reconstruction/modelling.
Although reconstruction strategies vary widely, we can order them by the role
the model plays: from data-driven approaches, to model-based approaches, to
inverse procedural modelling and model-catalogue matching. The main
challenges seem to
come from the complex nature of urban environment and from the limitations of
the available data. Urban features have strong relationships, between them, and
to their surrounding, as well as in hierarchical relations. Procedural
modelling has the power to express these relations, and could be applied to the
reconstruction of urban features via the Inverse Procedural Modelling paradigm.
Comment: Extracted from PhD (chap1
Embedding Structured Contour and Location Prior in Siamesed Fully Convolutional Networks for Road Detection
Road detection from the perspective of moving vehicles is a challenging issue
in autonomous driving. Recently, many deep learning methods have sprung up
for this task because they can extract high-level local features to find road
regions from raw RGB data, such as Convolutional Neural Networks (CNN) and
Fully Convolutional Networks (FCN). However, how to detect the boundary of
the road accurately is still an intractable problem. In this paper, we
propose a siamesed fully convolutional network (named ``s-FCN-loc''), which
is able to consider RGB-channel images, semantic contours, and location
priors simultaneously to segment the road region precisely. To be specific, the
s-FCN-loc has two streams to process the original RGB images and contour maps
respectively. At the same time, the location prior is directly appended to the
siamesed FCN to promote the final detection performance. Our contributions are
threefold: (1) An s-FCN-loc is proposed that learns more discriminative
features of road boundaries than the original FCN to detect more accurate road
regions; (2) Location prior is viewed as a type of feature map and directly
appended to the final feature map in s-FCN-loc to promote the detection
performance effectively, which is easier than other traditional methods, namely
different priors for different inputs (image patches); (3) The convergence
speed of training the s-FCN-loc model is 30\% faster than that of the
original FCN, owing to the guidance of highly structured contours. The
proposed approach is evaluated on the KITTI Road Detection Benchmark and the
One-Class Road Detection Dataset, and achieves results competitive with the
state of the art.
Comment: IEEE T-ITS 201
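Treating the location prior as a feature map appended to the network's final feature maps can be sketched as follows; the bottom-weighted prior used here is an illustrative assumption, not the paper's actual prior:

```python
def append_location_prior(feature_maps, height, width):
    """Append a location-prior channel to a list of HxW feature maps.
    Rows nearer the image bottom get a higher prior, reflecting where
    road pixels usually appear in a forward-facing camera."""
    prior = [[row / (height - 1)] * width for row in range(height)]
    return feature_maps + [prior]

feats = [[[0.0] * 4 for _ in range(3)]]      # one 3x4 feature map
stacked = append_location_prior(feats, 3, 4)
print(len(stacked), stacked[1][2][0])  # → 2 1.0 (bottom row has top prior)
```

Because the prior enters as just another channel, the subsequent convolution learns how strongly to weight it, which is why this is simpler than hand-tuning a separate prior per input patch.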