Multi-Lane Perception Using Feature Fusion Based on GraphSLAM
An extensive, precise and robust recognition and modeling of the environment
is a key factor for the next generation of Advanced Driver Assistance Systems and
the development of autonomous vehicles. In this paper, a real-time approach for the
perception of multiple lanes on highways is proposed. Lane markings detected by
camera systems and observations of other traffic participants provide the input
data for the algorithm. The information is accumulated and fused using
GraphSLAM, and the result constitutes the basis for a multi-lane clothoid model.
To allow incorporation of additional information sources, input data is
processed in a generic format. Evaluation of the method is performed by
comparing real data, collected with an experimental vehicle on highways, to a
ground truth map. The results show that ego and adjacent lanes are robustly
detected with high quality up to a distance of 120 m. In comparison to serial
lane detection, an increase in the detection range of the ego lane and a
continuous perception of neighboring lanes is achieved. The method can
potentially be utilized for the longitudinal and lateral control of
self-driving vehicles.
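The clothoid lane model mentioned above describes curvature as a linear function of arc length. A minimal sketch of sampling a clothoid centreline, assuming hypothetical parameters (`kappa0`, `c`) and simple Euler integration rather than the paper's actual GraphSLAM fusion pipeline:

```python
import math

def clothoid_points(x0, y0, theta0, kappa0, c, length, step=1.0):
    """Sample points along a clothoid: curvature varies linearly with
    arc length, kappa(s) = kappa0 + c*s (parameters are illustrative)."""
    pts = [(x0, y0)]
    x, y, s = x0, y0, 0.0
    while s < length:
        # heading is the integral of curvature up to arc length s
        theta = theta0 + kappa0 * s + 0.5 * c * s * s
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        s += step
        pts.append((x, y))
    return pts
```

With `kappa0 = c = 0` this degenerates to a straight lane; a nonzero `c` yields the smoothly tightening curves typical of highway geometry.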
Toward robust visual odometry using prior 2D map information and multiple hypothesis particle filtering
Visual odometry can be used to estimate the pose of a robot from current and recent video frames. A problem with these methods is that they drift over time due to the accumulation of estimation errors at each time step. In this short paper we propose and briefly demonstrate the potential benefit of using prior 2D, top-down map information combined with multiple hypothesis particle filtering to correct visual odometry estimates. The results demonstrate a substantial improvement in robustness and accuracy over the sole use of visual odometry.
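The correction idea above can be sketched as a single particle-filter step: propagate each pose hypothesis by the visual-odometry increment plus noise, weight it by agreement with the prior 2D map, and resample. The `map_score` likelihood callback is a hypothetical placeholder, not the paper's implementation:

```python
import random

def pf_update(particles, vo_delta, map_score, noise=0.1):
    """One multiple-hypothesis particle filter step (illustrative sketch).
    particles: list of (x, y) pose hypotheses
    vo_delta:  (dx, dy) motion estimated by visual odometry
    map_score: hypothetical callback returning a map-consistency likelihood
    """
    # propagate every hypothesis by the VO increment plus Gaussian noise
    moved = [(x + vo_delta[0] + random.gauss(0, noise),
              y + vo_delta[1] + random.gauss(0, noise))
             for (x, y) in particles]
    # weight hypotheses by agreement with the prior 2D map
    weights = [map_score(x, y) for (x, y) in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resampling concentrates hypotheses on map-consistent poses
    return random.choices(moved, weights=weights, k=len(moved))
```

Iterating this step keeps the pose estimate anchored to the map, which is what suppresses the drift of raw visual odometry.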
You Are Here: Geolocation by Embedding Maps and Images
We present a novel approach to geolocalising panoramic images on a 2-D
cartographic map based on learning a low dimensional embedded space, which
allows a comparison between an image captured at a location and local
neighbourhoods of the map. The representation is not sufficiently
discriminatory to allow localisation from a single image, but when concatenated
along a route, localisation converges quickly, with over 90% accuracy being
achieved for routes of around 200m in length when using Google Street View and
Open Street Map data. The method generalises a previous fixed semantic feature
based approach and achieves significantly higher localisation accuracy and
faster convergence.
Comment: 18 pages, new version accepted for ECCV 2020 (poster), with new results on a publicly available dataset and a comparison with an implementation of a previously published alternative approach.
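The route-level convergence described above can be illustrated with a toy score-accumulation loop: a single image embedding is ambiguous, but summing per-step similarities against candidate map locations sharpens the estimate as the route grows. The embeddings here are given rather than learned, and all names are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def localise_route(image_embs, tile_embs):
    """Accumulate per-step similarity between route image embeddings and
    each candidate map-tile embedding; return the best tile index.
    (Toy sketch: the paper learns the embedding space jointly.)"""
    scores = [0.0] * len(tile_embs)
    for img in image_embs:
        for i, tile in enumerate(tile_embs):
            scores[i] += cosine(img, tile)
    return max(range(len(scores)), key=scores.__getitem__)
```
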
Pix2Map: Cross-modal Retrieval for Inferring Street Maps from Images
Self-driving vehicles rely on urban street maps for autonomous navigation. In
this paper, we introduce Pix2Map, a method for inferring urban street map
topology directly from ego-view images, as needed to continually update and
expand existing maps. This is a challenging task, as we need to infer a complex
urban road topology directly from raw image data. The main insight of this
paper is that this problem can be posed as cross-modal retrieval by learning a
joint, cross-modal embedding space for images and existing maps, represented as
discrete graphs that encode the topological layout of the visual surroundings.
We conduct our experimental evaluation using the Argoverse dataset and show
that it is indeed possible to accurately retrieve street maps corresponding to
both seen and unseen roads solely from image data. Moreover, we show that our
retrieved maps can be used to update or expand existing maps and even show
proof-of-concept results for visual localization and image retrieval from
spatial graphs.
Comment: 12 pages, 8 figures
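Once images and street-map graphs share a joint embedding space, the retrieval step described above reduces to nearest-neighbour search. A minimal sketch, assuming precomputed embeddings (the model that produces them is the paper's contribution and is not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def retrieve_map(image_emb, map_embs):
    """Cross-modal retrieval sketch: given an ego-view image embedding,
    return the key of the street-map graph whose embedding is closest
    in the shared space. map_embs: {graph_id: embedding} (hypothetical)."""
    best, best_sim = None, float("-inf")
    for key, emb in map_embs.items():
        sim = cosine(image_emb, emb)
        if sim > best_sim:
            best, best_sim = key, sim
    return best
```
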
Machine Learning for Identifying Group Trajectory Outliers
Prior works on the trajectory outlier detection problem consider only individual outliers. However, in real-world scenarios, trajectory outliers can often appear in groups, e.g., a group of bikes that deviates from the usual trajectory due to street maintenance in the context of intelligent transportation. The current paper considers the Group Trajectory Outlier (GTO) problem and proposes three algorithms. The first and second algorithms are extensions of the well-known DBSCAN and kNN algorithms, while the third models the GTO problem as a feature selection problem. Furthermore, two enhancements of the proposed algorithms are introduced. The first is based on ensemble learning and computational intelligence, which allows merging the algorithms' outputs to possibly improve the final result. The second is a general high-performance computing framework that deals with big trajectory databases, which we used for a GPU-based implementation. Experimental results on different real trajectory databases show the scalability of the proposed approaches.
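The group-outlier notion can be illustrated with a deliberately simplified check: flag a whole group whose mean pointwise deviation from a reference route exceeds a threshold. This is only a toy stand-in for the paper's DBSCAN/kNN extensions, and all names are illustrative:

```python
import math

def group_trajectory_outliers(groups, reference, threshold):
    """Toy Group Trajectory Outlier check (not the paper's algorithms).
    groups:    {group_id: [trajectory, ...]}, each trajectory a list of
               (x, y) points aligned with the reference route
    reference: the usual route, as a list of (x, y) points
    Returns group ids whose mean deviation exceeds the threshold."""
    def deviation(traj):
        # mean pointwise distance between trajectory and reference
        return sum(math.dist(p, q) for p, q in zip(traj, reference)) / len(traj)
    return [gid for gid, trajs in groups.items()
            if sum(deviation(t) for t in trajs) / len(trajs) > threshold]
```

A group of bikes detouring around street maintenance would deviate together, so the group-level mean exceeds the threshold even if no single trajectory looks extreme in isolation.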