Mesh-based 3D Textured Urban Mapping
In the era of autonomous driving, urban mapping is a core step in letting
vehicles interact with the urban context. Successful mapping algorithms have
been proposed in the last decade that build the map by leveraging data from a
single sensor. The focus of the system presented in this paper is twofold: the
joint estimation of a 3D map from lidar data and images, based on a 3D mesh,
and its texturing. Indeed, even though most surveying vehicles for mapping are
equipped with both cameras and lidar, existing mapping algorithms usually rely
on either images or lidar data; moreover, both image-based and lidar-based
systems often represent the map as a point cloud, while a continuous textured
mesh representation would be more useful for visualization and navigation
purposes. In the proposed framework, we combine the accuracy of 3D lidar data
with the dense appearance information carried by images: we estimate a
visibility-consistent mesh from the lidar measurements and refine it
photometrically using the acquired images. We evaluate the proposed framework
on the KITTI dataset and show the performance improvement with respect to two
state-of-the-art urban mapping algorithms and two surface reconstruction
algorithms widely used in computer graphics.

Comment: accepted at IROS 201
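As an aside, the lidar-to-camera projection at the heart of such texturing can
be sketched as follows. This is a minimal illustration under an assumed pinhole
model with known intrinsics K and lidar-to-camera extrinsics (R, t); the
function names and the nearest-pixel color sampling are ours, not the paper's
implementation.

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project Nx3 lidar points into the image plane of a pinhole camera.

    K: 3x3 intrinsic matrix; R, t: lidar-to-camera rotation and translation.
    Returns Nx2 pixel coordinates and a mask of points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = pts_cam[:, 2] > 0              # keep points with positive depth
    pix_h = pts_cam @ K.T                     # homogeneous pixel coordinates
    # Perspective division; dummy denominator of 1 for points behind the camera.
    pix = pix_h[:, :2] / np.where(in_front[:, None], pix_h[:, 2:3], 1.0)
    return pix, in_front

def sample_vertex_colors(image, pix, mask):
    """Assign each projected mesh vertex the color of its nearest pixel."""
    h, w = image.shape[:2]
    u = np.clip(np.round(pix[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(pix[:, 1]).astype(int), 0, h - 1)
    colors = image[v, u].astype(float)
    colors[~mask] = np.nan                    # vertices behind the camera get no color
    return colors
```

A full pipeline would additionally check visibility against the mesh itself, so
that occluded vertices are not textured from images that cannot see them.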
The Right (Angled) Perspective: Improving the Understanding of Road Scenes Using Boosted Inverse Perspective Mapping
Many tasks performed by autonomous vehicles, such as road marking detection,
object tracking, and path planning, are simpler in bird's-eye view. Hence,
Inverse Perspective Mapping (IPM) is often applied to remove the perspective
effect from a vehicle's front-facing camera and to remap its images into a 2D
domain, resulting in a top-down view. However, due to the finite resolution of
the camera, this leads to unnatural blurring and stretching of objects at
larger distances, limiting its applicability. In this paper, we present an
adversarial learning approach for generating a significantly improved IPM from
a single camera image in real time. The generated bird's-eye-view images
contain sharper features (e.g. road markings) and a more homogeneous
illumination, while (dynamic) objects are automatically removed from the scene,
thus better revealing the underlying road layout. We demonstrate our framework
using real-world data from the Oxford RobotCar Dataset and show that scene
understanding tasks directly benefit from our boosted IPM approach.

Comment: equal contribution of first two authors, 8 full pages, 6 figures,
accepted at IV 201
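For context, the classical (non-learned) IPM that this paper improves upon
amounts to warping the image with a road-plane homography. The sketch below
uses OpenCV; the four source/target point pairs, file names, and output size
are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Pixel coordinates of four points on the road plane in the front-camera
# image (e.g. lane-marking corners), and their target positions in a
# top-down bird's-eye-view image. The values here are placeholders.
src = np.float32([[560, 450], [720, 450], [1100, 700], [180, 700]])
dst = np.float32([[300, 0], [500, 0], [500, 800], [300, 800]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 road-plane homography

frame = cv2.imread("front_camera.png")           # placeholder input image
bev = cv2.warpPerspective(frame, H, (800, 800))  # classical IPM warp
cv2.imwrite("bev.png", bev)
```

Because this warp assumes every pixel lies on the ground plane, anything above
the road (vehicles, pedestrians) gets smeared, which is exactly the artifact
the adversarial approach targets.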
Incremental Object Database: Building 3D Models from Multiple Partial Observations
Collecting 3D object datasets involves a large amount of manual work and is
time consuming. Obtaining complete object models requires either a 3D scanner
that covers all of an object's surfaces or rotating the object to observe it
completely. We present a system that incrementally builds a database of objects
as a mobile agent traverses a scene. Our approach requires no prior knowledge
of the shapes present in the scene. Object-like segments are extracted from a
global segmentation map, which is built online from segmented RGB-D images.
These segments are stored in a database, matched against each other, and merged
with previously observed instances. This allows us to create and improve object
models on the fly and to use these merged models to also reconstruct unobserved
parts of the scene. The database contains each (potentially merged) object
model only once, together with the set of poses at which it was observed. We
evaluate our pipeline on one public dataset and on a newly created Google Tango
dataset containing four indoor scenes, with some objects appearing multiple
times both within and across scenes.
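To make the match-and-merge loop concrete, here is a toy sketch of such an
incremental database. The descriptor, similarity threshold, and merging by
point concatenation are simplifying assumptions of ours; the actual system
operates on a global segmentation map and performs geometric alignment before
merging.

```python
import numpy as np

class ObjectDatabase:
    """Toy incremental database: each entry keeps one merged model (here a
    point cloud) plus the poses at which it was observed."""

    def __init__(self, match_threshold=0.9):
        self.models = []  # list of dicts: {"points", "descriptor", "poses"}
        self.match_threshold = match_threshold

    @staticmethod
    def _descriptor(points):
        # Placeholder descriptor: normalized bounding-box extent of the
        # segment. A real system would use a learned or geometric descriptor.
        extent = points.max(axis=0) - points.min(axis=0)
        return extent / (np.linalg.norm(extent) + 1e-9)

    def insert(self, segment_points, pose):
        """Match a new segment against stored models; merge on success,
        otherwise create a new entry."""
        desc = self._descriptor(segment_points)
        for model in self.models:
            if float(desc @ model["descriptor"]) > self.match_threshold:
                # Merge: accumulate points (a real pipeline would first align
                # the segment to the model, e.g. with ICP).
                model["points"] = np.vstack([model["points"], segment_points])
                model["poses"].append(pose)
                model["descriptor"] = self._descriptor(model["points"])
                return model
        new = {"points": segment_points, "descriptor": desc, "poses": [pose]}
        self.models.append(new)
        return new
```

Keeping each merged model only once, with its observation poses alongside,
mirrors the paper's design: the database stays compact while still allowing
every past observation site to be re-rendered from the improved model.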