Robust Dense Mapping for Large-Scale Dynamic Environments
We present a stereo-based dense mapping algorithm for large-scale dynamic
urban environments. In contrast to existing methods, we simultaneously
reconstruct the static background, the moving objects, and the potentially
moving but currently stationary objects as separate models, which is desirable
for high-level mobile robotic tasks such as path planning in crowded environments.
We use both instance-aware semantic segmentation and sparse scene flow to
classify objects as either background, moving, or potentially moving, thereby
ensuring that the system is able to model objects with the potential to
transition from static to dynamic, such as parked cars. Given camera poses
estimated from visual odometry, both the background and the (potentially)
moving objects are reconstructed separately by fusing the depth maps computed
from the stereo input. In addition to visual odometry, sparse scene flow is
also used to estimate the 3D motions of the detected moving objects, in order
to reconstruct them accurately. A map pruning technique is further developed to
improve reconstruction accuracy and reduce memory consumption, leading to
increased scalability. We evaluate our system thoroughly on the well-known
KITTI dataset. Our system is capable of running on a PC at approximately 2.5 Hz,
with the primary bottleneck being the instance-aware semantic segmentation,
which is a limitation we hope to address in future work. The source code is
available from the project website (http://andreibarsan.github.io/dynslam).
Comment: Presented at IEEE International Conference on Robotics and Automation
(ICRA), 201
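The three-way split described above can be sketched as a simple decision rule combining an instance's semantic class with its egomotion-compensated scene-flow magnitude. The class names and the flow threshold below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of the background / moving / potentially-moving split.
# An instance is dynamic-capable if its semantic class can move; it is
# actually moving if its residual 3D scene flow (after subtracting the
# motion induced by the camera itself) exceeds a threshold.
DYNAMIC_CLASSES = {"car", "pedestrian", "cyclist"}  # things that *can* move

def classify_instance(semantic_class, residual_flow, flow_thresh=0.5):
    """Label an instance as 'background', 'moving', or 'potentially_moving'.

    residual_flow: median 3D scene-flow magnitude (m/frame) of the instance
    after compensating for camera egomotion.
    """
    if semantic_class not in DYNAMIC_CLASSES:
        return "background"            # static scenery: fuse into the map
    if residual_flow > flow_thresh:
        return "moving"                # reconstruct with its own 3D motion
    return "potentially_moving"        # e.g. a parked car

print(classify_instance("building", 0.0))  # background
print(classify_instance("car", 2.3))       # moving
print(classify_instance("car", 0.1))       # potentially_moving
```

Each label then selects a reconstruction path: background and potentially moving objects are fused using the camera pose alone, while moving objects additionally need their estimated object motion.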
Point cloud segmentation using hierarchical tree for architectural models
Recent developments in 3D scanning technologies have made the generation
of highly accurate 3D point clouds relatively easy, but the segmentation of
these point clouds remains challenging. A number of techniques in the
literature have set a precedent for either planar or primitive-based
segmentation. In this work, we present a novel and effective primitive-based
point cloud segmentation algorithm. The main technical contribution of our
method is a hierarchical tree which iteratively divides the point cloud
into segments. This tree uses an exclusive energy function and a 3D
convolutional neural network, HollowNets, to classify the segments. We test the
efficacy of our proposed approach on both real and synthetic data, obtaining
an accuracy greater than 90% for domes and minarets.
Comment: 9 pages. 10 figures. Submitted to EuroGraphics 201
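The hierarchical splitting idea can be illustrated with a minimal sketch: recursively divide a point set until each segment scores below an energy threshold, then treat the leaves as segments. The variance-style energy and median split below are simplified stand-ins for the paper's exclusive energy function and HollowNets classifier:

```python
# Toy sketch of hierarchical tree segmentation (assumed simplification):
# split a point set along its widest axis until each leaf is compact.

def split_energy(points):
    """Toy energy: spread of the points along their widest axis."""
    spans = [max(c) - min(c) for c in zip(*points)]
    return max(spans)

def hierarchical_segments(points, max_energy=1.0):
    """Return the leaf segments of a binary splitting tree over `points`."""
    if len(points) <= 1 or split_energy(points) <= max_energy:
        return [points]
    # split along the widest axis at its median point
    dims = range(len(points[0]))
    axis = max(dims, key=lambda a: max(p[a] for p in points) - min(p[a] for p in points))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (hierarchical_segments(pts[:mid], max_energy)
            + hierarchical_segments(pts[mid:], max_energy))

cloud = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 0.0, 0.0), (5.1, 0.1, 0.0)]
segments = hierarchical_segments(cloud, max_energy=1.0)
print(len(segments))  # 2: the two spatial clusters land in separate leaves
```

In the paper the leaf-versus-split decision and the segment labels come from the learned components rather than this geometric heuristic.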
Weighted simplicial complex reconstruction from mobile laser scanning using sensor topology
We propose a new method for the reconstruction of simplicial complexes
(combining points, edges, and triangles) from 3D point clouds acquired by
Mobile Laser Scanning (MLS). Our method uses the inherent topology of the MLS
sensor to define a spatial adjacency relationship between points. We then
investigate each possible connection between adjacent points, weighted
according to its distance to the sensor, and filter the connections by
searching for collinear structures in the scene, or structures perpendicular
to the laser beams. Next, we create and filter triangles for each triplet of
self-connected edges, according to their local planarity. We compare our
results to an unweighted simplicial complex reconstruction.
Comment: 8 pages, 11 figures, CFPT 2018. arXiv admin note: substantial text
overlap with arXiv:1802.0748
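The use of sensor topology for adjacency can be sketched as follows: a rotating MLS scanner emits pulses in a (scanline, pulse) grid, so grid neighbours are natural edge candidates. The grid layout, the 1/range weight, and the length filter below are illustrative choices, not the paper's exact formulation:

```python
import math

def candidate_edges(grid, sensor=(0.0, 0.0, 0.0), max_len=1.0):
    """grid[i][j] is the 3D point of pulse j on scanline i (or None if no echo).

    Connect each point to its right and next-scanline grid neighbours,
    reject edges that span a depth discontinuity, and weight the survivors
    by inverse range to the sensor (farther points -> weaker edges).
    """
    edges = []
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            p = grid[i][j]
            if p is None:
                continue
            for di, dj in ((0, 1), (1, 0)):  # grid neighbours only
                ii, jj = i + di, j + dj
                if ii < rows and jj < cols and grid[ii][jj] is not None:
                    q = grid[ii][jj]
                    if math.dist(p, q) <= max_len:  # reject depth gaps
                        w = 1.0 / math.dist(p, sensor)
                        edges.append((p, q, w))
    return edges

grid = [[(1.0, 0.0, 0.0), (1.0, 0.1, 0.0)],
        [(1.0, 0.0, 0.1), (5.0, 0.0, 0.0)]]
edges = candidate_edges(grid)
print(len(edges))  # 2: the jumps across the depth gap are rejected
```

Using the grid rather than a k-d tree means adjacency comes for free from acquisition order, with no spatial search.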
A Framework for SAR-Optical Stereogrammetry over Urban Areas
Currently, numerous remote sensing satellites provide a huge volume of
diverse earth observation data. As these data show different features regarding
resolution, accuracy, coverage, and spectral imaging ability, fusion techniques
are required to integrate the different properties of each sensor and produce
useful information. For example, synthetic aperture radar (SAR) data can be
fused with optical imagery to produce 3D information using stereogrammetric
methods. The main focus of this study is to investigate the possibility of
applying a stereogrammetry pipeline to very-high-resolution (VHR) SAR-optical
image pairs. For this purpose, the applicability of semi-global matching is
investigated in this unconventional multi-sensor setting. To support the image
matching by reducing the search space and accelerating the identification of
correct, reliable matches, the possibility of establishing an epipolarity
constraint for VHR SAR-optical image pairs is investigated as well. In
addition, it is shown that the absolute geolocation accuracy of VHR optical
imagery with respect to VHR SAR imagery, such as that provided by TerraSAR-X,
can be improved by a multi-sensor block adjustment formulation based on
rational polynomial coefficients. Finally, the feasibility of generating point
clouds with a median accuracy of about 2 m is demonstrated, confirming the
potential of 3D reconstruction from SAR-optical image pairs over urban areas.
Comment: This is the pre-acceptance version; to read the final version, please
go to the ISPRS Journal of Photogrammetry and Remote Sensing on ScienceDirect
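The benefit of the epipolarity constraint mentioned above can be made concrete with a back-of-the-envelope count: once the pair is (quasi-)epipolarly rectified, a pixel's correspondent is searched along a single row within a disparity interval rather than over the whole image. The image size and disparity range below are made-up illustrative numbers:

```python
# Illustrative search-space comparison (assumed sizes, not from the paper).

def search_space(width, height, d_min, d_max):
    """Candidate matches per pixel, without and with an epipolar constraint."""
    unconstrained = width * height   # any pixel in the other image could match
    constrained = d_max - d_min + 1  # a 1D disparity interval on one row
    return unconstrained, constrained

full, epi = search_space(width=2000, height=2000, d_min=-64, d_max=64)
print(full, epi)  # 4000000 candidates vs. 129
```

Besides the speed-up, the shrunken search interval also reduces the chance of spurious matches, which matters for a multi-sensor pair whose appearance differs as much as SAR and optical imagery do.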