Voronoi-Based Curvature and Feature Estimation from Point Clouds
We present an efficient and robust method for extracting curvature information, sharp features, and normal directions of a piecewise-smooth surface from its point cloud sampling in a unified framework. Our method is integral in nature and uses convolved covariance matrices of Voronoi cells of the point cloud, which makes it provably robust in the presence of noise. We show that these matrices contain information related to curvature in the smooth parts of the surface, and information about the directions and angles of sharp edges around the features of a piecewise-smooth surface. Our method is applicable in both two and three dimensions, and can be easily parallelized, making it possible to process arbitrarily large point clouds, which was a challenge for Voronoi-based methods. In addition, we describe a Monte-Carlo version of our method, which is applicable in any dimension. We illustrate the correctness of both principal curvature information and feature extraction in the presence of varying levels of noise and sampling density on a variety of models. As a sample application, we use our feature detection method to segment point cloud samplings of piecewise-smooth surfaces.
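The Monte-Carlo variant described above lends itself to a compact sketch: draw samples uniformly from balls around the input points, credit each sample to the Voronoi cell of its nearest input point, and accumulate per-cell covariance matrices; on smooth parts, the dominant eigenvector approximates the surface normal. This is a minimal illustration under assumptions, not the authors' implementation: `radius` and `n_samples` are hypothetical knobs, and the paper's convolution step is omitted.

```python
import numpy as np

def voronoi_covariance(points, radius=0.5, n_samples=20000, rng=None):
    """Monte-Carlo sketch of per-point Voronoi-cell covariance matrices.

    Samples are drawn uniformly from balls of the given radius around the
    input points; each sample is credited to the Voronoi cell (i.e. the
    nearest input point) it falls in. Illustrative only; the paper's
    convolution step is not reproduced here.
    """
    rng = np.random.default_rng(rng)
    n, d = points.shape
    # uniform samples in balls of radius `radius` around random input points
    centers = points[rng.integers(n, size=n_samples)]
    dirs = rng.normal(size=(n_samples, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = radius * rng.random(n_samples) ** (1.0 / d)  # uniform in the ball
    samples = centers + dirs * radii[:, None]
    # owner of each sample = nearest input point (brute force, for clarity)
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    offsets = samples - points[owner]
    # accumulate outer products per Voronoi cell
    cov = np.zeros((n, d, d))
    np.add.at(cov, owner, offsets[:, :, None] * offsets[:, None, :])
    counts = np.bincount(owner, minlength=n)
    nz = counts > 0
    cov[nz] /= counts[nz][:, None, None]
    return cov
```

For a flat patch, the cell of an interior point is thin in the tangent plane but extends along the normal, so the largest-eigenvalue eigenvector of its covariance recovers the normal direction.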
LO-Net: Deep Real-time Lidar Odometry
We present a novel deep convolutional network pipeline, LO-Net, for real-time
lidar odometry estimation. Unlike most existing lidar odometry (LO) estimations
that go through individually designed feature selection, feature matching, and
pose estimation pipeline, LO-Net can be trained in an end-to-end manner. With a
new mask-weighted geometric constraint loss, LO-Net can effectively learn
feature representation for LO estimation, and can implicitly exploit the
sequential dependencies and dynamics in the data. We also design a scan-to-map
module, which uses the geometric and semantic information learned in LO-Net, to
improve the estimation accuracy. Experiments on benchmark datasets demonstrate
that LO-Net outperforms existing learning-based approaches and achieves
accuracy comparable to the state-of-the-art geometry-based approach, LOAM.
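The mask-weighted loss idea — down-weighting per-point geometric residuals by a learned mask while penalizing the trivial all-zero mask — can be sketched as follows. This is a hypothetical simplified form, not LO-Net's exact loss; the `-log(mask)` regularizer and `reg_weight` are assumptions of this sketch.

```python
import numpy as np

def mask_weighted_geometric_loss(residuals, mask, reg_weight=0.1):
    """Sketch of a mask-weighted geometric constraint loss (hypothetical form):
    per-point residuals are scaled by a mask in (0, 1], and a -log(mask)
    regularizer keeps the mask from collapsing to zero everywhere."""
    mask = np.clip(mask, 1e-6, 1.0)
    data_term = np.mean(mask * residuals)   # masked geometric error
    reg_term = -np.mean(np.log(mask))       # penalty for suppressing points
    return data_term + reg_weight * reg_term
```

With such a loss, the network can pay a small regularization cost to suppress residuals from dynamic objects rather than letting them dominate the geometric term.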
Patch-based Progressive 3D Point Set Upsampling
We present a detail-driven deep neural network for point set upsampling. A
high-resolution point set is essential for point-based rendering and surface
reconstruction. Inspired by the recent success of neural image super-resolution
techniques, we progressively train a cascade of patch-based upsampling networks
on different levels of detail end-to-end. We propose a series of architectural
design contributions that lead to a substantial performance boost. The effect
of each technical contribution is demonstrated in an ablation study.
Qualitative and quantitative experiments show that our method significantly
outperforms the state-of-the-art learning-based and optimization-based
approaches, both in handling low-resolution inputs and in revealing
high-fidelity details. Comment: accepted to CVPR 2019; code available at https://github.com/yifita/P3
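To illustrate the cascade structure only — each stage multiplies the point count, with later stages operating on finer detail — here is a deliberately naive stand-in in which every ×2 stage inserts midpoints toward nearest neighbors. The paper replaces this geometric rule with learned patch-based networks trained end-to-end; nothing below is the authors' method.

```python
import numpy as np

def progressive_upsample(points, stages=2):
    """Naive stand-in for a cascade of x2 upsampling stages: each stage
    doubles the point count by inserting the midpoint between every point
    and its nearest neighbor (a learned network would go here instead)."""
    pts = points
    for _ in range(stages):
        d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)      # ignore self-matches
        nn = d2.argmin(axis=1)            # nearest neighbor of each point
        mid = 0.5 * (pts + pts[nn])       # new points between neighbors
        pts = np.concatenate([pts, mid])
    return pts
```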
3D Registration of Aerial and Ground Robots for Disaster Response: An Evaluation of Features, Descriptors, and Transformation Estimation
Global registration of heterogeneous ground and aerial mapping data is a
challenging task. This is especially difficult in disaster response scenarios
when we have no prior information on the environment and cannot assume the
regular order of man-made environments or meaningful semantic cues. In this
work we extensively evaluate different approaches to globally register UGV
generated 3D point-cloud data from LiDAR sensors with UAV generated point-cloud
maps from vision sensors. The approaches are realizations of different
selections for: a) local features: key-points or segments; b) descriptors:
FPFH, SHOT, or ESF; and c) transformation estimations: RANSAC or FGR.
Additionally, we compare the results against standard approaches like applying
ICP after a good prior transformation has been given. The evaluation criteria
include the distance which a UGV needs to travel to successfully localize, the
registration error, and the computational cost. In this context, we report our
findings on effectively performing the task on two new Search and Rescue
datasets. Our results have the potential to help the community take informed
decisions when registering point-cloud maps from ground robots to those from
aerial robots. Comment: Awarded Best Paper at the 15th IEEE International Symposium on
Safety, Security, and Rescue Robotics 2017 (SSRR 2017).
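Whichever local features and descriptors are chosen, both RANSAC and FGR ultimately need a closed-form rigid transform from a set of matched point pairs. A standard Kabsch/SVD solver for that final step — generic textbook code, not taken from the evaluated systems — looks like:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch algorithm): returns R, t such
    that R @ src[i] + t best matches dst[i] over all correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered pairs
    U, _, Vt = np.linalg.svd(H)
    # fix a possible reflection so that det(R) = +1
    sign = np.sign(np.linalg.det(Vt.T @ U.T))
    S = np.diag([1.0] * (src.shape[1] - 1) + [sign])
    R = Vt.T @ S @ U.T
    t = cd - R @ cs
    return R, t
```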
Point Cloud Structural Parts Extraction based on Segmentation Energy Minimization
In this work we consider 3D point sets, which in a typical setting represent unorganized point clouds. Segmenting these point sets first requires singling out structural components of the unknown surface that the point cloud discretely approximates. Structural components, in turn, are surface patches approximating unknown parts of elementary geometric structures such as planes, ellipsoids, and spheres. Our approach is based on level set methods, which compute the moving front of the surface and trace the interfaces between its different parts. Level set methods are widely recognized as among the most efficient methods for segmenting both 2D images and 3D medical images, and their use for 3D segmentation has recently received increasing interest. We contribute a novel approach for raw point sets: based on the motion and distance functions of the level set, we introduce four energy minimization models for segmentation, each built on a distance function specified by geometric features. Finally, we evaluate the proposed algorithm on point sets simulating unorganized point clouds.
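A toy version of the distance-driven energy assignment — without the level-set front evolution the paper couples it with — assigns each point to the primitive whose distance function yields the lowest squared-distance energy. The primitives below (a plane and a sphere) are illustrative choices, not the paper's four models.

```python
import numpy as np

def segment_by_energy(points, primitives):
    """Assign each point the label of the primitive minimizing its
    squared-distance energy. `primitives` is a list of callables mapping
    an (n, d) point array to per-point unsigned distances."""
    energies = np.stack([dist(points) ** 2 for dist in primitives], axis=1)
    return energies.argmin(axis=1)
```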
DeepICP: An End-to-End Deep Neural Network for 3D Point Cloud Registration
We present DeepICP - a novel end-to-end learning-based 3D point cloud
registration framework that achieves comparable registration accuracy to prior
state-of-the-art geometric methods. Different from other keypoint based methods
where a RANSAC procedure is usually needed, we implement the use of various
deep neural network structures to establish an end-to-end trainable network.
Our keypoint detector is trained through this end-to-end structure, enabling
the system to avoid the interference of dynamic objects, leverage
sufficiently salient features on stationary objects, and, as a result,
achieve high robustness. Rather than searching for corresponding points among
existing points, our key contribution is to generate them from learned
matching probabilities over a group of candidates, which boosts registration
accuracy. Our loss function incorporates both local similarity and global
geometric constraints to ensure that these network designs converge in the
right direction. We comprehensively validate the
effectiveness of our approach using both the KITTI dataset and the
Apollo-SouthBay dataset. Results demonstrate that our method achieves
comparable or better performance than the state-of-the-art geometry-based
methods. Detailed ablation and visualization analyses are included to further
illustrate the behavior and insights of our network. The low registration error
and high robustness of our method make it attractive for applications that
rely on point cloud registration. Comment: 10 pages, 6 figures, 3 tables; typos corrected, experimental results
updated; accepted by ICCV 2019.
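The correspondence-generation idea — producing a "virtual" matched point as a probability-weighted average of candidates, rather than picking one existing point — can be sketched with a non-learned stand-in where matching probabilities come from a softmax over feature distances. The feature vectors and the `temperature` knob are assumptions of this sketch; DeepICP learns both the features and the matching network.

```python
import numpy as np

def generate_correspondence(keypoint_feat, cand_pts, cand_feats, temperature=1.0):
    """Generate a virtual corresponding point as the probability-weighted
    average of candidate points, with probabilities from a softmax over
    (negative) squared feature distances. Non-learned stand-in for a
    matching network."""
    d2 = ((cand_feats - keypoint_feat) ** 2).sum(axis=1)
    logits = -d2 / temperature
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    return p @ cand_pts                 # weighted average, not an input point
```

Because the output is a convex combination of candidates, it can land between existing points, which is what lets this construction refine correspondences beyond the sampling resolution.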