A saliency-based framework for 2D-3D registration
Abstract: Here we propose a saliency-based filtering approach to the problem of registering an untextured 3D object to a single monocular image. The principle of saliency can be applied to a range of modalities and domains to find intrinsically descriptive entities from amongst detected entities, making it a rigorous approach to multi-modal registration. We build on the Kadir-Brady saliency framework because its principled information-theoretic approach extends naturally to the 3D domain. The salient points from each domain are initially aligned using the SoftPosit algorithm; this alignment is subsequently refined by matching the object silhouette against contours extracted from the image. Whereas other point-based registration algorithms focus on corners or straight lines, our saliency-based approach is more general and more widely applicable, e.g. to curved surfaces where a corner detector would fail. We compare our salient point detector to the Harris corner and SIFT keypoint detectors and show it generally achieves superior registration accuracy.
Saliency-guided integration of multiple scans
We present a novel method…
Quantitative Analysis of Saliency Models
Previous saliency detection research required the reader to evaluate performance qualitatively, based on renderings of saliency maps on a few shapes. This qualitative approach made it unclear which saliency models were better, or how well they compared to human perception. This paper provides a quantitative evaluation framework that addresses this issue. In the first quantitative analysis of 3D computational saliency models, we evaluate four computational saliency models and two baseline models against ground-truth saliency collected in previous work.
Comment: 10 pages
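Comparing a predicted saliency map against ground truth needs a scalar agreement score. One standard choice for saliency benchmarks (offered here as an illustration, not necessarily the metric this paper uses) is the linear correlation coefficient between the per-vertex predicted and ground-truth saliency values:

```python
import numpy as np

def pearson_cc(pred, gt):
    """Linear correlation coefficient between a predicted saliency
    map and ground-truth saliency, both given as flat per-vertex
    arrays. +1 is perfect agreement, 0 is no linear relation,
    -1 is perfect anti-correlation."""
    zp = (pred - pred.mean()) / pred.std()
    zg = (gt - gt.mean()) / gt.std()
    return float(np.mean(zp * zg))
```

Because the score is invariant to affine rescaling of either map, models that differ only in their output range are compared fairly.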
3DFeat-Net: Weakly Supervised Local 3D Features for Point Cloud Registration
In this paper, we propose 3DFeat-Net, which learns both a 3D feature detector and a descriptor for point cloud matching using weak supervision. Unlike many existing works, we do not require manual annotation of matching point clusters. Instead, we leverage alignment and attention mechanisms to learn feature correspondences from GPS/INS-tagged 3D point clouds without explicitly specifying them. We create training and benchmark outdoor LiDAR datasets, and experiments show that 3DFeat-Net obtains state-of-the-art performance on these gravity-aligned datasets.
Comment: 17 pages, 6 figures. Accepted in ECCV 201
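The weak supervision described above boils down to a contrastive objective: GPS/INS alignment tells us which point-cloud pairs overlap (positives) and which do not (negatives), without labelling individual point matches. A common surrogate loss for this setting is a triplet margin loss on the learned descriptors; the sketch below is a generic illustration of that idea in NumPy, not the paper's exact loss or architecture.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=0.2):
    """Weak-supervision surrogate: the descriptor of an anchor cloud
    should lie closer to the descriptor of a GPS/INS-aligned
    overlapping cloud (positive) than to that of a non-overlapping
    cloud (negative), by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero once the positive is sufficiently closer than the negative, so training pressure concentrates on the pairs the descriptor still confuses; individual point correspondences emerge implicitly, which matches the paper's claim of learning without explicitly specifying them.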