
    LRF-Net: Learning Local Reference Frames for 3D Local Shape Description and Matching

    The local reference frame (LRF) plays a critical role in 3D local shape description and matching. However, most existing LRFs are hand-crafted and suffer from limited repeatability and robustness. This paper presents the first attempt to learn an LRF via a Siamese network that requires only weak supervision. In particular, we argue that each neighboring point in the local surface makes a unique contribution to LRF construction, and we measure such contributions via learned weights. Extensive analysis and comparative experiments on three public datasets addressing different application scenarios demonstrate that LRF-Net is more repeatable and robust than several state-of-the-art LRF methods, even though LRF-Net is trained on only one dataset. In addition, LRF-Net can significantly boost local shape description and 6-DoF pose estimation performance when matching 3D point clouds. Comment: 28 pages, 14 figures
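    The paper produces the per-neighbor weights with a Siamese network; the sketch below (plain NumPy, with the weights simply passed in as an array) only illustrates how such weights could enter a covariance-style LRF construction. The `weighted_lrf` function and its sign-disambiguation convention are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal sketch (not the authors' network): build a local reference frame
# (LRF) at a keypoint, scaling each neighbor's contribution by a weight.
# In LRF-Net these weights are learned; here they are given.
import numpy as np

def weighted_lrf(keypoint, neighbors, weights):
    """Return a 3x3 LRF whose rows are the x, y, z axes.

    keypoint:  (3,) feature point p.
    neighbors: (N, 3) points in p's local support.
    weights:   (N,) per-neighbor contributions (assumed given here).
    """
    diffs = neighbors - keypoint                  # vectors p_i - p
    w = weights / (weights.sum() + 1e-12)         # normalize weights
    # z-axis: smallest eigenvector of the weighted covariance
    # (a weighted variant of the classical normal estimate).
    cov = (w[:, None] * diffs).T @ diffs
    _, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    z = eigvecs[:, 0]
    # Sign convention (arbitrary but repeatable): z points away from
    # the weighted mass of the neighbors.
    if np.dot(w, diffs @ z) > 0:
        z = -z
    # x-axis: weighted sum of neighbor vectors projected onto the
    # tangent plane of z (one common hand-crafted choice).
    proj = diffs - np.outer(diffs @ z, z)
    x = (w[:, None] * proj).sum(axis=0)
    x /= np.linalg.norm(x) + 1e-12
    y = np.cross(z, x)
    return np.stack([x, y, z])

# Toy usage with random data and uniform weights.
rng = np.random.default_rng(0)
nbrs = rng.normal(size=(50, 3))
frame = weighted_lrf(np.zeros(3), nbrs, np.ones(50))
print(frame @ frame.T)  # ~identity: the axes form an orthonormal frame
```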

    Histogram of distances for local surface description

    3D object recognition has proven superior to its 2D counterpart in numerous implementations, making it an active research topic. Local descriptor-based proposals in particular, although quite accurate, are limited by the stability of the local reference frame or axis (LRF/A) on which their descriptors are defined. Moreover, extra processing time is required to estimate the LRF for each local patch. We propose a 3D descriptor that eliminates the need for an LRF/A, dramatically reducing the processing time required. In addition, it is robust to high levels of noise and non-uniform subsampling. Our approach, namely Histogram of Distances, is based on multiple L2-norm metrics of local patches, providing a simple and fast-to-compute descriptor suitable for time-critical applications. Evaluation on both high- and low-quality popular point clouds showed its promising performance.
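    As a rough illustration of describing a patch purely through L2 distances (which are unchanged by rotation and translation, so no LRF/A is needed), here is a minimal sketch; the bin count, pair sampling, and normalization are our own illustrative choices, not the paper's exact recipe.

```python
# Minimal sketch of an LRF-free "histogram of distances" style descriptor:
# a normalized histogram of L2 distances between point pairs sampled from a
# local patch. Pairwise distances are invariant to rotation and translation.
import numpy as np

def histogram_of_distances(patch, radius, bins=32, pairs=2000, seed=0):
    """patch: (N, 3) points of a local surface patch; radius: support size."""
    rng = np.random.default_rng(seed)
    n = len(patch)
    i = rng.integers(0, n, size=pairs)
    j = rng.integers(0, n, size=pairs)
    d = np.linalg.norm(patch[i] - patch[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 2.0 * radius))
    return hist / (hist.sum() + 1e-12)   # normalize for subsampling robustness

# Rotation-invariance check: the descriptor is identical after applying a
# random orthonormal transform to the patch.
rng = np.random.default_rng(1)
patch = rng.normal(size=(500, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random orthonormal matrix
h1 = histogram_of_distances(patch, radius=3.0)
h2 = histogram_of_distances(patch @ q.T, radius=3.0)
print(np.abs(h1 - h2).max())                    # ~0
```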

    Visual pose estimation system for autonomous rendezvous of spacecraft

    In this work, a tracker spacecraft equipped with a short-range vision system is tasked with visually identifying a target spacecraft and determining its relative angular velocity and relative linear velocity using only visual information from onboard cameras. Focusing on methods that are feasible for implementation on relatively simple spacecraft hardware, we locate and track objects in three-dimensional space using conventional high-resolution cameras, saving cost and power compared to laser or infrared ranging systems. Identification of the target is done by means of visual feature detection and tracking across rapid, successive frames, taking the perspective matrix of the camera system into account, and building feature maps in three dimensions over time. Features detected in two-dimensional images are matched and triangulated to provide three-dimensional feature maps using structure-from-motion techniques. This methodology allows one, two, or more cameras with known baselines to be used for triangulation, with more images resulting in higher accuracy. Triangulated points are organized by means of orientation histogram descriptors and used to identify and track parts of the target spacecraft over time. This allows some estimation of the target spacecraft's motion even if parts of the spacecraft are obscured or in shadow. The state variables with respect to the camera system are extracted as a relative rotation quaternion and relative translation vector for the target. Robust tracking of the state variables for the target spacecraft is accomplished by an embedded adaptive unscented Kalman filter. In addition to estimating the target quaternion from visual information, the adaptive filter can also identify when tracking errors have occurred by measuring the residual. Significant variations in lighting can be tolerated as long as the movement of the satellite is consistent with the system model, and illumination changes slowly enough for state variables to be estimated periodically. Inertial measurements over short periods of time can then be used to determine the movement of both the tracker and target spacecraft. In addition, with a sufficient number of features tracked, the center of mass of the target can be located. This method is tested using laboratory images of spacecraft movement with a simulated spacecraft movement model. Varying conditions are applied to demonstrate the effectiveness and limitations of the system for online estimation of the movement of a target spacecraft at close range.
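    The triangulation step described above can be illustrated with classical linear (DLT) two-view triangulation. The sketch below uses made-up intrinsics and a made-up baseline, and is not the paper's exact pipeline; it only shows how a matched 2D feature pair and known projection matrices yield a 3D point.

```python
# Minimal sketch of two-view triangulation: given a feature matched in two
# images with known projection matrices (intrinsics and baseline), recover
# the 3D point by linear (DLT) triangulation. The camera matrices below are
# illustrative, not from the paper.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: (3, 4) projection matrices; x1, x2: (2,) pixel coordinates.

    Solves A X = 0, each measurement contributing two rows, then
    dehomogenizes the smallest-singular-vector solution."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

# Toy setup: identical intrinsics, second camera offset along x (baseline).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])  # 0.5 m baseline
X_true = np.array([0.2, -0.1, 4.0])
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)  # ~[0.2, -0.1, 4.0]
```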

    Global Context Aware Convolutions for 3D Point Cloud Understanding

    Recent advances in deep learning for 3D point clouds have shown great promise in scene understanding tasks, thanks to the introduction of convolution operators that consume 3D point clouds directly in a neural network. Point cloud data, however, can have arbitrary rotations, especially when acquired from 3D scanning. Recent works show that it is possible to design point cloud convolutions with a rotation-invariance property, but such methods generally do not perform as well as convolutions that are only translation-invariant. We found that a key reason is that, compared to point coordinates, the rotation-invariant features consumed by point cloud convolution are not as distinctive. To address this problem, we propose a novel convolution operator that enhances feature distinction by integrating global context information from the input point cloud into the convolution. To this end, a globally weighted local reference frame is constructed in each point neighborhood, in which the local point set is decomposed into bins. Anchor points are generated in each bin to represent global shape features. A convolution can then be performed to transform the points and anchor features into final rotation-invariant features. We conduct several experiments on point cloud classification, part segmentation, shape retrieval, and normal estimation to evaluate our convolution, which achieves state-of-the-art accuracy under challenging rotations.
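    A minimal sketch of the bin-and-anchor idea, under our own illustrative choices (azimuthal binning, centroid anchors, and the function name `anchors_in_frame`): points are expressed in a reference frame, split into bins, and each bin is summarized by an anchor point. When the frame co-rotates with the data, as an LRF built from the data does, the anchors are rotation-invariant.

```python
# Minimal sketch (our reading of the idea, with illustrative choices):
# express a neighborhood in a reference frame, bin it, and take each bin's
# centroid as an "anchor" summarizing that part of the shape.
import numpy as np

def anchors_in_frame(points, center, frame, n_bins=4):
    """points: (N, 3) neighborhood; frame: (3, 3) rows = frame axes.

    Bins points by azimuth angle in the frame's xy-plane and returns one
    anchor (bin centroid, in frame coordinates) per bin."""
    local = (points - center) @ frame.T            # rotate into the frame
    angle = np.arctan2(local[:, 1], local[:, 0])   # azimuth in (-pi, pi]
    bin_id = ((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    anchors = np.zeros((n_bins, 3))
    for b in range(n_bins):
        members = local[bin_id == b]
        if len(members):
            anchors[b] = members.mean(axis=0)
    return anchors

# If the frame co-rotates with the cloud (as a data-derived LRF does),
# the anchors are unchanged under rotation:
rng = np.random.default_rng(2)
pts = rng.normal(size=(200, 3))
frame = np.eye(3)
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
a1 = anchors_in_frame(pts, np.zeros(3), frame)
a2 = anchors_in_frame(pts @ q.T, np.zeros(3), frame @ q.T)
print(np.abs(a1 - a2).max())  # ~0
```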

    Heterogeneous registration of 3D point clouds: application to underwater imaging

    The registration of two 3D point clouds is an essential step in many applications. The objective of our work is to estimate the isometric transformation that best merges two heterogeneous point clouds obtained from two different sensors. In this paper, we present a new approach for 3D-3D registration that is distinguished by the nature of the signature extracted at each point and by the similarity criterion used to measure the degree of resemblance. The descriptor we propose is invariant to rotation and translation, and it also overcomes the multi-resolution problem associated with heterogeneous data. Finally, our approach was tested on synthetic data and applied to real heterogeneous data.
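    Once correspondences have been established by the descriptor and similarity criterion, the isometric (rigid) transformation itself can be estimated in closed form. The sketch below uses the classical SVD (Kabsch) solution, a standard choice rather than the authors' specific method.

```python
# Minimal sketch of the alignment step: least-squares rigid transform from
# matched point pairs via the classical SVD/Kabsch solution. The matching
# itself (the paper's descriptor and similarity criterion) is assumed done.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that R @ src_i + t ~ dst_i (both (N, 3))."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation from noiseless matches.
rng = np.random.default_rng(3)
src = rng.normal(size=(100, 3))
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true = q * np.sign(np.linalg.det(q))              # force a proper rotation
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), t)                    # True, ~[1, -2, 0.5]
```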