
    Multiscale 3D Shape Analysis using Spherical Wavelets

    ©2005 Springer. The original publication is available at www.springerlink.com: http://dx.doi.org/10.1007/11566489_57
    Shape priors attempt to represent biological variations within a population. When variations are global, Principal Component Analysis (PCA) can be used to learn major modes of variation, even from a limited training set. However, when significant local variations exist, PCA typically cannot represent such variations from a small training set. To address this issue, we present a novel algorithm that learns shape variations from data at multiple scales and locations using spherical wavelets and spectral graph partitioning. Our results show that when the training set is small, our algorithm significantly improves the approximation of shapes in a testing set over PCA, which tends to oversmooth data.
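    The PCA shape prior described above can be sketched in a few lines: learn a mean shape and the dominant modes of variation from training shapes, then approximate a new shape by projecting it onto those modes. The toy data below (random landmark vectors driven by two hypothetical global modes) is illustrative only, not the paper's biological data.

```python
import numpy as np

# Hypothetical toy data: 20 "shapes", each a flattened vector of 50 landmark
# coordinates, generated from 2 underlying global modes plus small noise.
rng = np.random.default_rng(0)
modes = rng.normal(size=(2, 50))
coeffs = rng.normal(size=(20, 2))
shapes = coeffs @ modes + 0.01 * rng.normal(size=(20, 50))

# Learn a PCA shape prior: mean shape plus major modes of variation.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centered data; rows of Vt are the principal modes.
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2                          # keep the two dominant global modes
prior_modes = Vt[:k]

# Approximate a new shape by projecting onto the prior and reconstructing.
new_shape = 1.5 * modes[0] - 0.5 * modes[1]
b = (new_shape - mean_shape) @ prior_modes.T   # mode coefficients
recon = mean_shape + b @ prior_modes

err = np.linalg.norm(recon - new_shape) / np.linalg.norm(new_shape)
print(f"relative reconstruction error: {err:.4f}")
```

    Because this toy variation is purely global, two PCA modes suffice even for 20 training shapes; the paper's point is that strong *local* variation breaks this picture, motivating the multiscale wavelet decomposition.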

    Optimal designs for 3D shape analysis with spherical harmonic descriptors

    We determine optimal designs for some regression models which are frequently used for describing 3D shapes. These models are based on a Fourier expansion of a function defined on the unit sphere in terms of spherical harmonic basis functions. In particular, it is demonstrated that the uniform distribution on the sphere is optimal with respect to all Φp-criteria proposed by Kiefer (1974), and also optimal with respect to a criterion which maximizes a p-mean of the r smallest eigenvalues of the variance-covariance matrix. This criterion is related to principal component analysis, which is the common tool for analyzing this type of image data. Moreover, discrete designs on the sphere are derived which yield the same information matrix in the spherical harmonic regression model as the uniform distribution, and are therefore directly implementable in practice. It is demonstrated that the new designs are substantially more efficient than the commonly used designs in 3D shape analysis.
    Keywords: shape analysis, spherical harmonic descriptors, optimal designs, quadrature formulas, principal component analysis, 3D image data
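    A minimal sketch of the spherical harmonic regression setting, assuming a degree-1 real spherical harmonic basis: for an (approximately) uniform design on the sphere, the information matrix of the least-squares problem approaches a multiple of the identity, consistent with the optimality of the uniform distribution discussed above. The basis degree and sample size here are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def sh_basis_deg1(points):
    # Real spherical harmonic basis up to degree 1 on the unit sphere:
    # Y_0^0 is constant; the three degree-1 functions are proportional
    # to the Cartesian coordinates y, z, x.
    x, y, z = points.T
    c0 = 0.5 / np.sqrt(np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

# Approximately uniform design: normalized Gaussian samples are
# uniformly distributed on the sphere S^2.
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

X = sh_basis_deg1(pts)
# Normalized information matrix X^T X / n; under the uniform design it is
# close to I / (4*pi), since the basis is orthonormal with respect to the
# surface measure of total mass 4*pi.
M = X.T @ X / len(pts)
print(np.round(M, 3))
```

    A diagonal (well-conditioned) information matrix means all basis coefficients are estimated with equal precision, which is exactly what the Φp-criteria reward.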

    Region-based saliency estimation for 3D shape analysis and understanding

    The detection of salient regions is an important pre-processing step for many 3D shape analysis and understanding tasks. This paper proposes a novel method for saliency detection in 3D free-form shapes. First, we smooth the surface normals with a bilateral filter, which smooths the surface while retaining local details. Second, a novel method is proposed for estimating the saliency value of each vertex. To this end, two new features are defined: the Retinex-based Importance Feature (RIF) and the Relative Normal Distance (RND), based on human visual perception characteristics and surface geometry respectively. Since the vertex-based method cannot guarantee that the detected salient regions are semantically continuous and complete, we propose to refine the saliency values based on surface patches. The detected saliency is finally used to guide existing techniques for mesh simplification, interest point detection, and overlapping point cloud registration. Comparative studies on real data from three publicly accessible databases show that the proposed method usually outperforms five selected state-of-the-art methods, both qualitatively and quantitatively, for saliency detection and 3D shape analysis and understanding.
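    The bilateral filtering step can be sketched as follows: each vertex normal is replaced by a weighted average of nearby normals, where a spatial weight keeps the filter local and a range weight on normal differences preserves sharp features. The flat toy surface and the parameter values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy surface: a flat 10x10 vertex grid whose true normal is +z,
# with noisy per-vertex normals to denoise.
xx, yy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
verts = np.stack([xx.ravel(), yy.ravel(), np.zeros(100)], axis=1)
normals = np.tile([0.0, 0.0, 1.0], (100, 1)) + 0.2 * rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

def bilateral_smooth_normals(verts, normals, sigma_s=0.2, sigma_r=0.5):
    """One bilateral pass over vertex normals: the spatial weight keeps the
    filter local, the range weight (normal difference) preserves features."""
    out = np.empty_like(normals)
    for i in range(len(verts)):
        d2 = np.sum((verts - verts[i]) ** 2, axis=1)   # spatial distance^2
        r2 = np.sum((normals - normals[i]) ** 2, axis=1)  # normal difference^2
        w = np.exp(-d2 / (2 * sigma_s**2)) * np.exp(-r2 / (2 * sigma_r**2))
        n = (w[:, None] * normals).sum(axis=0)
        out[i] = n / np.linalg.norm(n)                 # renormalize
    return out

smoothed = bilateral_smooth_normals(verts, normals)
align_before = normals[:, 2].mean()
align_after = smoothed[:, 2].mean()
print(f"mean z-alignment: {align_before:.3f} -> {align_after:.3f}")
```

    On this flat toy patch the smoothed normals align more closely with the true +z normal; on a real mesh, the range weight is what prevents edges and ridges from being blurred away.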

    Evaluating 3D Shape Analysis Methods for Robustness to Rotation Invariance

    This paper analyzes the robustness of recent 3D shape descriptors to SO(3) rotations, a property fundamental to shape modeling. Specifically, we formulate the task of rotated 3D object instance detection. To do so, we consider a database of 3D indoor scenes where objects occur in different orientations. We benchmark different methods for feature extraction and classification in the context of this task. We systematically contrast different choices in a variety of experimental settings, investigating the impact on performance of different rotation distributions, different degrees of partial observation of the object, and different levels of difficulty of negative pairs. Our study, on a synthetic dataset of 3D scenes where object instances occur in different orientations, reveals that deep learning-based rotation-invariant methods are effective for relatively easy settings with easy-to-distinguish pairs. However, their performance decreases significantly when the difference in rotations between the input pair is large, when the degree of observation of the input objects is reduced, or when the difficulty level of the input pair is increased. Finally, we connect feature encodings designed for rotation-invariant methods to the 3D geometry that enables them to acquire the property of rotation invariance.
    Comment: 20th Conference on Robots and Vision (CRV) 202
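    Rotation invariance in the sense studied above can be illustrated with a classical hand-crafted descriptor built purely from pairwise distances (a D2-style shape distribution): since rigid rotations preserve distances, the descriptor is identical for a point cloud and any SO(3)-rotated copy of it. This is an illustration of the invariance property itself, not one of the learned methods benchmarked in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal
    # matrix; fix signs so the result is a proper rotation in SO(3).
    Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
    Q *= np.sign(np.diag(R))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

def d2_descriptor(points, bins=16, r_max=6.0):
    # D2 shape distribution: histogram of all pairwise point distances.
    # Distances are unchanged by any rigid rotation, so the descriptor
    # is rotation invariant by construction.
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    d = d[np.triu_indices(len(points), k=1)]   # unique pairs only
    h, _ = np.histogram(d, bins=bins, range=(0.0, r_max), density=True)
    return h

pts = rng.normal(size=(200, 3))        # a toy "object" point cloud
R = random_rotation(rng)
f1 = d2_descriptor(pts)
f2 = d2_descriptor(pts @ R.T)          # the same object, rotated
print(np.allclose(f1, f2))
```

    The paper's question is whether *learned* encodings achieve this property as robustly as such geometric constructions do, especially under large rotations and partial observation.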