    A Minimalist Approach to Type-Agnostic Detection of Quadrics in Point Clouds

    This paper proposes a segmentation-free, automatic, and efficient procedure to detect general geometric quadric forms in point clouds, where clutter and occlusions are inevitable. Our everyday world is dominated by man-made objects that are designed from 3D primitives (such as planes, cones, spheres, and cylinders). These objects are also omnipresent in industrial environments. This raises the possibility of abstracting 3D scenes through primitives, thereby positioning these geometric forms as an integral part of perception and high-level 3D scene understanding. As opposed to the state of the art, where a tailored algorithm treats each primitive type separately, we propose to encapsulate all types in a single robust detection procedure. At the center of our approach lies a closed-form 3D quadric fit, operating in both primal and dual spaces and requiring as few as 4 oriented points. Around this fit, we design a novel local null-space voting strategy that reduces the 4-point case to 3. Voting is coupled with the well-known RANSAC scheme and makes our algorithm orders of magnitude faster than its conventional counterparts. This is the first method capable of performing generic cross-type multi-object primitive detection in difficult scenes. Results on synthetic and real datasets support the validity of our method.
    Comment: Accepted for publication at CVPR 201
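    The core ingredient here, a closed-form quadric fit, can be illustrated in a much simpler form than the paper's primal-dual, oriented-point version: the 10 coefficients of a general quadric x^T Q x = 0 span the null space of a monomial design matrix built from surface samples. A minimal sketch (toy setup, not the authors' method):

    ```python
    # Toy quadric fit: recover the 10 coefficients of a general quadric
    # from surface samples via the null space of a monomial design matrix.
    # This is a plain least-squares version, not the paper's primal-dual,
    # oriented-point fit or its RANSAC/voting pipeline.
    import numpy as np

    def fit_quadric(points):
        """Return c minimizing ||A c|| subject to ||c|| = 1."""
        x, y, z = points.T
        # Monomial basis of a general quadric surface.
        A = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z,
                             x, y, z, np.ones_like(x)])
        # The right singular vector of the smallest singular value
        # spans the (approximate) null space of A.
        _, _, vt = np.linalg.svd(A, full_matrices=False)
        return vt[-1]

    # Sample the unit sphere x^2 + y^2 + z^2 - 1 = 0.
    rng = np.random.default_rng(0)
    p = rng.normal(size=(200, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    c = fit_quadric(p)
    c /= c[0]  # normalize so the x^2 coefficient is 1
    print(np.round(c, 3))  # expect [1, 1, 1, 0, 0, 0, 0, 0, 0, -1]
    ```

    In a robust pipeline such a fit would be run inside RANSAC on small minimal samples; the paper's contribution is precisely a minimal (3- to 4-point) oriented variant of this step.
    
    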

    Source Reconstruction as an Inverse Problem

    Inverse-problem techniques offer powerful tools that deal naturally with marginal data and asymmetric or strongly smoothing kernels, in cases where parameter-fitting methods may be used only with caution. Although they are typically subject to some bias, they can invert data without requiring one to assume a particular model for the source. The Backus-Gilbert method in particular concentrates on the tradeoff between resolution and stability, and allows one to select an optimal compromise between them. We use these tools to analyse the problem of reconstructing features of the source star in a microlensing event, show that it should be possible to obtain useful information about the star with reasonably obtainable data, and note that the quality of the reconstruction is more sensitive to the number of data points than to the quality of individual ones.
    Comment: 8 pages, 3 figures. To be published in "Microlensing 2000, A New Era of Microlensing Astrophysics", eds. J.W. Menzies and P.D. Sackett, ASP Conference Series
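    The resolution-stability tradeoff at the heart of Backus-Gilbert can be sketched in a few lines. The setup below is my own toy problem (Gaussian smoothing kernels on a 1-D grid, not the microlensing kernels of the paper): for a target point x0 we choose weights q so that the averaging kernel sum_i q_i K_i(x) is as peaked around x0 as possible, with a parameter lam penalizing large, noise-amplifying weights.

    ```python
    # Minimal Backus-Gilbert sketch (toy 1-D setup, not the paper's
    # microlensing kernels). Data are d_i = integral K_i(x) f(x) dx with
    # Gaussian smoothing kernels K_i; weights q trade resolution (spread
    # of the averaging kernel) against stability (size of the weights).
    import numpy as np

    x = np.linspace(0.0, 1.0, 400)
    dx = x[1] - x[0]
    centers = np.linspace(0.1, 0.9, 25)
    K = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 0.08) ** 2)  # (25, 400)

    def bg_weights(x0, lam):
        u = K.sum(axis=1) * dx                  # unit-area constraint vector
        S = (K * (x - x0) ** 2) @ K.T * dx      # spread matrix (resolution)
        M = S + lam * np.eye(len(centers))      # lam penalizes noisy weights
        w = np.linalg.solve(M, u)
        return w / (u @ w)                      # enforce q^T u = 1

    q = bg_weights(0.5, 1e-6)
    avg = q @ K                                 # averaging kernel at x0 = 0.5
    print(x[np.argmax(avg)])                    # peaks near 0.5
    ```

    Sweeping lam traces out the tradeoff curve the abstract refers to: small lam gives a narrow averaging kernel but large, noise-sensitive weights; large lam the reverse.
    
    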

    Restoration of the cantilever bowing distortion in Atomic Force Microscopy

    Due to the mechanics of the Atomic Force Microscope (AFM), a curvature distortion (bowing effect) is present in the acquired images. At present, flattening such images requires human intervention to manually segment object data from the background, which is time-consuming and highly inaccurate. In this paper, an automated algorithm to flatten lines from AFM images is presented. The proposed method classifies the data into objects and background, and fits convex lines in an iterative fashion. Results on real images of DNA-wrapped carbon nanotubes (DNA-CNTs) and on synthetic experiments are presented, demonstrating the effectiveness of the proposed algorithm in increasing the resolution of the surface topography. In addition, a link between the flattening problem and MRI inhomogeneity (shading) is established, and the proposed method is compared to an entropy-based MRI inhomogeneity correction method.
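    The classify-then-fit idea can be sketched with a much cruder classifier than the paper's iterative convex fitting: on each scan line, treat the lowest pixels as background, fit a low-order polynomial to them, and subtract the fit. The threshold and polynomial order below are illustrative assumptions, not the paper's parameters.

    ```python
    # Hedged sketch of per-line AFM background flattening, simplified from
    # the paper's iterative convex fitting: low pixels are assumed to be
    # background, a polynomial is fitted to them per scan line, and the
    # fit is subtracted to remove the bowing curvature.
    import numpy as np

    def flatten_lines(img, order=2, thresh_quantile=0.6):
        out = np.empty_like(img, dtype=float)
        cols = np.arange(img.shape[1])
        for i, line in enumerate(img):
            # Crude object/background split: keep the lowest pixels.
            bg = line <= np.quantile(line, thresh_quantile)
            coeffs = np.polyfit(cols[bg], line[bg], order)
            out[i] = line - np.polyval(coeffs, cols)
        return out

    # Synthetic check: parabolic bow plus a raised "object" stripe.
    cols = np.arange(256)
    bow = 1e-4 * (cols - 128) ** 2
    img = np.tile(bow, (64, 1))
    img[:, 100:110] += 5.0                 # object pixels sit above background
    flat = flatten_lines(img)
    print(np.abs(flat[:, :50]).max())      # background is now close to flat
    ```

    The hard part, which this sketch sidesteps and the paper addresses, is making the object/background classification reliable when objects cover a large or unknown fraction of each line.
    
    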

    Piecewise algebraic surface computation and fairing from a discrete model

    This paper describes a constrained fairing method for implicit surfaces defined on a voxelization. The method is suitable for computing a closed smooth surface that approximates an initial set of face-connected voxels.
    Preprint

    Sampling and Reconstruction of Shapes with Algebraic Boundaries

    We present a sampling theory for a class of binary images with a finite rate of innovation (FRI). Every image in our model is the restriction of \mathds{1}_{\{p\leq 0\}} to the image plane, where \mathds{1} denotes the indicator function and p is some real bivariate polynomial. In particular, this means that the boundaries in the image form a subset of an algebraic curve with implicit polynomial p. We show that the image parameters (i.e., the polynomial coefficients) satisfy a set of linear annihilation equations whose coefficients are the image moments. The inherent sensitivity of the moments to noise makes the reconstruction process numerically unstable and narrows the choice of sampling kernels to polynomial-reproducing kernels. As a remedy, we replace conventional moments with more stable \emph{generalized moments} that are adjusted to the given sampling kernel. The benefits are threefold: (1) the requirements on the sampling kernels are relaxed, (2) the resulting annihilation equations are numerically robust, and (3) the results extend to images with unbounded boundaries. We further reduce the sensitivity of the reconstruction to noise by taking into account the sign of the polynomial at certain points and sequentially enforcing measurement consistency. Various numerical experiments demonstrate the performance of our algorithm in reconstructing binary images, covering low to moderate noise levels and a range of realistic sampling kernels.
    Comment: 12 pages, 14 figures
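    The image model \mathds{1}_{\{p\leq 0\}} can be made concrete with a toy fit that works from pixels rather than moments (so this is an illustration of the model, not of the paper's generalized-moment reconstruction): extract boundary pixels of a binary image and recover the implicit polynomial p as the null space of a monomial matrix.

    ```python
    # Toy illustration of the model 1_{p <= 0}: estimate the implicit
    # bivariate polynomial p of a binary image's boundary by a null-space
    # fit to boundary pixels. This works from pixels, not from the
    # (generalized) moments used in the paper's actual algorithm.
    import numpy as np

    # Binary image of a disc: p(x, y) = x^2 + y^2 - 0.25 <= 0.
    n = 128
    x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    img = (x**2 + y**2 - 0.25 <= 0).astype(int)

    # Boundary pixels: inside pixels with at least one outside 4-neighbour.
    pad = np.pad(img, 1)
    nbr_min = np.minimum.reduce([pad[:-2, 1:-1], pad[2:, 1:-1],
                                 pad[1:-1, :-2], pad[1:-1, 2:]])
    edge = (img == 1) & (nbr_min == 0)
    xb, yb = x[edge], y[edge]

    # Degree-2 monomials; boundary points satisfy p(x, y) ~ 0.
    A = np.column_stack([xb**2, xb*yb, yb**2, xb, yb, np.ones_like(xb)])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    c = vt[-1] / vt[-1][0]         # normalize the x^2 coefficient to 1
    print(np.round(c, 2))          # close to [1, 0, 1, 0, 0, -0.25]
    ```

    The point of the paper is that the same coefficients can be obtained from a handful of (generalized) moment measurements instead of a full pixel grid, which is what makes the FRI sampling viewpoint possible.
    
    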