
    Compression for Smooth Shape Analysis

    Most 3D shape analysis methods use triangular meshes to discretize both the shape and functions on it as piecewise linear functions. With this representation, shape analysis requires fine meshes to represent smooth shapes, and geometric operators such as normals, curvatures, or Laplace-Beltrami eigenfunctions come at large computational and memory cost. We avoid this bottleneck with a compression technique that represents a smooth shape as a subdivision surface and exploits the subdivision scheme to parametrize smooth functions on that shape with a few control parameters. This compression does not affect the accuracy of the Laplace-Beltrami operator and its eigenfunctions, and it allows us to compute shape descriptors and shape matchings at an accuracy comparable to triangular meshes but at a fraction of the computational cost. Our framework can also compress surfaces represented by point clouds, enabling shape analysis of 3D scanning data.
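The Laplace-Beltrami eigenfunctions mentioned above are usually computed from a discrete cotangent-weight Laplacian on the mesh. As a minimal sketch (not the paper's subdivision-based scheme), the following builds that standard cotangent Laplacian for a tiny closed mesh, a tetrahedron, and extracts its eigenfunctions with a dense eigensolver; all names here are illustrative:

```python
import numpy as np

def cot_laplacian(verts, faces):
    """Cotangent-weight Laplacian, a common discrete Laplace-Beltrami.
    For each triangle corner, the cotangent of its angle contributes
    to the weight of the opposite edge."""
    n = len(verts)
    L = np.zeros((n, n))
    for tri in faces:
        for a in range(3):
            i, j, k = tri[a], tri[(a + 1) % 3], tri[(a + 2) % 3]
            # Angle at vertex i is opposite edge (j, k).
            u, v = verts[j] - verts[i], verts[k] - verts[i]
            cot = np.dot(u, v) / np.linalg.norm(np.cross(u, v))
            L[j, k] -= 0.5 * cot
            L[k, j] -= 0.5 * cot
            L[j, j] += 0.5 * cot
            L[k, k] += 0.5 * cot
    return L

# Tetrahedron: the simplest closed triangle mesh.
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
L = cot_laplacian(verts, faces)
evals, evecs = np.linalg.eigh(L)  # eigenfunctions of the discrete operator
```

On a closed mesh the smallest eigenvalue is zero (the constant function); the cost of this dense eigensolve on fine meshes is exactly the bottleneck the abstract's compression targets.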

    Feature preserving smoothing of 3D surface scans

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004. Includes bibliographical references (p. 63-70). With the increasing use of geometry scanners to create 3D models, there is a rising need for effective denoising of data captured with these devices. This thesis presents new methods for smoothing scanned data, based on extensions of the bilateral filter to 3D. The bilateral filter is a non-linear, edge-preserving image filter; its extension to 3D leads to an efficient, feature-preserving filter for a wide class of surface representations, including points and "polygon soups." By Thouis Raymond Jones. S.M.
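The bilateral filter's edge-preserving behavior comes from multiplying a spatial Gaussian by a range (intensity-similarity) Gaussian, so samples across an edge get near-zero weight. A minimal 1D sketch (the thesis extends this idea to 3D surfaces; the parameter names here are illustrative):

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=4):
    """Edge-preserving bilateral filter: each output sample is a
    weighted mean, with weights combining spatial closeness (sigma_s)
    and intensity similarity (sigma_r)."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-(idx - i) ** 2 / (2 * sigma_s ** 2)) *
             np.exp(-(signal[idx] - signal[i]) ** 2 / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# Noisy step edge: smoothing flattens the noise but keeps the jump,
# because the range term suppresses weights across the step.
rng = np.random.default_rng(0)
step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
smoothed = bilateral_1d(step)
```

For surfaces, the same scheme is applied to vertex positions or displacements along normals, with geometric distance playing the spatial role and a local height difference playing the range role.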

    Edge Detection by Cost Minimization

    Edge detection is cast as a problem in cost minimization. This is achieved by the formulation of two cost functions which evaluate the quality of edge configurations. The first is a comparative cost function (CCF), which is a linear sum of weighted cost factors. It is heuristic in nature and can be applied only to pairs of similar edge configurations; it measures the relative quality between the configurations. The detection of edges is accomplished by a heuristic iterative search algorithm which uses the CCF to evaluate edge quality. The second cost function is the absolute cost function (ACF), which is also a linear sum of weighted cost factors. The cost factors capture desirable characteristics of edges such as accuracy in localization, thinness, and continuity. Edges are detected by finding the edge configurations that minimize the ACF. We have analyzed the function in terms of the characteristics of the edges in minimum-cost configurations. These characteristics are directly dependent on the associated weight of each cost factor. Through the analysis of the ACF, we provide guidelines on the choice of weights to achieve certain characteristics of the detected edges. Minimizing the ACF is accomplished by the use of simulated annealing. We have developed a set of strategies for generating next states for the annealing process; the method of generating next states allows the annealing process to be executed largely in parallel. Experimental results verify the usefulness of the CCF and ACF techniques for edge detection. In comparison, the ACF technique produces better edges than the CCF or other current detection techniques.
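The ACF minimization described above follows the standard simulated-annealing template: propose a neighboring configuration, always accept improvements, and accept worse states with probability exp(-delta/T) under a cooling schedule. A generic sketch on a toy binary "edge configuration" cost (a data term plus a continuity penalty; the cost factors and weights here are stand-ins, not the paper's):

```python
import math
import random

def anneal(cost, state, neighbor, t0=1.0, t_end=1e-3, alpha=0.995):
    """Generic simulated annealing: accept worse states with
    probability exp(-delta / T), cooling T geometrically."""
    rnd = random.Random(0)
    best, best_c = state, cost(state)
    cur, cur_c = state, best_c
    t = t0
    while t > t_end:
        cand = neighbor(cur, rnd)
        c = cost(cand)
        if c <= cur_c or rnd.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if c < best_c:
                best, best_c = cand, c
        t *= alpha
    return best, best_c

# Toy cost: agree with noisy binary data (localization) while
# penalizing label changes (a crude continuity factor with weight 2).
data = (0, 0, 1, 0, 0, 1, 1, 0, 1, 1)
def cost(s):
    return (sum(abs(a - b) for a, b in zip(s, data)) +
            2 * sum(s[i] != s[i + 1] for i in range(len(s) - 1)))

def neighbor(s, rnd):
    i = rnd.randrange(len(s))
    t = list(s)
    t[i] ^= 1          # flip one site
    return tuple(t)

best, best_c = anneal(cost, (0,) * 10, neighbor)
```

The single-site flip move is the simplest next-state strategy; the parallel execution the abstract mentions comes from proposing many such moves on disjoint sites at once.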

    Vessel Tree Reconstruction with Divergence Prior

    Accurate structural analysis of high-resolution 3D biomedical images of vessels is challenging and increasingly in demand for medical diagnosis. Previous curvature-regularization-based methods [10, 31] give promising results. However, their mathematical models are not designed for bifurcations and generate significant artifacts in such areas. To address the issue, we propose a new geometric regularization principle for reconstructing vector fields based on prior knowledge about their divergence. In our work, we focus on vector fields modeling blood flow patterns, which should be divergent in arteries and convergent in veins. We show that this previously ignored regularization constraint can significantly improve the quality of vessel tree reconstruction, particularly around bifurcations, where non-zero divergence is concentrated. Our divergence prior is critical for resolving the (binary) sign ambiguity in flow orientations produced by standard vessel filters, e.g., Frangi. Our vessel tree centerline reconstruction combines divergence constraints with robust curvature regularization. Our unsupervised method can reconstruct complete vessel trees with near-capillary details on both synthetic and real 3D volumes. Also, our method reduces angular reconstruction errors at bifurcations by a factor of two.
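The sign ambiguity the abstract refers to can be illustrated numerically: a vessel filter yields flow directions only up to sign, and the divergence of the field discriminates the two orientations. A 2D sketch (illustrative only, using finite differences on a synthetic radial field rather than the paper's 3D formulation):

```python
import numpy as np

# Synthetic radial "arterial" flow v(x) = x / |x|: divergent at the origin.
n = 33
ys, xs = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r = np.hypot(xs, ys) + 1e-6
vx, vy = xs / r, ys / r

def divergence(vx, vy, dx):
    """div v = d(vx)/dx + d(vy)/dy via central differences."""
    return np.gradient(vx, dx, axis=1) + np.gradient(vy, dx, axis=0)

dx = 2.0 / (n - 1)
div = divergence(vx, vy, dx)
# A vessel filter leaves the sign of v ambiguous; a divergence prior
# picks the orientation whose sign matches the vessel type:
# positive mean divergence -> artery-like, negative -> vein-like.
div_flipped = divergence(-vx, -vy, dx)
```

Here `div` is positive (source-like flow) while `div_flipped` is negative (sink-like), so thresholding the divergence sign resolves the binary orientation ambiguity.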

    2D and 3D surface image processing algorithms and their applications

    This doctoral dissertation develops algorithms for three applications: 2D image segmentation for solar filament disappearance detection, 3D mesh simplification, and 3D image warping for pre-surgery simulation. Filament area detection in solar images is an image segmentation problem. A combined thresholding and region growing method is proposed and applied in this application. Based on the filament area detection results, filament disappearances are reported in real time. The solar images from 1999 are processed with the proposed system, and three statistical results on filaments are presented. 3D images can be obtained by passive and active range sensing. An image registration process finds the transformation between each pair of range views. To model an object, a common reference frame in which all views can be transformed must be defined. After the registration, the range views should be integrated into a non-redundant model. Optimization is necessary to obtain a complete 3D model. A single surface representation can better fit the data. It may be further simplified for efficient rendering, storage, and transmission, or the representation can be converted to other formats. This work proposes an efficient algorithm for the mesh simplification problem, approximating an arbitrary mesh by a simplified mesh. The algorithm uses a root-mean-square distance error metric to decide the facet curvature. The two vertices of an edge and the surrounding vertices determine the average plane. The simplification results are excellent and the computation is fast; the algorithm is compared with six other major simplification algorithms. Image morphing refers to methods that gradually and continuously deform a source image into a target image while producing the in-between models. Image warping is a continuous deformation of a graphical object. A morphing process is usually composed of warping and interpolation.
    This work develops a method and application for pre-surgical planning based on direct manipulation of free-form deformations. The developed user interface provides a friendly interactive tool for plastic surgery; nose augmentation surgery is presented as an example. Displacement vectors and lattices of different resolutions are used to obtain various deformation results. During the deformation, the volume change of the model is also considered, based on a simplified skin-muscle model.
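The combined thresholding and region growing step described above can be sketched as a breadth-first flood fill that admits only 4-connected pixels within an intensity window (the window bounds and the synthetic "filament" image are illustrative, not the dissertation's data):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, lo, hi):
    """Grow a region from `seed`, adding 4-connected pixels whose
    intensity lies in [lo, hi] (thresholding + region growing)."""
    mask = np.zeros(img.shape, dtype=bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if mask[y, x] or not (lo <= img[y, x] <= hi):
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                q.append((ny, nx))
    return mask

# Synthetic solar patch: a dark filament band in a bright background.
img = np.full((8, 8), 200.0)
img[3:5, 1:7] = 40.0
mask = region_grow(img, seed=(3, 2), lo=0, hi=100)
```

Tracking the segmented area over a time series of such masks is what lets filament disappearances be reported as sudden drops in area.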

    Discrete Optimization in Early Vision - Model Tractability Versus Fidelity

    Early vision is the process occurring before any semantic interpretation of an image takes place. Motion estimation, object segmentation, and detection are all parts of early vision, but recognition is not. Some models in early vision are easy to perform inference with---they are tractable. Others describe reality well---they have high fidelity. This thesis improves the tractability-fidelity trade-off of the current state of the art by introducing new discrete methods for image segmentation and other problems of early vision. The first part studies pseudo-boolean optimization, both from a theoretical perspective and a practical one, by introducing new algorithms. The main result is the generalization of the roof duality concept to polynomials of degree higher than two. Another focus is parallelization; discrete optimization methods for multi-core processors, computer clusters, and graphics processing units are presented. Remaining in an image segmentation context, the second part studies parametric problems where a set of model parameters and a segmentation are estimated simultaneously. For a small number of parameters these problems can still be optimally solved. One application is an optimal method for solving the two-phase Mumford-Shah functional. The third part shifts the focus to curvature regularization, where the commonly used length and area penalization is replaced by curvature in two and three dimensions. These problems can be discretized over a mesh, and special attention is given to the mesh geometry. Specifically, hexagonal meshes in the plane are compared to square ones, and a method for generating adaptive meshes is introduced and evaluated. The framework is then extended to curvature regularization of surfaces. Finally, the thesis is concluded by three applications to early vision problems: cardiac MRI segmentation, image registration, and cell classification.
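Pseudo-boolean optimization, as studied in the first part, minimizes a real-valued polynomial over binary variables. As a small concrete instance (a brute-force baseline only; roof-duality methods exist precisely to avoid this 2^n sweep, and the degree-3 polynomial below is an arbitrary example):

```python
from itertools import product

def min_pseudo_boolean(n, terms):
    """Brute-force minimum of a pseudo-boolean polynomial
    f(x) = sum over terms of coeff * prod_{i in vars} x_i,
    with x in {0, 1}^n. Exponential in n; a baseline only."""
    def f(x):
        return sum(c * all(x[i] for i in vs) for c, vs in terms)
    return min((f(x), x) for x in product((0, 1), repeat=n))

# Degree-3 example: f = 3*x0 - 2*x0*x1 - 2*x1*x2 + x0*x1*x2.
terms = [(3, (0,)), (-2, (0, 1)), (-2, (1, 2)), (1, (0, 1, 2))]
best_val, best_x = min_pseudo_boolean(3, terms)
```

Degree-2 polynomials of this form are the ones roof duality classically handles (via persistency of a relaxation's labels); the thesis's contribution is extending that machinery to higher-degree terms like the cubic one above.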