
    Area and Length Minimizing Flows for Shape Segmentation

    Presented at the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 17-19, 1997, San Juan, Puerto Rico. DOI: 10.1109/CVPR.1997.609390
    Several active contour models have been proposed to unify the curve evolution framework with classical energy minimization techniques for segmentation, such as snakes. The essential idea is to evolve a curve (in 2D) or a surface (in 3D) under constraints from image forces so that it clings to features of interest in an intensity image. Recently the evolution equation has been derived from first principles as the gradient flow that minimizes a modified length functional, tailored to features such as edges. However, because the flow may be slow to converge in practice, a constant (hyperbolic) term is added to keep the curve/surface moving in the desired direction. In this paper, we provide a justification for this term based on the gradient flow derived from a weighted area functional, with an image-dependent weighting factor. When combined with the earlier modified length gradient flow, we obtain a PDE which offers a number of advantages, as illustrated by several examples of shape segmentation on medical images. In many cases the weighted area flow may be used on its own, with significant computational savings.
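    To make the structure of such a combined flow concrete, the following is a minimal LaTeX sketch of the level-set form commonly used for flows of this kind; the notation (edge-stopping function g, curvature κ, balloon constant ν, embedding function φ) is assumed here rather than taken from the paper.

    ```latex
    % Length-minimizing (geodesic) part plus a weighted-area (balloon) term.
    % g(I) -> 0 near edges; kappa is the curvature of the level sets of phi;
    % nu is the constant that keeps the front moving where g*kappa is small.
    \[
      \frac{\partial \phi}{\partial t}
      \;=\; g(I)\,\bigl(\kappa + \nu\bigr)\,\lvert \nabla \phi \rvert
      \;+\; \nabla g \cdot \nabla \phi .
    \]
    ```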

    Behavioral analysis of anisotropic diffusion in image processing

    DOI: 10.1109/83.541424
    In this paper, we analyze the behavior of the anisotropic diffusion model of Perona and Malik (1990). The main idea is to express the anisotropic diffusion equation as coming from a certain optimization problem, so its behavior can be analyzed based on the shape of the corresponding energy surface. We show that anisotropic diffusion is the steepest descent method for solving an energy minimization problem. It is demonstrated that an anisotropic diffusion is well posed when there exists a unique global minimum for the energy functional, and that the ill-posedness of a certain anisotropic diffusion is caused by the fact that its energy functional has an infinite number of global minima that are dense in the image space. We give a sufficient condition for an anisotropic diffusion to be well posed and a sufficient and necessary condition for it to be ill posed due to the dense global minima. The mechanism of smoothing and edge enhancement of anisotropic diffusion is illustrated through a particular orthogonal decomposition of the diffusion operator into two parts: one that diffuses tangentially to the edges and therefore acts as an anisotropic smoothing operator, and the other that flows normally to the edges and thus acts as an enhancement operator.
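    The scheme under analysis is compact enough to state in code. Below is a self-contained Python sketch of the standard explicit 4-neighbour discretization of Perona-Malik diffusion; the parameter names and the particular choice of diffusivity g are ours, not the paper's.

    ```python
    import numpy as np

    def perona_malik(img, n_iter=50, kappa=0.1, dt=0.2):
        """Explicit Perona-Malik anisotropic diffusion (4-neighbour stencil).

        kappa is the edge threshold in the diffusivity g(s) = 1/(1+(s/kappa)^2);
        dt <= 0.25 keeps the explicit scheme stable.  Borders wrap via np.roll,
        a simplification -- replicated borders are more common in practice.
        """
        g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)  # Perona-Malik diffusivity
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u  # differences to the four neighbours
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u
    ```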

    Correcting curvature-density effects in the Hamilton-Jacobi skeleton

    The Hamilton-Jacobi approach has proven to be a powerful and elegant method for extracting the skeleton of two-dimensional (2-D) shapes. The approach is based on the observation that the normalized flux associated with the inward evolution of the object boundary at nonskeletal points tends to zero as the size of the integration area tends to zero, while the flux is negative at the locations of skeletal points. Nonetheless, the error in calculating the flux on the image lattice is both limited by the pixel resolution and proportional to the curvature of the boundary evolution front and, hence, unbounded near endpoints. This makes the exact localization of endpoints difficult and renders the performance of the skeleton extraction algorithm dependent on a threshold parameter. This problem can be overcome by using interpolation techniques to calculate the flux with subpixel precision. However, here, we develop a method for 2-D skeleton extraction that circumvents the problem by eliminating the curvature contribution to the error. This is done by taking into account variations of density due to boundary curvature. This yields a skeletonization algorithm that gives both better localization and less susceptibility to boundary noise and parameter choice than the Hamilton-Jacobi method.
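    For orientation, here is a hedged Python sketch of the uncorrected flux measure the abstract starts from (not the paper's density-corrected variant): the average outward flux of the gradient of the distance transform, which tends to zero at nonskeletal pixels and is strongly negative on the skeleton.

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def average_outward_flux(mask):
        """Average outward flux of the gradient of the distance transform.

        mask: 2-D boolean array, True inside the shape.  Skeletal pixels are
        those where the returned value is strongly negative; the threshold on
        that value is the parameter the paper's correction aims to remove.
        """
        d = distance_transform_edt(mask)
        gy, gx = np.gradient(d)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        flux = np.zeros_like(d)
        for dy, dx in offsets:
            n = np.hypot(dy, dx)
            # Gradient at each neighbour, dotted with the unit outward normal.
            flux += (np.roll(gy, (-dy, -dx), axis=(0, 1)) * (dy / n)
                     + np.roll(gx, (-dy, -dx), axis=(0, 1)) * (dx / n))
        return flux / len(offsets)
    ```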

    Spatial Smoothing for Diffusion Tensor Imaging with low Signal to Noise Ratios

    Although low signal-to-noise ratio (SNR) experiments in DTI give key information about tracking and anisotropy, e.g. by measurements with very small voxel sizes, such experiments have so far seldom been analysed, owing to the complicated impact of thermal noise. In this paper Monte Carlo simulations are presented which investigate the random fields of noise for different DTI variables in low-SNR situations. Based on this study, a strategy for spatial smoothing, which demands essentially uniform noise, is derived. To construct a convenient filter, the weights of the nonlinear Aurich chain are adapted to DTI. This edge-preserving three-dimensional filter is then validated in different variants via a quasi-realistic model and is applied to very new data with isotropic voxels of size 1×1×1 mm³, which correspond to a spatial mean SNR of approximately 3.
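    The "Aurich chain" refers to a cascade of nonlinear Gaussian (bilateral-type) filters. As a hedged illustration of that filter class, here is a minimal Python sketch of a single filter step and a chain with decreasing spatial scale; the schedule, window size, and parameter names are our assumptions, not the paper's adapted weights.

    ```python
    import numpy as np

    def nonlinear_gaussian_step(u, sigma_s, sigma_r, radius=3):
        """One edge-preserving nonlinear Gaussian (bilateral-type) step.

        Weights combine a spatial Gaussian (scale sigma_s) with a range
        Gaussian (scale sigma_r) on intensity differences, so averaging
        stops at edges instead of blurring across them.
        """
        pad = np.pad(u, radius, mode="reflect")
        num = np.zeros_like(u, dtype=np.float64)
        den = np.zeros_like(u, dtype=np.float64)
        H, W = u.shape
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = pad[radius + dy:radius + dy + H,
                              radius + dx:radius + dx + W]
                w = (np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s**2))
                     * np.exp(-((shifted - u) ** 2) / (2.0 * sigma_r**2)))
                num += w * shifted
                den += w
        return num / den

    def filter_chain(u, n_steps=4, sigma_s0=4.0, sigma_r0=0.2):
        """Chain of nonlinear Gaussian steps with halving spatial scale
        (a common schedule; the paper's DTI-adapted weights differ)."""
        v = u.astype(np.float64)
        for k in range(n_steps):
            v = nonlinear_gaussian_step(v, sigma_s0 / 2**k, sigma_r0)
        return v
    ```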

    Image Segmentation Using Weak Shape Priors

    The problem of image segmentation is known to become particularly challenging in the case of partial occlusion of the object(s) of interest, background clutter, and the presence of strong noise. To overcome this problem, the present paper introduces a novel approach to segmentation through the use of "weak" shape priors. Specifically, in the proposed method, a segmenting active contour is constrained to converge to a configuration at which the empirical probability densities of its geometric parameters closely match the corresponding model densities learned from training samples. It is shown through numerical experiments that the proposed shape modeling can be regarded as "weak" in the sense that it minimally influences the segmentation, which is allowed to be dominated by data-related forces. On the other hand, the priors provide sufficient constraints to regularize the convergence of segmentation, while requiring substantially smaller training sets and yielding less biased results as compared to PCA-based regularization methods. The main advantages of the proposed technique over some existing alternatives are demonstrated in a series of experiments. (27 pages, 8 figures.)
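    A LaTeX sketch of the kind of energy structure the abstract describes follows; the symbols and the use of a Kullback-Leibler divergence are our assumptions, not the paper's notation.

    ```latex
    % E_data drives the active contour C toward image features; the weak prior
    % penalises mismatch between the empirical density p_k of each geometric
    % parameter of C and the model density q_k learned from training samples.
    % A small weight lambda keeps the prior "weak".
    \[
      E[C] \;=\; E_{\mathrm{data}}[C]
      \;+\; \lambda \sum_{k} D\!\bigl(p_k(C)\,\big\|\,q_k\bigr),
      \qquad \lambda \ \text{small},
    \]
    % where D is a density divergence, e.g. Kullback--Leibler.
    ```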

    DTI denoising for data with low signal to noise ratios

    Low signal-to-noise ratio (SNR) experiments in diffusion tensor imaging (DTI) give key information about tracking and anisotropy, e.g., by measurements with small voxel sizes or with high b values. However, due to the complicated and dominating impact of thermal noise, such data are still seldom analysed. In this paper Monte Carlo simulations are presented which investigate the distributions of noise for different DTI variables in low-SNR situations. Based on this study, a strategy for the application of spatial smoothing is derived. Optimal prerequisites for spatial filters are unbiased, bell-shaped distributions with uniform variance, but only a few variables have statistics close to that. To construct a convenient filter, a chain of nonlinear Gaussian filters is adapted to the peculiarities of DTI and a bias correction is introduced. This edge-preserving three-dimensional filter is then validated via a quasi-realistic model. Further, it is shown that for small sample sizes the filter is as effective as a maximum likelihood estimator and produces reliable results down to a local SNR of approximately 1. The filter is finally applied to very recent data with isotropic voxels of size 1×1×1 mm³, which corresponds to a spatial mean SNR of 2.5. This application demonstrates the statistical robustness of the filter method. Though the Rician noise model is only approximately realized in the data, the gain of information by spatial smoothing is considerable.
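    Bias corrections for Rician magnitude data are typically of the following moment-based form; this is a standard first-order estimator, shown as an illustration rather than the paper's exact correction (function and parameter names are ours).

    ```python
    import numpy as np

    def rician_bias_correct(m, sigma):
        """Estimate the underlying signal amplitude A from Rician magnitudes m.

        For Rician-distributed magnitudes, E[m^2] = A^2 + 2*sigma^2, so the
        correction subtracts the noise floor before taking the square root;
        np.maximum guards against negative values in pure-noise regions.
        """
        return np.sqrt(np.maximum(m.astype(np.float64) ** 2
                                  - 2.0 * sigma**2, 0.0))
    ```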

    Automatic segmentation of the left ventricle cavity and myocardium in MRI data

    A novel approach has been developed for the automatic segmentation of the epicardium and endocardium boundaries of the left ventricle (LV) of the heart. The developed segmentation scheme takes multi-slice and multi-phase magnetic resonance (MR) images of the heart, traversing the short-axis length from the base to the apex. Each image is taken at one instant in the heart's phase. The images are segmented using a diffusion-based filter followed by an unsupervised clustering technique, and the resulting labels are checked to locate the LV cavity. From cardiac anatomy, the closest pool of blood to the LV cavity is the right ventricle cavity. The wall between these two blood pools (the interventricular septum) is measured to give an approximate thickness for the myocardium. This value is used when a radial search is performed on a gradient image to find appropriate robust segments of the epicardium boundary. The robust edge segments are then joined using a normal spline curve. Experimental results are presented with very encouraging qualitative and quantitative results, and a comparison is made against the state-of-the-art level-set method.
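    To fix ideas, here is a heavily hedged Python outline of such a pipeline for a single slice. Everything here is an assumption for illustration: the diffusion filter is replaced by a Gaussian, k-means stands in for the unsupervised clustering, the cavity is picked by brightness rather than anatomical checks, and all names and thresholds are ours.

    ```python
    import numpy as np
    from scipy import ndimage
    from sklearn.cluster import KMeans

    def segment_lv_slice(img, n_clusters=4, n_rays=64):
        """Returns a cavity mask and candidate epicardial edge points
        (which would then be joined by a spline)."""
        # 1. Diffusion-type smoothing (stand-in: Gaussian blur).
        smooth = ndimage.gaussian_filter(img.astype(np.float64), sigma=2.0)

        # 2. Unsupervised clustering of intensities.
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
            smooth.reshape(-1, 1)).reshape(img.shape)

        # 3. Take the cluster of the brightest pixel as the blood pool.
        cavity = labels == labels.flat[np.argmax(smooth)]
        cy, cx = ndimage.center_of_mass(cavity)

        # 4. Radial search on the gradient magnitude for epicardial edges.
        grad = ndimage.gaussian_gradient_magnitude(
            img.astype(np.float64), sigma=1.0)
        thresh = grad.mean() + 2.0 * grad.std()
        edge_pts = []
        for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
            for r in range(1, min(img.shape) // 2):
                y = int(round(cy + r * np.sin(theta)))
                x = int(round(cx + r * np.cos(theta)))
                if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                    break
                if not cavity[y, x] and grad[y, x] > thresh:
                    edge_pts.append((y, x))  # first strong edge off the cavity
                    break
        return cavity, edge_pts
    ```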

    Robust Feature Detection and Local Classification for Surfaces Based on Moment Analysis

    The stable local classification of discrete surfaces with respect to features such as edges and corners, or concave and convex regions, is quite difficult yet indispensable for many surface processing applications. Usually, feature detection is done via a local curvature analysis. When one is concerned with large, irregular triangular grids, e.g., generated via a marching cubes algorithm, the detectors are tedious to treat and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zero and first moments on the discrete surface. The corresponding integral quantities are stable to compute and give less noisy results than discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications are segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.
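    A LaTeX sketch of what such local zero and first moments look like follows; the notation is assumed for illustration and may differ from the paper's.

    ```latex
    % B_r(x): ball of radius r (the scale / stencil width) around a surface
    % point x; S: the discrete surface.  M^0 is the local area (zero moment),
    % M^1 the barycenter offset (first moment).
    \[
      M^0_r(x) = \int_{S \cap B_r(x)} \mathrm{d}A ,
      \qquad
      M^1_r(x) = \frac{1}{M^0_r(x)} \int_{S \cap B_r(x)} (y - x)\, \mathrm{d}A .
    \]
    % On a flat patch M^1_r vanishes; its growth across scales r separates
    % smooth, edge-like, and corner-like neighbourhoods without evaluating
    % discretized curvatures.
    ```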