
    Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement

    The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results which are largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process which is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets are used to displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results which are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times that are roughly two orders of magnitude faster, allowing applications of this technique to real-time imaging. An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied with no modification, since the offset step reduces to an image pixel permutation, or look-up table operation, after application of the filter.
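    The two-stage offset-then-filter idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the offset direction, step size, and threshold (`step`, `grad_thresh`) are hypothetical choices, and a plain Gaussian stands in for the conventional filter.

```python
import numpy as np
from scipy import ndimage

def offset_filter(img, sigma=1.5, step=2.0, grad_thresh=10.0):
    """Smooth an image, but sample the result at offset locations that
    have been pushed away from strong boundaries (illustrative only)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    # Near an edge, displace the sampling point along the gradient so the
    # filter support stays on one side of the boundary.
    scale = np.where(mag > grad_thresh, step / (mag + 1e-9), 0.0)
    src_y = np.clip(yy + gy * scale, 0, img.shape[0] - 1)
    src_x = np.clip(xx + gx * scale, 0, img.shape[1] - 1)
    # Conventional filtering, unchanged ...
    smoothed = ndimage.gaussian_filter(img, sigma)
    # ... followed by the offset step, which is just a pixel re-addressing.
    return ndimage.map_coordinates(smoothed, [src_y, src_x], order=1)
```

    Note how the offset step is decoupled from the filter: once `src_y`/`src_x` are fixed, any existing filter implementation can be reused, and the final read-out is a look-up operation.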

    Seismic Fault Preserving Diffusion

    This paper focuses on the denoising and enhancing of 3-D reflection seismic data. We propose a pre-processing step based on nonlinear diffusion filtering, leading to a better detection of seismic faults. Nonlinear diffusion approaches are based on the definition of a partial differential equation that allows us to simplify the images without blurring relevant details or discontinuities. Computing the structure tensor, which provides information on the local orientation of the geological layers, we propose to drive the diffusion along these layers using a new approach called SFPD (Seismic Fault Preserving Diffusion). In SFPD, the eigenvalues of the tensor are fixed according to a confidence measure that takes into account the regularity of the local seismic structure. Results on both synthesized and real 3-D blocks show the efficiency of the proposed approach.
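    The core SFPD idea, steering diffusion along layers by fixing the structure-tensor eigenvalues from a confidence measure, can be illustrated with a 2-D sketch. Everything below (the coherence-based confidence, the eigenvalue mapping, the explicit time step) is an assumed stand-in, not the published 3-D algorithm.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_diffusion_step(u, sigma=1.0, rho=2.0, tau=0.1):
    """One explicit step of structure-tensor-driven diffusion (2-D sketch)."""
    uy, ux = np.gradient(ndimage.gaussian_filter(u, sigma))
    # Structure tensor, smoothed at integration scale rho.
    Jxx = ndimage.gaussian_filter(ux * ux, rho)
    Jxy = ndimage.gaussian_filter(ux * uy, rho)
    Jyy = ndimage.gaussian_filter(uy * uy, rho)
    # Closed-form eigenvalues of the 2x2 tensor field.
    tr = Jxx + Jyy
    disc = np.sqrt(np.maximum((Jxx - Jyy) ** 2 / 4 + Jxy ** 2, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc
    # Confidence measure: local anisotropy (1 = strongly oriented layer).
    coh = ((lam1 - lam2) / (lam1 + lam2 + 1e-9)) ** 2
    # Fix the diffusion-tensor eigenvalues from the confidence: diffuse
    # freely along the layers, ever less across them as confidence grows.
    c_along, c_across = 1.0, np.exp(-5.0 * coh)
    # Orientation of the dominant eigenvector (across the layers).
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    ct, st = np.cos(theta), np.sin(theta)
    Dxx = c_across * ct ** 2 + c_along * st ** 2
    Dxy = (c_across - c_along) * ct * st
    Dyy = c_across * st ** 2 + c_along * ct ** 2
    # Explicit update u <- u + tau * div(D grad u).
    uy, ux = np.gradient(u)
    jy, jx = Dxy * ux + Dyy * uy, Dxx * ux + Dxy * uy
    div = np.gradient(jx, axis=1) + np.gradient(jy, axis=0)
    return u + tau * div
```

    Iterating this step smooths noise along coherent layers while leaving poorly oriented regions (such as fault zones) comparatively untouched.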

    Coherence Filtering to Enhance the Mandibular Canal in Cone-Beam CT data

    Segmenting the mandibular canal from cone-beam CT data is difficult due to low edge contrast and high image noise. We introduce 3-D coherence filtering as a method to close the interrupted edges and denoise the structure of the mandibular canal. Coherence filtering is an anisotropic nonlinear tensor-based diffusion algorithm for edge-enhancing image filtering. We test different numerical schemes for the tensor diffusion equation: the non-negative scheme, standard discretization, and the rotation-invariant scheme of Weickert [1]. Only the scheme of Weickert did not blur the high spherical image frequencies on the image diagonals of our test volume, so this scheme is chosen to enhance the small curved mandibular canal structure. The best choice of the diffusion equation parameters c1 and c2 depends on the image noise. Coherence filtering on the CBCT scan works well: the noise in the mandibular canal is gone and the edges are connected. Because the algorithm is tensor based, it cannot deal with edge joints or splits, and is therefore less suited to more complex image structures.
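    The parameters c1 and c2 enter through the diffusion-tensor eigenvalues. Below is a minimal sketch of Weickert-style coherence-enhancing eigenvalues; the exact formula is an assumption based on the standard construction, not taken from this paper.

```python
import numpy as np

def ced_eigenvalues(mu1, mu2, c1=0.001, c2=1.0):
    """Coherence-enhancing diffusion eigenvalues (sketch).
    mu1 >= mu2 are structure-tensor eigenvalues. Diffusion stays weak (c1)
    across the structure, and approaches 1 along it where the coherence
    (mu1 - mu2)^2 is large relative to c2."""
    kappa = (np.asarray(mu1, float) - np.asarray(mu2, float)) ** 2
    lam_across = np.full_like(kappa, c1)
    lam_along = np.where(kappa > 0,
                         c1 + (1 - c1) * np.exp(-c2 / (kappa + 1e-12)),
                         c1)
    return lam_across, lam_along
```

    Intuitively, c1 sets a small floor of diffusion everywhere, while c2 sets how much coherence is needed before smoothing along the canal wall switches on, which is why the best choice depends on the noise level.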

    Real-Time Anisotropic Diffusion using Space-Variant Vision

    Many computer and robot vision applications require multi-scale image analysis. Classically, this has been accomplished through the use of a linear scale-space, which is constructed by convolution of visual input with Gaussian kernels of varying size (scale). This has been shown to be equivalent to the solution of a linear diffusion equation on an infinite domain, as the Gaussian is the Green's function of such a system (Koenderink, 1984). Recently, much work has been focused on the use of a variable conductance function, resulting in anisotropic diffusion described by a nonlinear partial differential equation (PDE). The use of anisotropic diffusion with a conductance coefficient which is a decreasing function of the gradient magnitude has been shown to enhance edges while decreasing some types of noise (Perona and Malik, 1987). Unfortunately, the solution of the anisotropic diffusion equation requires the numerical integration of a nonlinear PDE, which is a costly process when carried out on a fixed mesh such as a typical image. In this paper we show that the complex log transformation, variants of which are universally used in mammalian retino-cortical systems, allows the nonlinear diffusion equation to be integrated at exponentially enhanced rates due to the non-uniform mesh spacing inherent in the log domain. The enhanced integration rates, coupled with the intrinsic compression of the complex log transformation, yield a speed increase of between two and three orders of magnitude, providing a means of performing real-time image enhancement using anisotropic diffusion. (Office of Naval Research N00014-95-I-0409)
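    The key ingredient is the complex-log (log-polar) resampling, whose exponentially spaced radii produce the non-uniform mesh. A minimal illustrative resampler follows; the grid sizes and interpolation order are arbitrary choices, not the paper's space-variant sensor model.

```python
import numpy as np
from scipy import ndimage

def to_logpolar(img, n_r=64, n_theta=128):
    """Resample an image onto a complex-log (log-polar) grid.
    Radii are exponentially spaced, so each ring of samples covers ever
    larger image areas toward the periphery -- the non-uniform mesh that
    lets a diffusion solver take effectively larger steps there."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))  # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    yy = cy + np.outer(r, np.sin(theta))
    xx = cx + np.outer(r, np.cos(theta))
    return ndimage.map_coordinates(img.astype(float), [yy, xx], order=1)
```

    Running a diffusion scheme on this (much smaller) grid, rather than on the original pixel lattice, is where both the compression and the enhanced integration rates come from.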

    Improving Fiber Alignment in HARDI by Combining Contextual PDE Flow with Constrained Spherical Deconvolution

    We propose two strategies to improve the quality of tractography results computed from diffusion weighted magnetic resonance imaging (DW-MRI) data. Both methods are based on the same PDE framework, defined in the coupled space of positions and orientations, associated with a stochastic process describing the enhancement of elongated structures while preserving crossing structures. In the first method we use the enhancement PDE for contextual regularization of a fiber orientation distribution (FOD) that is obtained on individual voxels from high angular resolution diffusion imaging (HARDI) data via constrained spherical deconvolution (CSD). Thereby we improve the FOD as input for subsequent tractography. Second, we introduce the fiber-to-bundle coherence (FBC), a measure for quantification of fiber alignment. The FBC is computed from a tractography result using the same PDE framework and provides a criterion for removing spurious fibers. We validate the proposed combination of CSD and enhancement on phantom data and on human data, acquired with different scanning protocols. On the phantom data we find that PDE enhancements improve both local and global metrics of tractography results, compared to CSD without enhancements. On the human data we show that the enhancements allow for a better reconstruction of crossing fiber bundles and reduce the variability of the tractography output with respect to the acquisition parameters. Finally, we show that both the enhancement of the FODs and the use of the FBC measure on the tractography improve the stability with respect to different stochastic realizations of probabilistic tractography. This is shown in a clinical application: the reconstruction of the optic radiation for epilepsy surgery planning.
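    As a toy illustration of a fiber-alignment score in the spirit of the FBC (not the published kernel, which lives in the coupled position-orientation space), one can weight the mutual support of fiber segments by their proximity in both position and orientation; fibers with low support are candidates for removal as spurious.

```python
import numpy as np

def fiber_coherence(fibers, sigma_pos=2.0, sigma_ang=0.3):
    """Toy fiber-to-bundle coherence: for each fiber, the average support
    its segments receive from segments of all other fibers, weighted by
    closeness in position and in tangent direction (a much-simplified
    stand-in for the PDE framework's kernel)."""
    segs = []
    for f in fibers:
        f = np.asarray(f, float)
        mid = (f[1:] + f[:-1]) / 2              # segment midpoints
        tan = f[1:] - f[:-1]                    # unit tangents
        tan /= np.linalg.norm(tan, axis=1, keepdims=True) + 1e-12
        segs.append((mid, tan))
    scores = []
    for i, (mid_i, tan_i) in enumerate(segs):
        support = np.zeros(len(mid_i))
        for j, (mid_j, tan_j) in enumerate(segs):
            if i == j:
                continue
            d2 = ((mid_i[:, None, :] - mid_j[None, :, :]) ** 2).sum(-1)
            align = 1.0 - np.abs(tan_i @ tan_j.T)   # 0 when parallel
            w = (np.exp(-d2 / (2 * sigma_pos ** 2))
                 * np.exp(-align ** 2 / (2 * sigma_ang ** 2)))
            support += w.sum(axis=1)
        scores.append(support.mean())
    return np.array(scores)
```

    A spurious fiber far from, or misaligned with, the surrounding bundle receives little support and scores low, which is the thresholding criterion the FBC is used for.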