
    Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement

    The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results which are largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process which is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets are used to displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results which are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times that are roughly two orders of magnitude faster, allowing applications of this technique to real-time imaging. An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied with no modification, since the offset step reduces to an image pixel permutation, or look-up table operation, after application of the filter.
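
    A minimal sketch of the offset-then-filter decomposition described above, in Python with NumPy/SciPy. The displacement rule (moving sample positions down a smoothed edge-strength map) and the parameters max_offset, edge_thresh and sigma are illustrative assumptions, not the paper's exact offset construction; the point is only that the offset step reduces to a pixel permutation, after which a conventional, unmodified smoothing filter is applied.

        import numpy as np
        from scipy import ndimage

        def offset_then_filter(image, sigma=2.0, max_offset=3, edge_thresh=10.0):
            """Displace sampling positions away from strong edges, then smooth (sketch)."""
            f = np.asarray(image, dtype=float)

            # Edge strength (smoothed gradient magnitude) marks the boundaries.
            gy, gx = np.gradient(f)
            mag = ndimage.gaussian_filter(np.hypot(gx, gy), 1.0)

            # Offsets point down the edge-strength map, i.e. away from nearby boundaries.
            my, mx = np.gradient(mag)
            norm = np.hypot(mx, my) + 1e-9
            step = max_offset * np.clip(mag / edge_thresh, 0.0, 1.0)
            dy, dx = -my / norm * step, -mx / norm * step

            # Step 1: the offset step is just a pixel permutation / look-up.
            yy, xx = np.indices(f.shape)
            src_y = np.clip(np.rint(yy + dy).astype(int), 0, f.shape[0] - 1)
            src_x = np.clip(np.rint(xx + dx).astype(int), 0, f.shape[1] - 1)
            permuted = f[src_y, src_x]

            # Step 2: a conventional smoothing filter, applied with no modification.
            return ndimage.gaussian_filter(permuted, sigma)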

    Adaptive pre-filtering techniques for colour image analysis

    One important step in the process of colour image segmentation is to reduce the errors caused by image noise and local colour inhomogeneities. This can be achieved by filtering the data with a smoothing operator that eliminates the noise and the weak textures. In this regard, the aim of this paper is to evaluate the performance of two image smoothing techniques designed for colour images, namely bilateral filtering for edge-preserving smoothing and the coupled forward-and-backward (FAB) anisotropic diffusion scheme. Both techniques are non-linear and aim to eliminate image noise, reduce weak textures and artefacts, and improve the coherence of colour information. A quantitative comparison between the two is carried out, and the ability of each technique to preserve edge information is also investigated.
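
    For reference, a brute-force sketch of the bilateral filter on a colour image is given below; the window radius and the sigma_s/sigma_r values are illustrative assumptions, and the explicit loops are meant only to expose the combined spatial and range (colour-distance) weighting that makes the smoothing edge-preserving.

        import numpy as np

        def bilateral_filter(image, radius=3, sigma_s=2.0, sigma_r=20.0):
            """image: H x W x 3 array; returns the edge-preserving smoothed image (sketch)."""
            f = np.asarray(image, dtype=float)
            H, W, _ = f.shape
            pad = np.pad(f, ((radius, radius), (radius, radius), (0, 0)), mode='reflect')
            out = np.zeros_like(f)

            # Spatial (domain) Gaussian over the window, computed once.
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))

            for y in range(H):
                for x in range(W):
                    window = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    # Range (photometric) weight: colour distance to the centre pixel.
                    diff = window - f[y, x]
                    rng = np.exp(-np.sum(diff**2, axis=-1) / (2 * sigma_r**2))
                    w = spatial * rng
                    out[y, x] = np.sum(window * w[..., None], axis=(0, 1)) / np.sum(w)
            return out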

    Seismic Fault Preserving Diffusion

    This paper focuses on the denoising and enhancement of 3-D reflection seismic data. We propose a pre-processing step based on nonlinear diffusion filtering that leads to better detection of seismic faults. Nonlinear diffusion approaches are based on the definition of a partial differential equation that allows us to simplify the images without blurring relevant details or discontinuities. Computing the structure tensor, which provides information on the local orientation of the geological layers, we propose to drive the diffusion along these layers using a new approach called SFPD (Seismic Fault Preserving Diffusion). In SFPD, the eigenvalues of the tensor are fixed according to a confidence measure that takes into account the regularity of the local seismic structure. Results on both synthetic and real 3-D blocks show the efficiency of the proposed approach.
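
    The general structure-tensor-driven diffusion idea can be sketched on a 2-D slice as follows. The coherence-based confidence measure and the eigenvalue rule used here are simplified placeholders rather than the SFPD definitions from the paper, and the parameters (n_iter, dt, sigma) are illustrative; the sketch only shows how the smoothed structure tensor supplies a local orientation along which diffusion is steered.

        import numpy as np
        from scipy import ndimage

        def tensor_driven_diffusion(u, n_iter=20, dt=0.15, sigma=1.5):
            u = np.asarray(u, dtype=float)
            for _ in range(n_iter):
                uy, ux = np.gradient(u)
                # Smoothed structure tensor captures the local layer orientation.
                Jxx = ndimage.gaussian_filter(ux * ux, sigma)
                Jxy = ndimage.gaussian_filter(ux * uy, sigma)
                Jyy = ndimage.gaussian_filter(uy * uy, sigma)

                # Closed-form eigenvalues of the 2x2 tensor at every pixel.
                trace = Jxx + Jyy
                delta = np.sqrt((Jxx - Jyy)**2 + 4 * Jxy**2)
                lam1, lam2 = (trace + delta) / 2, (trace - delta) / 2

                # Placeholder "confidence" in a locally regular structure: coherence.
                coh = ((lam1 - lam2) / (lam1 + lam2 + 1e-9))**2
                c_along, c_across = 1.0, 1.0 - coh   # strong along layers, weak across

                # Orientation of the dominant eigenvector (across the layers).
                theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
                c, s = np.cos(theta), np.sin(theta)
                # Diffusion tensor D = R diag(c_across, c_along) R^T.
                Dxx = c_across * c * c + c_along * s * s
                Dyy = c_across * s * s + c_along * c * c
                Dxy = (c_across - c_along) * c * s

                # Explicit update: u += dt * div(D grad u).
                fx = Dxx * ux + Dxy * uy
                fy = Dxy * ux + Dyy * uy
                u = u + dt * (np.gradient(fy, axis=0) + np.gradient(fx, axis=1))
            return u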

    Real-Time Anisotropic Diffusion using Space-Variant Vision

    Many computer and robot vision applications require multi-scale image analysis. Classically, this has been accomplished through the use of a linear scale-space, which is constructed by convolution of visual input with Gaussian kernels of varying size (scale). This has been shown to be equivalent to the solution of a linear diffusion equation on an infinite domain, as the Gaussian is the Green's function of such a system (Koenderink, 1984). Recently, much work has been focused on the use of a variable conductance function, resulting in anisotropic diffusion described by a nonlinear partial differential equation (PDE). The use of anisotropic diffusion with a conductance coefficient which is a decreasing function of the gradient magnitude has been shown to enhance edges while decreasing some types of noise (Perona and Malik, 1987). Unfortunately, the solution of the anisotropic diffusion equation requires the numerical integration of a nonlinear PDE, which is a costly process when carried out on a fixed mesh such as a typical image. In this paper we show that the complex log transformation, variants of which are universally used in mammalian retino-cortical systems, allows the nonlinear diffusion equation to be integrated at exponentially enhanced rates due to the non-uniform mesh spacing inherent in the log domain. The enhanced integration rates, coupled with the intrinsic compression of the complex log transformation, yield a speed increase of between two and three orders of magnitude, providing a means of performing real-time image enhancement using anisotropic diffusion. (Office of Naval Research N00014-95-I-0409)
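
    A minimal explicit integration of Perona-Malik-style diffusion on a uniform (fixed-mesh) grid, the costly case that the space-variant, complex-log representation accelerates, might look as follows; the exponential conductance function and the values of kappa, dt and n_iter are illustrative choices, not those of the paper.

        import numpy as np

        def perona_malik(u, n_iter=50, dt=0.2, kappa=15.0):
            u = np.asarray(u, dtype=float)
            for _ in range(n_iter):
                # Finite differences to the four nearest neighbours.
                dn = np.roll(u, -1, axis=0) - u
                ds = np.roll(u, 1, axis=0) - u
                de = np.roll(u, -1, axis=1) - u
                dw = np.roll(u, 1, axis=1) - u
                # Conductance: a decreasing function of the gradient magnitude,
                # so diffusion is suppressed across strong edges.
                g = lambda d: np.exp(-(d / kappa)**2)
                u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
            return u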