776 research outputs found

    Multiscale Fields of Patterns

    Full text link
    We describe a framework for defining high-order image models that can be used in a variety of applications. The approach involves modeling local patterns in a multiscale representation of an image. Local properties of a coarsened image reflect non-local properties of the original image. In the case of binary images local properties are defined by the binary patterns observed over small neighborhoods around each pixel. With the multiscale representation we capture the frequency of patterns observed at different scales of resolution. This framework leads to expressive priors that depend on a relatively small number of parameters. For inference and learning we use an MCMC method for block sampling with very large blocks. We evaluate the approach with two example applications. One involves contour detection. The other involves binary segmentation. Comment: In NIPS 201
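The pattern-counting idea in the abstract can be sketched concretely: histogram the 2^9 binary patterns over 3x3 neighborhoods, then coarsen the image and repeat, so that coarse-scale counts reflect non-local structure. The 2x2 majority-vote coarsening and the function names below are our illustrative assumptions, not the paper's actual model or its MCMC machinery.

```python
import numpy as np

def pattern_histogram(b):
    # Count each of the 2^9 binary patterns over all 3x3 neighborhoods.
    h = np.zeros(512, dtype=int)
    weights = (2 ** np.arange(9)).reshape(3, 3)  # bit weights per cell
    rows, cols = b.shape
    for i in range(rows - 2):
        for j in range(cols - 2):
            h[int((b[i:i+3, j:j+3] * weights).sum())] += 1
    return h

def multiscale_histograms(b, levels=3):
    # Collect pattern counts at each scale; coarsening by 2x2 majority
    # vote makes coarse-level patterns summarize non-local structure.
    hs = []
    for _ in range(levels):
        hs.append(pattern_histogram(b))
        r, c = (b.shape[0] // 2) * 2, (b.shape[1] // 2) * 2
        blocks = b[:r, :c].reshape(r // 2, 2, c // 2, 2)
        b = (blocks.sum(axis=(1, 3)) >= 2).astype(int)
    return hs
```

A model in the spirit of the abstract would then assign an energy to an image based on these per-scale pattern frequencies.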

    Gray-level Texture Characterization Based on a New Adaptive Nonlinear Auto-Regressive Filter

    Get PDF
    In this paper, we propose a new nonlinear exponential adaptive two-dimensional (2-D) filter for texture characterization. The filter coefficients are updated with the Least Mean Square (LMS) algorithm. The proposed nonlinear model is used for texture characterization with a 2-D Auto-Regressive (AR) adaptive model. The main advantage of the new nonlinear exponential adaptive 2-D filter is the reduced number of coefficients needed to characterize the nonlinear parametric models of images, compared with the 2-D second-order Volterra model. Whatever the degree of nonlinearity, the problem reduces to the same number of coefficients as in the linear case. The characterization efficiency of the proposed exponential model is compared with that provided by 2-D linear and Volterra filters and by the co-occurrence matrix method. The comparison is based on two criteria commonly used to evaluate features' discriminating ability and class quantification. Extensive experiments show that the exponential model coefficients give better results in texture discrimination than several other parametric features, even in a noisy context.
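As an illustration of the adaptive-update idea, here is a minimal 2-D causal AR predictor with a three-pixel support. We use the normalized LMS rule for step-size stability, whereas the paper uses plain LMS; the support, step size, and function name are our assumptions, not the authors' exponential model.

```python
import numpy as np

def nlms_2d_ar(image, mu=0.5, eps=1e-8):
    # Predict each pixel from its causal neighbors (W, N, NW) with a
    # linear AR model; adapt the coefficient vector w pixel by pixel
    # using the normalized LMS rule  w += mu * e * u / (eps + u.u).
    img = image.astype(float)
    w = np.zeros(3)                 # AR filter coefficients
    err = np.zeros_like(img)        # per-pixel prediction error
    for i in range(1, img.shape[0]):
        for j in range(1, img.shape[1]):
            u = np.array([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
            e = img[i, j] - w @ u                # prediction error
            w += mu * e * u / (eps + u @ u)      # NLMS coefficient update
            err[i, j] = e
    return w, err
```

The converged coefficients (or, in the paper's setting, the parameters of the exponential nonlinear model) then serve as texture features.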

    Multiresolution image models and estimation techniques

    Get PDF

    Query of image content using Wavelets and Gibbs-Markov Random Fields

    Get PDF
    The central theme of this thesis is the application of wavelets and random processes to content-based image query (on texture patterns in particular). Given a query image, a content-based search extracts a representative measure (or signature) from the query image, and likewise for all the target images in the search archive. A good representative measure is one that lets us differentiate easily between different patterns. A distance measure is computed between the query properties and the properties of each target image; the lowest distance gives the best target match for the particular query. Typically, measure extraction on the target archive is performed as a pre-processing step. The thesis features two different methods of measure extraction. The first is a wavelet-based method. It builds on a previously documented method, but adds subtle modifications that make it much more effective for pattern matching on texture patterns and on images of unequal sizes. The modified algorithm, as well as the mathematics behind it, is presented. The second method uses a Markov Random Field to model the texture properties of regions within an image. The parameters of the model serve as the texture measure, or signature. Wavelet-based multiresolution is then used to speed up the search. The theory of Markov Random Fields, their equivalence with Gibbs Random Fields, the Hammersley-Clifford theorem, and parameter estimation techniques are presented. In addition to pattern matching, these texture signatures have also been used for controlled image smoothing and texture generation. The results from both methods are encouraging. One hopes that these methods find widespread use in image query applications.
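The query pipeline described above (signature extraction, then lowest-distance match) can be sketched with a toy stand-in signature. A crude histogram-plus-gradient feature replaces the thesis's MRF parameter vector here; all names and parameters are illustrative assumptions.

```python
import numpy as np

def signature(img, bins=16):
    # Toy texture signature: a normalized gray-level histogram plus the
    # mean absolute horizontal/vertical differences. This is a stand-in
    # for the MRF parameter vector estimated in the thesis.
    hist, _ = np.histogram(img, bins=bins, range=(0, 1), density=True)
    dh = np.mean(np.abs(np.diff(img, axis=1)))
    dv = np.mean(np.abs(np.diff(img, axis=0)))
    return np.concatenate([hist, [dh, dv]])

def best_match(query, archive):
    # The target with the lowest Euclidean signature distance wins.
    q = signature(query)
    dists = [np.linalg.norm(q - signature(t)) for t in archive]
    return int(np.argmin(dists))
```

In practice the archive signatures would be precomputed once, as the abstract notes, so a query costs only one extraction plus the distance scan.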

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Get PDF
    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to make a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. Advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods and, on the other hand, the effectiveness of solutions purposely designed for SAR heterogeneity and not-fully-developed speckle. Finally, upcoming methods based on new concepts of signal processing, such as compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
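The multiplicative model and the homomorphic approach mentioned above can be illustrated in a few lines: taking logarithms turns y = x·s into log y = log x + log s, so an ordinary additive-noise smoother applies, and exponentiating back introduces the bias that the review counts among homomorphic filtering's drawbacks. The simple mean filter and parameters below are our simplification, not any specific reviewed method.

```python
import numpy as np

def homomorphic_despeckle(img, ksize=5):
    # Homomorphic despeckling sketch: multiplicative speckle becomes
    # additive in the log domain, where a plain moving-average filter
    # smooths it; exponentiation maps back (with a known bias, since
    # exp(mean(log y)) is the geometric rather than arithmetic mean).
    logimg = np.log(img + 1e-8)
    pad = ksize // 2
    padded = np.pad(logimg, pad, mode='edge')
    out = np.empty_like(logimg)
    for i in range(logimg.shape[0]):
        for j in range(logimg.shape[1]):
            out[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    return np.exp(out)
```

Multilook intensity speckle is often modeled as unit-mean Gamma noise, which is a convenient way to simulate inputs for such a filter.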

    Locally adaptive image denoising by a statistical multiresolution criterion

    Full text link
    We demonstrate how one can choose the smoothing parameter in image denoising by a statistical multiresolution criterion, both globally and locally. Using inhomogeneous diffusion and total variation regularization as examples for localized regularization schemes, we present an efficient method for locally adaptive image denoising. As expected, the smoothing parameter serves as an edge detector in this framework. Numerical examples illustrate the usefulness of our approach. We also present an application in confocal microscopy.
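As a baseline for what the smoothing parameter controls, here is a plain gradient-descent TV denoiser with one global λ. The paper's contribution is precisely the statistical multiresolution criterion for choosing λ (even locally), which this sketch does not implement; the smoothed-TV energy, step size, and ε below are our assumptions.

```python
import numpy as np

def tv_denoise(f, lam=0.2, tau=0.1, iters=100, eps=1e-3):
    # Gradient descent on  0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps):
    # larger lam means stronger smoothing. A locally adaptive scheme, as in
    # the paper, would replace the scalar lam with a per-pixel map.
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)          # smoothed gradient norm
        px, py = ux / mag, uy / mag                 # dual field grad u / |grad u|
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= tau * ((u - f) - lam * div)            # descend the energy
    return u
```

The criterion in the paper would then tune lam so that the residual u - f is statistically indistinguishable from noise across resolution levels.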