584 research outputs found

    Multiplicative Noise Removal Using Variable Splitting and Constrained Optimization

    Multiplicative noise (also known as speckle noise) models are central to the study of coherent imaging systems, such as synthetic aperture radar and sonar, and ultrasound and laser imaging. These models introduce two additional layers of difficulties with respect to the standard Gaussian additive noise scenario: (1) the noise is multiplied by (rather than added to) the original image; (2) the noise is not Gaussian, with Rayleigh and Gamma being commonly used densities. These two features of multiplicative noise models preclude the direct application of most state-of-the-art algorithms, which are designed for solving unconstrained optimization problems where the objective has two terms: a quadratic data term (log-likelihood), reflecting the additive and Gaussian nature of the noise, plus a convex (possibly nonsmooth) regularizer (e.g., a total variation or wavelet-based regularizer/prior). In this paper, we address these difficulties by: (1) converting the multiplicative model into an additive one by taking logarithms, as proposed by some other authors; (2) using variable splitting to obtain an equivalent constrained problem; and (3) dealing with this optimization problem using the augmented Lagrangian framework. A set of experiments shows that the proposed method, which we name MIDAL (multiplicative image denoising by augmented Lagrangian), yields state-of-the-art results both in terms of speed and denoising performance. Comment: 11 pages, 7 figures, 2 tables; to appear in the IEEE Transactions on Image Processing.
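
    As a concrete illustration of the recipe in this abstract, the sketch below (not the authors' MIDAL code) takes logarithms to turn unit-mean Gamma speckle into additive noise, splits the variable, and runs an augmented Lagrangian (ADMM) loop that alternates a pixelwise Newton step on the exact log-domain likelihood with an off-the-shelf total-variation denoising step from scikit-image. The number of looks L_looks, the weights lam and mu, and the mapping between lam/mu and scikit-image's weight parameter are illustrative choices, not values from the paper.

    # A minimal sketch, assuming unit-mean Gamma speckle with L_looks looks;
    # not the authors' MIDAL implementation.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def midal_like(g, L_looks=4, lam=0.1, mu=1.0, n_iter=30):
        """Despeckle g = x * n, n ~ Gamma(L_looks, 1/L_looks), by ADMM in the log domain."""
        y = np.log(g)                # additive model: y = z + log(n)
        z = y.copy()                 # estimate of the log-reflectance
        u = z.copy()                 # splitting variable (constraint u = z)
        d = np.zeros_like(z)         # scaled dual variable
        for _ in range(n_iter):
            # z-step: min_z  L_looks*(z + exp(y - z)) + (mu/2)*(z - u - d)^2,
            # a smooth convex 1-D problem per pixel; a few Newton steps suffice.
            v = u + d
            for _ in range(5):
                e = np.exp(y - z)
                grad = L_looks * (1.0 - e) + mu * (z - v)
                hess = L_looks * e + mu
                z = z - grad / hess
            # u-step: proximal step of the regularizer = TV denoising of z - d.
            # The mapping between lam/mu and skimage's `weight` is only
            # approximate here and would need calibration in practice.
            u = denoise_tv_chambolle(z - d, weight=lam / mu)
            d = d + u - z            # dual update
        return np.exp(z)             # back to the intensity domain

    # toy usage: a piecewise-constant image times 4-look Gamma speckle
    rng = np.random.default_rng(0)
    x = np.ones((64, 64)); x[16:48, 16:48] = 4.0
    g = x * rng.gamma(shape=4, scale=1.0 / 4, size=x.shape)
    x_hat = midal_like(g)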

    Line-Field Based Adaptive Image Model for Blind Deblurring

    Ph.D. thesis (Doctor of Philosophy)

    Sparsity driven ultrasound imaging

    An image formation framework for ultrasound imaging from synthetic transducer arrays based on sparsity-driven regularization functionals using single-frequency Fourier domain data is proposed. The framework involves the use of a physics-based forward model of the ultrasound observation process, the formulation of image formation as the solution of an associated optimization problem, and the solution of that problem through efficient numerical algorithms. The sparsity-driven, model-based approach estimates a complex-valued reflectivity field and preserves physical features in the scene while suppressing spurious artifacts. It also provides robust reconstructions in the case of sparse and reduced observation apertures. The effectiveness of the proposed imaging strategy is demonstrated using experimental data.
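
    A minimal sketch of the sparsity-driven image-formation idea described above: estimate a complex-valued reflectivity x from data y = A x + noise by minimizing (1/2)||y - A x||^2 + lam*||x||_1 with the iterative shrinkage/thresholding algorithm (ISTA). The random matrix A stands in for the paper's physics-based, single-frequency Fourier-domain forward model, and the plain l1 penalty stands in for its richer feature-preserving regularizers; lam and the toy scene are illustrative.

    # A hedged sketch: ISTA for  min_x (1/2)||y - A x||^2 + lam*||x||_1  with a
    # complex reflectivity x; A and the scene are illustrative stand-ins.
    import numpy as np

    def ista_complex(A, y, lam=0.05, n_iter=200):
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(n_iter):
            x = x - step * (A.conj().T @ (A @ x - y)) # gradient step on the data term
            mag = np.abs(x)                           # complex soft-thresholding:
            x = np.where(mag > 0, x / np.maximum(mag, 1e-12), 0) \
                * np.maximum(mag - step * lam, 0.0)   # shrink magnitudes, keep phases
        return x

    # toy usage: a sparse complex scene observed through a random "aperture"
    rng = np.random.default_rng(1)
    x_true = np.zeros(128, dtype=complex)
    x_true[[10, 40, 90]] = [2.0, 1.0 + 1.0j, -1.5j]
    A = (rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))) / 8.0
    y = A @ x_true + 0.01 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
    x_hat = ista_complex(A, y)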

    A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images

    Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted, and the advantages of undecimated, or stationary, wavelet transforms over decimated ones are discussed. Bayesian estimators and probability density function (pdf) models in both the spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, such as nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for the assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods and, on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not fully developed speckle. Finally, upcoming methods based on new concepts of signal processing, such as compressive sensing, are foreseen as a new generation of despeckling techniques, following spatial-domain and multiresolution-domain methods.
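
    The snippet below (not taken from the tutorial) illustrates two of its recurring themes: fully developed speckle simulated as unit-mean Gamma multiplicative noise, and the homomorphic route (log-transform, filter, exponentiate) together with the log-domain mean bias E[log n] = digamma(L) - log(L) that homomorphic filtering must compensate for, one of the drawbacks the tutorial discusses. The Gaussian filter and all parameter values are illustrative.

    # Illustration only: fully developed speckle as unit-mean Gamma noise, and a
    # naive homomorphic (log-domain) filter with its mean-bias correction.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.special import digamma

    rng = np.random.default_rng(0)
    L = 4                                                  # number of looks
    clean = np.ones((128, 128)); clean[32:96, 32:96] = 5.0
    speckled = clean * rng.gamma(L, 1.0 / L, clean.shape)  # multiplicative model

    log_img = np.log(speckled)                      # multiplicative -> additive
    filtered = gaussian_filter(log_img, sigma=2.0)  # naive log-domain smoothing
    bias = digamma(L) - np.log(L)                   # mean of log-speckle (negative)
    despeckled = np.exp(filtered - bias)            # undo the log and its bias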

    Bayesian super-resolution with application to radar target recognition

    This thesis is concerned with methods to facilitate automatic target recognition using images generated from a group of associated radar systems. Target recognition algorithms require access to a database of previously recorded or synthesized radar images for the targets of interest, or a database of features based on those images. However, the resolution of a new image acquired under non-ideal conditions may not be as good as that of the images used to generate the database. Therefore it is proposed to use super-resolution techniques to match the resolution of new images with the resolution of database images. A comprehensive review of the literature is given for super-resolution when used either on its own, or in conjunction with target recognition. A new super-resolution algorithm is developed that is based on numerical Markov chain Monte Carlo Bayesian statistics. This algorithm allows uncertainty in the super-resolved image to be taken into account in the target recognition process. It is shown that the Bayesian approach improves the probability of correct target classification over standard super-resolution techniques. The new super-resolution algorithm is demonstrated using a simple synthetically generated data set and is compared to other similar algorithms. A variety of effects that degrade super-resolution performance, such as defocus, are analyzed and techniques to compensate for these are presented. Performance of the super-resolution algorithm is then tested as part of a Bayesian target recognition framework using measured radar data.
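
    A toy, heavily simplified sketch of the thesis's central idea: sample the posterior over a super-resolved signal with Markov chain Monte Carlo so that the spread of the samples, not just a point estimate, can be propagated to a recognition stage. It uses a 1-D signal, a small blur-and-decimate forward model, a Gaussian smoothness prior and a random-walk Metropolis sampler; none of these modelling choices is taken from the thesis.

    # Toy 1-D sketch: random-walk Metropolis sampling of a super-resolution
    # posterior; the forward model, prior and step size are all illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_hi, factor, sigma_n = 64, 4, 0.05
    x_true = np.zeros(n_hi); x_true[20:28] = 1.0; x_true[40:44] = 0.7

    def forward(x):                     # blur with a small kernel, then decimate
        return np.convolve(x, [0.25, 0.5, 0.25], mode="same")[::factor]

    y = forward(x_true) + sigma_n * rng.standard_normal(n_hi // factor)

    def log_post(x, beta=20.0):
        like = -0.5 * np.sum((y - forward(x)) ** 2) / sigma_n ** 2
        prior = -0.5 * beta * np.sum(np.diff(x) ** 2)    # smoothness prior
        return like + prior

    x = np.zeros(n_hi)
    lp = log_post(x)
    samples = []
    for it in range(20000):
        prop = x + 0.02 * rng.standard_normal(n_hi)      # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept test
            x, lp = prop, lp_prop
        if it > 5000 and it % 20 == 0:
            samples.append(x.copy())

    samples = np.array(samples)
    post_mean = samples.mean(axis=0)                     # point estimate
    post_std = samples.std(axis=0)                       # per-pixel uncertainty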

    2D Phase Unwrapping via Graph Cuts

    Phase imaging technologies such as interferometric synthetic aperture radar (InSAR), magnetic resonance imaging (MRI), and optical interferometry are nowadays widespread and increasingly used. The so-called phase unwrapping, which consists in the inference of the absolute phase from the modulo-2π phase, is a critical step in many of their processing chains, yet still one of their most challenging problems. We introduce an energy minimization based approach to 2D phase unwrapping. In this approach we address the problem by adopting a Bayesian point of view and a Markov random field (MRF) to model the phase. The maximum a posteriori estimation of the absolute phase gives rise to an integer optimization problem, for which we introduce a family of efficient algorithms based on existing graph cuts techniques. We term our approach and algorithms PUMA, for Phase Unwrapping MAx flow. As long as the prior potential of the MRF is convex, PUMA guarantees an exact global solution. In particular, it solves exactly all the minimum Lp norm (p ≥ 1) phase unwrapping problems, unifying, in that sense, a set of existing independent algorithms. For non-convex potentials we introduce a version of PUMA that, while yielding only approximate solutions, gives very useful phase unwrapping results. The main characteristic of the introduced solutions is the ability to blindly preserve discontinuities. Extending the previous versions of PUMA, we tackle denoising by exploiting a multi-precision idea, which allows us to use the same rationale both for phase unwrapping and denoising. Finally, the last presented version of PUMA uses a frequency diversity concept to unwrap phase images having large phase rates. A representative set of experiments illustrates the performance of PUMA.
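
    The sketch below illustrates, in drastically simplified form, the kind of move PUMA iterates: every pixel either keeps its current 2π multiple or jumps by +2π, and the jointly optimal set of jumps for a convex (here quadratic) clique potential is found exactly as a minimum s-t cut. It assumes the PyMaxflow package for the max-flow computation and illustrates the graph construction only; it is not the authors' implementation and makes no claim about their convergence guarantees.

    # Requires the PyMaxflow package (pip install PyMaxflow). Illustration only.
    import numpy as np
    import maxflow

    TWO_PI = 2.0 * np.pi

    def clique_energy(phi):
        # quadratic potential on horizontal and vertical neighbour differences
        return (np.sum((phi[:, 1:] - phi[:, :-1]) ** 2)
                + np.sum((phi[1:, :] - phi[:-1, :]) ** 2))

    def puma_like(psi, max_moves=50):
        """Unwrap a wrapped-phase image psi (values in [-pi, pi))."""
        rows, cols = psi.shape
        idx = np.arange(rows * cols).reshape(rows, cols)
        k = np.zeros(psi.shape, dtype=int)         # integer 2*pi multiples
        for _ in range(max_moves):
            phi = psi + TWO_PI * k
            g = maxflow.Graph[float](rows * cols, 2 * rows * cols)
            nodes = g.add_nodes(rows * cols)
            for di, dj in ((0, 1), (1, 0)):        # the two clique directions
                a = phi[di:, dj:] - phi[:rows - di, :cols - dj]
                ii = idx[di:, dj:].ravel()
                jj = idx[:rows - di, :cols - dj].ravel()
                A = (a ** 2).ravel()               # (b_i, b_j) = (0, 0): no jump
                B = ((a - TWO_PI) ** 2).ravel()    # (0, 1): only pixel j jumps
                C = ((a + TWO_PI) ** 2).ravel()    # (1, 0): only pixel i jumps
                D = A                              # (1, 1): difference unchanged
                for i_n, j_n, a_, b_, c_, d_ in zip(ii, jj, A, B, C, D):
                    # standard submodular graph construction for a pairwise term
                    ci, cj = c_ - a_, d_ - c_
                    g.add_tedge(i_n, max(ci, 0.0), max(-ci, 0.0))
                    g.add_tedge(j_n, max(cj, 0.0), max(-cj, 0.0))
                    g.add_edge(i_n, j_n, b_ + c_ - a_ - d_, 0.0)
            g.maxflow()
            jump = np.array([g.get_segment(n) for n in nodes]).reshape(rows, cols)
            if clique_energy(psi + TWO_PI * (k + jump)) >= clique_energy(phi):
                break                              # no energy decrease: stop
            k += jump
        return psi + TWO_PI * k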

    Information Extraction and Modeling from Remote Sensing Images: Application to the Enhancement of Digital Elevation Models

    To deal with high-complexity data such as remote sensing images with metric resolution over large areas, an innovative, fast and robust image processing system is presented. The modeling of increasing levels of information is used to extract, represent and link image features to semantic content. The potential of the proposed techniques is demonstrated with an application to enhance and regularize digital elevation models based on information collected from remote sensing images.

    InSAR phase analysis: Phase unwrapping for noisy SAR interferograms


    A Stochastic Modeling Approach to Region- and Edge-Based Image Segmentation

    The purpose of image segmentation is to isolate objects in a scene from the background. This is a very important step in any computer vision system, since various tasks, such as shape analysis and object recognition, require accurate image segmentation. Image segmentation can also produce tremendous data reduction. Edge-based and region-based segmentation have been examined, and two new algorithms based on recent results in random field theory have been developed. The edge-based segmentation algorithm uses the pixel gray-level intensity information to locate object boundaries in two stages: edge enhancement, followed by edge linking. Edge enhancement is accomplished by maximum-energy filters used in one-dimensional bandlimited signal analysis. The issue of optimum filter spatial support is analyzed for ideal edge models. Edge linking is performed by quantitative sequential search using the Stack algorithm. Two probabilistic search metrics are introduced, and their optimality is proven and demonstrated on test as well as real scenes. Compared to other methods, this algorithm is shown to produce more accurate location of object boundaries. Region-based segmentation was modeled as a MAP estimation problem in which the actual (unknown) objects were estimated from the observed (known) image by a recursive classification algorithm. The observed image was modeled by an autoregressive (AR) model whose parameters were estimated locally, and a Gibbs-Markov random field (GMRF) model was used to model the unknown scene. A computational study was conducted on images containing various types of textures. The issues of parameter estimation, neighborhood selection, and model order were examined. It is concluded that the MAP approach to region segmentation generally works well on images having a large content of microtextures, which can be properly modeled by both AR and GMRF models. On these texture images, second-order AR and GMRF models were shown to be adequate.
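
    As a compact stand-in for the region-based MAP formulation described above, the sketch below updates each pixel label by iterated conditional modes (ICM), combining a per-class likelihood with a Potts-type Gibbs prior on the label field. The Gaussian class likelihoods replace the thesis's locally fitted AR texture model and ICM replaces its recursive classification scheme; the class means, sigma and beta are illustrative.

    # A toy stand-in: ICM with Gaussian class likelihoods and a Potts-type prior.
    import numpy as np

    def icm_segment(img, means, sigma=0.3, beta=1.5, n_iter=10):
        means = np.asarray(means, dtype=float)
        labels = np.argmin((img[..., None] - means) ** 2, axis=-1)  # ML start
        rows, cols = img.shape
        for _ in range(n_iter):
            for r in range(rows):
                for c in range(cols):
                    neigh = [labels[rr, cc]
                             for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                             if 0 <= rr < rows and 0 <= cc < cols]
                    best_cost, best_lab = np.inf, labels[r, c]
                    for lab, m in enumerate(means):
                        data = (img[r, c] - m) ** 2 / (2.0 * sigma ** 2)  # likelihood
                        prior = beta * sum(lab != n for n in neigh)       # Potts prior
                        if data + prior < best_cost:
                            best_cost, best_lab = data + prior, lab
                    labels[r, c] = best_lab
        return labels

    # toy usage: two noisy constant regions
    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64)); truth[:, 32:] = 1.0
    img = truth + 0.4 * rng.standard_normal(truth.shape)
    seg = icm_segment(img, means=[0.0, 1.0])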