    Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation

    In this paper, we aim at recovering an unknown signal x0 from noisy measurements y = Phi*x0 + w, where Phi is an ill-conditioned or singular linear operator and w accounts for some noise. To regularize such an ill-posed inverse problem, we impose an analysis sparsity prior. More precisely, the recovery is cast as a convex optimization program where the objective is the sum of a quadratic data fidelity term and a regularization term formed of the L1-norm of the correlations between the sought-after signal and atoms in a given (generally overcomplete) dictionary. The L1-sparsity analysis prior is weighted by a regularization parameter lambda > 0. We prove that any minimizer of this problem is a piecewise-affine function of the observations y and the regularization parameter lambda. As a byproduct, we exploit these properties to get an objectively guided choice of lambda. In particular, we develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE) and show that it is an unbiased and reliable estimator of an appropriately defined risk. The latter encompasses special cases such as the prediction risk, the projection risk and the estimation risk. We apply these risk estimators to the special case of L1-sparsity analysis regularization. We also discuss implementation issues and propose fast algorithms to solve the L1 analysis minimization problem and to compute the associated GSURE. We finally illustrate the applicability of our framework to parameter selection on several imaging problems.
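
    The analysis-L1 program described above can be solved by standard proximal splitting schemes. Below is a minimal sketch using ADMM on a toy 1D problem; this is one standard solver, not necessarily the paper's fast algorithm, and all names (Phi, D, lam, rho) and the finite-difference dictionary are illustrative assumptions.

```python
# Sketch: ADMM for the analysis L1 problem
#   min_x 0.5*||Phi @ x - y||^2 + lam*||D @ x||_1
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def analysis_l1_admm(Phi, D, y, lam, rho=1.0, n_iter=200):
    x = np.zeros(Phi.shape[1])
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    # Precompute the (fixed) normal-equations matrix for the x-update.
    A = Phi.T @ Phi + rho * D.T @ D
    for _ in range(n_iter):
        x = np.linalg.solve(A, Phi.T @ y + rho * D.T @ (z - u))
        z = soft_threshold(D @ x + u, lam / rho)   # prox of the L1 term
        u = u + D @ x - z                          # dual ascent step
    return x

# Toy usage: piecewise-constant signal, first-difference analysis operator.
rng = np.random.default_rng(0)
n = 50
Phi = rng.standard_normal((30, n))    # underdetermined measurement operator
D = np.eye(n) - np.eye(n, k=1)        # finite-difference dictionary (TV-like)
x0 = np.repeat([0.0, 1.0, -0.5], [20, 15, 15])
y = Phi @ x0 + 0.05 * rng.standard_normal(30)
x_hat = analysis_l1_admm(Phi, D, y, lam=0.1)
```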

    Full Wave Form Inversion for Seismic Data

    In seismic wave inversion, seismic waves are sent into the ground and then observed at many receiving points, with the aim of producing high-resolution images of the geological underground details. The challenge presented by Saudi Aramco is to solve the inverse problem for multiple point sources on the full elastic wave equation, taking into account all frequencies for the best resolution. The state-of-the-art methods use optimisation to find the seismic properties of the rocks, such that when these are used as the coefficients of the equations of a model, the measurements are reproduced as closely as possible. This process requires regularisation if one is to avoid instability. The approach can produce a realistic image but does not account for uncertainty arising, in general, from the existence of many different patterns of properties that also reproduce the measurements. In the Study Group a formulation of the problem was developed, based upon the principles of Bayesian statistics. First, the state-of-the-art optimisation method was shown to be a special case of the Bayesian formulation. This result immediately provides insight into the most appropriate regularisation methods. Then a practical implementation of a sequential sampling algorithm, using forms of the Ensemble Kalman Filter, was devised and explored.
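
    To make the sequential sampling idea concrete, here is a minimal sketch of a single stochastic Ensemble Kalman Filter analysis step for parameter estimation. The linear stand-in forward model G and all dimensions are toy assumptions, not the Study Group's elastic-wave implementation.

```python
# Sketch: one stochastic EnKF analysis step for a parameter vector theta,
# given observations y = G(theta) + noise with covariance R.
import numpy as np

rng = np.random.default_rng(1)
n_param, n_obs, n_ens = 8, 5, 100

G = rng.standard_normal((n_obs, n_param))      # stand-in forward operator
theta_true = rng.standard_normal(n_param)
R = 0.01 * np.eye(n_obs)                       # observation-noise covariance
y = G @ theta_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Prior ensemble of parameter vectors (one member per column).
Theta = rng.standard_normal((n_param, n_ens))

# Forward-propagate each member and perturb the observations.
D_pred = G @ Theta
Y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T

# Sample cross-covariance and predicted-data covariance.
Theta_c = Theta - Theta.mean(axis=1, keepdims=True)
D_c = D_pred - D_pred.mean(axis=1, keepdims=True)
C_td = Theta_c @ D_c.T / (n_ens - 1)
C_dd = D_c @ D_c.T / (n_ens - 1) + R

# Kalman gain and ensemble update.
K = C_td @ np.linalg.inv(C_dd)
Theta_post = Theta + K @ (Y_pert - D_pred)
print("posterior mean:", Theta_post.mean(axis=1))
```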

    Parameter selection in sparsity-driven SAR imaging

    We consider a recently developed sparsity-driven synthetic aperture radar (SAR) imaging approach which can produce superresolution, feature-enhanced images. However, this regularization-based approach requires the selection of a hyper-parameter in order to generate such high-quality images. In this paper we present a number of techniques for automatically selecting the hyper-parameter involved in this problem. In particular, we propose and develop numerical procedures for the use of Stein’s unbiased risk estimation, generalized cross-validation, and L-curve techniques for automatic parameter choice. We demonstrate and compare the effectiveness of these procedures through experiments based on both simple synthetic scenes and electromagnetically simulated realistic data. Our results suggest that sparsity-driven SAR imaging coupled with the proposed automatic parameter choice procedures offers significant improvements over conventional SAR imaging.
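
    As a point of reference for one of the named techniques, here is a hedged sketch of generalized cross-validation (GCV) for a quadratic (ridge/Tikhonov) problem via the SVD. The paper adapts such criteria to the non-quadratic SAR setting, which this toy linear example does not attempt to reproduce.

```python
# Sketch: GCV parameter choice for min ||A x - y||^2 + lam ||x||^2.
import numpy as np

def gcv_ridge(A, y, lambdas):
    """Return the lambda minimizing GCV(lam) = ||(I-H)y||^2 / trace(I-H)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    b = U.T @ y
    m = A.shape[0]
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)              # filter factors of the hat matrix
        resid2 = np.sum(((1 - f) * b) ** 2) + (y @ y - b @ b)
        trace = m - np.sum(f)                # trace(I - H(lam))
        scores.append(resid2 / trace**2)
    return lambdas[int(np.argmin(scores))]

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))
y = A @ rng.standard_normal(20) + 0.1 * rng.standard_normal(40)
print("GCV-selected lambda:", gcv_ridge(A, y, np.logspace(-4, 2, 60)))
```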

    Sparse image reconstruction for molecular imaging

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at sub-atomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Much prior work on sparse estimators has focused on the case where the system matrix H has low coherence; however, in our application H is the convolution matrix for the system psf, and a typical convolution matrix has high coherence. The paper therefore does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Unbiased estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
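
    The classical special case underlying this kind of SURE-based tuning is threshold selection for soft thresholding under unit-variance AWGN. Below is a minimal sketch of that case (the paper's hybrid rule, which generalizes it, is not reproduced here); the toy signal is an illustrative assumption.

```python
# Sketch: SURE-based threshold selection for soft thresholding under AWGN.
import numpy as np

def sure_soft(y, t, sigma=1.0):
    """Unbiased estimate of the MSE risk of soft-thresholding y at t."""
    n = y.size
    clipped = np.minimum(np.abs(y), t)
    return (n * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(clipped**2))

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# Choose the threshold by minimizing SURE over a grid.
rng = np.random.default_rng(3)
x = np.zeros(500); x[:20] = 5.0              # sparse ground truth
y = x + rng.standard_normal(500)             # AWGN with sigma = 1
grid = np.linspace(0.0, 4.0, 200)
t_star = grid[np.argmin([sure_soft(y, t) for t in grid])]
x_hat = soft(y, t_star)
print("SURE threshold:", t_star, "MSE:", np.mean((x_hat - x) ** 2))
```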

    Bilateral Filter Evaluation Based on Exponential Kernels

    The well-known bilateral filter is used to smooth noisy images while keeping their edges. This filter is commonly used with Gaussian kernel functions without real justification. The choice of the kernel functions has a major effect on the filter behavior. We propose to use exponential kernels with L1 distances instead of Gaussian ones. We derive Stein's Unbiased Risk Estimate to find the optimal parameters of the new filter and compare its performance with the conventional one. We show that this new choice of the kernels has a comparable smoothing effect but with sharper edges, due to the faster, smoothly decaying kernels.
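
    To illustrate the proposed kernel substitution, here is a minimal 1D sketch of a bilateral filter whose spatial and range weights are exponential kernels with L1 distances, exp(-|d|/s), in place of Gaussians. The parameter names (s_spatial, s_range) and the toy signal are illustrative assumptions.

```python
# Sketch: 1D bilateral filter with exponential (Laplacian) kernels.
import numpy as np

def bilateral_exp_1d(x, half_width, s_spatial, s_range):
    n = x.size
    out = np.empty(n)
    offsets = np.arange(-half_width, half_width + 1)
    w_spatial = np.exp(-np.abs(offsets) / s_spatial)   # exponential, L1 distance
    for i in range(n):
        idx = np.clip(i + offsets, 0, n - 1)           # replicate at borders
        w_range = np.exp(-np.abs(x[idx] - x[i]) / s_range)
        w = w_spatial * w_range
        out[i] = np.sum(w * x[idx]) / np.sum(w)
    return out

# Noisy step edge: the range kernel preserves the edge while smoothing flats.
rng = np.random.default_rng(4)
x = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
x_smooth = bilateral_exp_1d(x, half_width=7, s_spatial=3.0, s_range=0.2)
```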

    Parameter selection in non-quadratic regularization-based SAR imaging

    Many remote sensing applications such as weather forecasting and automatic target recognition (ATR) require high-resolution images. Synthetic Aperture Radar (SAR) has become an important imaging technology for these remote sensing tasks through its all-weather, day and night imaging capability. However, the effectiveness of SAR imaging for a specific decision-making task depends on the quality of certain features in the formed imagery. For example, in order to be able to successfully use a SAR image in an ATR system, the SAR image should exhibit features of the objects in the scene that are relevant for ATR. Recently, advanced SAR image formation techniques have been developed to produce feature-enhanced SAR images. In this thesis, we focus on one such technique, in particular a non-quadratic regularization-based approach which aims to produce so-called “point-enhanced SAR images”. The idea behind this approach is to emphasize appropriate features by means of regularizing the solution. The stability of the solution is ensured through a scalar parameter, called the regularization parameter, balancing the contribution of the data and the a priori constraints on the formed image. Automatic selection of the regularization parameter is an important issue, since SAR images are ideally aimed to be used in fully automated systems; however, this issue has not been addressed in previous work. To address the parameter selection problem in this image formation algorithm, we propose the use of Stein’s unbiased risk estimation, generalized cross-validation, and L-curve techniques, which have mostly been used in quadratic regularization methods previously. We have adapted these methods to the SAR imaging framework, and have developed a number of numerical tools to enable their usage. We demonstrate the effectiveness of the applied methods through experiments based on both synthetic and electromagnetically simulated realistic data.
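
    For the L-curve technique named above, here is a hedged sketch of the classical quadratic (Tikhonov) version: trace (log residual norm, log solution norm) over a range of lambdas and pick the point of maximum curvature. The thesis adapts this idea to non-quadratic SAR imaging, which this toy linear example does not attempt to reproduce.

```python
# Sketch: L-curve corner detection for min ||A x - y||^2 + lam ||x||^2.
import numpy as np

def l_curve_corner(A, y, lambdas):
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        rho.append(np.log(np.linalg.norm(A @ x - y)))   # log residual norm
        eta.append(np.log(np.linalg.norm(x)))           # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    # Curvature of the parametric curve (rho, eta) by finite differences.
    dr, de = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(dr), np.gradient(de)
    kappa = (dr * d2e - de * d2r) / (dr**2 + de**2) ** 1.5
    return lambdas[int(np.argmax(kappa))]

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 30))
y = A @ rng.standard_normal(30) + 0.1 * rng.standard_normal(50)
print("L-curve corner at lambda =", l_curve_corner(A, y, np.logspace(-5, 1, 80)))
```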

    Nonlocal Means With Dimensionality Reduction and SURE-Based Parameter Selection

    Automatic Denoising and Unmixing in Hyperspectral Image Processing

    This thesis addresses two important aspects of hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing of remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most heavily researched steps toward such automation are hyperspectral image denoising, an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing Stein's unbiased risk estimate (SURE) of this nonlinear estimator. Along the way, this thesis provides a plausibility argument, with an analytical example, as to why vector bilateral filtering outperforms band-wise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared to several other approaches. The non-negative matrix factorization (NMF) technique and its extensions were developed to find part-based, linear representations of non-negative multivariate data. They have been shown to provide more interpretable results with realistic non-negativity constraints in unsupervised learning applications such as hyperspectral imagery unmixing, image feature extraction, and data mining. This thesis extends the NMF method by incorporating a deterministic annealing optimization procedure, which helps address the non-convexity problem in NMF and provides a better choice of sparseness constraint. The approach is based on replacing the difficult non-convex optimization problem of NMF with an easier one by adding an auxiliary convex entropy constraint term and solving this first. Experimental results with a hyperspectral unmixing application show that the proposed technique provides improved unmixing performance compared to other state-of-the-art methods.
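
    For context on the unmixing part, here is a minimal sketch of the baseline multiplicative-update NMF (Lee & Seung) that such extensions start from; the thesis's deterministic annealing schedule and sparseness constraint are not reproduced, and the toy data is an illustrative assumption.

```python
# Sketch: multiplicative-update NMF, V ~ W @ H with nonnegative factors.
import numpy as np

def nmf_multiplicative(V, r, n_iter=500, eps=1e-9):
    """Factor nonnegative V (m x n) as W (m x r) @ H (r x n), Frobenius loss."""
    rng = np.random.default_rng(6)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update abundance-like factor
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update signature-like factor
    return W, H

# Toy "hyperspectral" data flattened to pixels x bands, 3 endmembers.
rng = np.random.default_rng(7)
V = rng.random((200, 3)) @ rng.random((3, 50))
W, H = nmf_multiplicative(V, r=3)
print("relative reconstruction error:",
      np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```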