56 research outputs found

    A comparison of parameter choice rules for ℓp - ℓq minimization

    Images that have been contaminated by various kinds of blur and noise can be restored by minimizing an ℓp-ℓq functional. The quality of the reconstruction depends on the choice of a regularization parameter. Several approaches for determining this parameter have been described in the literature. This work presents a numerical comparison of known approaches as well as of a new one.
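    As a concrete illustration of one classical rule typically included in such comparisons, the sketch below applies the discrepancy principle to a simple ℓ2-ℓ2 (Tikhonov) problem: among a grid of candidate parameters, it picks the one whose residual norm best matches an assumed known noise norm. The function names and the grid-search setup are illustrative assumptions, not the paper's method.

```python
import numpy as np

def tikhonov_solve(A, b, mu):
    """Solve min_x ||A x - b||^2 + mu ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

def discrepancy_parameter(A, b, noise_norm, mus):
    """Discrepancy principle on a grid: pick the mu whose residual norm
    is closest to the (assumed known) norm of the noise."""
    residuals = [np.linalg.norm(A @ tikhonov_solve(A, b, mu) - b) for mu in mus]
    return mus[int(np.argmin(np.abs(np.array(residuals) - noise_norm)))]
```

    The residual norm grows monotonically with mu, which is what makes this simple grid search well-defined.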

    Nearly exact discrepancy principle for low-count Poisson image restoration

    The effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on the suitable selection of the regularization parameter balancing the regularization term(s) against the generalized Kullback–Leibler divergence data term. One of the approaches still commonly used today for choosing the parameter is the discrepancy principle proposed by Zanella et al. in a seminal work. It relies on imposing a value of the data term approximately equal to its expected value and works well for mid- and high-count Poisson noise corruptions. However, the series truncation approximation used in the theoretical derivation of the expected value leads to poor performance for low-count Poisson noise. In this paper, we highlight the theoretical limits of the approach and then propose a nearly exact version of it based on Monte Carlo simulation and weighted least-squares fitting. Several numerical experiments are presented, demonstrating that in the low-count Poisson regime the proposed modified, nearly exact discrepancy principle performs far better than the original, approximated one by Zanella et al., whereas it works similarly or slightly better in the mid- and high-count regimes.
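    A minimal sketch of the Monte Carlo ingredient: estimating the expected value of the generalized Kullback–Leibler data term by repeatedly sampling Poisson realizations, rather than relying on a series truncation. The weighted least-squares fitting step is omitted, and all names and defaults here are illustrative assumptions.

```python
import numpy as np

def kl_divergence(y, lam):
    """Generalized Kullback-Leibler divergence D_KL(y || lam) for Poisson
    data, using the convention 0 * log(0) = 0."""
    y = np.asarray(y, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(y > 0, y * np.log(y / lam), 0.0)
    return np.sum(lam - y + term)

def expected_kl_monte_carlo(lam, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[D_KL(Y || lam)] with Y ~ Poisson(lam),
    valid in the low-count regime where series approximations break down."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        total += kl_divergence(rng.poisson(lam), lam)
    return total / n_samples
```

    In the high-count regime the estimate approaches one half per pixel, consistent with the classical approximation; at low counts the Monte Carlo value is the more trustworthy target for the discrepancy equation.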

    Residual Whiteness Principle for Automatic Parameter Selection in ℓ2 - ℓ2 Image Super-Resolution Problems

    We propose an automatic parameter selection strategy for variational image super-resolution of blurred and down-sampled images corrupted by additive white Gaussian noise (AWGN) with unknown standard deviation. By exploiting particular properties of the operators describing the problem in the frequency domain, our strategy selects the optimal parameter as the one optimising a suitable residual whiteness measure. Numerical tests show the effectiveness of the proposed strategy for generalised ℓ2 - ℓ2 Tikhonov problems.
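    One common way to quantify residual whiteness, sketched below under the assumption of a measure built from the sample autocorrelation (computed cheaply in the frequency domain via the FFT): a white residual has near-zero autocorrelation at every nonzero lag, so the sum of their squares is small. The exact functional used in the paper may differ from this sketch.

```python
import numpy as np

def whiteness_measure(r):
    """Sum of squared normalized sample autocorrelations of a 2-D residual
    at all nonzero lags; smaller values indicate a whiter residual."""
    R = np.fft.fft2(r)
    # Circular autocorrelation via the Wiener-Khinchin theorem.
    acorr = np.real(np.fft.ifft2(np.abs(R) ** 2))
    acorr /= acorr[0, 0]          # normalize: zero-lag coefficient becomes 1
    return np.sum(acorr ** 2) - 1.0  # drop the zero-lag contribution
```

    Minimizing such a measure over the regularization parameter favors reconstructions whose residual looks like realizations of white noise, which is exactly the AWGN assumption.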

    Sparsity promoting hybrid solvers for hierarchical Bayesian inverse problems

    The recovery of sparse generative models from few noisy measurements is an important and challenging problem. Many deterministic algorithms rely on some form of ℓ1-ℓ2 minimization to combine the computational convenience of the ℓ2 penalty and the sparsity promotion of the ℓ1. It was recently shown within the Bayesian framework that sparsity promotion and computational efficiency can be attained with hierarchical models with conditionally Gaussian priors and gamma hyperpriors. The related Gibbs energy function is a convex functional, and its minimizer, which is the maximum a posteriori (MAP) estimate of the posterior, can be computed efficiently with the globally convergent Iterated Alternating Sequential (IAS) algorithm [D. Calvetti, E. Somersalo, and A. Strang, Inverse Problems, 35 (2019), 035003]. Generalization of the hyperpriors for these sparsity promoting hierarchical models to a generalized gamma family either yields globally convex Gibbs energy functionals or can exhibit local convexity for some choices of the hyperparameters [D. Calvetti et al., Inverse Problems, 36 (2020), 025010]. The main problem in computing the MAP solution for greedy hyperpriors that strongly promote sparsity is the presence of local minima. To overcome premature stopping at a spurious local minimizer, we propose two hybrid algorithms that first exploit the global convergence associated with gamma hyperpriors to arrive in a neighborhood of the unique minimizer and then adopt a generalized gamma hyperprior that promotes sparsity more strongly. The performance of the two algorithms is illustrated with computed examples.
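    The IAS iteration for the gamma-hyperprior case can be sketched as follows: it alternates a Tikhonov-type least-squares update for the unknown with a closed-form update of the prior variances, where large components earn large variances (little shrinkage) and small components are driven toward zero. The parameter values and the dense linear solve below are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def ias(A, b, sigma=0.05, beta=1.6, theta_star=1e-4, n_iter=30):
    """Sketch of the Iterated Alternating Sequential (IAS) algorithm for the
    hierarchical model x_j ~ N(0, theta_j), theta_j ~ Gamma(beta, theta_star)."""
    m, n = A.shape
    theta = np.full(n, theta_star)
    eta = beta - 1.5  # positive eta gives a globally convex Gibbs energy
    x = np.zeros(n)
    for _ in range(n_iter):
        # x-update: least squares with diagonal prior precision 1/theta_j
        D = np.diag(1.0 / theta)
        x = np.linalg.solve(A.T @ A / sigma**2 + D, A.T @ b / sigma**2)
        # theta-update: closed-form minimizer of the Gibbs energy in theta_j
        theta = theta_star * (eta / 2 + np.sqrt(eta**2 / 4 + x**2 / (2 * theta_star)))
    return x, theta
```

    A hybrid solver in the spirit of the paper would run a few such globally convergent iterations and then switch the theta-update to a greedier generalized gamma hyperprior near the minimizer.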

    Automatic fidelity and regularization terms selection in variational image restoration

    This paper addresses the study of a class of variational models for the image restoration inverse problem. The main assumption is that the additive noise model and the image gradient magnitudes follow a generalized normal (GN) distribution, whose very flexible probability density function (pdf) is characterized by two parameters, typically unknown in real-world applications, that determine its shape and scale. The unknown image and parameters, which are both modeled as random variables in light of the hierarchical Bayesian perspective adopted here, are jointly and automatically estimated within a Maximum A Posteriori (MAP) framework. The hypermodels resulting from the selected prior, likelihood and hyperprior pdfs are minimized by means of an alternating scheme which benefits from a robust initialization based on the noise whiteness property. For the minimization problem with respect to the image, the Alternating Direction Method of Multipliers (ADMM) algorithm, which takes advantage of efficient procedures for the solution of proximal maps, is employed. Computed examples show that the proposed approach holds the potential to automatically detect the noise distribution, and it is also well-suited to process a wide range of images.
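    For reference, the generalized normal density mentioned above has the form p(x) = p / (2 s Γ(1/p)) · exp(−(|x − μ|/s)^p), with shape p and scale s; p = 2 recovers the Gaussian and p = 1 the Laplace distribution. A minimal sketch of its log-density (names are illustrative):

```python
import math

def gn_logpdf(x, shape_p, scale_s, mu=0.0):
    """Log-density of the generalized normal (generalized Gaussian)
    distribution: p(x) = p / (2 s Gamma(1/p)) * exp(-(|x - mu| / s)^p)."""
    z = abs(x - mu) / scale_s
    log_norm = math.log(shape_p) - math.log(2 * scale_s) - math.lgamma(1.0 / shape_p)
    return log_norm - z ** shape_p
```

    Setting shape_p = 2 and scale_s = sqrt(2)·σ reproduces the N(0, σ²) log-density, which is a quick sanity check on the normalization constant.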