
    Bregman Cost for Non-Gaussian Noise

    One of the tasks in Bayesian inverse problems is to find a good point estimate based on the posterior probability density. The most common point estimators are the conditional mean (CM) and maximum a posteriori (MAP) estimates, which correspond to the mean and the mode of the posterior, respectively. From a theoretical point of view it has been argued that the MAP estimate is a Bayes estimator for the uniform cost function only in an asymptotic sense, while the CM estimate is a Bayes estimator for the mean squared cost function. Recently, it has been proven that the MAP estimate is a proper Bayes estimator for the Bregman cost if the image is corrupted by Gaussian noise. In this work we extend this result to other noise models with log-concave likelihood density, by introducing two related Bregman cost functions for which the CM and the MAP estimates are proper Bayes estimators. Moreover, we also prove that the CM estimate outperforms the MAP estimate when the error is measured in a certain Bregman distance, a result previously unknown even in the case of additive Gaussian noise.
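
    Both cost functions in this abstract are built on the Bregman distance; for reference, its standard definition for a convex functional Phi is recalled below (background material, not taken from the paper itself):

```latex
% Bregman distance of a convex functional \Phi between u and v, taken
% with respect to a subgradient p of \Phi at v:
D_\Phi^{p}(u, v) = \Phi(u) - \Phi(v) - \langle p,\, u - v \rangle,
\qquad p \in \partial\Phi(v).
```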

    Hyperparameter optimization with approximate gradient

    Most models in machine learning contain at least one hyperparameter to control model complexity. Choosing an appropriate set of hyperparameters is both crucial for model accuracy and computationally challenging. In this work we propose an algorithm for the optimization of continuous hyperparameters using inexact gradient information. An advantage of this method is that hyperparameters can be updated before the model parameters have fully converged. We also give sufficient conditions for the global convergence of this method, based on regularity conditions of the involved functions and summability of errors. Finally, we validate the empirical performance of this method on the estimation of regularization constants of L2-regularized logistic regression and kernel ridge regression. Empirical benchmarks indicate that our approach is highly competitive with respect to state-of-the-art methods.
    Comment: Proceedings of the International Conference on Machine Learning (ICML).
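
    To make the idea concrete, below is a minimal sketch of an approximate-hypergradient loop for the ridge-regression case the abstract mentions. All names are hypothetical and the implicit-differentiation formula is the textbook one for ridge regression; this is not the authors' algorithm verbatim (in particular, their inexactness and summability conditions are not reproduced):

```python
# Minimal sketch of hyperparameter optimization with an approximate
# gradient. Inner problem: ridge regression with regularization
# constant lam; outer objective: validation mean squared error.
import numpy as np

def approx_hypergradient_ridge(X_tr, y_tr, X_val, y_val, lam, w,
                               inner_steps=100, lr=0.1):
    """One inexact inner solve, then an implicit hypergradient estimate."""
    n, d = X_tr.shape
    # Inexact inner solve: a few gradient steps, warm-started from w, so
    # the hyperparameter can be updated before full convergence.
    for _ in range(inner_steps):
        g = X_tr.T @ (X_tr @ w - y_tr) / n + lam * w
        w = w - lr * g
    # Implicit function theorem at the (approximate) inner optimum:
    #   dL_val/dlam = -grad_w L_val(w)^T H^{-1} w,  H = X^T X / n + lam I.
    grad_val = X_val.T @ (X_val @ w - y_val) / X_val.shape[0]
    H = X_tr.T @ X_tr / n + lam * np.eye(d)
    q = np.linalg.solve(H, grad_val)   # exact solve here; inexact in practice
    return -q @ w, w

# Usage: alternate approximate inner solves with hyperparameter steps.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_val, y_val = X[:150], y[:150], X[150:], y[150:]
lam, w = 1.0, np.zeros(10)
for _ in range(50):
    hg, w = approx_hypergradient_ridge(X_tr, y_tr, X_val, y_val, lam, w)
    lam = max(lam - 0.1 * hg, 1e-6)    # keep the regularization constant positive
```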

    Solving ill-posed inverse problems using iterative deep neural networks

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularization theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, the noise model and a regularizing functional. The method results in a gradient-like iterative scheme, where the "gradient" component is learned using a convolutional network that takes the gradients of the data discrepancy and of the regularizer as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT scan. The outcome is compared against FBP and TV reconstructions; the proposed method provides a 5.4 dB PSNR improvement over the TV reconstruction while being significantly faster, producing reconstructions of 512 x 512 volumes in about 0.4 seconds on a single GPU.
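
    The following sketch illustrates the shape of such a learned gradient scheme. The tiny CNN, the placeholder data term and the quadratic regularizer are illustrative assumptions, not the paper's architecture or losses:

```python
# Minimal sketch of a learned gradient-like iteration:
#   x_{k+1} = x_k + Lambda_k(x_k, grad d(x_k), grad R(x_k)).
import torch
import torch.nn as nn

class LearnedUpdate(nn.Module):
    """CNN mapping (iterate, data gradient, regularizer gradient) to an update."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.PReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x, grad_data, grad_reg):
        return self.net(torch.cat([x, grad_data, grad_reg], dim=1))

def reconstruct(x0, grad_data_fn, grad_reg_fn, updates):
    """Run the unrolled scheme, one learned module per iteration."""
    x = x0
    for step in updates:
        x = x + step(x, grad_data_fn(x), grad_reg_fn(x))
    return x

# Toy example: data term d(x) = 0.5||x - y||^2, regularizer R(x) = 0.5||x||^2.
# In training, the modules would be fit end-to-end against ground truth images.
y = torch.randn(1, 1, 64, 64)
updates = nn.ModuleList(LearnedUpdate() for _ in range(5))
x_hat = reconstruct(torch.zeros_like(y), lambda x: x - y, lambda x: x, updates)
```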

    Adversarial regularizers in inverse problems

    Inverse problems in medical imaging and computer vision are traditionally solved using purely model-based methods; among these, variational regularization models are one of the most popular approaches. We propose a new framework for applying data-driven approaches to inverse problems, using a neural network as a regularization functional. The network learns to discriminate between the distribution of ground truth images and the distribution of unregularized reconstructions. Once trained, the network is applied to the inverse problem by solving the corresponding variational problem. Unlike other data-based approaches for inverse problems, the algorithm can be applied even if only unsupervised training data is available. Experiments demonstrate the potential of the framework for denoising on the BSDS dataset and for computed tomography reconstruction on the LIDC dataset.

    The authors acknowledge the National Cancer Institute and the Foundation for the National Institutes of Health, and their critical role in the creation of the free publicly available LIDC/IDRI Database used in this study. The work by Sebastian Lunz was supported by the EPSRC grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis, and by the Cantab Capital Institute for the Mathematics of Information. The work by Ozan Öktem was supported by the Swedish Foundation for Strategic Research grant AM13-0049. Carola-Bibiane Schönlieb acknowledges support from the Leverhulme Trust project 'Breaking the non-convexity barrier', EPSRC grant EP/M00483X/1, the EPSRC Centre EP/N014588/1, the RISE projects CHiPS and NoMADS, the Cantab Capital Institute for the Mathematics of Information and the Alan Turing Institute.
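
    A compressed sketch of the idea follows. The toy critic, the Wasserstein-style loss without gradient penalty, the random stand-in data and the identity forward operator are all illustrative assumptions, not the authors' implementation:

```python
# Sketch: train a critic to discriminate ground truth images from
# unregularized reconstructions, then use it as a regularizer.
import torch
import torch.nn as nn

critic = nn.Sequential(                # Psi_theta: image -> scalar
    nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.1),
    nn.Conv2d(16, 16, 3, padding=1), nn.LeakyReLU(0.1),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 1),
)
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def critic_loss(x_true, x_unreg):
    """Push the critic low on ground truth, high on unregularized images."""
    return critic(x_true).mean() - critic(x_unreg).mean()

# Unsupervised training: the two batches need not be paired.
for _ in range(100):
    x_true = torch.rand(8, 1, 32, 32)                    # stand-in ground truth
    x_unreg = x_true + 0.3 * torch.randn_like(x_true)    # stand-in reconstructions
    opt.zero_grad()
    critic_loss(x_true, x_unreg).backward()
    opt.step()

# Reconstruction: gradient descent on ||x - y||^2 + lam * Psi(x)
# (identity forward operator, for brevity).
y, lam = torch.rand(1, 1, 32, 32), 0.1
x = y.clone().requires_grad_(True)
for _ in range(50):
    loss = ((x - y) ** 2).sum() + lam * critic(x).sum()
    g, = torch.autograd.grad(loss, x)
    x = (x - 0.01 * g).detach().requires_grad_(True)
```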

    A flexible space-variant anisotropic regularisation for image restoration with automated parameter selection

    We propose a new space-variant anisotropic regularisation term for variational image restoration, based on the statistical assumption that the gradients of the target image are locally distributed according to a bivariate generalised Gaussian distribution. The highly flexible variational structure of the corresponding regulariser encodes several free parameters, which hold the potential for faithfully modelling the local geometry of the image and describing local orientation preferences. For an automatic estimation of such parameters, we design a robust maximum likelihood approach and report results on its reliability on synthetic data and natural images. For the numerical solution of the corresponding image restoration model, we use an iterative algorithm based on the Alternating Direction Method of Multipliers (ADMM). A suitable preliminary variable splitting, together with a novel result in multivariate non-convex proximal calculus, yields a very efficient minimisation algorithm. Several numerical results are reported, showing significant quality improvement of the proposed model with respect to related state-of-the-art competitors, in particular in terms of texture and detail preservation.
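
    As background on the splitting strategy, here is a minimal ADMM loop for a much-simplified space-variant model: 1-D weighted-TV denoising, where the anisotropy is reduced to a per-sample weight. The discrete operators and names are illustrative, not the authors' code:

```python
# ADMM for  min_x 0.5||x - y||^2 + sum_i w_i |(D x)_i|,
# using the splitting z = D x.
import numpy as np

def admm_denoise(y, w, rho=1.0, iters=200):
    n = len(y)
    D = np.diff(np.eye(n), axis=0)             # forward-difference operator
    A = np.eye(n) + rho * D.T @ D              # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))      # x-update
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - w / rho, 0)  # prox: soft-threshold
        u = u + D @ x - z                                    # dual update
    return x

# Usage: larger weights in w enforce heavier smoothing at those locations,
# smaller weights preserve local detail -- the space-variant idea in 1-D.
t = np.linspace(0, 1, 200)
y = np.sign(np.sin(6 * t)) + 0.2 * np.random.default_rng(1).normal(size=200)
w = np.full(199, 0.5)                          # per-location weights
x_hat = admm_denoise(y, w)
```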