
    Modern Regularization Methods for Inverse Problems

    Regularization methods are a key tool in the solution of inverse problems. They are used to introduce prior knowledge and allow a robust approximation of ill-posed (pseudo-) inverses. In the last two decades interest has shifted from linear to nonlinear regularization methods, even for linear inverse problems. The aim of this paper is to provide a reasonably comprehensive overview of this shift towards modern nonlinear regularization methods, including their analysis, applications and issues for future research. In particular we will discuss variational methods and techniques derived from them, since they have attracted much recent interest and link to other fields, such as image processing and compressed sensing. We further point to developments related to statistical inverse problems, multiscale decompositions and learning theory.

    Funding: Leverhulme Trust Early Career Fellowship ‘Learning from mistakes: a supervised feedback-loop for imaging applications’; Isaac Newton Trust; Cantab Capital Institute for the Mathematics of Information; EU FP7 ERC Consolidator Grant 615216 LifeInverse; German Ministry for Science and Education (BMBF) project MED4D; EPSRC grant EP/K032208/
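
    As a concrete illustration of the variational methods the survey covers, here is a minimal sketch of total-variation denoising of a 1-D signal, solved by plain gradient descent on a smoothed objective. The function name and the parameters lam, eps and the derived step size are illustrative choices, not taken from the paper.

    import numpy as np

    def tv_denoise_1d(y, lam=0.5, eps=1e-2, iters=500):
        # Variational denoising: minimise 0.5*||x - y||^2 + lam * TV_eps(x),
        # where TV_eps(x) = sum_i sqrt((x[i+1] - x[i])^2 + eps^2) is a smoothed
        # total variation, by explicit gradient descent.
        tau = 1.0 / (1.0 + 4.0 * lam / eps)     # step below 1/L of the gradient
        x = y.astype(float).copy()
        for _ in range(iters):
            d = np.diff(x)                      # forward differences D x
            w = d / np.sqrt(d**2 + eps**2)      # derivative of the smoothed |.|
            grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))  # D^T w
            x -= tau * ((x - y) + lam * grad_tv)
        return x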

    Explorations on anisotropic regularisation of dynamic inverse problems by bilevel optimisation

    We explore anisotropic regularisation methods in the spirit of [Holler & Kunisch, 14]. Based on ground truth data, we propose a bilevel optimisation strategy to compute the optimal regularisation parameters of such a model for the application of video denoising. The optimisation poses a challenge in itself, as the dependency on one of the regularisation parameters is non-linear, so that the standard existence and convergence theory does not apply. Moreover, we analyse numerical results of the proposed parameter learning strategy on three exemplary video sequences and discuss the impact of these results on the modelling of dynamic inverse problems.
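
    The bilevel structure can be sketched as follows: an inner (lower-level) denoising problem produces a reconstruction for a given parameter, and an outer (upper-level) problem picks the parameter whose reconstruction is closest to the ground truth. The quadratic regulariser and the grid search below are deliberately simple stand-ins for the paper's anisotropic model and optimisation scheme.

    import numpy as np

    def denoise(y, lam, iters=300):
        # Lower-level problem: min_x 0.5*||x - y||^2 + 0.5*lam*||Dx||^2,
        # a quadratic smoothness regulariser solved by gradient descent.
        tau = 1.0 / (1.0 + 4.0 * lam)   # step below 1/L, L = max Hessian eigenvalue
        x = y.astype(float).copy()
        for _ in range(iters):
            d = np.diff(x)
            dtd = np.concatenate(([-d[0]], d[:-1] - d[1:], [d[-1]]))  # D^T D x
            x -= tau * ((x - y) + lam * dtd)
        return x

    def learn_lambda(y, x_true, grid=np.logspace(-3, 1, 30)):
        # Upper-level problem: choose the parameter whose reconstruction is
        # closest to the ground truth, here by a simple grid search.
        losses = [np.sum((denoise(y, lam) - x_true) ** 2) for lam in grid]
        return grid[int(np.argmin(losses))]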

    Ultrashort echo time (UTE) imaging using gradient pre-equalization and compressed sensing.

    Ultrashort echo time (UTE) imaging is a well-known technique in medical MRI; however, the implementation of the sequence remains non-trivial. This paper introduces UTE for non-medical applications and outlines a method for implementing UTE to enable accurate slice selection and short acquisition times. Slice selection in UTE requires fast, accurate switching of the gradient and r.f. pulses. Here a gradient "pre-equalization" technique is used to optimize the gradient switching and achieve an effective echo time of 10 μs. In order to minimize the echo time, k-space is sampled radially. A compressed sensing approach is used to minimize the total acquisition time. Using the corrections for slice selection and acquisition along with novel image reconstruction techniques, UTE is shown to be a viable method to study samples of cork and rubber with a shorter signal lifetime than can typically be measured. Further, the compressed sensing image reconstruction algorithm is shown to provide accurate images of the samples with as little as 12.5% of the full k-space data set, potentially permitting real-time imaging of short T2(*) materials.

    HTF would like to acknowledge the financial support of the Gates-Cambridge Trust; all authors acknowledge the support of the EPSRC (EP/K008218/1). In addition, we would like to thank SoftPoint Industries Inc. for providing samples of rubber. This version is the final published version, distributed under a Creative Commons Attribution License 2.0. It can also be viewed on the publisher's website at: http://www.sciencedirect.com/science/article/pii/S1090780714001840
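
    A minimal sketch of the compressed-sensing step: reconstruct an image from undersampled k-space data by iterative soft-thresholding (ISTA), assuming for simplicity that the image itself is sparse and that the radial samples have been gridded onto a Cartesian mask. The sampling mask, penalty weight lam and iteration count are illustrative, not the paper's settings.

    import numpy as np

    def cs_reconstruct(y, mask, lam=0.05, iters=200):
        # Solve min_x 0.5*||M F x - y||^2 + lam*||x||_1 by ISTA, where F is the
        # orthonormal 2-D Fourier transform and M keeps the sampled locations.
        x = np.zeros(mask.shape, dtype=complex)
        for _ in range(iters):
            resid = mask * np.fft.fft2(x, norm="ortho") - y   # k-space misfit
            x = x - np.fft.ifft2(mask * resid, norm="ortho")  # gradient step
            mag = np.maximum(np.abs(x), 1e-12)
            x = np.maximum(1.0 - lam / mag, 0.0) * x          # complex soft-threshold
        return x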

    Choose your path wisely: gradient descent in a Bregman distance framework

    We propose an extension of a special form of gradient descent, known in the literature as linearised Bregman iteration, to a larger class of non-convex functions. We replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance, based on a proper, convex and lower semi-continuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Łojasiewicz property. Examples illustrate that features of different scales are introduced throughout the iteration, transitioning from coarse to fine. This coarse-to-fine behaviour allows the recovery of solutions of non-convex optimisation problems that are superior to those obtained with conventional gradient descent, or even projected and proximal gradient descent. The effectiveness of the linearised Bregman iteration in combination with early stopping is illustrated for the applications of parallel magnetic resonance imaging, blind deconvolution, and image classification with neural networks.
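
    For the convex special case the iteration is compact enough to sketch: gradient descent on f(x) = 0.5*||Ax - y||^2 in the Bregman distance of J(x) = mu*||x||_1 + 0.5*||x||^2 updates a subgradient of J, then maps back to the primal variable by soft-thresholding. The step size and parameters below are illustrative, not the paper's.

    import numpy as np

    def soft_shrink(z, mu):
        # Soft-thresholding: resolvent of the subdifferential of mu*||.||_1.
        return np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)

    def linearised_bregman(A, y, mu=1.0, iters=500):
        # Linearised Bregman iteration for f(x) = 0.5*||Ax - y||^2 with
        # J(x) = mu*||x||_1 + 0.5*||x||^2: large coefficients enter the iterates
        # first, so solutions build up from coarse to fine scales, and early
        # stopping acts as regularisation.
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 1/||A||^2
        v = np.zeros(A.shape[1])
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            v = v - tau * A.T @ (A @ x - y)     # subgradient (dual) update
            x = soft_shrink(v, mu)              # primal update via shrinkage
        return x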

    Learning parametrised regularisation functions via quotient minimisation

    We propose a novel strategy for the computation of adaptive regularisation functions. The general strategy consists of minimising the ratio of a parametrised regularisation function evaluated at two training signals: the numerator contains the regulariser with a desirable training signal as its argument, whereas the denominator contains the same regulariser with its argument being a training signal one wants to avoid. The rationale behind this is to adapt parametric regularisations to given training data that contain both wanted and unwanted outcomes. We discuss the numerical implementation of this minimisation problem for a specific parametrisation, and present preliminary numerical results which demonstrate that this approach is able to recover total variation as well as second-order total variation regularisation from suitable training data.

    MB and CBS acknowledge support from the EPSRC grant EP/M00483X/1 and the Leverhulme grant ’Breaking the non-convexity barrier’. GG acknowledges support from the Israel Science Foundation (grant No. 718/15) and from the Magnet program of the OCS, Israel Ministry of Economy, in the framework of the Omek Consortium. This is the author accepted manuscript; the final version is available from Wiley via http://dx.doi.org/10.1002/pamm.20161045
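
    To make the quotient idea concrete, here is a sketch with a hypothetical two-parameter regulariser mixing first- and second-order total variation. Because that regulariser is linear in the weights, the quotient is linear-fractional, and its minimum over the probability simplex sits at a vertex; the feature choice and the simplex constraint are assumptions for this illustration, not the paper's parametrisation.

    import numpy as np

    def features(u):
        # Hypothetical parametrisation: first- and second-order total variation.
        return np.array([np.sum(np.abs(np.diff(u))),
                         np.sum(np.abs(np.diff(u, n=2)))])

    def learn_regulariser(u_wanted, u_unwanted):
        # Minimise R_theta(u_wanted) / R_theta(u_unwanted) over weights theta on
        # the simplex, where R_theta(u) = theta . features(u): the learned
        # regulariser should be cheap on the wanted signal and expensive on the
        # unwanted one. A linear-fractional objective is minimised at a vertex,
        # i.e. the single feature with the smallest ratio is selected.
        a, b = features(u_wanted), features(u_unwanted)
        k = int(np.argmin(a / np.maximum(b, 1e-12)))
        theta = np.zeros_like(a, dtype=float)
        theta[k] = 1.0
        return theta

    With, say, a piecewise-constant signal as the wanted input and an oscillatory one as the unwanted input, this would typically select the total-variation weight, in the spirit of the paper's recovery of TV from suitable training data.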