
    What's in a Prior? Learned Proximal Networks for Inverse Problems

    Proximal operators are ubiquitous in inverse problems, commonly appearing as part of algorithmic strategies to regularize problems that are otherwise ill-posed. Modern deep learning models have been brought to bear for these tasks too, as in the frameworks of plug-and-play or deep unrolling, where they loosely resemble proximal operators. Yet, something essential is lost in employing these purely data-driven approaches: there is no guarantee that a general deep network represents the proximal operator of any function, nor is there any characterization of the function for which the network might provide some approximate proximal. This not only makes guaranteeing convergence of iterative schemes challenging but, more fundamentally, complicates the analysis of what these networks have learned about their training data. Herein we provide a framework to develop learned proximal networks (LPNs), prove that they provide exact proximal operators for a data-driven nonconvex regularizer, and show how a new training strategy, dubbed proximal matching, provably promotes the recovery of the log-prior of the true data distribution. Such LPNs provide general, unsupervised, expressive proximal operators that can be used for general inverse problems with convergence guarantees. We illustrate our results in a series of cases of increasing complexity, demonstrating that these models not only yield state-of-the-art performance but also provide a window into the priors learned from data.
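    For readers unfamiliar with the underlying object, the following standard definitions (not specific to the LPN parametrization above) show what a proximal operator is and where it enters a plug-and-play iteration for an inverse problem with forward operator A and measurements b:

```latex
% Proximal operator of a (possibly nonconvex) regularizer f:
\operatorname{prox}_{f}(y) \;=\; \arg\min_{x}\; \tfrac{1}{2}\,\|x - y\|_2^2 \;+\; f(x).

% Plug-and-play proximal gradient step for  \min_x \tfrac{1}{2}\|Ax - b\|_2^2 + f(x),
% with the prox replaced by a (learned) network N_\theta:
x^{k+1} \;=\; N_\theta\!\left(x^{k} - \eta\, A^{\top}\!\left(A x^{k} - b\right)\right),
\qquad N_\theta \approx \operatorname{prox}_{\eta f}.
```

    The point of the abstract above is precisely that, for an arbitrary network N_theta, no regularizer f with N_theta = prox_{eta f} need exist; that is the gap the learned proximal networks are designed to close.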

    From Symmetry to Geometry: Tractable Nonconvex Problems

    As science and engineering have become increasingly data-driven, the role of optimization has expanded to touch almost every stage of the data analysis pipeline, from signal and data acquisition to modeling and prediction. The optimization problems encountered in practice are often nonconvex. While challenges vary from problem to problem, one common source of nonconvexity is nonlinearity in the data or measurement model. Nonlinear models often exhibit symmetries, creating complicated, nonconvex objective landscapes with multiple equivalent solutions. Nevertheless, simple methods (e.g., gradient descent) often perform surprisingly well in practice. The goal of this survey is to highlight a class of tractable nonconvex problems which can be understood through the lens of symmetries. These problems exhibit a characteristic geometric structure: local minimizers are symmetric copies of a single "ground truth" solution, while other critical points occur at balanced superpositions of symmetric copies of the ground truth and exhibit negative curvature in directions that break the symmetry. This structure enables efficient methods to obtain global minimizers. We discuss examples of this phenomenon arising from a wide range of problems in imaging, signal processing, and data analysis. We highlight the key role of symmetry in shaping the objective landscape and discuss the different roles of rotational and discrete symmetries. This area is rich with observed phenomena and open problems; we close by highlighting directions for future research.
    Comment: review paper submitted to SIAM Review, 34 pages, 10 figures.
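    As a concrete, deliberately simple illustration of the phenomenon described above, the sketch below runs plain gradient descent on a generalized phase retrieval objective, whose sign symmetry (x and -x produce the same measurements) means the global minimizers come in symmetric pairs; the setup and parameters are illustrative choices, not taken from the survey:

```python
# Illustrative sketch (not from the survey): gradient descent on the phase retrieval
# objective f(x) = (1/4m) * sum_i ((a_i' x)^2 - y_i)^2, which has a sign symmetry:
# x and -x are indistinguishable, so global minimizers come in symmetric pairs.
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 200
x_true = rng.standard_normal(n)
x_true /= np.linalg.norm(x_true)            # ground truth on the unit sphere
A = rng.standard_normal((m, n))
y = (A @ x_true) ** 2                       # phaseless (squared) measurements

def grad(x):
    r = (A @ x) ** 2 - y                    # residuals of the squared measurements
    return (A.T @ (r * (A @ x))) / m        # gradient of the quartic objective

x = 0.1 * rng.standard_normal(n) / np.sqrt(n)   # small random initialization
for _ in range(3000):
    x -= 0.05 * grad(x)                     # plain gradient descent, fixed step size

# Recovery is only possible up to the symmetry, so measure error against both copies.
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
print(f"recovery error up to sign: {err:.2e}")
```

    Empirically, the iterate lands on one of the two symmetric copies despite the nonconvexity, which is the benign-landscape behavior the survey sets out to explain.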

    Low-rank and sparse reconstruction in dynamic magnetic resonance imaging via proximal splitting methods

    Dynamic magnetic resonance imaging (MRI) consists of collecting multiple MR images in time, resulting in a spatio-temporal signal. However, MRI intrinsically suffers from long acquisition times due to various constraints. This limits the full potential of dynamic MR imaging, such as obtaining the high spatial and temporal resolutions that are crucial for observing dynamic phenomena. This dissertation addresses the problem of reconstructing dynamic MR images from a limited number of samples arising from a nuclear magnetic resonance experiment. The term limited refers to the approach taken in this thesis to speed up scan time, which is based on violating the Nyquist criterion by skipping measurements that would normally be acquired in a standard MRI procedure. The resulting problem can be classified in the general framework of linear ill-posed inverse problems. This thesis shows how low-dimensional signal models, specifically low-rank and sparsity, can help in the reconstruction of dynamic images from partial measurements. The use of these models is justified by significant developments in signal recovery techniques from partial data that have emerged in recent years in signal processing. The major contributions of this thesis are the development and characterisation of fast and efficient computational tools using convex low-rank and sparse constraints via proximal gradient methods, the development and characterisation of a novel joint reconstruction–separation method via the low-rank plus sparse matrix decomposition technique, and the development and characterisation of low-rank based recovery methods in the context of dynamic parallel MRI. Finally, an additional contribution of this thesis is to formulate the various MR image reconstruction problems in the context of convex optimisation to develop algorithms based on proximal splitting methods.
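    To make the building blocks concrete, the sketch below shows the two proximal operators that typically drive low-rank plus sparse (L+S) reconstruction, together with one proximal-gradient step; the function names, the splitting of the space-time matrix, and the encoding operator E are illustrative assumptions rather than the dissertation's exact formulation:

```python
# Illustrative sketch: the proximal operators behind low-rank + sparse (L+S)
# reconstruction of a space-time matrix X (pixels x frames), plus one proximal-gradient
# step. E/Et stand for an (assumed given) undersampled encoding operator and its adjoint.
import numpy as np

def soft_threshold(X, tau):
    """prox of tau*||X||_1: entrywise soft-thresholding (promotes sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """prox of tau*||X||_*: soft-threshold the singular values (promotes low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def ls_step(L, S, d, E, Et, eta, lam_L, lam_S):
    """One proximal-gradient step for
       min_{L,S}  0.5*||E(L + S) - d||^2 + lam_L*||L||_* + lam_S*||S||_1."""
    g = Et(E(L + S) - d)                                  # gradient of the data term
    L = singular_value_threshold(L - eta * g, eta * lam_L)
    S = soft_threshold(S - eta * g, eta * lam_S)
    return L, S
```

    Iterating ls_step from L = S = 0 gives a basic joint reconstruction–separation loop; the thesis develops and characterises more refined variants of this kind of splitting, including extensions to parallel MRI.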

    Deep Learning Meets Sparse Regularization: A Signal Processing Perspective

    Deep learning has been wildly successful in practice, and most state-of-the-art machine learning methods are based on neural networks. Lacking, however, is a rigorous mathematical theory that adequately explains the remarkable performance of deep neural networks. In this article, we present a relatively new mathematical framework that provides the beginning of a deeper understanding of deep learning. This framework precisely characterizes the functional properties of neural networks that are trained to fit data. The key mathematical tools that support this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory, all techniques deeply rooted in signal processing. This framework explains the effect of weight decay regularization in neural network training, the use of skip connections and low-rank weight matrices in network architectures, the role of sparsity in neural networks, and why neural networks can perform well in high-dimensional problems.
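    One representative result from this line of work, stated informally here and only for a single hidden layer, is the equivalence between weight decay and a neuron-wise sparsity-promoting penalty, which follows from the positive homogeneity of the ReLU:

```latex
% Two-layer ReLU network: f_\theta(x) = \sum_k v_k\,\sigma(w_k^\top x), \quad \sigma(t)=\max(t,0).
% Since \sigma is positively homogeneous, the rescaling (v_k, w_k) \mapsto (v_k/\alpha, \alpha w_k)
% leaves f_\theta unchanged, and
%   \min_{\alpha>0} \tfrac{1}{2}\big(v_k^2/\alpha^2 + \alpha^2\|w_k\|_2^2\big) = |v_k|\,\|w_k\|_2.
% Hence training with weight decay,
\min_\theta \;\sum_i \ell\big(f_\theta(x_i), y_i\big) \;+\; \frac{\lambda}{2}\sum_k \big(v_k^2 + \|w_k\|_2^2\big),
% is equivalent at its minimizers to the neuron-wise sparsity-promoting problem
\min_\theta \;\sum_i \ell\big(f_\theta(x_i), y_i\big) \;+\; \lambda \sum_k |v_k|\,\|w_k\|_2 .
```

    The second penalty acts like a group lasso over neurons, driving entire units to zero; this is the sense in which weight decay functions as a transform-domain sparse regularizer in the framework described above.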

    Space adaptive and hierarchical Bayesian variational models for image restoration

    The main contribution of this thesis is the proposal of novel space-variant regularization or penalty terms motivated by a strong statistical rationale. In light of the connection between the classical variational framework and the Bayesian formulation, we focus on the design of highly flexible priors characterized by a large number of unknown parameters. The latter are automatically estimated by setting up a hierarchical modeling framework, i.e., by introducing informative or non-informative hyperpriors depending on the information at hand on the parameters. More specifically, in the first part of the thesis we focus on the restoration of natural images, introducing highly parametrized distributions to model the local behavior of the gradients in the image. The resulting regularizers hold the potential to adapt to the local smoothness, directionality and sparsity in the data. The estimation of the unknown parameters is addressed by means of non-informative hyperpriors, namely uniform distributions over the parameter domain, thus leading to the classical Maximum Likelihood approach. In the second part of the thesis, we address the problem of designing suitable penalty terms for the recovery of sparse signals. The space-variance in the proposed penalties, corresponding to a family of informative hyperpriors, namely generalized gamma hyperpriors, follows directly from the assumed independence of the components of the signal. The study of the properties of the resulting energy functionals thus leads to the introduction of two hybrid algorithms, aimed at combining the strong sparsity promotion characterizing non-convex penalty terms with the desirable guarantees of convex optimization.
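    As a rough sketch of the kind of construction described in the second part (the exact energies and parameter conventions in the thesis may differ), a conditionally Gaussian prior with componentwise variances and generalized gamma hyperpriors yields a joint MAP problem of the following form, typically tackled by alternating minimization over x and theta:

```latex
% Hierarchical model (illustrative conventions, not necessarily those of the thesis):
%   b = A x + \text{noise}, \quad \text{noise} \sim \mathcal{N}(0,\sigma^2 I),
%   x_i \mid \theta_i \sim \mathcal{N}(0,\theta_i), \quad
%   \pi(\theta_i) \propto \theta_i^{\,r\beta - 1}\exp\!\big(-(\theta_i/\vartheta_i)^r\big)
%   \ \ \text{(generalized gamma hyperprior)}.
% Joint MAP estimation of (x,\theta) then minimizes
\min_{x,\;\theta > 0}\;
\frac{1}{2\sigma^2}\,\|A x - b\|_2^2
\;+\; \sum_i \left[ \frac{x_i^2}{2\theta_i}
\;+\; \Big(\frac{\theta_i}{\vartheta_i}\Big)^{r}
\;+\; \Big(\tfrac{3}{2} - r\beta\Big)\ln \theta_i \right].
% Small \theta_i drives x_i toward zero, which is how the space-variant weights promote
% sparsity; depending on r and \beta, the effective penalty on x ranges from convex to
% strongly nonconvex.
```

    Alternating minimization over x (a quadratic problem) and theta (separable one-dimensional problems) is the kind of hybrid scheme alluded to above, pairing nonconvex sparsity promotion with tractable convex subproblems.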