
    Computational Inverse Problems

    Inverse problems typically deal with the identification of unknown quantities from indirect measurements and appear in many areas of technology, medicine, biology, finance, and econometrics. The computational solution of such problems is a very active, interdisciplinary field with close connections to optimization, control theory, differential equations, asymptotic analysis, statistics, and probability. The focus of this workshop was on hybrid methods, model reduction, regularization in Banach spaces, and statistical approaches.
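    To make the setting concrete, here is a minimal sketch of the prototypical computational inverse problem: recovering an unknown x from indirect, noisy measurements y = Ax + noise via Tikhonov regularization. The operator A and all parameter values below are illustrative assumptions, not taken from the workshop report.

        import numpy as np

        rng = np.random.default_rng(0)

        # Ill-conditioned forward operator A (a smoothing kernel) and a ground truth x.
        n = 50
        A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
        x_true = np.sin(np.linspace(0, 3 * np.pi, n))

        # Indirect measurements degraded by noise.
        y = A @ x_true + 0.01 * rng.standard_normal(n)

        # Tikhonov regularization: minimize ||Ax - y||^2 + alpha * ||x||^2,
        # with closed-form solution (A^T A + alpha I)^{-1} A^T y.
        alpha = 1e-3
        x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))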

    Deep learning model-aware regularization with applications to inverse problems

    There are various inverse problems, including reconstruction problems arising in medical imaging, where one is aware of the forward operator that maps variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in the deep learning approaches increasingly used to solve inverse problems. In this paper, we provide one such way via an analysis of the generalisation error of deep learning approaches to inverse problems. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients associated with the learning problem, such as the complexity of the data space, the size of the training set, the Jacobian of the deep neural network and the Jacobian of the composition of the forward operator with the neural network. We then propose a ‘plug-and-play’ regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We also introduce a new method for tightly upper bounding the Jacobians of the relevant operators that is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in classical compressed sensing tasks, image super-resolution problems and accelerated Magnetic Resonance Imaging (MRI) setups.
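    The paper's exact bound and regulariser are not reproduced here, but the core idea of penalising the Jacobian of the forward operator composed with the network can be sketched with stochastic vector-Jacobian products. The PyTorch fragment below is a hedged illustration under that reading; the function names and the Frobenius-norm surrogate are assumptions, not the authors' tight upper bound.

        import torch

        def jacobian_penalty(net, forward_op, x, n_probes=1):
            """Stochastic estimate of ||J_{A o f}(x)||_F^2 via vector-Jacobian products.

            net        -- reconstruction network f (hypothetical)
            forward_op -- known forward operator A, applied to the network output
            """
            x = x.detach().requires_grad_(True)
            out = forward_op(net(x))              # composition A o f
            penalty = 0.0
            for _ in range(n_probes):
                v = torch.randn_like(out)         # random probe vector
                (g,) = torch.autograd.grad(out, x, grad_outputs=v, create_graph=True)
                penalty = penalty + (g ** 2).sum()   # E_v ||v^T J||^2 = ||J||_F^2
            return penalty / n_probes

        # Hypothetical training objective: data fidelity plus the model-aware term.
        # loss = mse(net(y_noisy), x_true) + lam * jacobian_penalty(net, A, y_noisy)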

    Sparse variational regularization for visual motion estimation

    The computation of visual motion is a key component in numerous computer vision tasks such as object detection, visual object tracking and activity recognition. Despite extensive research effort, efficient handling of motion discontinuities, occlusions and illumination changes still remains elusive in visual motion estimation. The work presented in this thesis utilizes variational methods to handle the aforementioned problems because these methods allow the integration of various mathematical concepts into a single energy minimization framework. This thesis applies the concepts from signal sparsity to the variational regularization for visual motion estimation. The regularization is designed in such a way that it handles motion discontinuities and can detect object occlusions.
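    A full flow estimator is beyond a short sketch, but the discontinuity-preserving principle behind sparse variational regularization can be shown on a 1D signal: an l1 penalty on the gradient keeps jumps sharp where a quadratic penalty would blur them. The subgradient-descent solver below is a minimal illustrative stand-in, not the method developed in the thesis.

        import numpy as np

        def tv_denoise_1d(y, lam=1.0, n_iter=500, step=0.2):
            """Minimize 0.5 * ||u - y||^2 + lam * ||D u||_1 by subgradient descent.

            The l1 penalty on the forward differences D u preserves jumps,
            the same mechanism used for motion boundaries in flow estimation.
            """
            u = y.copy()
            for _ in range(n_iter):
                du = np.diff(u)                      # forward differences D u
                sub = np.zeros_like(u)
                sub[:-1] -= lam * np.sign(du)        # subgradient of lam * ||D u||_1
                sub[1:] += lam * np.sign(du)
                u -= step * ((u - y) + sub)
            return u

        # A piecewise-constant profile (a sharp "motion discontinuity") plus noise.
        rng = np.random.default_rng(1)
        y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.standard_normal(100)
        u = tv_denoise_1d(y, lam=0.05)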

    Lagrangian methods for the regularization of discrete ill-posed problems

    In many science and engineering applications, the discretization of linear ill-posed problems gives rise to large ill-conditioned linear systems with right-hand side degraded by noise. The solution of such linear systems requires the solution of a minimization problem with one quadratic constraint depending on an estimate of the variance of the noise. This strategy is known as regularization. In this work, we propose to use Lagrangian methods for the solution of the noise-constrained regularization problem. Moreover, we introduce a new method based on Lagrangian methods and the discrepancy principle. We present numerical results on numerous test problems, including image restoration and medical image denoising. Our results indicate that the proposed strategies are effective and efficient in computing good regularized solutions of ill-conditioned linear systems as well as the corresponding regularization parameters. The proposed methods are therefore a promising approach to dealing with ill-posed problems.
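    The paper's Lagrangian iterations are not reproduced here, but the discrepancy principle they build on is easy to sketch: choose the Tikhonov parameter so that the residual norm matches the noise level. The SVD-plus-bisection routine below is a minimal sketch of that principle (assuming a full-rank A and data in its range), not the algorithm proposed in the paper.

        import numpy as np

        def tikhonov_discrepancy(A, b, noise_norm, n_bisect=60):
            """Pick the Tikhonov parameter mu by the discrepancy principle.

            Finds mu with ||A x_mu - b|| ~ noise_norm, where x_mu minimizes
            ||A x - b||^2 + mu ||x||^2. The residual norm grows monotonically
            with mu, so a (logarithmic) bisection applies.
            """
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b                              # assumes b lies in range(A)

            def residual(mu):
                return np.linalg.norm(mu / (s ** 2 + mu) * beta)

            lo, hi = 1e-12, 1e12
            for _ in range(n_bisect):
                mid = np.sqrt(lo * hi)                  # bisect on a log scale
                lo, hi = (mid, hi) if residual(mid) < noise_norm else (lo, mid)
            mu = np.sqrt(lo * hi)
            x = Vt.T @ (s / (s ** 2 + mu) * beta)
            return x, mu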

    Regularization graphs—a unified framework for variational regularization of inverse problems

    We introduce and study a mathematical framework for a broad class of regularization functionals for ill-posed inverse problems: regularization graphs. Regularization graphs allow the construction of functionals using linear operators and convex functionals as building blocks, assembled by means of operators that can be seen as generalizations of classical infimal convolution operators. This class of functionals exhaustively covers existing regularization approaches and is flexible enough to craft new ones in a simple and constructive way. We provide well-posedness and convergence results for the proposed class of functionals in a general setting. Further, we consider a bilevel optimization approach to learn optimal weights for such regularization graphs from training data. We demonstrate that this approach is capable of optimizing the structure and the complexity of a regularization graph, allowing it, for example, to automatically select a combination of regularizers that is optimal for the given training data.
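    A classical example of the infimal-convolution building block mentioned above: the infimal convolution of the absolute value with a scaled quadratic yields the Huber function, i.e. a new regularizer assembled from two simpler ones. The brute-force grid check below is an illustrative aside, not taken from the paper.

        import numpy as np

        def infimal_convolution(f, g, t, grid):
            """Discrete infimal convolution (f # g)(t) = min_s f(s) + g(t - s)."""
            return min(f(s) + g(t - s) for s in grid)

        # Building blocks: absolute value and a scaled quadratic.
        sigma = 0.5
        f = abs
        g = lambda r: r ** 2 / (2 * sigma)

        # Their infimal convolution is the Huber function: quadratic near zero,
        # linear in the tails -- a regularizer assembled from simpler pieces.
        def huber(t, sigma):
            return t ** 2 / (2 * sigma) if abs(t) <= sigma else abs(t) - sigma / 2

        grid = np.linspace(-5.0, 5.0, 2001)
        for t in (0.1, 0.4, 1.0, 3.0):
            print(t, infimal_convolution(f, g, t, grid), huber(t, sigma))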

    Block-sparse beamforming for spatially extended sources in a Bayesian formulation

    Direction-of-arrival (DOA) estimation refers to the localization of sound sources on an angular grid from noisy measurements of the associated wavefield with an array of sensors. For accurate localization, the number of angular look directions is much larger than the number of sensors; hence, the problem is underdetermined and requires regularization. Traditional methods use an L2-norm regularizer, which promotes minimum-power (smooth) solutions, while regularizing with the L1-norm promotes sparsity. Sparse signal reconstruction improves the resolution in DOA estimation in the presence of a few point sources, but cannot capture spatially extended sources. The DOA estimation problem is formulated in a Bayesian framework where regularization is imposed through prior information on the source spatial distribution, which is then reconstructed as the maximum a posteriori estimate. A composite prior is introduced, which simultaneously promotes a piecewise-constant profile and sparsity in the solution. Simulations and experimental measurements show that this choice of regularization provides high-resolution DOA estimation in a general framework, i.e., in the presence of spatially extended sources.
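    As a minimal sketch of the sparse half of this formulation, the fragment below solves an L1-regularized single-snapshot beamforming problem on an angular grid with ISTA; the uniform linear array, grid and parameters are assumptions for illustration, and the paper's composite prior would add a piecewise-constant term on top of the sparsity penalty.

        import numpy as np

        def sparse_doa(Y, A, lam=0.1, n_iter=300):
            """L1-regularized DOA estimation via ISTA (single-snapshot sketch).

            Y -- sensor measurements (n_sensors,)
            A -- steering matrix (n_sensors, n_angles), one column per look direction
            Solves min_x 0.5 * ||A x - Y||^2 + lam * ||x||_1.
            """
            step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
            x = np.zeros(A.shape[1], dtype=complex)
            for _ in range(n_iter):
                g = x - step * (A.conj().T @ (A @ x - Y))     # gradient step
                x = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - step * lam, 0.0)
            return x

        # Hypothetical uniform linear array at half-wavelength spacing.
        rng = np.random.default_rng(0)
        n_sensors, angles = 8, np.linspace(-90, 90, 181)
        d = np.arange(n_sensors)[:, None]
        A = np.exp(1j * np.pi * d * np.sin(np.deg2rad(angles)))

        # Two point sources plus noise, reconstructed sparsely on the grid.
        Y = A[:, 60] + 0.5 * A[:, 120] + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
        x_hat = sparse_doa(Y, A, lam=0.2)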

    Uniformly convex neural networks and non-stationary iterated network Tikhonov (iNETT) method

    We propose a non-stationary iterated network Tikhonov (iNETT) method for the solution of ill-posed inverse problems. The iNETT employs deep neural networks to build a data-driven regularizer, and it avoids the difficult task of estimating the optimal regularization parameter. To achieve the theoretical convergence of iNETT, we introduce uniformly convex neural networks to build the data-driven regularizer. Rigorous theories and detailed algorithms are proposed for the construction of convex and uniformly convex neural networks. In particular, given a general neural network architecture, we prescribe sufficient conditions to achieve a trained neural network which is component-wise convex or uniformly convex; moreover, we provide concrete examples of realizing convexity and uniform convexity in the modern U-net architecture. With the tools of convex and uniformly convex neural networks, the iNETT algorithm is developed and a rigorous convergence analysis is provided. Lastly, we show applications of the iNETT algorithm in 2D computerized tomography, where numerical examples illustrate the efficacy of the proposed algorithm.
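    One standard way to obtain a network that is convex in its input, in the spirit of the sufficient conditions described above, is the well-known input-convex construction: non-negative weights on all hidden layers combined with convex, nondecreasing activations. The PyTorch sketch below illustrates that generic recipe only; it is not the paper's uniformly convex U-net construction, and the architecture details are assumptions.

        import torch
        import torch.nn as nn

        class ConvexNet(nn.Module):
            """Scalar-valued network convex in its input: hidden layers carry
            non-negative weights and a convex, nondecreasing activation
            (softplus), so each layer preserves convexity of the composition."""

            def __init__(self, dim, width=64):
                super().__init__()
                self.fc1 = nn.Linear(dim, width)   # first layer may carry signed weights
                self.fc2 = nn.Linear(width, width)
                self.fc3 = nn.Linear(width, 1)
                self.act = nn.Softplus()

            def forward(self, x):
                # Clamp hidden-layer weights so each layer is a non-negative
                # combination of convex functions of x, which stays convex.
                for layer in (self.fc2, self.fc3):
                    layer.weight.data.clamp_(min=0.0)
                h = self.act(self.fc1(x))
                h = self.act(self.fc2(h))
                return self.fc3(h)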

    Image restoration via the adaptive TVp regularization

    Preserving structures in image restoration is very important and is achieved here by coupling local image information with the proposed model. In this paper we propose a local self-adaptive ℓp-regularization model for p ∈ (0, 2) based on the total variation scheme, where the choice of p depends on the local structures described by the eigenvalues of the structure tensor. Since the proposed model, like the classic ℓp problem, unifies two classes of optimization problems (the nonconvex and nonsmooth problem when p ∈ (0, 1), and the convex and smooth problem when p ∈ (1, 2)), it is generally challenging to find a ready algorithmic framework to solve it. Here we propose a new and robust numerical method that couples the half-quadratic scheme with the alternating direction method of multipliers (ADMM). The convergence of the proposed algorithm is established, and the numerical experiments illustrate the possible advantages of the proposed model and numerical methods over some existing variational-based models and methods.
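    The structure-tensor-driven choice of p can be sketched as follows: smooth the outer products of the image gradient, take the eigenvalues of the resulting 2x2 tensor at every pixel, and map high anisotropy (edges) to a small, sparsity-promoting p and flat regions to a larger, smooth p. The specific eigenvalue-to-p mapping below is a hypothetical choice for illustration, not the rule used in the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def local_p_map(img, sigma=1.5, p_min=0.5, p_max=1.8):
            """Per-pixel exponent p from structure-tensor eigenvalues."""
            gx, gy = sobel(img, axis=1), sobel(img, axis=0)
            # Structure tensor: Gaussian-smoothed outer products of the gradient.
            jxx = gaussian_filter(gx * gx, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            # Eigenvalues of the symmetric 2x2 tensor at every pixel.
            tr, det = jxx + jyy, jxx * jyy - jxy ** 2
            disc = np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))
            lam1, lam2 = tr / 2 + disc, tr / 2 - disc
            # Coherence in [0, 1]: near 1 at edges, near 0 in flat regions.
            coherence = ((lam1 - lam2) / (lam1 + lam2 + 1e-12)) ** 2
            return p_max - (p_max - p_min) * coherence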

    A machine learning approach to optimal Tikhonov regularization I: Affine manifolds

    Despite a variety of available techniques, such as the discrepancy principle, generalized cross validation, and the balancing principle, the proper choice of the regularization parameter for inverse problems remains one of the relevant challenges in the field. The main difficulty lies in constructing an efficient rule that allows one to compute the parameter from given noisy data without relying on a priori knowledge of the solution or of the noise level, or on manual input. In this paper, we propose a novel method based on a statistical learning theory framework to approximate the high-dimensional function which maps noisy data to the optimal Tikhonov regularization parameter. After an offline phase where we observe samples of the noisy data-to-optimal parameter mapping, an estimate of the optimal regularization parameter is computed directly from noisy data. Our assumptions are that ground truth solutions of the inverse problem are statistically distributed in a concentrated manner on (lower-dimensional) linear subspaces and that the noise is sub-Gaussian. We show that for our method to be efficient, the number of previously observed samples of the noisy data-to-optimal parameter mapping needs to scale at most linearly with the dimension of the solution subspace. We provide explicit error bounds on the approximation accuracy, from noisy data, of unobserved optimal regularization parameters and ground truth solutions. Even though the results are of a more theoretical nature, we present a recipe for the practical implementation of the approach. We conclude by presenting numerical experiments verifying our theoretical results and illustrating the superiority of our method with respect to several state-of-the-art approaches in terms of accuracy or computational time for solving inverse problems of various types.
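    The offline/online split can be sketched directly: sample ground truths from a fixed low-dimensional subspace, record the oracle Tikhonov parameter for each noisy observation, and at run time predict the parameter for new data from the stored pairs. The nearest-neighbour predictor below is a crude illustrative stand-in for the paper's statistical-learning estimator, and the whole setup (operator, subspace dimension, noise level) is assumed for the example.

        import numpy as np

        def optimal_alpha(A, y, x_true, alphas):
            """Oracle parameter: the alpha minimizing the true reconstruction error."""
            errs = [np.linalg.norm(np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]),
                                                   A.T @ y) - x_true)
                    for a in alphas]
            return alphas[int(np.argmin(errs))]

        rng = np.random.default_rng(0)
        n, alphas = 20, np.logspace(-6, 1, 50)
        A = rng.standard_normal((n, n)) / np.sqrt(n)
        B = np.linalg.qr(rng.standard_normal((n, 3)))[0]   # fixed 3-dim solution subspace

        # Offline phase: observe samples of the noisy-data-to-optimal-parameter map.
        train_Y, train_a = [], []
        for _ in range(200):
            x = B @ rng.standard_normal(3)
            y = A @ x + 0.05 * rng.standard_normal(n)
            train_Y.append(y)
            train_a.append(optimal_alpha(A, y, x, alphas))
        train_Y = np.array(train_Y)

        # Online phase: predict the parameter for unseen data by nearest neighbour.
        def predict_alpha(y_new):
            return train_a[int(np.argmin(np.linalg.norm(train_Y - y_new, axis=1)))]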