393 research outputs found

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved effective in many problems routinely encountered in imaging sciences, statistics and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors include, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
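    As a concrete illustration of point (iii), here is a minimal sketch of the forward-backward proximal splitting scheme for its best-known special case, the $\ell^1$-regularized least-squares (Lasso) problem; the function names and test data are invented for this example and are not from the chapter itself.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Forward step: explicit gradient descent on the smooth data-fidelity term.
    Backward step: proximal map of the l1 regularizer (soft thresholding).
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                          # forward step
        x = soft_threshold(x - step * grad, step * lam)   # backward step
    return x

# Example: sparse recovery from noisy random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 40, 120]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.05 * rng.standard_normal(60)
x_hat = forward_backward(A, y, lam=0.1)
```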

    Solving Inverse Conductivity Problems In Doubly Connected Domains By the Homogenization Functions of Two Parameters

    In this paper, we make the first attempt to derive a family of two-parameter homogenization functions in a doubly connected domain, which then serve as bases for trial solutions of the inverse conductivity problems. The expansion coefficients are obtained by imposing an extra boundary condition on the inner boundary, which results in a linear system for the interpolation of the solution in a weighted Sobolev space. We then retrieve the space- or temperature-dependent conductivity function by solving a linear system, obtained by applying the collocation method to the nonlinear elliptic equation after inserting the solution. Although the required data are quite economical, highly accurate solutions of the space-dependent and temperature-dependent conductivity functions, the Robin coefficient function, and the source function are obtained. Significantly, these nonlinear inverse problems can be solved directly, without iteration and without solving nonlinear equations. The proposed method achieves accurate results with high efficiency even when large noise is imposed on the input data.
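    The paper's two-parameter homogenization functions are specific to the doubly connected (annular) domain, but the coefficient-fitting step it describes is, in essence, a collocation least-squares solve. A minimal sketch under that reading, with a placeholder monomial basis standing in for the actual homogenization functions:

```python
import numpy as np

# Placeholder basis for illustration only; the paper's two-parameter
# homogenization functions, tailored to the annular domain, would go here.
def basis(j, x):
    return x ** j

def fit_trial_solution(x_bnd, u_bnd, n_basis=8):
    """Solve for expansion coefficients c so that sum_j c[j] * basis(j, .)
    matches the prescribed boundary data at the collocation points x_bnd."""
    Phi = np.column_stack([basis(j, x_bnd) for j in range(n_basis)])
    # Least-squares solve of the (possibly ill-conditioned) collocation system
    c, *_ = np.linalg.lstsq(Phi, u_bnd, rcond=None)
    return c
```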

    7. Minisymposium on Gauss-type Quadrature Rules: Theory and Applications

    Variable Selection and Estimation in Multivariate Functional Linear Regression via the LASSO

    In a growing number of applications, a quantity of interest may depend on several covariates, at least one of which is infinite-dimensional (e.g. a curve). To select the relevant covariates in this context, we propose an adaptation of the Lasso method. Two estimation methods are defined. The first consists in the minimisation of a criterion inspired by classical Lasso inference under group sparsity (Yuan and Lin, 2006; Lounici et al., 2011) over the whole multivariate functional space H. The second minimises the same criterion over a finite-dimensional subspace of H whose dimension is chosen by a penalized least-squares method based on the work of Barron et al. (1999). Sparsity oracle inequalities are proven for both fixed and random designs in our infinite-dimensional context. To compute the solutions of both criteria, we propose a coordinate-wise descent algorithm inspired by the glmnet algorithm (Friedman et al., 2007). A numerical study on simulated and experimental datasets illustrates the behavior of the estimators.
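    For orientation, here is a minimal sketch of the kind of coordinate-wise (block coordinate) descent the authors build on, for the finite-dimensional group Lasso of Yuan and Lin (2006) rather than the functional setting of the paper. It assumes each group's design matrix has orthonormal columns, so every block update has a closed form; all names are illustrative.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Groupwise shrinkage: the proximal map of t * ||.||_2."""
    nv = np.linalg.norm(v)
    return np.zeros_like(v) if nv <= t else (1.0 - t / nv) * v

def group_lasso_bcd(X_groups, y, lam, n_iter=200):
    """Block coordinate descent for
        min_beta 0.5*||y - sum_g X_g beta_g||^2 + lam * sum_g ||beta_g||_2,
    assuming each X_g satisfies X_g.T @ X_g = I (as in Yuan and Lin, 2006),
    so each block minimization reduces to groupwise soft thresholding."""
    betas = [np.zeros(X.shape[1]) for X in X_groups]
    residual = y.copy()
    for _ in range(n_iter):
        for g, X in enumerate(X_groups):
            r_g = residual + X @ betas[g]      # partial residual for group g
            new_b = group_soft_threshold(X.T @ r_g, lam)
            residual = r_g - X @ new_b
            betas[g] = new_b
    return betas   # a group is deselected when its block is exactly zero
```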

    Variational regularization theory for sparsity promoting wavelet regularization

    In many scientific and industrial applications, the quantity of interest is not what is directly observed but a parameter that has a causal effect on experimental measurements. To obtain the desired unknown quantity, one must apply an inverse transform to the data. The main challenge in such an inverse problem is that these unknowns may not depend continuously on the observations, and as a result, the effects of noise in the data are magnified in the inverted results. To obtain stable approximations of the desired parameters from noisy observations, regularization methods are used. This thesis contributes to the mathematical analysis of generalized Tikhonov regularization, and in particular sparsity-promoting Tikhonov regularization, which are popular examples of regularization methods. Using variational source conditions as an intermediate step, order-optimal upper bounds on the reconstruction error are shown for sparsity-promoting wavelet regularization under smoothness assumptions given by Besov spaces. The framework includes practically relevant forward operators, such as the Radon transform, and some nonlinear inverse problems in differential equations with distributed measurements. In numerical simulations for a parameter identification problem in a differential equation, it is demonstrated that these theoretical results correctly predict convergence rates for piecewise smooth unknown coefficients.
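    For intuition, here is a minimal sketch of sparsity-promoting wavelet regularization in the simplest setting, where the forward operator is the identity (denoising); for operators such as the Radon transform, the same thresholding step would appear inside an iterative scheme. The sketch uses the PyWavelets library, and the wavelet and parameter choices are illustrative, not taken from the thesis.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_sparse_denoise(y, wavelet="db4", level=4, lam=0.1):
    """For an orthonormal wavelet transform W, the minimizer of
        0.5*||u - y||^2 + lam*||W u||_1
    is obtained by soft thresholding the wavelet coefficients of y.
    The coarse approximation coefficients are left unthresholded."""
    coeffs = pywt.wavedec(y, wavelet, level=level)
    thresholded = [coeffs[0]] + [pywt.threshold(c, lam, mode="soft")
                                 for c in coeffs[1:]]
    return pywt.waverec(thresholded, wavelet)

# Example: recover a piecewise smooth signal from noisy samples
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024)
u_true = np.where(t < 0.5, np.sin(4 * np.pi * t), 1.0 + 0.5 * t)
u_hat = wavelet_sparse_denoise(u_true + 0.1 * rng.standard_normal(t.size))
```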