
    Local Analysis of Inverse Problems: Hölder Stability and Iterative Reconstruction

    We consider a class of inverse problems defined by a nonlinear map from parameter or model functions to the data. We assume that solutions exist. The space of model functions is a Banach space which is smooth and uniformly convex; however, the data space can be an arbitrary Banach space. We study sequences of parameter functions generated by a nonlinear Landweber iteration and conditions under which these converge strongly, locally, to the solutions within an appropriate distance. We express the conditions for convergence in terms of Hölder stability of the inverse maps, which ties naturally to the analysis of inverse problems.
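    A minimal numerical sketch of the nonlinear Landweber update x_{k+1} = x_k - step * F'(x_k)^* (F(x_k) - y) described above, written for a finite-dimensional Hilbert-space setting (the paper's Banach-space analysis uses duality mappings, omitted here); the forward map F, its Jacobian, and the stopping heuristic are illustrative assumptions, not taken from the paper:

    import numpy as np

    def landweber(F, F_jac, y, x0, step=0.1, n_iter=200):
        # Nonlinear Landweber iteration: x <- x - step * F'(x)^T (F(x) - y).
        x = x0.astype(float).copy()
        for _ in range(n_iter):
            residual = F(x) - y
            x = x - step * F_jac(x).T @ residual
            # In practice one stops early via a discrepancy principle,
            # e.g. when ||residual|| <= tau * noise_level.
        return x

    # Toy example: a mildly nonlinear perturbation of a linear map.
    A = np.array([[2.0, 0.5], [0.3, 1.5]])
    F = lambda x: A @ x + 0.1 * x**3
    F_jac = lambda x: A + np.diag(0.3 * x**2)
    x_true = np.array([1.0, -0.5])
    x_rec = landweber(F, F_jac, F(x_true), np.zeros(2))

    The Hölder stability condition analyzed in the paper is what guarantees local convergence of such iterates; the sketch only implements the update rule itself.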

    Zero-Convex Functions, Perturbation Resilience, and Subgradient Projections for Feasibility-Seeking Methods

    The convex feasibility problem (CFP) is at the core of the modeling of many problems in various areas of science. Subgradient projection methods are important tools for solving the CFP because they enable the use of subgradient calculations instead of orthogonal projections onto the individual sets of the problem. Working in a real Hilbert space, we show that the sequential subgradient projection method is perturbation resilient. By this we mean that, under appropriate conditions, the sequence generated by the method converges weakly, and sometimes also strongly, to a point in the intersection of the given subsets of the feasibility problem, despite certain perturbations which are allowed in each iterative step. Unlike previous works on solving the convex feasibility problem, the involved functions, which induce the feasibility problem's subsets, need not be convex. Instead, we allow them to belong to a wider and richer class of functions satisfying a weaker condition that we call "zero-convexity". This class, which is introduced and discussed here, holds promise for solving optimization problems in various areas, especially in non-smooth and non-convex optimization. The relevance of this study to approximate minimization and to the recent superiorization methodology for constrained optimization is explained.
    Comment: Mathematical Programming Series A, accepted for publication
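    A minimal sketch of the sequential (cyclic) subgradient projection method for the feasibility problem "find x with g_i(x) <= 0 for all i", without the perturbations or the zero-convex generality studied in the paper; the toy constraints below are illustrative assumptions:

    import numpy as np

    def cyclic_subgradient_projection(gs, subgrads, x0, lam=1.0, n_sweeps=200, tol=1e-10):
        # Sweep cyclically over the constraints; whenever g_i(x) > 0,
        # take a relaxed subgradient projection step toward {g_i <= 0}.
        x = x0.astype(float).copy()
        for _ in range(n_sweeps):
            moved = False
            for g, sg in zip(gs, subgrads):
                val = g(x)
                if val > tol:
                    s = sg(x)
                    x = x - lam * val / (s @ s) * s
                    moved = True
            if not moved:  # x is (numerically) in every set
                break
        return x

    # Toy feasibility problem: unit disk intersected with a half-space in R^2.
    gs = [lambda x: x @ x - 1.0,          # ||x||^2 - 1 <= 0
          lambda x: x[0] + x[1] - 0.5]    # x1 + x2 <= 0.5
    subgrads = [lambda x: 2.0 * x,
                lambda x: np.array([1.0, 1.0])]
    x_feas = cyclic_subgradient_projection(gs, subgrads, np.array([3.0, 3.0]))

    The appeal of the method, as the abstract notes, is that each step needs only a subgradient evaluation rather than an exact orthogonal projection onto each set.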

    Elastic-Net Regularization in Learning Theory

    Within the framework of statistical learning theory we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression where we allow the response variable to be vector-valued and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed by Zou and Hastie.
    Comment: 32 pages, 3 figures
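    An iterative soft-thresholding sketch for the elastic-net problem min_b ||Xb - y||^2 + lam1*||b||_1 + lam2*||b||^2, in the spirit of the thresholding algorithm the abstract mentions; this is a generic proximal-gradient version, not necessarily the authors' exact iteration, and the step size and parameter choices are illustrative assumptions:

    import numpy as np

    def soft_threshold(v, t):
        # Componentwise soft-thresholding operator S_t(v).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def elastic_net_ista(X, y, lam1=0.1, lam2=0.1, n_iter=500):
        # Proximal gradient: gradient step on the smooth part
        # ||Xb - y||^2 + lam2*||b||^2, then soft-threshold for the l1 term.
        n, p = X.shape
        L = 2.0 * (np.linalg.norm(X, 2) ** 2 + lam2)  # Lipschitz constant of the gradient
        b = np.zeros(p)
        for _ in range(n_iter):
            grad = 2.0 * X.T @ (X @ b - y) + 2.0 * lam2 * b
            b = soft_threshold(b - grad / L, lam1 / L)
        return b

    The soft-thresholding step is what produces exact zeros in the estimated coefficient vector, which is why such schemes are natural for the variable/feature selection the abstract discusses.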