
    Adaptive complexity regularization for linear inverse problems

    We tackle the problem of building adaptive estimation procedures for ill-posed inverse problems. For general regularization methods depending on tuning parameters, we construct a penalized method that selects the optimal smoothing sequence without prior knowledge of the regularity of the function to be estimated. We provide oracle inequalities and optimal rates of convergence for such estimators. This penalized approach is applied to Tikhonov regularization and to regularization by projection.
    Comment: Published at http://dx.doi.org/10.1214/07-EJS115 in the Electronic Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of Mathematical Statistics (http://www.imstat.org)
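
    The paper's adaptive penalty is problem-specific and is not reproduced here, but the two estimator families it is applied to are easy to write down as spectral filters. Below is a minimal NumPy sketch on a discretized problem; the operator T (an integration-like smoothing matrix), the true function, and the noise level are illustrative assumptions.

```python
# Tikhonov regularization and regularization by projection (truncated SVD)
# as spectral filters. All problem data below are illustrative assumptions;
# the paper's penalized parameter selection is not implemented here.
import numpy as np

rng = np.random.default_rng(0)
n = 100
T = np.tril(np.ones((n, n))) / n            # smoothing (integration-like) operator
f_true = np.sin(np.linspace(0, np.pi, n))   # unknown function to recover
sigma = 0.01
y = T @ f_true + sigma * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(T)
c = U.T @ y                                 # data coefficients in the SVD basis

def tikhonov(alpha):
    # f_alpha = sum_i s_i/(s_i^2 + alpha) <y, u_i> v_i
    return Vt.T @ (s / (s**2 + alpha) * c)

def projection(k):
    # regularization by projection: keep only the k largest singular components
    filt = np.zeros_like(s)
    filt[:k] = 1.0 / s[:k]
    return Vt.T @ (filt * c)

for alpha in [1e-6, 1e-4, 1e-2]:
    err = np.linalg.norm(tikhonov(alpha) - f_true)
    print(f"Tikhonov alpha={alpha:g}: error {err:.3f}")
print(f"Projection k=10: error {np.linalg.norm(projection(10) - f_true):.3f}")
```

    Both estimators are governed by a single tuning parameter (alpha or the truncation level k); the paper's contribution is a penalized criterion that selects it adaptively, without knowing the regularity of f.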

    Empirical risk minimization as parameter choice rule for general linear regularization methods

    We consider the statistical inverse problem of recovering $f$ from noisy measurements $Y = Tf + \sigma\xi$, where $\xi$ is Gaussian white noise and $T$ a compact operator between Hilbert spaces. Considering general reconstruction methods of the form $\hat f_\alpha = q_\alpha(T^*T)T^*Y$ with an ordered filter $q_\alpha$, we investigate the choice of the regularization parameter $\alpha$ by minimizing an unbiased estimate of the predictive risk $\mathbb{E}[\|Tf - T\hat f_\alpha\|^2]$. The corresponding parameter $\alpha_{\mathrm{pred}}$ and its usage are well known in the literature, but oracle inequalities and optimality results in this general setting are unknown. We prove a (generalized) oracle inequality, which relates the direct risk $\mathbb{E}[\|f - \hat f_{\alpha_{\mathrm{pred}}}\|^2]$ to the oracle prediction risk $\inf_{\alpha>0}\mathbb{E}[\|Tf - T\hat f_\alpha\|^2]$. From this oracle inequality we conclude that the investigated parameter choice rule is of optimal order in the minimax sense. Finally, we present numerical simulations which support the order optimality of the method and the quality of the parameter choice in finite sample situations.
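
    For a concrete instance of this parameter choice, take the Tikhonov filter $q_\alpha(\lambda) = 1/(\lambda+\alpha)$, whose spectral "hat" values are $h_i = s_i^2/(s_i^2+\alpha)$. A Mallows-type criterion $\|Y - T\hat f_\alpha\|^2 + 2\sigma^2\sum_i h_i - n\sigma^2$ is then unbiased for the predictive risk, and $\alpha_{\mathrm{pred}}$ minimizes it over a grid. The sketch below assumes a discretized operator, a known noise level, and illustrative problem data.

```python
# Empirical risk minimization for alpha: minimize an unbiased (Mallows-type)
# estimate of the prediction risk E||Tf - T f_alpha||^2 over a grid of alpha.
# Operator, true function, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 200
T = np.tril(np.ones((n, n))) / n
f_true = np.linspace(0, 1, n) ** 2
sigma = 0.05
y = T @ f_true + sigma * rng.standard_normal(n)

U, s, Vt = np.linalg.svd(T)
c = U.T @ y

def pred_risk_estimate(alpha):
    h = s**2 / (s**2 + alpha)               # spectral values of T q_alpha(T*T) T*
    residual = np.sum(((1 - h) * c) ** 2)   # ||y - T f_alpha||^2
    return residual + 2 * sigma**2 * np.sum(h) - n * sigma**2

alphas = np.logspace(-8, 0, 60)
alpha_pred = min(alphas, key=pred_risk_estimate)
f_hat = Vt.T @ (s / (s**2 + alpha_pred) * c)
print(f"alpha_pred = {alpha_pred:.2e}, "
      f"direct error ||f - f_hat|| = {np.linalg.norm(f_hat - f_true):.3f}")
```

    The point of the paper's oracle inequality is exactly the gap this sketch exposes: $\alpha_{\mathrm{pred}}$ is tuned for the prediction risk, yet the quantity one ultimately cares about is the direct risk printed in the last line.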

    On the filtering effect of iterative regularization algorithms for linear least-squares problems

    Many real-world applications are addressed through a linear least-squares formulation whose solution is computed by an iterative approach. A large number of studies in the optimization field have sought the fastest methods for reconstructing the solution, involving choices of adaptive parameters and scaling matrices. However, in the presence of an ill-conditioned model and real data, the need for a regularized solution instead of the least-squares one shifts the focus towards iterative algorithms that combine fast execution with stable behaviour with respect to the restoration error. In this paper we analyze some classical and recent gradient approaches for the linear least-squares problem by looking at how they filter the singular values, showing in particular the effects of scaling matrices and non-negativity constraints on recovering the correct filters of the solution.
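
    To make the filtering viewpoint concrete: running the plain Landweber (gradient) iteration $f_{k+1} = f_k + \tau T^*(y - Tf_k)$ from $f_0 = 0$ applies the filter $\phi_k(s) = 1 - (1-\tau s^2)^k$ to each singular component of the naive inverse. The sketch below, with an illustrative operator and step size, recovers these filter factors empirically and checks them against the closed form.

```python
# Filter factors of the Landweber / gradient iteration for least squares.
# The operator and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 50
T = np.tril(np.ones((n, n))) / n
U, s, Vt = np.linalg.svd(T)

tau = 1.0 / s[0] ** 2                # step size ensuring 0 < tau * s_i^2 <= 1
y = rng.standard_normal(n)           # any generic data vector reveals the filters

f = np.zeros(n)
k = 100
for _ in range(k):
    f += tau * T.T @ (y - T @ f)     # Landweber / gradient step

# empirical filter factors: compare the iterate with the unfiltered inverse
coeff_hat = Vt @ f                   # components <f_k, v_i>
coeff_inv = (U.T @ y) / s            # <y, u_i> / s_i  (naive inverse)
phi_empirical = coeff_hat / coeff_inv
phi_closed = 1 - (1 - tau * s**2) ** k
print("max filter mismatch:", np.max(np.abs(phi_empirical - phi_closed)))
```

    Early stopping acts as regularization here: for small $k$ the filter suppresses the components with small singular values, which are the ones that amplify noise; scaled or constrained variants reshape these filter factors, which is the effect the paper studies.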