
    On the convergence of a heuristic parameter choice rule for Tikhonov regularization

    Multiplicative regularization solves a linear inverse problem by minimizing the product of the norm of the data misfit and the norm of the solution. This technique is related to Tikhonov regularization with the parameter chosen to make the data misfit and regularization terms (of the Tikhonov objective function) equal. This suggests a heuristic parameter choice method, equivalent to the rule previously proposed by Reginska. Reginska's rule is well defined provided the data is sufficiently close to exact data and does not lie in the range of the operator. If a sufficiently large portion of the data error lies outside the range of the operator, then the solution defined by Reginska's rule converges weakly to the exact solution as the data error converges to zero. The regularization parameter converges to zero like the square of the norm of the data noise, leading to under-regularization for small noise levels. Nevertheless, the method performs well on a suite of test problems, as shown by comparison with the L-curve, generalized cross-validation, quasi-optimality, and Hanke-Raus parameter choice methods. A modification of the approach yields a heuristic parameter choice rule that is provably convergent (in the norm topology) under the restrictions on the data error described above, as long as the exact solution has a small amount of additional smoothness. On the test problems considered here, the modified rule outperforms all of the above heuristic methods, although it is only slightly better than the quasi-optimality rule.
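    The following is a minimal illustrative sketch, not the paper's implementation, of the product rule described in this abstract, assuming a linear forward operator given as a NumPy matrix and a geometric grid of candidate parameters; the function name reginska_alpha and the grid-search approach are choices made here for illustration.

```python
import numpy as np

def reginska_alpha(A, b, alphas):
    """Sketch of a Reginska-type rule: pick the alpha that minimizes the
    product of the data-misfit norm and the solution norm over a grid."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                            # data coefficients in the left singular basis
    best_alpha, best_val = alphas[0], np.inf
    for alpha in alphas:
        coeff = (s / (s**2 + alpha)) * beta   # Tikhonov filter factors applied to beta
        x = Vt.T @ coeff                      # regularized solution x_alpha
        val = np.linalg.norm(A @ x - b) * np.linalg.norm(x)
        if val < best_val:
            best_alpha, best_val = alpha, val
    return best_alpha

# Example call: alpha = reginska_alpha(A, b, np.logspace(-12, 2, 300))
```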

    Heuristic parameter-choice rules for convex variational regularization based on error estimates

    In this paper we are interested in heuristic parameter choice rules, based on error estimates, for general convex variational regularization. Two such rules are derived; they generalize the corresponding rules from quadratic regularization, namely the Hanke-Raus rule and the quasi-optimality criterion. A posteriori error estimates are shown for the Hanke-Raus rule, and convergence for both rules is also discussed. Numerical results for both rules are presented to illustrate their applicability.
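    For reference, here is a minimal sketch of the two rules in their classical quadratic (Tikhonov) form, the special case that this paper generalizes; the helper name heuristic_rules, the SVD route, and the parameter grid are assumptions made for illustration.

```python
import numpy as np

def heuristic_rules(A, b, alphas):
    """Quadratic-case sketch: Hanke-Raus minimizes ||A x_a - b|| / sqrt(a);
    quasi-optimality minimizes the change in x_a between neighboring
    points of a geometric parameter grid."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    coeffs = [(s / (s**2 + a)) * beta for a in alphas]  # x_a in the V basis
    hanke_raus = [np.linalg.norm(A @ (Vt.T @ c) - b) / np.sqrt(a)
                  for a, c in zip(alphas, coeffs)]
    quasi_opt = [np.linalg.norm(coeffs[k + 1] - coeffs[k])
                 for k in range(len(coeffs) - 1)]
    return alphas[int(np.argmin(hanke_raus))], alphas[int(np.argmin(quasi_opt))]
```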

    A new approach to nonlinear constrained Tikhonov regularization

    We present a novel approach to nonlinear constrained Tikhonov regularization from the viewpoint of optimization theory. A second-order sufficient optimality condition is suggested as a nonlinearity condition to handle the nonlinearity of the forward operator. The approach is exploited to derive convergence rate results for a priori as well as a posteriori choice rules, e.g., the discrepancy principle and the balancing principle, for selecting the regularization parameter. The idea is further illustrated on a general class of parameter identification problems, for which (new) source and nonlinearity conditions are derived and the structural property of the nonlinearity term is revealed. A number of examples, including identifying distributed parameters in elliptic differential equations, are presented.
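    As a point of reference for the a posteriori rules named above, here is a minimal sketch of the discrepancy principle in the linear Tikhonov setting; the paper treats the nonlinear constrained case, which this sketch does not attempt. The function name discrepancy_alpha, the bracket [lo, hi], and tau = 1.1 are illustrative assumptions.

```python
import numpy as np

def discrepancy_alpha(A, b, delta, tau=1.1, lo=1e-12, hi=1e2, iters=60):
    """Find alpha with ||A x_alpha - b|| ~= tau * delta by bisecting on a
    log scale; the misfit grows monotonically with alpha, so the search is
    well posed if the target lies between misfit(lo) and misfit(hi)."""
    def misfit(alpha):
        x = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
        return np.linalg.norm(A @ x - b)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # geometric midpoint of the bracket
        if misfit(mid) < tau * delta:
            lo = mid                    # misfit too small: alpha must grow
        else:
            hi = mid
    return np.sqrt(lo * hi)
```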