2 research outputs found

    AN ANALYSIS OF MULTIPLICATIVE REGULARIZATION

    Inverse problems arise in many branches of science and engineering. To obtain a good approximation of the solution of such problems, regularization methods are required. Tikhonov regularization is one of the most popular methods for estimating the solutions of inverse problems. It requires a regularization parameter, and the quality of the approximate solution depends on how well that parameter is chosen. The L-curve method is a convenient strategy for selecting the Tikhonov regularization parameter, and it works well most of the time; however, there are problems on which the L-curve criterion does not perform properly. Multiplicative regularization is a method for solving inverse problems that does not require any parameter selection strategy. It turns out, however, that there is a close connection between multiplicative regularization and Tikhonov regularization; in fact, multiplicative regularization can be regarded as defining a parameter choice rule for Tikhonov regularization. In this work, we have analyzed multiplicative regularization for finite-dimensional problems and presented some preliminary theoretical results for infinite-dimensional problems. We have also demonstrated with numerical experiments that multiplicative regularization usually produces a solution very similar to the one obtained by the L-curve method. Under some conditions, the method is guaranteed to define a positive regularization parameter. Computationally, it is inexpensive and easier to analyze than the L-curve method.
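
    To make the connection concrete, the following is a minimal Python sketch (not from the thesis; the test matrix, parameter grid, and function names are illustrative assumptions). It computes Tikhonov solutions via the SVD and selects the parameter by minimizing the multiplicative objective, the product of the data-misfit norm and the solution norm.

        import numpy as np

        def tikhonov_svd(A, b, alpha):
            # Tikhonov solution x_alpha = argmin ||A x - b||^2 + alpha * ||x||^2,
            # formed from the SVD filter factors s / (s^2 + alpha).
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return Vt.T @ (s * (U.T @ b) / (s**2 + alpha))

        def multiplicative_alpha(A, b, alphas):
            # Multiplicative regularization viewed as a parameter choice rule:
            # pick the alpha minimizing ||A x_alpha - b|| * ||x_alpha||.
            def product(alpha):
                x = tikhonov_svd(A, b, alpha)
                return np.linalg.norm(A @ x - b) * np.linalg.norm(x)
            return min(alphas, key=product)

        # Toy ill-conditioned problem (a Hilbert matrix) with noisy data;
        # the size and noise level are arbitrary choices for illustration.
        n = 32
        A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
        x_true = np.ones(n)
        b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

        alpha = multiplicative_alpha(A, b, np.logspace(-14, 0, 200))
        x = tikhonov_svd(A, b, alpha)
        print(alpha, np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

    The grid scan stands in for the parameter-free character of multiplicative regularization: no noise level or user-supplied parameter enters the selection, only the minimizer of the product.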

    On the convergence of a heuristic parameter choice rule for Tikhonov regularization

    Multiplicative regularization solves a linear inverse problem by minimizing the product of the norm of the data misfit and the norm of the solution. This technique is related to Tikhonov regularization with the parameter chosen to make the data misfit and regularization terms (of the Tikhonov objective function) equal. This suggests a heuristic parameter choice method, equivalent to the rule previously proposed by Reginska. Reginska's rule is well defined provided the data is sufficiently close to the exact data and does not lie in the range of the operator. If a sufficiently large portion of the data error lies outside the range of the operator, then the solution defined by Reginska's rule converges weakly to the exact solution as the data error converges to zero. The regularization parameter converges to zero like the square of the norm of the data noise, leading to under-regularization for small noise levels. Nevertheless, the method performs well on a suite of test problems, as shown by comparison with the L-curve, generalized cross-validation, quasi-optimality, and Hanke-Raus parameter choice methods. A modification of the approach yields a heuristic parameter choice rule that is provably convergent (in the norm topology) under the restrictions on the data error described above, as long as the exact solution has a small amount of additional smoothness. On the test problems considered here, the modified rule outperforms all of the above heuristic methods, although it is only slightly better than the quasi-optimality rule.
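
    The balance condition described above can also be implemented directly. The sketch below (illustrative Python, not the paper's code) selects the Tikhonov parameter for which the data misfit equals the regularization term, ||A x_alpha - b||^2 = alpha * ||x_alpha||^2; for Tikhonov regularization this is exactly the first-order condition for minimizing the product ||A x_alpha - b|| * ||x_alpha||, i.e. the rule equivalent to Reginska's. The grid and helper names are assumptions.

        import numpy as np

        def balanced_alpha(A, b, alphas):
            # Choose alpha so the two Tikhonov terms balance:
            #   ||A x_alpha - b||^2 = alpha * ||x_alpha||^2.
            # Equivalent to the heuristic (Reginska-type) rule in the abstract.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b

            def imbalance(alpha):
                x = Vt.T @ (s * beta / (s**2 + alpha))
                return np.linalg.norm(A @ x - b)**2 - alpha * np.linalg.norm(x)**2

            # Return the grid point where the two terms are closest to equal.
            return min(alphas, key=lambda a: abs(imbalance(a)))

    Reusing the toy problem from the previous sketch, balanced_alpha(A, b, np.logspace(-14, 0, 400)) should return a parameter comparable to the multiplicative choice, since both implement the same stationarity condition. The quadratic decay of the parameter with the noise norm, and hence the under-regularization for small noise, is a property of the rule itself and is visible by rerunning the sketch with smaller noise levels.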