
    Embedded techniques for choosing the parameter in Tikhonov regularization

    This paper introduces a new strategy for setting the regularization parameter when solving large-scale discrete ill-posed linear problems by means of the Arnoldi-Tikhonov method. The new rule is essentially based on the discrepancy principle, although no initial knowledge of the norm of the error that affects the right-hand side is assumed; an increasingly accurate approximation of this quantity is recovered during the Arnoldi algorithm. Some theoretical estimates are derived in order to motivate our approach. Many numerical experiments, performed on classical test problems as well as on image deblurring problems, are presented.
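
    A minimal sketch of the Arnoldi-Tikhonov idea in NumPy: the problem is projected onto the Krylov subspace built by the Arnoldi process, and the regularization parameter is adjusted so that the projected residual matches the discrepancy level. The known noise norm `delta`, the safety factor `eta`, and the crude fixed-point update of `lam` are assumptions standing in for the paper's adaptive estimation rule.

```python
import numpy as np

def arnoldi_tikhonov(A, b, delta, eta=1.01, max_iter=50, lam=1.0):
    """Sketch of Arnoldi-Tikhonov with a discrepancy-principle stopping rule.
    The noise norm `delta` is assumed known here; the paper instead recovers
    an increasingly accurate estimate of it during the Arnoldi iterations."""
    n = b.size
    beta = np.linalg.norm(b)
    V = np.zeros((n, max_iter + 1))
    H = np.zeros((max_iter + 1, max_iter))
    V[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        # One Arnoldi step (modified Gram-Schmidt orthogonalization).
        w = A @ V[:, k - 1]
        for j in range(k):
            H[j, k - 1] = V[:, j] @ w
            w -= H[j, k - 1] * V[:, j]
        H[k, k - 1] = np.linalg.norm(w)
        Hk = H[:k + 1, :k]                    # projected operator
        c = np.zeros(k + 1)
        c[0] = beta                           # projected right-hand side
        # Adjust lam so the projected residual matches eta * delta
        # (a crude fixed-point update standing in for the paper's rule).
        for _ in range(30):
            y = np.linalg.solve(Hk.T @ Hk + lam * np.eye(k), Hk.T @ c)
            r = np.linalg.norm(Hk @ y - c)
            if abs(r - eta * delta) <= 1e-8 * eta * delta:
                break
            lam *= eta * delta / max(r, 1e-300)
        x = V[:, :k] @ y
        if np.linalg.norm(A @ x - b) <= eta * delta or H[k, k - 1] < 1e-14:
            break                             # discrepancy met or breakdown
        V[:, k] = w / H[k, k - 1]
    return x, lam, k
```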

    Projected Newton Method for noise constrained Tikhonov regularization

    Tikhonov regularization is a popular approach to obtain a meaningful solution for ill-conditioned linear least squares problems. A relatively simple way of choosing a good regularization parameter is given by Morozov's discrepancy principle. However, most approaches require the solution of the Tikhonov problem for many different values of the regularization parameter, which is computationally demanding for large-scale problems. We propose a new and efficient algorithm which simultaneously solves the Tikhonov problem and finds the corresponding regularization parameter such that the discrepancy principle is satisfied. We achieve this by formulating the problem as a nonlinear system of equations and solving this system using a line search method. We obtain a good search direction by projecting the problem onto a low-dimensional Krylov subspace and computing the Newton direction for the projected problem. This projected Newton direction, which is significantly less expensive to compute than the true Newton direction, is then combined with a backtracking line search to obtain a globally convergent algorithm, which we refer to as the Projected Newton method. We prove convergence of the algorithm and illustrate its improved performance over current state-of-the-art solvers with some numerical experiments.
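
    A small dense sketch of the underlying idea, written from the description above: the Tikhonov optimality condition and the discrepancy constraint are combined into one nonlinear system in (x, lam) and solved by Newton's method with a backtracking line search on ||F||. Here the Newton system is solved densely as a stand-in; the paper's contribution, obtaining the direction from a low-dimensional Krylov projection, is not reproduced, and the warm start and backtracking constants are assumptions.

```python
import numpy as np

def projected_newton_sketch(A, b, delta, lam=1.0, tol=1e-10, max_iter=100):
    """Newton iteration on F(x, lam) = [A^T(Ax - b) + lam*x ; (||Ax-b||^2 - delta^2)/2]
    with a backtracking line search on the merit function ||F||."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x = np.linalg.solve(AtA + lam * np.eye(n), Atb)   # warm start: Tikhonov solution
    def F(x, lam):
        r = A @ x - b
        return np.concatenate([AtA @ x - Atb + lam * x,
                               [0.5 * (r @ r - delta**2)]])
    for _ in range(max_iter):
        Fv = F(x, lam)
        if np.linalg.norm(Fv) < tol:
            break
        # Jacobian of F with respect to (x, lam).
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = AtA + lam * np.eye(n)
        J[:n, n] = x
        J[n, :n] = AtA @ x - Atb
        step = np.linalg.solve(J, -Fv)
        # Backtracking line search on ||F||.
        t = 1.0
        while (np.linalg.norm(F(x + t * step[:n], lam + t * step[n]))
               > (1 - 1e-4 * t) * np.linalg.norm(Fv)) and t > 1e-8:
            t *= 0.5
        x = x + t * step[:n]
        lam = lam + t * step[n]
    return x, lam
```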

    Numerical analysis of least squares and perceptron learning for classification problems

    This work presents a study of regularized and non-regularized versions of the perceptron learning and least squares algorithms for classification problems. Fréchet derivatives for the regularized least squares and perceptron learning algorithms are derived. Different techniques for choosing the Tikhonov regularization parameter are discussed. Decision boundaries obtained by the non-regularized algorithms for classifying simulated and experimental data sets are analyzed.
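
    A minimal NumPy sketch of the two classifiers discussed, with labels in {-1, +1}. The fixed `lam` value and the weight-decay form of the regularized perceptron are illustrative assumptions rather than the paper's specific formulation.

```python
import numpy as np

def ridge_classifier(X, y, lam=0.1):
    """Tikhonov-regularized least squares for binary classification:
    w = (X^T X + lam * I)^{-1} X^T y, with labels y in {-1, +1}.
    `lam` would be chosen by one of the parameter rules discussed above."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def perceptron(X, y, lr=1.0, lam=0.0, epochs=100):
    """Perceptron learning with an optional Tikhonov (weight-decay) term."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:        # misclassified sample
                w += lr * yi * xi
            w -= lr * lam * w             # regularization shrinkage
    return w

# Decision boundary: sign(x @ w); append a constant-1 column to X for a bias term.
```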

    Online Local Volatility Calibration by Convex Regularization with Morozov's Principle and Convergence Rates

    We address the inverse problem of local volatility surface calibration from market-given option prices. We integrate the ever-increasing flow of option price information into the well-accepted local volatility model of Dupire. This leads to considering both the local volatility surfaces and their corresponding prices as indexed by the observed underlying stock price as time goes by, in appropriate function spaces. The resulting parameter-to-data map is defined in appropriate Bochner-Sobolev spaces. Under this framework, we prove key regularity properties. This enables us to build a calibration technique that combines online methods with convex Tikhonov regularization tools. This procedure is used to solve the inverse problem of local volatility identification. As a result, we prove convergence rates with respect to noise and a corresponding discrepancy-based choice for the regularization parameter. We conclude by illustrating the theoretical results by means of numerical tests. Comment: 23 pages, 5 figures.
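
    For a concrete picture of the discrepancy-based parameter choice mentioned above, here is a generic sketch of Morozov's principle via bisection on a log scale. `solve_tikhonov` and `forward` are hypothetical placeholders for the regularized calibration solve and the pricing (parameter-to-data) map; monotonicity of the residual in alpha is the assumption that justifies the bisection.

```python
import numpy as np

def morozov_parameter(solve_tikhonov, forward, data, delta, tau=1.1,
                      alpha_lo=1e-8, alpha_hi=1e2, iters=50):
    """Pick the regularization parameter alpha so that the residual of the
    regularized solution matches the noise level: ||F(u_alpha) - data|| ~ tau*delta."""
    def residual(alpha):
        u = solve_tikhonov(alpha)             # regularized calibration for this alpha
        return np.linalg.norm(forward(u) - data)
    for _ in range(iters):
        alpha = np.sqrt(alpha_lo * alpha_hi)  # bisect on a log scale
        if residual(alpha) > tau * delta:
            alpha_hi = alpha                  # too much regularization
        else:
            alpha_lo = alpha                  # residual below the noise level
    return np.sqrt(alpha_lo * alpha_hi)
```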