116 research outputs found

    Convergence of the gradient method for ill-posed problems

    We study the convergence of the gradient descent method for solving ill-posed problems where the solution is characterized as a global minimum of a differentiable functional in a Hilbert space. The classical least-squares functional for nonlinear operator equations is a special instance of this framework, and the gradient method then reduces to Landweber iteration. The main result of this article is a proof of weak and strong convergence under new nonlinearity conditions that generalize the classical tangential cone conditions.
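For orientation, a minimal sketch of Landweber iteration for a *linear* model problem (the abstract treats the general nonlinear Hilbert-space setting; the matrix, step size, and iteration count below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def landweber(A, y, omega, n_iter, x0=None):
    """Gradient descent on the least-squares functional 0.5*||A x - y||^2.

    Convergence requires a step size 0 < omega < 2 / ||A||^2.
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        # Landweber update: x_{k+1} = x_k - omega * A^T (A x_k - y)
        x = x - omega * A.T @ (A @ x - y)
    return x

# Usage on a synthetic, mildly ill-conditioned problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(1.0 / np.arange(1, 21))
x_true = rng.standard_normal(20)
y = A @ x_true                                  # noiseless, consistent data
omega = 1.0 / np.linalg.norm(A, 2) ** 2         # safe step size
x_rec = landweber(A, y, omega, n_iter=5000)
```

For noisy data the iteration must be stopped early (e.g. by a discrepancy principle), since the residual decreases monotonically while the error eventually grows.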

    Inversion formulas for the linearized impedance tomography problem

    We consider the linearized electrical impedance tomography problem in two dimensions on the unit disk. By linearizing around constant coefficients and using a trigonometric basis, we calculate the linearized Dirichlet-to-Neumann operator in terms of moments of the conduction coefficient of the problem. By expanding this coefficient into angular trigonometric functions and Legendre-Müntz polynomials in radial coordinates, we find a lower-triangular representation of the parameter-to-data mapping. As a consequence, we obtain an explicit solution formula for the corresponding inverse problem. Furthermore, we also consider the problem with boundary data given only on part of the boundary while setting homogeneous Dirichlet values on the rest. We show that the conduction coefficient is uniquely determined from incomplete data of the linearized Dirichlet-to-Neumann operator, and we provide an explicit solution formula.

    Optimization of the shape (and topology) of the initial conditions for diffusion parameter identification

    The design of an experiment, e.g., the setting of initial conditions, strongly influences the accuracy of the whole process of determining model parameters from data. We employ a sensitivity-based approach for choosing optimal design variables and study the optimization of the shape (and topology) of the initial conditions for an inverse problem of diffusion parameter identification. Our approach, although case-independent, is illustrated on the FRAP (Fluorescence Recovery After Photobleaching) experimental technique. The core idea resides in the maximization of a sensitivity measure, which depends on a specific experimental setting of initial conditions. By numerical optimization, we find an interesting pattern of increasingly complicated (with respect to connectivity) optimal initial shapes. The proposed modification of the FRAP experimental protocol is rather surprising but entirely realistic, and the resulting enhancement of the parameter estimate accuracy is significant.

    The Kurdyka-Łojasiewicz inequality as regularity condition

    We show that a Kurdyka-Łojasiewicz (KL) inequality can be used as a regularity condition for Tikhonov regularization with linear operators in Banach spaces. In fact, we prove the equivalence of a KL inequality and various known regularity conditions (variational inequalities, rate conditions, and others) that are utilized for postulating smoothness conditions to obtain convergence rates. Case examples of rate estimates for Tikhonov regularization with source conditions or with a conditional stability estimate illustrate the theoretical result.

    Analysis and Approximation of the Canonical Polyadic Tensor Decomposition

    We study the least-squares (LS) functional of the canonical polyadic (CP) tensor decomposition. Our approach is based on the elimination of one factor matrix, which results in a reduced functional. The reduced functional is reformulated into a projection framework and into a Rayleigh quotient. An analysis of this functional leads to several conclusions: new sufficient conditions for the existence of minimizers of the LS functional, the existence of a critical point in the rank-one case, a heuristic explanation of "swamping", and computable bounds on the minimal value of the LS functional. The latter result leads to a simple algorithm -- the Centroid Projection algorithm -- to compute suboptimal solutions of tensor decompositions. These suboptimal solutions are applied to iterative CP algorithms as initial guesses, yielding a method called centroid projection for canonical polyadic (CPCP) decomposition, which provides a significant speedup in our numerical experiments compared to the standard methods.

    Heuristic Parameter Choice Rules for Tikhonov Regularisation with Weakly Bounded Noise

    We study the choice of the regularisation parameter for linear ill-posed problems in the presence of noise that is possibly unbounded but finite in a weaker norm, and where the noise level is unknown. For this task, we analyse several heuristic parameter choice rules, such as the quasi-optimality, heuristic discrepancy, and Hanke-Raus rules, and adapt the latter two to the weakly bounded noise case. We prove convergence and convergence rates under certain noise conditions. Moreover, we analyse and provide conditions for the convergence of the parameter choice by the generalised cross-validation and predictive mean-square error rules.
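The quasi-optimality rule mentioned above can be sketched in its standard textbook form for linear Tikhonov regularisation: on a geometric grid of candidate parameters, pick the one where consecutive regularised solutions change least. This is a simplified illustration (classical bounded-noise setting, not the paper's weakly bounded noise variant); the test problem and grid are assumptions:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """Tikhonov-regularised solution x_alpha = (A^T A + alpha I)^{-1} A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def quasi_optimality(A, y, alphas):
    """Pick alpha_j minimising ||x_{alpha_{j+1}} - x_{alpha_j}|| on a geometric grid.

    Heuristic rule: needs no knowledge of the noise level.
    """
    xs = [tikhonov(A, y, a) for a in alphas]
    diffs = [np.linalg.norm(xs[j + 1] - xs[j]) for j in range(len(xs) - 1)]
    j = int(np.argmin(diffs))
    return alphas[j], xs[j]

# Usage on a synthetic ill-conditioned problem with unknown noise level
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 15)) @ np.diag(1.0 / np.arange(1, 16) ** 2)
x_true = rng.standard_normal(15)
y = A @ x_true + 1e-3 * rng.standard_normal(40)
alphas = np.geomspace(1e-10, 1.0, 40)        # geometric grid alpha_j = alpha_0 * q^j
alpha_star, x_star = quasi_optimality(A, y, alphas)
```

Such heuristic rules cannot converge for worst-case noise (the Bakushinskii veto), which is why their analysis rests on noise conditions of the kind studied in the paper.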

    Convergence of Heuristic Parameter Choice Rules for Convex Tikhonov Regularisation

    We investigate the convergence theory of several known as well as new heuristic parameter choice rules for convex Tikhonov regularisation. The success of such methods depends on whether certain restrictions on the noise are satisfied. In the linear theory, such conditions are well understood and hold for typically irregular noise. In this paper, we extend the convergence analysis of heuristic rules using noise restrictions to the convex setting and prove convergence of the aforementioned methods under these restrictions. The convergence theory is exemplified for the case of an ill-posed problem with a diagonal forward operator in ℓ^q spaces. Numerical examples provide further insight.

    Towards analytical model optimization in atmospheric tomography

    Modern ground-based telescopes rely on a technology called adaptive optics (AO) in order to compensate for the loss of image quality caused by atmospheric turbulence. Next-generation AO systems designed for a wide field of view require a stable and high-resolution reconstruction of the refractive index fluctuations in the atmosphere. By introducing a novel Bayesian method, we address the problem of estimating an atmospheric turbulence strength profile and reconstructing the refractive index fluctuations simultaneously, using only wavefront measurements of incoming light from guide stars. Most importantly, we demonstrate how this method can be used for model optimization as well. We propose two different algorithms for computing the maximum a posteriori estimate: the first is based on alternating minimization and has the advantage of being integrable into existing atmospheric tomography methods. In the second approach, we formulate a convex non-differentiable optimization problem, which is solved by an iterative thresholding method. This approach clearly illustrates the underlying sparsity-enforcing mechanism for the strength profile. By introducing a tuning/regularization parameter, an automated model reduction of the layer structure of the atmosphere is achieved. Using numerical simulations, we demonstrate the performance of our method in practice.
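The sparsity-enforcing thresholding idea can be illustrated by generic iterative soft-thresholding (ISTA) for an ℓ¹-regularised least-squares problem. This is a hedged sketch only: the operator `A`, the sparse "profile" `x_true`, and the parameter `lam` are illustrative stand-ins, not the paper's tomography operators or tuning parameter:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal map of t*||.||_1: shrinks entries toward zero, zeroing small ones."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient step on the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Usage: recover a sparse profile (3 active "layers" out of 60)
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
```

Raising `lam` zeroes more entries, which mirrors the automated reduction of the atmospheric layer structure described above.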

    On Accelerating the Regularized Alternating Least Square Algorithm for Tensors

    In this paper, we discuss the acceleration of the regularized alternating least-squares (RALS) algorithm for tensor approximation. We propose a fast iterative method using Aitken-Steffensen-like updates for the regularized algorithm. Numerical experiments demonstrate a faster convergence rate for the accelerated version in comparison to both the standard and regularized alternating least-squares algorithms. In addition, we analyze the global convergence based on the Kurdyka-Łojasiewicz inequality and show that the RALS algorithm has a linear local convergence rate.
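The Aitken-Steffensen idea behind such updates is easiest to see for a scalar fixed-point iteration x = g(x); the paper applies an analogous update to the factor-matrix iteration. A minimal sketch of the generic scalar scheme (the example map g = cos is an illustration, not from the paper):

```python
import numpy as np

def steffensen(g, x0, n_iter=10):
    """Aitken-Delta^2 (Steffensen) acceleration of the fixed-point iteration x = g(x)."""
    x = x0
    for _ in range(n_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-14:              # iteration has effectively converged
            return x2
        # Accelerated update: x - (Delta x)^2 / Delta^2 x
        x = x - (x1 - x) ** 2 / denom
    return x

# Usage: fixed point of cos(x); plain iteration converges only linearly
root = steffensen(np.cos, 1.0)
```

The extrapolation turns a linearly convergent fixed-point map into a locally quadratically convergent one, which is the source of the observed speedup.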

    Some Convergence Results on the Regularized Alternating Least-Squares Method for Tensor Decomposition

    We study the convergence of the Regularized Alternating Least-Squares (RALS) algorithm for tensor decompositions. As a main result, we show that, given the existence of critical points of the alternating least-squares method, the limit points of the convergent subsequences of the RALS iterates are critical points of the least-squares cost functional. Numerical examples indicate a faster convergence rate for RALS in comparison to the usual alternating least-squares method.
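A minimal regularised-ALS sketch for a rank-R CP decomposition of an order-3 tensor, with a proximal-type term lam*||F - F_old||^2 added to each factor subproblem. This is a simplified illustration of the scheme (tensor sizes, rank, lam, and iteration count are assumptions):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of two factor matrices."""
    R = B.shape[1]
    return np.stack([np.kron(B[:, r], C[:, r]) for r in range(R)], axis=1)

def rals(T, R, lam=1e-3, n_iter=200, seed=0):
    """Regularised ALS: each factor solves min ||T_(n) - F Z^T||^2 + lam ||F - F_old||^2."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                    # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K) # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J) # mode-3 unfolding
    for _ in range(n_iter):
        Z = khatri_rao(B, C)
        A = (T1 @ Z + lam * A) @ np.linalg.inv(Z.T @ Z + lam * np.eye(R))
        Z = khatri_rao(A, C)
        B = (T2 @ Z + lam * B) @ np.linalg.inv(Z.T @ Z + lam * np.eye(R))
        Z = khatri_rao(A, B)
        C = (T3 @ Z + lam * C) @ np.linalg.inv(Z.T @ Z + lam * np.eye(R))
    return A, B, C

# Usage: recover a synthetic exactly rank-2 tensor
rng = np.random.default_rng(1)
T = np.einsum('ir,jr,kr->ijk', rng.standard_normal((4, 2)),
              rng.standard_normal((5, 2)), rng.standard_normal((6, 2)))
A, B, C = rals(T, R=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

The lam-term keeps every normal-equation matrix invertible even when a Khatri-Rao factor is rank-deficient, which is what distinguishes RALS from plain ALS.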