
    Numerical identification of a variable parameter in 2d elliptic boundary value problem by extragradient methods

    This work focuses on the inverse problem of identifying a variable parameter in a 2-D scalar elliptic boundary value problem. It is well known that this inverse problem is highly ill-posed and that regularization is necessary for its stable solution. The inverse problem is studied in an optimization framework, which is the most suitable framework for incorporating regularization. The resulting optimization problem is a constrained one whose constraint set is a closed and convex set of admissible coefficients. As objective functionals, we use both the output least squares and the modified output least squares functionals. It is known that the most commonly used iterative schemes for such problems require strong monotonicity of the objective functional's derivative. In the context of the considered inverse problem, this is a very stringent requirement and is achieved through a careful selection of the regularization parameter. In contrast, extragradient-type methods require only that the derivative of the objective functional be monotone, which allows greater flexibility in the selection of the regularization parameter. In this work, we use the finite element method to discretize the inverse problem and apply the most commonly studied extragradient methods.
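    The extragradient iteration referred to above can be sketched as a projected predictor-corrector update. The Python fragment below is a minimal illustrative sketch, not the paper's implementation; the objective gradient grad_J, the projection project_K onto the admissible coefficient set, and the step size tau are assumed placeholders.

```python
import numpy as np

def extragradient(grad_J, project_K, a0, tau=1e-2, max_iter=500, tol=1e-8):
    """Projected (Korpelevich-type) extragradient iteration -- illustrative sketch.

    grad_J    : callable returning the gradient of the regularized objective
    project_K : projection onto the closed convex set K of admissible coefficients
    a0        : initial coefficient vector
    tau       : step size
    """
    a = a0.copy()
    for _ in range(max_iter):
        # predictor step: projected gradient step from the current iterate
        a_bar = project_K(a - tau * grad_J(a))
        # corrector step: re-use the current iterate but the gradient at the predictor
        a_new = project_K(a - tau * grad_J(a_bar))
        if np.linalg.norm(a_new - a) < tol:
            return a_new
        a = a_new
    return a

# usage (hypothetical box constraints on the coefficient):
# a_hat = extragradient(grad_J, lambda a: np.clip(a, lo, hi), a0)
```

    The point mirrored here is the one made in the abstract: convergence of this two-step scheme needs only plain monotonicity of grad_J (together with a suitably small step size), not strong monotonicity.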

    Iterative Methods for the Elasticity Imaging Inverse Problem

    Cancers of the soft tissue rank among the deadliest diseases in the world, and effective treatment of such cancers relies on early and accurate detection of tumors within the interior of the body. One such diagnostic tool, known as elasticity imaging or elastography, uses measurements of tissue displacement to reconstruct the variable elasticity between healthy and unhealthy tissue inside the body. This gives rise to a challenging parameter identification inverse problem: that of identifying the Lamé parameter μ in a system of partial differential equations in linear elasticity. Due to the near incompressibility of human tissue, however, common techniques for solving the direct and inverse problems are rendered ineffective by a phenomenon known as the “locking effect”. Alternative methods, such as mixed finite element methods, must be applied to overcome this complication. Using these methods, this work re-poses the problem as a generalized saddle point problem and presents several optimization formulations, including the modified output least squares (MOLS), energy output least squares (EOLS), and equation error (EE) frameworks, for solving the elasticity imaging inverse problem. Subsequently, numerous iterative optimization methods, including gradient, extragradient, and proximal point methods, are explored and applied to solve the related optimization problem. Implementations of all of the iterative techniques under consideration are applied to each of the developed optimization frameworks on a representative numerical example in elasticity imaging, and a thorough analysis and comparison of the methods is presented.
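    As one concrete instance of the iterative methods mentioned, an inexact proximal point iteration can be sketched as repeatedly minimizing the chosen objective plus a quadratic term tying the iterate to its predecessor. The sketch below is illustrative only; grad_J (gradient of whichever functional, e.g. MOLS or EOLS, is being minimized), project_K (projection onto the admissible set of Lamé parameters), and all step sizes are assumptions, not the paper's implementation.

```python
import numpy as np

def proximal_point(grad_J, project_K, mu0, lam=1.0, inner_steps=20,
                   inner_lr=1e-2, max_iter=100, tol=1e-8):
    """Inexact proximal point iteration -- illustrative sketch.

    Each outer step approximately minimizes
        J(mu) + (1 / (2 * lam)) * ||mu - mu_k||^2
    over the admissible set K, here with a few projected gradient steps.
    """
    mu = mu0.copy()
    for _ in range(max_iter):
        mu_prev = mu.copy()
        for _ in range(inner_steps):
            # gradient of the proximal subproblem at the current inner iterate
            g = grad_J(mu) + (mu - mu_prev) / lam
            mu = project_K(mu - inner_lr * g)
        if np.linalg.norm(mu - mu_prev) < tol:
            break
    return mu
```

    Gradient and extragradient variants of the same setup replace the inner loop with a single explicit step or with the predictor-corrector update sketched earlier.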

    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$'s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth function, our method generalizes it to the case of arbitrary $n$. Our method makes explicit use of the regularity of $F$ in the forward step, and the proximity operators of the $G_i$'s are applied in parallel in the backward step. This allows the generalized forward-backward algorithm to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of $F$. Examples on inverse problems in imaging demonstrate the advantage of the proposed methods in comparison to other splitting algorithms.
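    The generalized forward-backward iteration can be sketched as below, assuming uniform weights $w_i = 1/n$ and a constant relaxation parameter; grad_F, the entries of prox_list, the step size gamma, and the stopping rule are placeholders and illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def generalized_forward_backward(grad_F, prox_list, x0, gamma, lam=1.0,
                                 max_iter=500, tol=1e-8):
    """Generalized forward-backward splitting -- illustrative sketch, uniform weights.

    Minimizes F + sum_i G_i, where grad_F is the (Lipschitz) gradient of F and
    prox_list[i](v, step) returns prox_{step * G_i}(v).
    """
    n = len(prox_list)
    x = x0.copy()
    z = [x0.copy() for _ in range(n)]          # one auxiliary variable per G_i
    for _ in range(max_iter):
        g = grad_F(x)                          # forward (explicit) step: one gradient of F
        for i, prox_Gi in enumerate(prox_list):
            # backward (implicit) steps: the proximity operators act in parallel
            z[i] = z[i] + lam * (prox_Gi(2.0 * x - z[i] - gamma * g, n * gamma) - x)
        x_new = sum(z) / n                     # average the auxiliary variables
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

    For $n = 1$ and relaxation lam = 1 this reduces to the classical forward-backward (proximal gradient) step.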

    Randomized Lagrangian Stochastic Approximation for Large-Scale Constrained Stochastic Nash Games

    In this paper, we consider stochastic monotone Nash games where each player's strategy set is characterized by a possibly large number of explicit convex constraint inequalities. Notably, the functional constraints of each player may depend on the strategies of the other players, allowing us to capture a subclass of generalized Nash equilibrium problems (GNEPs). While there is limited work providing guarantees for this class of stochastic GNEPs, even when the functional constraints of the players are independent of each other, the majority of existing methods rely on projected stochastic approximation (SA) schemes. However, projected SA methods perform poorly when the constraint set involves a large number of possibly nonlinear functional inequalities. Motivated by the absence of performance guarantees for computing the Nash equilibrium in constrained stochastic monotone Nash games, we develop a single-timescale randomized Lagrangian multiplier stochastic approximation method in which an SA scheme is employed in the primal space and a randomized block-coordinate scheme, updating only one randomly selected Lagrangian multiplier per iteration, is employed in the dual space. We show that our method achieves a convergence rate of $\mathcal{O}\left(\frac{\log(k)}{\sqrt{k}}\right)$ for suitably defined suboptimality and infeasibility metrics in a mean sense.
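    To make the primal-dual structure concrete, the fragment below gives a much-simplified, single-player illustration of the idea: a stochastic gradient step on the Lagrangian in the primal space, and a projected update of one randomly selected multiplier in the dual space. The callables stoch_grad_f, constraints, and constraint_grads, the diminishing step size, and the single-player setting are all assumptions made for illustration; this is not the paper's scheme or its analysis.

```python
import numpy as np

def randomized_lagrangian_sa(stoch_grad_f, constraints, constraint_grads, x0,
                             steps=10_000, gamma0=1.0, seed=0):
    """Single-timescale primal-dual SA sketch (hypothetical, simplified).

    stoch_grad_f(x)        : stochastic gradient estimate of the objective f
    constraints[j](x)      : value of the j-th convex constraint g_j(x) <= 0
    constraint_grads[j](x) : (sub)gradient of g_j at x
    """
    rng = np.random.default_rng(seed)
    m = len(constraints)
    x = x0.copy()
    lam = np.zeros(m)                                  # multipliers, kept >= 0
    for k in range(1, steps + 1):
        gamma = gamma0 / np.sqrt(k)                    # single diminishing step size
        # primal SA step on the Lagrangian L(x, lam) = f(x) + sum_j lam_j * g_j(x)
        g_x = stoch_grad_f(x) + sum(lam[j] * constraint_grads[j](x) for j in range(m))
        x = x - gamma * g_x
        # dual randomized block-coordinate step: update one randomly chosen multiplier
        j = rng.integers(m)
        lam[j] = max(0.0, lam[j] + gamma * constraints[j](x))
    return x, lam
```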