
    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge between these two communities and encourage cross-fertilization of ideas.
    Comment: 13 pages
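    For concreteness, here is a minimal sketch (not taken from the survey; the operator A, data b, regularization weight lam, and step size are all illustrative assumptions) of the basic pattern the abstract describes: casting an inversion task as an optimization problem, here Tikhonov-regularized least squares solved by plain gradient descent.

```python
# Illustrative sketch: cast the linear inverse problem
# "recover x from b = A x + noise" as the optimization task
#   min_x  f(x) = 0.5*||A x - b||^2 + 0.5*lam*||x||^2
# (Tikhonov regularization) and solve it with gradient descent.
import numpy as np

rng = np.random.default_rng(0)
m, n = 100, 50                                   # measurements, unknowns
A = rng.standard_normal((m, n))                  # forward operator (assumed known)
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy observations

lam = 1e-2                                       # regularization weight (illustrative)
L = np.linalg.norm(A, 2) ** 2 + lam              # gradient Lipschitz constant of f
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b) + lam * x           # gradient of f at x
    x -= grad / L                                # fixed step size 1/L

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```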

    An Interior-Point algorithm for Nonlinear Minimax Problems

    We present a primal-dual interior-point method for constrained nonlinear, discrete minimax problems where the objective functions and constraints are not necessarily convex. The algorithm uses two merit functions to ensure progress toward points satisfying the first-order optimality conditions of the original problem. Convergence properties are described and numerical results are provided.
    Keywords: discrete min-max, constrained nonlinear programming, primal-dual interior-point methods, stepsize strategies
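    As background for the setting (this is not the paper's algorithm, which is primal-dual with two merit functions), a discrete minimax problem min_x max_i f_i(x) is typically rewritten in epigraph form, min_{x,t} t subject to f_i(x) <= t, and interior-point methods then handle the inequalities via a barrier. A loose sketch with illustrative, partly nonconvex f_i and a basic log-barrier loop:

```python
# Sketch of the epigraph reformulation of  min_x max_i f_i(x)
# as  min_{x,t} t  s.t.  f_i(x) <= t,  solved here with a simple
# log-barrier loop (not the paper's primal-dual method).
import numpy as np
from scipy.optimize import minimize

def fs(x):
    # three illustrative objective pieces f_1, f_2, f_3
    return np.array([
        (x[0] - 1.0) ** 2 + x[1] ** 2,
        x[0] ** 2 + (x[1] + 2.0) ** 2,
        np.sin(x[0]) + x[1] ** 2,           # nonconvex term
    ])

def barrier_obj(z, mu):
    x, t = z[:2], z[2]
    slacks = t - fs(x)                      # must stay strictly positive
    if np.any(slacks <= 0):
        return np.inf                       # outside the interior
    return t - mu * np.sum(np.log(slacks))  # t plus log-barrier penalty

x0 = np.zeros(2)
z = np.concatenate([x0, [fs(x0).max() + 1.0]])  # strictly feasible start
for mu in [1.0, 0.1, 0.01, 0.001]:              # shrink the barrier weight
    z = minimize(barrier_obj, z, args=(mu,), method="Nelder-Mead").x

print("x* ~", z[:2], " max_i f_i ~", fs(z[:2]).max())
```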

    Randomized Lagrangian Stochastic Approximation for Large-Scale Constrained Stochastic Nash Games

    Full text link
    In this paper, we consider stochastic monotone Nash games where each player's strategy set is characterized by a possibly large number of explicit convex constraint inequalities. Notably, the functional constraints of each player may depend on the strategies of other players, allowing us to capture a subclass of generalized Nash equilibrium problems (GNEPs). While there is limited work providing guarantees for this class of stochastic GNEPs, even when the functional constraints of the players are independent of each other, the majority of existing methods rely on projected stochastic approximation (SA) schemes. However, projected SA methods perform poorly when the constraint set contains a large number of possibly nonlinear functional inequalities. Motivated by the absence of performance guarantees for computing the Nash equilibrium in constrained stochastic monotone Nash games, we develop a single-timescale randomized Lagrangian multiplier stochastic approximation method where, in the primal space, we employ an SA scheme and, in the dual space, we employ a randomized block-coordinate scheme in which only a randomly selected Lagrangian multiplier is updated. We show that our method achieves a convergence rate of $\mathcal{O}\left(\frac{\log(k)}{\sqrt{k}}\right)$ for suitably defined suboptimality and infeasibility metrics in a mean sense.
    Comment: The results of this paper have been presented at the International Conference on Continuous Optimization (ICCOPT) 2022 and the East Coast Optimization Meeting (ECOM) 202
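    To make the primal-dual structure concrete, here is a loose single-player simplification (not the paper's method for Nash games; the toy problem, step sizes, and constraint sampling are all illustrative assumptions): the primal iterate takes an SA step on a sampled Lagrangian gradient, while in the dual space only one randomly selected multiplier is updated per iteration, on a single timescale.

```python
# Loose sketch of a single-timescale primal-dual SA scheme with a
# randomized block-coordinate dual update (single-player simplification,
# not the paper's Nash-game method).
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 50                              # dimension, number of constraints

target = np.ones(n)                       # f(x) = E||x - xi||^2, xi ~ N(target, I)
A = rng.standard_normal((m, n))           # many linear constraints a_j^T x <= b_j
b = np.abs(rng.standard_normal(m)) + 1.0

x = np.zeros(n)
lam = np.zeros(m)                         # one Lagrange multiplier per constraint
for k in range(1, 20001):
    gamma = 1.0 / np.sqrt(k)              # single timescale for both updates
    j = rng.integers(m)                   # randomly selected constraint index

    # primal SA step on the sampled Lagrangian; the factor m makes the
    # sampled constraint term an unbiased estimate of sum_j lam_j * a_j
    xi = target + rng.standard_normal(n)
    x -= gamma * (2 * (x - xi) + m * lam[j] * A[j])

    # randomized block-coordinate dual ascent, projected onto lam_j >= 0
    lam[j] = max(0.0, lam[j] + gamma * (A[j] @ x - b[j]))

print("max constraint violation:", np.max(A @ x - b))
```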