306 research outputs found

    A regularized variance-reduced modified extragradient method for stochastic hierarchical games

    The theory of learning in games has so far focused mainly on games with simultaneous moves. Recently, researchers in machine learning have started investigating learning dynamics in games involving hierarchical decision-making. We consider an $N$-player hierarchical game in which the $i$-th player's objective comprises an expectation-valued term, parametrized by rival decisions, and a hierarchical term. Such a framework allows for capturing a broad range of stochastic hierarchical optimization problems, Stackelberg equilibrium problems, and leader-follower games. We develop an iteratively regularized and smoothed variance-reduced modified extragradient framework for learning hierarchical equilibria in a stochastic setting. We equip our analysis with rate statements, complexity guarantees, and almost-sure convergence claims. We then extend these statements to settings where the lower-level problem is solved inexactly and provide the corresponding rate and complexity statements.
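
    As a rough illustration of the kind of update such a framework builds on, the sketch below implements a plain iteratively regularized extragradient step for a monotone map F, with a Tikhonov term eps_k * x that vanishes along the iterations. It is a minimal sketch under assumed step-size and regularization schedules, not the authors' variance-reduced, smoothed scheme for hierarchical games.

import numpy as np

def regularized_extragradient(F, x0, iters=2000):
    """Generic iteratively regularized extragradient method (illustrative only)."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        eps_k = 1.0 / np.sqrt(k + 1)             # vanishing regularization (assumed schedule)
        gamma = 0.1                              # step size (assumed; tune to the Lipschitz constant)
        F_reg = lambda z, e=eps_k: F(z) + e * z  # Tikhonov-regularized operator
        y = x - gamma * F_reg(x)                 # extrapolation (prediction) step
        x = x - gamma * F_reg(y)                 # correction step evaluated at y
    return x

# Toy usage: a monotone linear map F(x) = A x whose unique zero is the origin.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(regularized_extragradient(lambda x: A @ x, np.array([1.0, -1.0])))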

    Generalized Forward-Backward Splitting

    This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form $F + \sum_{i=1}^n G_i$, where $F$ has a Lipschitz-continuous gradient and the $G_i$'s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than $n = 1$ non-smooth function, our method generalizes it to the case of arbitrary $n$. Our method makes explicit use of the regularity of $F$ in the forward step, and the proximity operators of the $G_i$'s are applied in parallel in the backward step. This allows the generalized forward-backward algorithm to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of $F$. Examples on inverse problems in imaging demonstrate the advantage of the proposed methods in comparison to other splitting algorithms. Comment: 24 pages, 4 figures
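
    The following is a hedged sketch of one common form of the generalized forward-backward iteration described above: auxiliary variables z_i, equal weights 1/n, an explicit gradient step on F and parallel proximity steps on the G_i. The toy problem (least squares plus an l1 penalty plus a nonnegativity constraint), the weights, and the step sizes are illustrative assumptions rather than the paper's exact parameter choices.

import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def gfb(grad_F, proxes, x0, gamma, lam=1.0, iters=500):
    """Generalized forward-backward iteration with equal weights w_i = 1/n."""
    n = len(proxes)
    x = np.asarray(x0, dtype=float)
    z = [x.copy() for _ in range(n)]
    for _ in range(iters):
        g = grad_F(x)                         # forward (explicit gradient) step on F
        for i in range(n):
            # backward step: prox of G_i with parameter gamma / w_i = gamma * n, in parallel
            z[i] = z[i] + lam * (proxes[i](2.0 * x - z[i] - gamma * g, gamma * n) - x)
        x = sum(z) / n                        # weighted average of the auxiliary points
    return x

# Toy usage: F(x) = 0.5||Ax - b||^2, G_1 = alpha*||x||_1, G_2 = indicator of {x >= 0}.
rng = np.random.default_rng(0)
A, b, alpha = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.5
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of grad F
prox_l1 = lambda v, t: soft_threshold(v, alpha * t)   # prox of t * alpha * ||.||_1
prox_nonneg = lambda v, t: np.maximum(v, 0.0)         # projection; the scale t is irrelevant
x_hat = gfb(lambda x: A.T @ (A @ x - b), [prox_l1, prox_nonneg], np.zeros(10), gamma=1.0 / L)
print(np.round(x_hat, 3))                     # a sparse, nonnegative minimizer estimate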

    Projection-proximal methods for general variational inequalities

    In this paper, we consider and analyze some new projection-proximal methods for solving general variational inequalities. The modified methods converge for pseudomonotone operators, which is a weaker condition than monotonicity. The proposed methods include several new and known methods as special cases. Our results can be considered a novel and important extension of previously known results. Since general variational inequalities include quasi-variational inequalities and implicit complementarity problems as special cases, the results proved in this paper continue to hold for these problems.
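
    For orientation, the sketch below shows the classical extragradient (projection) method of Korpelevich for a variational inequality VI(F, C), which is known to converge for Lipschitz continuous pseudomonotone operators when the step size is small enough. It is a generic illustration with an assumed box constraint set and toy operator, not the modified projection-proximal methods proposed in the paper.

import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box constraint set C = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def extragradient_vi(F, x0, gamma=0.1, iters=2000):
    """Korpelevich's extragradient method for VI(F, C) with C a box (illustrative)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project_box(x - gamma * F(x))     # prediction (projection) step
        x = project_box(x - gamma * F(y))     # correction step using F(y)
    return x

# Toy usage: F(x) = M x + q with a positive semidefinite symmetric part, hence
# monotone (and in particular pseudomonotone), over the unit box.
M = np.array([[1.0, 2.0], [-2.0, 1.0]])
q = np.array([-1.0, 0.5])
print(extragradient_vi(lambda x: M @ x + q, np.array([0.5, 0.5])))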

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form ${\cal D}(Ku) + \alpha {\cal R}(u) \rightarrow \min_u$, where the functional ${\cal D}$ is a data fidelity term, depending on some input data $f$ and measuring the deviation of $Ku$ from it, and ${\cal R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While ${\cal D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth and convex functionals such as the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently this field has revived interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures
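
    As a concrete instance of this structure, the sketch below applies the proximal gradient (ISTA) splitting to D(Ku) = 0.5||Ku - f||^2 with R(u) = ||u||_1: the smooth data term is handled by an explicit gradient step and the nonsmooth regularizer by its proximity operator (soft thresholding). The operator K, the data f, and the parameter alpha are toy assumptions for illustration.

import numpy as np

def ista(K, f, alpha, iters=500):
    """Proximal gradient (ISTA) for 0.5||Ku - f||^2 + alpha*||u||_1."""
    L = np.linalg.norm(K, 2) ** 2             # Lipschitz constant of the data-term gradient
    u = np.zeros(K.shape[1])
    for _ in range(iters):
        grad = K.T @ (K @ u - f)              # forward step: gradient of the data fidelity
        v = u - grad / L
        u = np.sign(v) * np.maximum(np.abs(v) - alpha / L, 0.0)  # prox of (alpha/L)*||.||_1
    return u

# Toy usage: recover a 1-sparse coefficient vector from noisy linear measurements.
rng = np.random.default_rng(1)
K = rng.standard_normal((30, 50))             # assumed linear forward operator
u_true = (np.arange(50) == 7).astype(float)   # sparse ground truth
f = K @ u_true + 0.01 * rng.standard_normal(30)
u_hat = ista(K, f, alpha=0.1)
print(int(np.argmax(np.abs(u_hat))))          # index of the dominant recovered coefficient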

    International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book

    The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions. This book comprises the full conference program. In particular, it contains the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.

    Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems -- Survey

    This paper is a survey of methods for solving smooth (strongly) monotone stochastic variational inequalities. To begin with, we give the deterministic foundation from which the stochastic methods eventually evolved. Then we review methods for the general stochastic formulation and look at the finite-sum setup. The last parts of the paper are devoted to various recent (not necessarily stochastic) advances in algorithms for variational inequalities. Comment: 12 pages
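
    One representative method from this family, shown below in a hedged sketch, is a stochastic extragradient scheme for a strongly monotone variational inequality arising from a regularized bilinear saddle point problem; each step uses two independent unbiased samples of the operator and a decreasing step size. The noise model and the step-size schedule are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))

def F_sample(z):
    """Unbiased sample of F(x, y) = (By + x, -B^T x + y), a strongly monotone operator."""
    x, y = z[:3], z[3:]
    noise = 0.1 * rng.standard_normal(6)      # assumed additive noise model
    return np.concatenate([B @ y + x, -B.T @ x + y]) + noise

z = rng.standard_normal(6)
for k in range(5000):
    gamma = 1.0 / (10.0 + k)                  # decreasing step size (assumed schedule)
    w = z - gamma * F_sample(z)               # extrapolation step with one sample
    z = z - gamma * F_sample(w)               # update step with an independent sample
print(np.linalg.norm(z))                      # drifts toward 0, the unique solution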

    Variational Inclusions with General Over-relaxed Proximal Point and Variational-like Inequalities with Densely Pseudomonotonicity

    This dissertation focuses on the existence and uniqueness of solutions of variational inclusion and variational inequality problems, and then develops efficient algorithms to estimate numerical solutions of these problems. The dissertation consists of five chapters. Chapter 1 is an introduction to variational inequality problems, variational inclusion problems, monotone operators, and some basic definitions and preliminaries from convex analysis. Chapter 2 is a study of a general class of nonlinear implicit inclusion problems. The objective of this study is to explore how to omit the Lipschitz continuity condition by using an alternating approach to the proximal point algorithm to estimate the numerical solution of the implicit inclusion problems. In Chapter 3 we introduce generalized densely relaxed η-α pseudomonotone operators and generalized relaxed η-α proper quasimonotone operators, as well as relaxed η-α quasimonotone operators. Using these generalized monotonicity notions, we establish existence results for the generalized variational-like inequality in the general setting of Banach spaces. In Chapter 4, we use the auxiliary principle technique to introduce a general algorithm for solutions of densely relaxed pseudomonotone variational-like inequalities. Chapter 5 presents concluding remarks and the scope for future work.
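
    The resolvent-based iteration underlying such schemes can be illustrated with a plain proximal point step for an inclusion 0 ∈ T(x); for an affine monotone operator T(x) = Ax - b with A positive definite the resolvent has a closed form. This is a generic sketch with assumed data, not the general over-relaxed proximal point variant studied in the dissertation.

import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])        # assumed monotone (positive definite) operator
b = np.array([1.0, -1.0])
c = 1.0                                       # proximal parameter (assumed)

x = np.zeros(2)
for _ in range(50):
    # resolvent J_{cT}(x) = (I + cA)^{-1}(x + cb) solves 0 in T(y) + (y - x)/c
    x = np.linalg.solve(np.eye(2) + c * A, x + c * b)

print(x, np.linalg.solve(A, b))               # the iterate approaches the zero of T, namely A^{-1} b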

    Iterative Methods for the Elasticity Imaging Inverse Problem

    Cancers of the soft tissue rank among the deadliest diseases throughout the world, and effective treatments for such cancers rely on early and accurate detection of tumors within the interior of the body. One such diagnostic tool, known as elasticity imaging or elastography, uses measurements of tissue displacement to reconstruct the variable elasticity between healthy and unhealthy tissue inside the body. This gives rise to a challenging parameter identification inverse problem: identifying the Lamé parameter μ in a system of partial differential equations in linear elasticity. Due to the near incompressibility of human tissue, however, common techniques for solving the direct and inverse problems are rendered ineffective by a phenomenon known as the “locking effect”. Alternative methods, such as mixed finite element methods, must be applied to overcome this complication. Using these methods, this work reposes the problem as a generalized saddle point problem and presents several optimization formulations, including the modified output least squares (MOLS), energy output least squares (EOLS), and equation error (EE) frameworks, for solving the elasticity imaging inverse problem. Subsequently, numerous iterative optimization methods, including gradient, extragradient, and proximal point methods, are explored and applied to solve the related optimization problem. All of the iterative techniques under consideration are implemented and applied to all of the developed optimization frameworks using a representative numerical example in elasticity imaging. A thorough analysis and comparison of the methods is subsequently presented.
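
    To make the output least squares idea concrete, the sketch below identifies a stiffness-like parameter mu entering a linear system K(mu)u = f from a measured displacement z, by minimizing J(mu) = 0.5||u(mu) - z||^2 with an adjoint-based gradient and projected gradient descent with backtracking. It is a heavily simplified, hedged stand-in: a one-dimensional spring chain replaces the mixed finite element elasticity solver, and it is not the MOLS, EOLS, or EE formulations analyzed in this work.

import numpy as np

n = 12                                        # number of chain elements (toy size)
f = np.ones(n)                                # nodal loads (assumed)

def K_of(mu):
    """Tridiagonal stiffness matrix of a spring chain fixed at its left end."""
    K = np.zeros((n, n))
    for i in range(n):                        # element i joins node i-1 (the wall if i = 0) and node i
        K[i, i] += mu[i]
        if i > 0:
            K[i - 1, i - 1] += mu[i]
            K[i - 1, i] -= mu[i]
            K[i, i - 1] -= mu[i]
    return K

def J_and_grad(mu, z):
    """Output least squares misfit J(mu) = 0.5||u(mu) - z||^2 and its adjoint gradient."""
    K = K_of(mu)
    u = np.linalg.solve(K, f)                 # state (forward) solve: K(mu) u = f
    r = u - z
    p = np.linalg.solve(K, r)                 # adjoint solve (K is symmetric)
    du = np.diff(np.concatenate(([0.0], u)))  # element stretches of the state
    dp = np.diff(np.concatenate(([0.0], p)))  # element stretches of the adjoint
    return 0.5 * r @ r, -dp * du              # grad_i = -p^T (dK/dmu_i) u

mu_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, 3.0, n))
z = np.linalg.solve(K_of(mu_true), f)         # synthetic measured displacement
mu = np.ones(n)                               # initial guess
for _ in range(300):                          # projected gradient descent with backtracking
    J, g = J_and_grad(mu, z)
    t = 1.0
    while J_and_grad(np.maximum(mu - t * g, 1e-3), z)[0] >= J and t > 1e-12:
        t *= 0.5
    mu = np.maximum(mu - t * g, 1e-3)
print(J_and_grad(mu, z)[0])                   # the data misfit shrinks as the iteration proceeds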