Existence and solution methods for equilibria
Equilibrium problems provide a mathematical framework that includes optimization, variational inequality, fixed-point, saddle-point, and noncooperative game problems as particular cases. This general format has received increasing interest over the last decade, mainly because many theoretical and algorithmic results developed for one of these models can often be extended to the others through the unifying language provided by this common format. This survey paper aims at covering the main results concerning the existence of equilibria and the solution methods for finding them
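As a point of reference, the unifying format the abstract refers to is the standard equilibrium problem (notation assumed here: $C$ a nonempty closed convex set, $f$ a bifunction with $f(x,x)=0$):

```latex
\text{find } x^* \in C \text{ such that } f(x^*, y) \ge 0 \quad \text{for all } y \in C.
```

The particular cases arise from particular choices of $f$: minimization of $\varphi$ over $C$ corresponds to $f(x,y) = \varphi(y) - \varphi(x)$, and the variational inequality for an operator $F$ corresponds to $f(x,y) = \langle F(x), y - x \rangle$.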
Ordered Variational Inequalities and Ordered Complementarity Problems in Banach Lattices
We introduce the concepts of ordered variational inequalities and ordered complementarity problems with both domain and range in Banach lattices. Then we apply the Fan-KKM theorem and KKM mappings to study the solvability of these problems
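For context, the classical (unordered) complementarity problem that the ordered versions generalize is formulated over a closed convex cone $K$ with dual cone $K^*$ (this classical statement is standard background, not the ordered formulation introduced in the paper):

```latex
\text{find } x \in K \text{ such that } F(x) \in K^* \text{ and } \langle F(x), x \rangle = 0.
```

The ordered versions replace the scalar pairing by conditions expressed through the lattice order of the Banach lattice.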
Variational inequalities and optimization problems
The main purpose of this thesis is to study weakly sharp solutions of a variational inequality and its dual problem. Based on these, we present finitely convergent algorithms for solving a variational inequality problem and its dual problem, and we establish connections between variational inequalities and engineering problems. We consider a variational inequality problem on a nonempty closed convex subset of R^n. To solve this problem, we establish the equivalence between the solution set of the variational inequality and those of optimization problems by using two gap functions: the primal gap function and the dual gap function. We give properties of these two gap functions, discuss sufficient conditions for the subdifferentiability of the primal gap function, and characterize relations between the Gâteaux differentiability of the primal and dual gap functions. We also establish results on the Lipschitz and locally Lipschitz properties of the primal and dual gap functions. Afterwards, several sufficient conditions for the relevant mapping to be constant on the solution set of a variational inequality are obtained, including relations between the solution sets of a variational inequality and its dual problem, as well as the optimal solution sets of the gap functions. Based on these, we characterize weak sharpness of the solution set of a variational inequality by its primal gap function g and its dual gap function G; in particular, we apply error bounds of g, G, and g + G on C. We also establish finite convergence of algorithms for solving a variational inequality by considering the convergence of a local projection. These results are stated in terms of the weak sharpness of the solution set of a variational inequality and the error bounds of the gap functions of the variational inequality problem
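For the variational inequality VI(F, C) — find $x^* \in C$ with $\langle F(x^*), y - x^* \rangle \ge 0$ for all $y \in C$ — the primal and dual gap functions mentioned above are standardly defined as

```latex
g(x) = \sup_{y \in C} \langle F(x),\, x - y \rangle, \qquad
G(x) = \sup_{y \in C} \langle F(y),\, x - y \rangle.
```

Both are nonnegative on $C$, and $g(x^*) = 0$ (resp. $G(x^*) = 0$) for $x^* \in C$ exactly when $x^*$ solves the primal (resp. dual) variational inequality, which is what makes them usable as merit functions and as sources of error bounds.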
Variational Inclusions with General Over-relaxed Proximal Point and Variational-like Inequalities with Densely Pseudomonotonicity
This dissertation focuses on the existence and uniqueness of solutions of variational inclusion and variational inequality problems, and then develops efficient algorithms to estimate numerical solutions of these problems. The dissertation consists of five chapters. Chapter 1 is an introduction to variational inequality problems, variational inclusion problems, monotone operators, and some basic definitions and preliminaries from convex analysis. Chapter 2 studies a general class of nonlinear implicit inclusion problems; the objective is to show how the Lipschitz continuity condition can be omitted by using an alternating approach to the proximal point algorithm to estimate the numerical solution of the implicit inclusion problems. In Chapter 3 we introduce generalized densely relaxed η-α pseudomonotone operators and generalized relaxed η-α proper quasimonotone operators, as well as relaxed η-α quasimonotone operators. Using these generalized monotonicity notions, we establish existence results for the generalized variational-like inequality in the general setting of Banach spaces. In Chapter 4, we use the auxiliary principle technique to introduce a general algorithm for solutions of densely relaxed pseudomonotone variational-like inequalities. Chapter 5 contains concluding remarks and the scope for future work
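A minimal sketch of the classical proximal point iteration $x_{k+1} = (I + c_k T)^{-1}(x_k)$ that the over-relaxed variants in Chapter 2 build on, applied here to the toy maximal monotone operator $T = \partial f$ with $f(x) = |x|$, whose resolvent is soft-thresholding. The operator, step size, and iteration count are illustrative choices, not those of the dissertation:

```python
def soft_threshold(x, c):
    """Resolvent (I + c*T)^{-1} of T = subdifferential of f(x) = |x|."""
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

def proximal_point(x0, c=1.0, iters=20):
    """Proximal point algorithm: x_{k+1} = J_c(x_k).

    Converges to a zero of the monotone operator T; here the unique
    point with 0 in T(x) is x = 0, since the subdifferential of |x|
    at 0 is the interval [-1, 1].
    """
    x = x0
    for _ in range(iters):
        x = soft_threshold(x, c)
    return x
```

Starting from x0 = 5.0 with c = 1.0, the iterates shrink by 1 per step and reach the zero x = 0 after five iterations; the finite termination here is a feature of this particular toy operator, not of the method in general.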
FROM OPTIMIZATION TO EQUILIBRATION: UNDERSTANDING AN EMERGING PARADIGM IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING
Many existing machine learning (ML) algorithms cannot be viewed as gradient descent on a single objective. The solution trajectories taken by these algorithms naturally exhibit rotation, sometimes forming cycles, a behavior that is not expected with (full-batch) gradient descent. However, these algorithms can be viewed more generally as solving for the equilibrium of a game with possibly multiple competing objectives. Moreover, some recent ML models, specifically generative adversarial networks (GANs) and their variants, are now explicitly formulated as equilibrium problems. Equilibrium problems present challenges beyond those encountered in optimization, such as limit cycles and chaotic attractors, and are able to abstract away some of the difficulties encountered when training models like GANs.
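The rotation described above can be reproduced with a toy example (assumed here for illustration, not taken from the thesis): simultaneous gradient descent-ascent on the bilinear game min_x max_y xy spirals outward around the equilibrium (0, 0) instead of converging to it:

```python
def simultaneous_gda(x, y, lr=0.1, steps=100):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y.

    x descends on f (grad_x f = y); y ascends on f (grad_y f = x).
    Each step rotates the point around the origin and multiplies its
    distance from the origin by sqrt(1 + lr**2), so the trajectory
    spirals outward rather than converging to the equilibrium (0, 0).
    """
    traj = [(x, y)]
    for _ in range(steps):
        x, y = x - lr * y, y + lr * x
        traj.append((x, y))
    return traj

traj = simultaneous_gda(1.0, 0.0)
```

Averaging the iterates, damping the rotation, or using extragradient-style updates are the standard remedies, which is one motivation for studying equilibrium problems on their own terms.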
In this thesis, I aim to advance our understanding of equilibrium problems so as to improve the state of the art in GANs and related domains. In the following chapters, I will present work on designing a no-regret framework for solving monotone equilibrium problems in online or streaming settings (with applications to reinforcement learning), ensuring convergence when training a GAN to fit a normal distribution to data via Crossing-the-Curl, improving state-of-the-art image generation with techniques derived from theory, and borrowing tools from dynamical systems theory to analyze the complex dynamics of GAN training