47 research outputs found

    Hybrid Algorithms for Solving Variational Inequalities, Variational Inclusions, Mixed Equilibria, and Fixed Point Problems

    We present a hybrid iterative algorithm for finding a common element of the set of solutions of a finite family of generalized mixed equilibrium problems, the set of solutions of a finite family of variational inequalities for inverse strongly monotone mappings, the set of fixed points of an infinite family of nonexpansive mappings, and the set of solutions of a variational inclusion in a real Hilbert space. Furthermore, we prove that the proposed hybrid iterative algorithm converges strongly under some mild conditions imposed on the algorithm parameters. Our hybrid algorithm is based on Korpelevich's extragradient method, the hybrid steepest-descent method, and the viscosity approximation method.
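    As background on the extragradient component, the basic Korpelevich step takes the form y_k = P_C(x_k − λF(x_k)), x_{k+1} = P_C(x_k − λF(y_k)). The sketch below is illustrative only: the operator F, the feasible set C, and the step size are hypothetical choices for a toy monotone affine variational inequality, not the paper's actual setting.

    ```python
    import numpy as np

    def extragradient(F, project, x0, lam=0.1, iters=2000):
        """Korpelevich extragradient method for VI(F, C):
        y_k = P_C(x_k - lam*F(x_k)); x_{k+1} = P_C(x_k - lam*F(y_k))."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            y = project(x - lam * F(x))   # predictor step
            x = project(x - lam * F(y))   # corrector step using F at the predictor
        return x

    # Toy data: F(x) = A x + b is strongly monotone (symmetric part of A is
    # positive definite), and C is the box [0, 1]^2.
    A = np.array([[2.0, 1.0], [-1.0, 2.0]])
    b = np.array([-1.0, 0.0])
    F = lambda x: A @ x + b
    project = lambda x: np.clip(x, 0.0, 1.0)  # projection onto the box
    sol = extragradient(F, project, np.zeros(2))  # -> approx. (0.4, 0.2)
    ```

    Here the unconstrained zero of F lies inside the box, so the VI solution coincides with it; the step size must satisfy λ < 1/L, where L is the Lipschitz constant of F.
    
    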

    A new iterative method for generalized equilibrium and constrained convex minimization problems

    The gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. In this paper, we combine the GPA with the averaged mapping approach to propose an explicit composite iterative scheme for finding a common solution of a generalized equilibrium problem and a constrained convex minimization problem. We then prove a strong convergence theorem which improves and extends some recent results.
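    For reference, the plain gradient-projection iteration is x_{k+1} = P_C(x_k − γ∇f(x_k)). A minimal sketch follows; the objective, constraint set, and step size are hypothetical illustration choices (the paper's composite scheme additionally handles the generalized equilibrium problem).

    ```python
    import numpy as np

    def gradient_projection(grad_f, project, x0, step=0.1, iters=1000):
        """Gradient-projection algorithm: x_{k+1} = P_C(x_k - step * grad_f(x_k))."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = project(x - step * grad_f(x))
        return x

    # Toy problem: minimize ||x - b||^2 over the unit ball.
    b = np.array([2.0, 0.0])
    grad_f = lambda x: 2 * (x - b)
    project = lambda x: x / max(1.0, np.linalg.norm(x))  # projection onto unit ball
    x_star = gradient_projection(grad_f, project, np.zeros(2))  # -> (1.0, 0.0)
    ```

    Since b lies outside the unit ball, the iterates are pulled toward b and projected back, converging to the boundary point (1, 0).
    
    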

    A viscosity of Cesàro mean approximation method for split generalized equilibrium, variational inequality and fixed point problems

    In this paper, we introduce and study an iterative viscosity approximation method, modified by the Cesàro mean approximation, for finding a common solution of split generalized equilibrium, variational inequality and fixed point problems. Under suitable conditions, we prove a strong convergence theorem for the sequences generated by the proposed iterative scheme. The results presented in this paper generalize, extend and improve the corresponding results of Shimizu an
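    A generic viscosity scheme built on Cesàro means of a nonexpansive mapping T iterates x_{n+1} = α_n f(x_n) + (1 − α_n)(1/(n+1)) Σ_{i=0}^{n} T^i x_n, where f is a contraction. The following is only a sketch under assumed choices of T, f, and the weights α_n; the paper's actual scheme also incorporates the split equilibrium and variational inequality constraints.

    ```python
    def viscosity_cesaro(T, f, x0, iters=500):
        """Viscosity approximation with Cesaro means (generic sketch):
        x_{n+1} = a_n * f(x_n) + (1 - a_n) * mean(T^0 x_n, ..., T^n x_n)."""
        x = x0
        for n in range(iters):
            a_n = 1.0 / (n + 2)          # diminishing viscosity weight
            total, y = 0.0, x
            for _ in range(n + 1):       # accumulate T^0 x, ..., T^n x
                total += y
                y = T(y)
            x = a_n * f(x) + (1 - a_n) * total / (n + 1)
        return x

    # Toy 1-D example: T is nonexpansive with fixed-point set [1, inf),
    # and f is a contraction pulling toward 0.
    T = lambda x: max(x, 1.0)
    f = lambda x: 0.5 * x
    x_star = viscosity_cesaro(T, f, 2.0)  # converges toward 1.0
    ```

    The limit is the fixed point of T selected by the viscosity criterion, here the point of [1, ∞) closest to the contraction's pull, namely 1.
    
    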

    Contraction-mapping algorithm for the equilibrium problem over the fixed point set of a nonexpansive semigroup

    In this paper, we consider the proximal mapping of a bifunction. Under the Lipschitz-type and the strong monotonicity conditions, we prove that the proximal mapping is contractive. Based on this result, we construct an iterative process for solving the equilibrium problem over the fixed point sets of a nonexpansive semigroup and prove a weak convergence theorem for this algorithm. Also, some preliminary numerical experiments and comparisons are presented. First Published Online: 21 Nov 201
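    To illustrate the proximal mapping of a bifunction: for f(x, y) = ⟨F(x), y − x⟩ with λ > 0, the mapping argmin_y { λ f(x, y) + ½‖y − x‖² : y ∈ C } reduces to the projected step P_C(x − λF(x)), and under strong monotonicity the fixed-point iteration of this mapping is contractive. The operator, set, and parameters below are hypothetical toy choices, not the paper's semigroup setting.

    ```python
    import numpy as np

    def prox_bifunction(x, F, project, lam):
        """Proximal mapping of the bifunction f(x, y) = <F(x), y - x>:
        argmin_y lam*<F(x), y - x> + 0.5*||y - x||^2 over C,
        which in this special case equals P_C(x - lam * F(x))."""
        return project(x - lam * F(x))

    # Toy strongly monotone operator with solution at (1, 1) inside the box.
    A = np.array([[3.0, 0.0], [0.0, 2.0]])
    b = np.array([-3.0, -2.0])
    F = lambda x: A @ x + b
    project = lambda x: np.clip(x, 0.0, 2.0)

    x = np.zeros(2)
    for _ in range(200):                      # Picard iteration of the prox map
        x = prox_bifunction(x, F, project, lam=0.2)
    # x -> (1.0, 1.0)
    ```

    Each coordinate update here is an affine contraction (factors 0.4 and 0.6), so the iterates converge geometrically to the equilibrium point.
    
    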

    Triple Hierarchical Variational Inequalities with Constraints of Mixed Equilibria, Variational Inequalities, Convex Minimization, and Hierarchical Fixed Point Problems

    We introduce and analyze a hybrid iterative algorithm by virtue of Korpelevich's extragradient method, the viscosity approximation method, the hybrid steepest-descent method, and the averaged mapping approach to the gradient-projection algorithm. It is proven that, under appropriate assumptions, the proposed algorithm converges strongly to a common element of the fixed point set of infinitely many nonexpansive mappings, the solution set of finitely many generalized mixed equilibrium problems (GMEPs), the solution set of finitely many variational inequality problems (VIPs), the solution set of a general system of variational inequalities (GSVI), and the set of minimizers of a convex minimization problem (CMP), which is precisely the unique solution of a triple hierarchical variational inequality (THVI) in a real Hilbert space. We also consider the application of the proposed algorithm to a hierarchical fixed point problem with constraints of finitely many GMEPs, finitely many VIPs, a GSVI, and a CMP. The results obtained in this paper improve and extend the corresponding results announced by many others.

    On the resolution of misspecification in stochastic optimization, variational inequality, and game-theoretic problems

    Traditionally, much of the research in the field of optimization algorithms has assumed that problem parameters are correctly specified. Recent efforts under the robust optimization framework have relaxed this assumption by allowing unknown parameters to vary in a prescribed uncertainty set and by subsequently solving for a worst-case solution. This dissertation considers a rather different approach in which the unknown or misspecified parameter is the solution to a suitably defined (stochastic) learning problem based on having access to a set of samples. Practical approaches to resolving such a set of coupled problems have been either sequential or direct variational approaches. The former entails the following steps: (i) a solution to the learning problem for the parameters is first obtained; and (ii) a solution to the associated parametrized computational problem is then obtained by using (i). Such avenues prove difficult to adopt, particularly since the learning process has to be terminated finitely; consequently, in large-scale or stochastic instances, sequential approaches may often be corrupted by error. On the other hand, a variational approach requires that the problem be recast as a possibly non-monotone stochastic variational inequality problem, but no first-order (stochastic) schemes are currently available for the solution of such problems. Motivated by these challenges, this thesis focuses on studying joint schemes of optimization and learning in three settings: (i) misspecified stochastic optimization and variational inequality problems, (ii) misspecified stochastic Nash games, and (iii) misspecified Markov decision processes. In the first part of this thesis, we present a coupled stochastic approximation scheme which simultaneously solves both the optimization and the learning problems.
    The obtained schemes are shown to be equipped with almost sure convergence properties in regimes where the function f is either strongly convex or merely convex. Importantly, the scheme displays the optimal rate for strongly convex problems, while in merely convex regimes, through an averaging approach, we quantify the degradation associated with learning by noting that the error in function value after K steps is O(√(ln(K)/K)), rather than the O(√(1/K)) obtained when θ* is available. Notably, when the averaging window is modified suitably, the original rate of O(√(1/K)) is recovered. Additionally, we consider an online counterpart of the misspecified optimization problem and provide a non-asymptotic bound on the average regret with respect to an offline counterpart. We also extend these statements to a class of stochastic variational inequality problems, an object that unifies stochastic convex optimization problems and a range of stochastic equilibrium problems. Analogous almost-sure convergence statements are provided in strongly monotone and merely monotone regimes, the latter facilitated by an iterative Tikhonov regularization. In the merely monotone regime, under a weak-sharpness requirement, we quantify the degradation associated with learning and show that the expected error associated with dist(x_k, X*) is O(√(ln(K)/K)). In the second part of this thesis, we present schemes for computing equilibria for two classes of convex stochastic Nash games complicated by a parametric misspecification, a natural concern in the control of large-scale networked engineered systems.
    In both schemes, players learn the equilibrium strategy while resolving the misspecification: (1) Stochastic Nash games: we present a set of coupled stochastic approximation schemes distributed across agents, in which the first scheme updates each agent's strategy via a projected (stochastic) gradient step, while the second updates every agent's belief regarding its misspecified parameter using an independently specified learning problem. We proceed to show that the produced sequences converge to the true equilibrium strategy and the true parameter in an almost sure sense. Surprisingly, convergence in the equilibrium strategy achieves the optimal rate of convergence in a mean-squared sense, with a quantifiable degradation in the rate constant. (2) Stochastic Nash-Cournot games with unobservable aggregate output: we refine (1) to a Cournot setting in which the tuple of strategies is unobservable, while payoff functions and strategy sets are public knowledge through a common knowledge assumption. By utilizing observations of noise-corrupted prices, iterative fixed-point schemes are developed that allow for simultaneously learning the equilibrium strategies and the misspecified parameter in an almost-sure sense. In the third part of this thesis, we consider the solution of a finite-state infinite-horizon Markov decision process (MDP) in which both the transition matrix and the cost function are misspecified, the latter in a parametric sense. We consider a data-driven regime in which the learning problem is a stochastic convex optimization problem that resolves the misspecification.
    Via such a framework, we make the following contributions: (1) we first show that a misspecified value iteration scheme converges almost surely to its true counterpart and that the mean-squared error after K iterations is O(√(1/K)); (2) an analogous asymptotic almost-sure convergence statement is provided for misspecified policy iteration; and (3) finally, we present a constant-steplength misspecified Q-learning scheme and show that a suitable error metric is O(√(1/K)) + O(√δ) after K iterations, where δ is a bound on the steplength.
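    The coupled stochastic approximation idea from the first part can be sketched in a minimal form: one sequence performs a stochastic gradient step on the optimization problem using the current parameter estimate, while a second sequence simultaneously solves the learning problem from samples. The quadratic objective, the Gaussian sampling model, and all constants below are hypothetical illustration choices, not the dissertation's setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: minimize f(x; theta*) = 0.5*(x - theta*)^2, where the
    # misspecified parameter theta* minimizes the learning problem
    # g(theta) = 0.5*E[(theta - d)^2] with samples d ~ N(2, 0.1^2).
    theta_true = 2.0

    x, theta = 0.0, 0.0
    for k in range(1, 20001):
        gamma = 1.0 / k                                # diminishing steplength
        d = theta_true + 0.1 * rng.standard_normal()   # sample for learning problem
        theta = theta - gamma * (theta - d)            # learning update (theta_k)
        noise = 0.1 * rng.standard_normal()
        x = x - gamma * ((x - theta) + noise)          # optimization update uses theta_k
    # theta -> theta_true and x -> argmin f(.; theta_true), here both 2.0
    ```

    With the 1/k steplength, the learning update is exactly a running sample mean, so theta converges almost surely to theta*, and the optimization sequence tracks the drifting estimate rather than waiting for learning to terminate, which is the point of the coupled scheme.
    
    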