
    An efficient projection-type method for monotone variational inequalities in Hilbert spaces

    We consider the monotone variational inequality problem in a Hilbert space and describe a projection-type method with inertial terms that has the following properties: (a) the method generates a strongly convergent iteration sequence; (b) the method requires, at each iteration, only one projection onto the feasible set and two evaluations of the operator; (c) the method is designed for variational inequalities whose underlying operator is monotone and uniformly continuous; (d) the method includes an inertial term. The latter is also shown to speed up convergence in our numerical results. A comparison with some related methods is given and indicates that the new method is promising.
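    The per-iteration pattern described in (b) and (d) can be illustrated with a minimal sketch: one inertial extrapolation, a single projection onto the feasible set, and two operator evaluations, in the style of a Tseng-type forward-backward-forward step. The ball feasible set, the affine operator, and all parameter values below are illustrative assumptions, and the anchoring the paper uses to obtain strong convergence is omitted.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto {x : ||x|| <= radius} (illustrative feasible set)
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def inertial_step(F, x, x_prev, lam=0.1, alpha=0.2, proj=project_ball):
    # One iteration: inertial extrapolation, one projection, two operator evaluations.
    w = x + alpha * (x - x_prev)      # inertial term
    Fw = F(w)                         # first operator evaluation
    y = proj(w - lam * Fw)            # the only projection onto the feasible set
    Fy = F(y)                         # second operator evaluation
    return y - lam * (Fy - Fw)        # Tseng-type correction, no extra projection

# Illustrative monotone (affine) operator F(x) = A x + b with A positive definite.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
F = lambda x: A @ x + b

x_prev, x = np.zeros(2), np.ones(2)
for _ in range(200):
    x, x_prev = inertial_step(F, x, x_prev), x
print(x)
```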

    Extragradient method with feasible inexact projection to variational inequality problem

    The variational inequality problem in a finite-dimensional Euclidean space is addressed in this paper, and two inexact variants of the extragradient method are proposed to solve it. Instead of computing exact projections onto the constraint set, as in previous versions of the extragradient method, the proposed methods compute feasible inexact projections onto the constraint set using a relative error criterion. The first version is a counterpart of the classical extragradient method with a constant step size. To establish its convergence we need to assume that the operator is pseudo-monotone and Lipschitz continuous, as in the standard approach. The second version, instead of using a fixed step size, finds a suitable step size at each iteration by performing a line search. Like the classical extragradient method, the proposed method performs just two projections onto the feasible set at each iteration. A full convergence analysis is provided, with no Lipschitz continuity assumption on the operator defining the variational inequality problem.
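    For reference, a minimal sketch of the exact-projection counterpart (the classical extragradient method of Korpelevich with a constant step size) is shown below; the box feasible set, the skew-symmetric operator, and the step size are illustrative assumptions, and the paper's contribution is to replace the exact projection with a feasible inexact one under a relative error criterion.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Exact projection onto a box; the paper allows a feasible *inexact* projection here.
    return np.clip(x, lo, hi)

def extragradient(F, x0, step=0.1, iters=500, proj=project_box):
    # Classical extragradient: two projections and two operator evaluations per iteration.
    x = x0.copy()
    for _ in range(iters):
        y = proj(x - step * F(x))   # prediction step (first projection)
        x = proj(x - step * F(y))   # correction step (second projection)
    return x

# Illustrative monotone operator: a rotation, where plain projected gradient fails.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: A @ z
print(extragradient(F, np.array([1.0, 1.0])))   # iterates approach the solution 0
```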

    Adaptive proximal algorithms for convex optimization under local Lipschitz continuity of the gradient

    Backtracking linesearch is the de facto approach for minimizing continuously differentiable functions with locally Lipschitz gradient. In recent years, it has been shown that in the convex setting it is possible to avoid linesearch altogether, and to allow the stepsize to adapt based on a local smoothness estimate without any backtracks or evaluations of the function value. In this work we propose an adaptive proximal gradient method, adaPG, that uses novel estimates of the local smoothness modulus, which lead to less conservative stepsize updates, and that can additionally cope with nonsmooth terms. This idea is extended to the primal-dual setting, where an adaptive three-term primal-dual algorithm, adaPD, is proposed that can be viewed as an extension of the PDHG method. Moreover, in this setting the "essentially" fully adaptive variant adaPD+ is proposed, which avoids evaluating the linear operator norm by invoking a backtracking procedure that, remarkably, does not require extra gradient evaluations. Numerical simulations demonstrate the effectiveness of the proposed algorithms compared to the state of the art.
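    As a rough illustration of adapting the stepsize from a local smoothness estimate instead of backtracking, here is a proximal-gradient sketch in which the estimate L_k = ||grad f(x_k) - grad f(x_{k-1})|| / ||x_k - x_{k-1}|| drives the stepsize. The update rule is a simplified stand-in rather than the adaPG estimates from the paper, and the lasso-type test problem and all constants are assumptions.

```python
import numpy as np

def prox_l1(x, t):
    # Proximal operator of t*||.||_1 (soft-thresholding), an example nonsmooth term.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def adaptive_prox_grad(grad_f, x0, lam=0.1, gamma=1e-3, iters=500):
    # Proximal gradient whose stepsize adapts to a local Lipschitz estimate of the
    # gradient, with no backtracks and no function-value evaluations.
    x_prev, g_prev, gamma_prev = x0, grad_f(x0), gamma
    x = prox_l1(x_prev - gamma * g_prev, gamma * lam)
    for _ in range(iters):
        g = grad_f(x)
        L_loc = np.linalg.norm(g - g_prev) / max(np.linalg.norm(x - x_prev), 1e-12)
        theta = gamma / gamma_prev
        gamma_prev = gamma
        # Grow the stepsize cautiously while keeping it below 1/(2*L_loc).
        gamma = min(gamma * np.sqrt(1.0 + theta), 0.5 / max(L_loc, 1e-12))
        x_prev, g_prev = x, g
        x = prox_l1(x - gamma * g, gamma * lam)
    return x

# Usage: lasso-type objective 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
print(adaptive_prox_grad(grad_f, np.zeros(5)))
```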

    Weak convergence for variational inequalities with inertial-type method

    Weak convergence of an inertial iterative method for solving variational inequalities is the focus of this paper. The cost function is assumed to be monotone and non-Lipschitz continuous. We propose a projection-type method with inertial terms and give a weak convergence analysis under appropriate conditions. Numerical tests are performed and compared with relevant methods from the literature to show the efficiency and advantages of the proposed method.
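    When the operator is monotone but not Lipschitz continuous, projection-type methods of this kind typically replace a fixed stepsize with an Armijo-type search; the sketch below shows that building block in isolation. The acceptance criterion, the box feasible set, and the cubic operator are illustrative assumptions and need not match the stepsize rule used in the paper.

```python
import numpy as np

def armijo_stepsize(F, x, proj, sigma=1.0, beta=0.5, mu=0.4, max_halvings=30):
    # Find lam such that lam * ||F(x) - F(y)|| <= mu * ||x - y|| with
    # y = P_C(x - lam * F(x)); useful when no Lipschitz constant is available.
    Fx = F(x)
    lam = sigma
    for _ in range(max_halvings):
        y = proj(x - lam * Fx)
        if lam * np.linalg.norm(Fx - F(y)) <= mu * np.linalg.norm(x - y):
            break
        lam *= beta
    return lam, y

proj = lambda z: np.clip(z, -1.0, 1.0)   # box feasible set (illustrative)
F = lambda z: z**3 + z                   # monotone but not globally Lipschitz
lam, y = armijo_stepsize(F, np.array([0.9, -0.8]), proj)
print(lam, y)
```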

    Computing Algorithm for an Equilibrium of the Generalized Stackelberg Game

    The 1-N generalized Stackelberg game (single-leader multi-follower game) involves both the hierarchical interaction between a leader and its followers and the simultaneous interaction among the followers. However, obtaining the optimal strategy of the leader is generally challenging due to these complex interactions. Here, we propose a general methodology to find a generalized Stackelberg equilibrium of a 1-N generalized Stackelberg game. Specifically, we first provide conditions under which a generalized Stackelberg equilibrium always exists, using the variational equilibrium concept. Next, to find an equilibrium in polynomial time, we transform the 1-N generalized Stackelberg game into a 1-1 Stackelberg game whose Stackelberg equilibrium is identical to that of the original. Finally, we propose an effective computation procedure, based on the projected implicit gradient descent algorithm, to find a Stackelberg equilibrium of the transformed 1-1 Stackelberg game. We validate the proposed approach on two problems of deriving operating strategies for EV charging stations: (1) optimizing a one-time charging price for EV users, in which a platform operator determines the price of electricity and EV users determine the optimal amount of charging for their satisfaction; and (2) determining spatially varying charging prices that optimally balance demand and supply across charging stations.
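    A toy version of the overall pipeline (inner follower equilibrium, outer projected gradient step on the leader's price) can be sketched as follows. The aggregative follower game, the revenue objective, the price box, and all numbers are hypothetical, and the finite-difference gradient through the equilibrium map stands in for the projected implicit gradient used in the paper.

```python
import numpy as np

def followers_equilibrium(p, u, c=0.3, cap=2.0, iters=200):
    # Toy aggregative follower game: follower i chooses x_i in [0, cap] minimizing
    # p*x_i + 0.5*x_i**2 + c*(sum_j x_j)*x_i - u[i]*x_i.  Best-response iteration
    # converges to its variational equilibrium in this contractive setting.
    x = np.zeros(len(u))
    for _ in range(iters):
        for i in range(len(u)):
            others = x.sum() - x[i]
            x[i] = np.clip((u[i] - p - c * others) / (1.0 + 2.0 * c), 0.0, cap)
    return x

def leader_revenue(p, u):
    # Leader (platform operator) revenue at the induced follower equilibrium.
    return p * followers_equilibrium(p, u).sum()

def projected_gradient_leader(u, p0=1.0, p_box=(0.1, 5.0), step=0.05, iters=100, h=1e-4):
    # Projected gradient ascent on the leader's price; the gradient through the
    # equilibrium map is approximated by central finite differences here.
    p = p0
    for _ in range(iters):
        grad = (leader_revenue(p + h, u) - leader_revenue(p - h, u)) / (2.0 * h)
        p = float(np.clip(p + step * grad, *p_box))   # projection onto the price box
    return p

u = np.array([3.0, 2.5, 4.0])            # hypothetical follower utilities
p_star = projected_gradient_leader(u)
print(p_star, followers_equilibrium(p_star, u))
```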

    Convergence Analysis and Improvements for Projection Algorithms and Splitting Methods

    Non-smooth convex optimization problems occur in all fields of engineering. A common approach to solving this class of problems is proximal algorithms, or splitting methods. These first-order optimization algorithms are often simple, well suited to large-scale problems, and have a low computational cost per iteration. Essentially, they encode the solution to an optimization problem as a fixed point of some operator, and iterating this operator eventually results in convergence to an optimal point. However, as for other first-order methods, the convergence rate is heavily dependent on the conditioning of the problem. Even though the per-iteration cost is usually low, the number of iterations can become prohibitively large for ill-conditioned problems, especially if a high-accuracy solution is sought. In this thesis, a few methods for alleviating this slow convergence are studied, which can be divided into two main approaches. The first consists of heuristic methods that can be applied to a range of fixed-point algorithms; they are based on an understanding of the typical behavior of these algorithms. While these methods are shown to converge, they come with no guarantees on improved convergence rates. The other approach studies the theoretical rates of a class of projection methods used to solve convex feasibility problems, i.e., problems where the goal is to find a point in the intersection of two, or possibly more, convex sets. A study of how the parameters in the algorithm affect the theoretical convergence rate is presented, as well as how they can be chosen to optimize this rate.
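    As a minimal concrete instance of the projection methods analyzed in the second part, the sketch below runs relaxed alternating projections for the intersection of a half-space and a ball; the relaxation parameter alpha plays the role of the algorithm parameters whose effect on the convergence rate is studied. The two sets and all values are illustrative assumptions.

```python
import numpy as np

def proj_halfspace(x, a, b):
    # Projection onto the half-space {x : <a, x> <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_ball(x, center, r):
    # Projection onto the Euclidean ball of radius r around center.
    d = x - center
    nrm = np.linalg.norm(d)
    return x if nrm <= r else center + (r / nrm) * d

def relaxed_alternating_projections(x0, pA, pB, alpha=1.0, iters=200):
    # Fixed-point iteration x_{k+1} = (1 - alpha) * x_k + alpha * P_A(P_B(x_k)).
    x = x0.copy()
    for _ in range(iters):
        x = (1.0 - alpha) * x + alpha * pA(pB(x))
    return x

a, b = np.array([1.0, 1.0]), 0.5           # half-space <a, x> <= b
center, r = np.array([2.0, 0.0]), 1.8      # ball around (2, 0); intersection is nonempty
pA = lambda x: proj_halfspace(x, a, b)
pB = lambda x: proj_ball(x, center, r)
print(relaxed_alternating_projections(np.array([5.0, 5.0]), pA, pB))
```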