142 research outputs found

    New classes of exponentially general nonconvex variational inequalities

    Get PDF
    In this paper, some new classes of exponentially general nonconvex variational inequalities are introduced and investigated. Several special cases are discussed as applications of these nonconvex variational inequalities. The projection technique is used to establish the equivalence between the nonconvex variational inequalities and the fixed point problem. This equivalent formulation is used to discuss the existence of a solution. Several inertial-type methods are suggested and analyzed for solving exponentially general nonconvex variational inequalities, using the projection operator technique and dynamical systems. Convergence of the iterative methods is analyzed under suitable and appropriate weak conditions. In this sense, our results can be viewed as an improvement and refinement of previously known results. Our methods of proof are very simple compared with other techniques.
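    For intuition, the fixed-point equivalence underlying such projection methods is u = P_K(u - ρ T(u)), and a generic inertial projection iteration built on it reads y_k = u_k + θ(u_k - u_{k-1}), u_{k+1} = P_K(y_k - ρ T(y_k)). The sketch below only illustrates this generic scheme on a simple box-shaped set, with a toy affine operator and assumed parameters ρ and θ; it is not the paper's method, whose nonconvex setting replaces P_K by the projection onto a prox-regular set.

```python
import numpy as np

# Illustrative inertial projection iteration for a variational inequality
# based on the fixed-point form u = P_K(u - rho * T(u)). Generic sketch only:
# K is a box (projection = clip), T is a toy monotone affine operator, and
# rho, theta are hypothetical step/inertia parameters.

def T(u, A, b):
    """Toy affine operator T(u) = A u + b, standing in for the VI mapping."""
    return A @ u + b

def proj_box(u, lo, hi):
    """Euclidean projection onto the box [lo, hi]."""
    return np.clip(u, lo, hi)

def inertial_projection(A, b, lo, hi, rho=0.1, theta=0.3, iters=200):
    n = A.shape[0]
    u_prev = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        y = u + theta * (u - u_prev)                      # inertial extrapolation
        u_next = proj_box(y - rho * T(y, A, b), lo, hi)   # projection step
        u_prev, u = u, u_next
    return u

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    A = M @ M.T + np.eye(5)          # positive definite, so T is monotone
    b = rng.standard_normal(5)
    print("approximate solution:", inertial_projection(A, b, lo=-1.0, hi=1.0))
```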

    Improved guarantees for optimal Nash equilibrium seeking and bilevel variational inequalities

    Full text link
    We consider a class of hierarchical variational inequality (VI) problems that subsumes VI-constrained optimization and several other important problem classes including the optimal solution selection problem, the optimal Nash equilibrium (NE) seeking problem, and the generalized NE seeking problem. Our main contributions are threefold. (i) We consider bilevel VIs with merely monotone and Lipschitz continuous mappings and devise a single-timescale iteratively regularized extragradient method (IR-EG). We improve the existing iteration complexity results for addressing both bilevel VI and VI-constrained convex optimization problems. (ii) Under the strong monotonicity of the outer-level mapping, we develop a variant of IR-EG, called R-EG, and derive significantly faster guarantees than those in (i). These results appear to be new for both bilevel VIs and VI-constrained optimization. (iii) To our knowledge, complexity guarantees for computing the optimal NE in nonconvex settings do not exist. Motivated by this lacuna, we consider VI-constrained nonconvex optimization problems and devise an inexactly-projected gradient method, called IPR-EG, where the projection onto the unknown set of equilibria is performed using R-EG with a prescribed adaptive termination criterion and regularization parameters. We obtain new complexity guarantees in terms of a residual map and an infeasibility metric for computing a stationary point. We validate the theoretical findings using preliminary numerical experiments for computing the best and the worst Nash equilibria.
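    As a rough illustration of the iteratively regularized idea (not the authors' exact IR-EG scheme), one can run extragradient steps on the regularized map F + ρ_k G, where F is the inner monotone map, G is the outer map, and ρ_k decays to zero. The step size, decay schedule, constraint set, and toy operators below are assumptions made purely for the sketch.

```python
import numpy as np

# Rough sketch of an iteratively regularized extragradient iteration for a
# bilevel VI: extragradient steps on F + rho_k * G over a convex set C, with
# rho_k -> 0. Illustration only, under assumed toy operators and parameters;
# this is not the paper's IR-EG algorithm.

def proj_ball(x, radius=10.0):
    """Projection onto a Euclidean ball of the given radius (the set C here)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def ir_extragradient(F, G, x0, gamma=0.05, iters=500):
    x = x0.copy()
    for k in range(1, iters + 1):
        rho = 1.0 / np.sqrt(k)                  # assumed decaying regularization
        H = lambda z: F(z) + rho * G(z)         # regularized map
        y = proj_ball(x - gamma * H(x))         # extrapolation step
        x = proj_ball(x - gamma * H(y))         # update step
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    M = rng.standard_normal((4, 4))
    A = M - M.T                                 # skew-symmetric, so F is monotone
    F = lambda z: A @ z + np.array([1.0, -1.0, 0.5, 0.0])
    G = lambda z: z                             # outer map selecting a least-norm solution
    print("approximate bilevel VI solution:", ir_extragradient(F, G, x0=np.zeros(4)))
```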

    Inégalités de Kurdyka-Lojasiewicz et convexité : algorithmes et applications (Kurdyka-Lojasiewicz inequalities and convexity: algorithms and applications)

    Get PDF
    This thesis focuses on first-order descent methods for minimization problems. It has three parts. In the first part, we give an overview of local and global error bounds and lay the first bricks of a unified theory by showing the centrality of the Lojasiewicz gradient inequality and relating it to error bounds. In the second part, using the Kurdyka-Lojasiewicz (KL) inequality, we provide new tools to compute the complexity of first-order descent methods in convex minimization. Our approach is completely original and makes use of a one-dimensional worst-case proximal sequence. This result inaugurates a simple methodology: derive an error bound, compute the KL desingularizing function whenever possible, identify the essential constants in the descent method, and finally compute the complexity using the one-dimensional worst-case proximal sequence. Lastly, we extend the extragradient method to minimize the sum of two functions, the first being smooth and the second convex. Under the Kurdyka-Lojasiewicz assumption, we prove that the sequence produced by the extragradient method converges to a critical point of this problem and has finite length. When both functions are convex, we obtain the O(1/k) convergence rate that is classical for the gradient method. Furthermore, we show that our complexity result from the second part applies to this method. Considering the extragradient method is also the occasion to describe exact line search for proximal decomposition methods. We provide details for the implementation of this scheme for the ℓ1-regularized least squares problem and give numerical results which suggest that combining non-accelerated methods with exact line search can be a competitive choice.
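    A minimal sketch of one natural extragradient-type proximal iteration for min f + g (f smooth, g convex), applied to ℓ1-regularized least squares with the soft-thresholding prox. The particular update rule and fixed step size are illustrative assumptions, not necessarily the exact scheme analyzed in the thesis.

```python
import numpy as np

# Minimal sketch of an extragradient-type proximal iteration for
#   min_x f(x) + g(x),  f smooth, g convex,
# illustrated on l1-regularized least squares: f(x) = 0.5*||Ax - b||^2,
# g(x) = lam*||x||_1. Assumed fixed step size; illustration only.

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def extragradient_prox(A, b, lam, iters=300):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of grad f
    s = 1.0 / L                          # assumed step size
    grad_f = lambda x: A.T @ (A @ x - b)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        y = soft_threshold(x - s * grad_f(x), s * lam)   # forward-backward extrapolation
        x = soft_threshold(x - s * grad_f(y), s * lam)   # correction step using grad at y
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A = rng.standard_normal((50, 100))
    x_true = np.zeros(100)
    x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    x_hat = extragradient_prox(A, b, lam=0.1)
    print("nonzeros recovered:", int(np.sum(np.abs(x_hat) > 1e-6)))
```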