    Zeroth-Order Methods for Convex-Concave Minmax Problems: Applications to Decision-Dependent Risk Minimization

    Min-max optimization is emerging as a key framework for analyzing problems of robustness to strategically and adversarially generated data. We propose a random-reshuffling-based, gradient-free Optimistic Gradient Descent-Ascent algorithm for solving convex-concave min-max problems with finite-sum structure. We prove that the algorithm enjoys the same convergence rate as that of zeroth-order algorithms for convex minimization problems. We further specialize the algorithm to solve distributionally robust, decision-dependent learning problems, where gradient information is not readily available. Through illustrative simulations, we observe that our proposed approach learns models that are simultaneously robust against adversarial distribution shifts and strategic decisions from the data sources, and outperforms existing methods from the strategic classification literature. Comment: 32 pages, 5 figures
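The core idea, a zeroth-order (gradient-free) optimistic gradient descent-ascent loop, can be sketched on a toy problem. This is a minimal illustration assuming a standard two-point smoothing estimator and a simple strongly-convex-strongly-concave objective; the objective, step size, and iteration count are illustrative choices, not the paper's decision-dependent setup:

```python
import numpy as np

def f(x, y):
    # Toy strongly-convex-strongly-concave saddle problem (illustrative,
    # not the decision-dependent objective from the paper).
    return 0.5 * float(x @ x) + float(x @ y) - 0.5 * float(y @ y)

def zo_grad(h, z, rng, delta=1e-3):
    """Two-point zeroth-order estimate of the gradient of h at z:
    only function evaluations are used, no gradient oracle."""
    u = rng.standard_normal(z.shape)
    return (h(z + delta * u) - h(z - delta * u)) / (2 * delta) * u

rng = np.random.default_rng(0)
x, y = rng.standard_normal(2), rng.standard_normal(2)
gx_prev, gy_prev = np.zeros_like(x), np.zeros_like(y)
eta = 0.05
for _ in range(2000):
    gx = zo_grad(lambda v: f(v, y), x, rng)
    gy = zo_grad(lambda v: f(x, v), y, rng)
    # Optimistic step: extrapolate using the previous gradient estimate.
    x = x - eta * (2 * gx - gx_prev)   # descent in x
    y = y + eta * (2 * gy - gy_prev)   # ascent in y
    gx_prev, gy_prev = gx, gy

print(np.linalg.norm(x), np.linalg.norm(y))  # both approach the saddle at 0
```

The two-point estimator is unbiased for the smoothed objective, which is why the loop behaves like its first-order counterpart up to extra variance.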

    Experimental Evaluation of Iterative Methods for Games

    Min-max optimization problems are a class of problems that commonly arise in game theory, machine learning, deep learning, and adversarial training. Deterministic gradient methods, such as gradient descent ascent (GDA), Extragradient (EG), and Hamiltonian Gradient Descent (HGD), are usually implemented to solve these problems. In large-scale settings, stochastic variants of these gradient methods are preferred because of their cheap per-iteration cost. To further increase optimization efficiency, various improvements to deterministic and stochastic gradient methods have been proposed, such as acceleration, variance reduction, and random reshuffling. In this work, we explore advanced iterative methods for solving min-max optimization problems, including deterministic gradient methods combined with acceleration and stochastic gradient methods combined with variance reduction and random reshuffling. We use experiments to evaluate the performance of the classical and advanced iterative methods on both bilinear and quadratic games. With this experimental approach, we show that the most advanced iterative methods in the deterministic and stochastic settings yield improvements in iteration complexity.
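The bilinear-game comparison described above can be reproduced in miniature. The sketch below (step size, matrix, and iteration count are illustrative assumptions, not the thesis's experimental setup) shows the classical contrast: plain GDA spirals away from the saddle of a bilinear game, while Extragradient converges by evaluating the gradient at a look-ahead point:

```python
import numpy as np

# Bilinear game f(x, y) = x^T A y. GDA rotates outward and diverges;
# Extragradient (EG) contracts toward the saddle at the origin.
A = np.diag([1.0, 0.5, 2.0])   # illustrative game matrix
eta = 0.1                      # illustrative step size

def run(method, steps=500):
    x, y = np.ones(3), np.ones(3)
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x                  # grad_x f and grad_y f
        if method == "gda":
            x, y = x - eta * gx, y + eta * gy
        else:  # EG: take a half-step, then update with the look-ahead gradient
            xh, yh = x - eta * gx, y + eta * gy
            x, y = x - eta * (A @ yh), y + eta * (A.T @ xh)
    return float(np.sqrt(np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2))

print("GDA distance to saddle:", run("gda"))  # grows
print("EG  distance to saddle:", run("eg"))   # shrinks
```

Per coordinate pair with singular value s, the GDA map has spectral radius sqrt(1 + (eta*s)^2) > 1, while the EG map has modulus below 1 whenever eta*s < 1, which is exactly the behavior the experiments compare.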