96 research outputs found
On the Convergence of (Stochastic) Gradient Descent with Extrapolation for Non-Convex Optimization
Extrapolation is a well-known technique for solving convex optimization problems and
variational inequalities, and it has recently attracted attention for non-convex
optimization. Several recent works have empirically shown its success in
machine learning tasks. However, it has not been analyzed for non-convex
minimization, and a gap remains between theory and practice.
In this paper, we analyze gradient descent and stochastic gradient descent with
extrapolation for finding an approximate first-order stationary point in smooth
non-convex optimization problems. Our convergence upper bounds show that the
algorithms with extrapolation converge faster than their counterparts without extrapolation.
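The abstract does not spell out the update rule, so the following is a minimal sketch of one common extrapolated form, in which the gradient is evaluated at an extrapolated point rather than at the current iterate; the paper's exact scheme, step size `eta`, and extrapolation weight `gamma` are assumptions here, and the test function is our own toy example:

```python
import numpy as np

def grad_descent_extrapolation(grad, x0, eta=0.1, gamma=0.5, iters=200):
    """Gradient descent with extrapolation (one common variant):
    evaluate the gradient at y_t = x_t + gamma * (x_t - x_{t-1})
    instead of at x_t, then take the gradient step from x_t."""
    x_prev = np.asarray(x0, dtype=float).copy()
    x = x_prev.copy()
    for _ in range(iters):
        y = x + gamma * (x - x_prev)       # extrapolation step
        x_prev, x = x, x - eta * grad(y)   # gradient step using grad at y
    return x

# Smooth non-convex toy objective f(x) = x^2 + 3 sin^2(x),
# whose only stationary point is x = 0.
g = lambda x: 2 * x + 3 * np.sin(2 * x)   # f'(x)

x_star = grad_descent_extrapolation(g, np.array([2.5]))
grad_norm = np.linalg.norm(g(x_star))     # small at an approximate stationary point
```

The quantity `grad_norm` is exactly what the abstract's convergence bounds control: the iterate is an approximate first-order stationary point when the gradient norm is small.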
Semi-Anchored Multi-Step Gradient Descent Ascent Method for Structured Nonconvex-Nonconcave Composite Minimax Problems
Minimax problems, such as generative adversarial networks, adversarial
training, and fair training, are widely solved in practice by a multi-step gradient descent
ascent (MGDA) method. However, its convergence guarantees are
limited. In this paper, inspired by the primal-dual hybrid gradient method, we
propose a new semi-anchoring (SA) technique for the MGDA method. This enables the
MGDA method to find a stationary point of a structured nonconvex-nonconcave
composite minimax problem whose saddle-subdifferential operator satisfies the
weak Minty variational inequality condition. The resulting method, named
SA-MGDA, is built upon a Bregman proximal point method. We further develop its
backtracking line-search version, and its non-Euclidean version for smooth
adaptable functions. Numerical experiments, including a fair classification
training, are provided
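The plain MGDA baseline that the abstract names can be sketched as follows. This is not the paper's SA-MGDA (the semi-anchoring and Bregman proximal point machinery is omitted); the step sizes, inner-loop count, and the toy strongly-convex-strongly-concave objective are all our own assumptions for illustration:

```python
import numpy as np

def mgda(gx, gy, x0, y0, eta_x=0.05, eta_y=0.2, inner=10, outer=300):
    """Multi-step gradient descent ascent (MGDA): several ascent
    steps on the max player y per descent step on the min player x."""
    x, y = float(x0), float(y0)
    for _ in range(outer):
        for _ in range(inner):       # multi-step ascent on y
            y += eta_y * gy(x, y)
        x -= eta_x * gx(x, y)        # single descent step on x
    return x, y

# Toy objective (ours, not the paper's): f(x, y) = 0.5 x^2 + x y - 0.5 y^2,
# with its unique saddle point at (0, 0).
gx = lambda x, y: x + y    # df/dx
gy = lambda x, y: x - y    # df/dy

x, y = mgda(gx, gy, 2.0, -1.0)   # both converge toward the saddle point
```

The inner loop drives `y` close to the best response for the current `x`, which is what distinguishes MGDA from simultaneous one-step gradient descent ascent.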
Iterative Methods for the Elasticity Imaging Inverse Problem
Cancers of the soft tissue are among the deadliest diseases worldwide, and effective treatment of such cancers relies on early and accurate detection of tumors within the interior of the body. One such diagnostic tool, known as elasticity imaging or elastography, uses measurements of tissue displacement to reconstruct the variable elasticity between healthy and unhealthy tissue inside the body. This gives rise to a challenging parameter identification inverse problem: identifying the Lamé parameter μ in a system of partial differential equations in linear elasticity. Due to the near incompressibility of human tissue, however, common techniques for solving the direct and inverse problems are rendered ineffective by a phenomenon known as the “locking effect”. Alternative methods, such as mixed finite element methods, must be applied to overcome this complication. Using these methods, this work reposes the problem as a generalized saddle point problem and presents several optimization formulations, including the modified output least squares (MOLS), energy output least squares (EOLS), and equation error (EE) frameworks, for solving the elasticity imaging inverse problem. Subsequently, numerous iterative optimization methods, including gradient, extragradient, and proximal point methods, are explored and applied to solve the related optimization problem. Implementations of all of the iterative techniques under consideration are applied to each of the developed optimization frameworks using a representative numerical example in elasticity imaging. A thorough analysis and comparison of the methods is subsequently presented.
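Of the iterative methods the abstract lists, the extragradient method is the least standard, so a minimal sketch may help. This applies the classical Korpelevich scheme to a toy monotone operator, not to the elasticity operator itself; the operator, step size, and iteration count are assumptions for illustration:

```python
import numpy as np

def extragradient(F, z0, eta=0.1, iters=2000):
    """Korpelevich extragradient: take a probe step, then update
    using the operator evaluated at the probe point."""
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        z_half = z - eta * F(z)       # probe (extrapolation) step
        z = z - eta * F(z_half)       # update with operator at probe point
    return z

# Toy monotone operator from min_x max_y x*y: F(x, y) = (y, -x).
# Plain gradient steps cycle on this problem; extragradient converges.
F = lambda z: np.array([z[1], -z[0]])

z = extragradient(F, [1.0, 1.0])   # approaches the solution (0, 0)
```

The probe step is what stabilizes the iteration on saddle-point structure, which is why extragradient-type methods fit the generalized saddle point formulation described above.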
Frank-Wolfe Algorithms for Saddle Point Problems
We extend the Frank-Wolfe (FW) optimization algorithm to solve constrained
smooth convex-concave saddle point (SP) problems. Remarkably, the method only
requires access to linear minimization oracles. Leveraging recent advances in
FW optimization, we provide the first proof of convergence of a FW-type saddle
point solver over polytopes, thereby partially answering a 30 year-old
conjecture. We also survey other convergence results and highlight gaps in the
theoretical underpinnings of FW-style algorithms. Motivating applications
without known efficient alternatives are explored through structured prediction
with combinatorial penalties as well as games over matching polytopes involving
an exponential number of constraints.

Comment: Appears in: Proceedings of the 20th International Conference on
Artificial Intelligence and Statistics (AISTATS 2017). 39 pages.
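The defining feature the abstract highlights, that the method only needs a linear minimization oracle (LMO), is easiest to see on the plain Frank-Wolfe algorithm for minimization; the saddle-point extension of the paper builds on the same oracle. The objective, feasible set, and step-size rule below are standard textbook choices, not taken from the paper:

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Frank-Wolfe: each iteration calls only a linear minimization
    oracle over the feasible set; no projection is ever needed."""
    x = np.asarray(x0, dtype=float)
    for t in range(iters):
        s = lmo(grad(x))                   # vertex minimizing <grad, s>
        gamma = 2.0 / (t + 2.0)            # standard open-loop step size
        x = (1 - gamma) * x + gamma * s    # convex combination stays feasible
    return x

# Minimize ||x - b||^2 over the probability simplex; the LMO for the
# simplex returns the vertex e_i with the smallest gradient coordinate.
b = np.array([0.1, 0.2, 0.7])
grad = lambda x: 2 * (x - b)

def lmo(g):
    s = np.zeros_like(g)
    s[np.argmin(g)] = 1.0
    return s

x = frank_wolfe(grad, lmo, np.ones(3) / 3)   # drifts toward b at an O(1/t) rate
```

Because every iterate is a convex combination of simplex vertices, feasibility is maintained for free, which is precisely why FW-type methods scale to polytopes with exponentially many constraints, such as the matching polytopes mentioned above.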