Semi-proximal Mirror-Prox for Nonsmooth Composite Minimization
We propose a new first-order optimization algorithm to solve high-dimensional
non-smooth composite minimization problems. Typical examples of such problems
have an objective that decomposes into a non-smooth empirical risk part and a
non-smooth regularization penalty. The proposed algorithm, called Semi-Proximal
Mirror-Prox, leverages the Fenchel-type representation of one part of the
objective while handling the other part of the objective via linear
minimization over the domain. The algorithm stands in contrast with more
classical proximal gradient algorithms with smoothing, which require the
computation of proximal operators at each iteration and can therefore be
impractical for high-dimensional problems. We establish the theoretical
convergence rate of Semi-Proximal Mirror-Prox, which matches the optimal
complexity bound on the number of calls to the linear minimization oracle. We
present promising experimental results showing the merits of the approach in
comparison to competing methods.
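To make the contrast between the two oracles concrete, here is a minimal sketch (not the authors' code; the nuclear-norm ball and all function names are illustrative choices) of a proximal operator versus a linear minimization oracle for the same matrix constraint:

import numpy as np
from scipy.sparse.linalg import svds

def prox_nuclear_norm(G, lam):
    # Proximal operator of lam * ||.||_* : soft-thresholds the singular values,
    # which requires a FULL SVD of G and can be costly in high dimension.
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def lmo_nuclear_ball(G, r):
    # Linear minimization oracle: argmin_{||X||_* <= r} <G, X>.
    # Only the leading singular pair of G is needed, obtainable from cheap
    # matrix-vector products (a few Lanczos iterations).
    u, _, vt = svds(G, k=1)
    return -r * np.outer(u[:, 0], vt[0, :])

An algorithm built around the second oracle avoids the full SVD at every iteration, which is the abstract's motivation for preferring linear minimization over proximal steps in high dimension.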
Deflation for semismooth equations
Variational inequalities can in general support distinct solutions. In this
paper we study an algorithm for computing distinct solutions of a variational
inequality, without varying the initial guess supplied to the solver. The
central idea is the combination of a semismooth Newton method with a deflation
operator that eliminates known solutions from consideration. Given one root of
a semismooth residual, deflation constructs a new problem for which a
semismooth Newton method will not converge to the known root, even from the
same initial guess. This enables the discovery of other roots. We prove the
effectiveness of the deflation technique under the same assumptions that
guarantee locally superlinear convergence of a semismooth Newton method. We
demonstrate its utility on various finite- and infinite-dimensional examples
drawn from constrained optimization, game theory, economics and solid
mechanics.
Comment: 24 pages, 3 figures
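As a rough illustration of the deflation idea, here is a sketch under simplifying assumptions: a smooth residual, a finite-difference Jacobian standing in for the generalized Jacobian of a semismooth Newton method, and the common shifted deflation operator m(x) = 1/||x - r||^p + 1. None of the names below come from the paper.

import numpy as np

def deflate(F, roots, p=2.0, shift=1.0):
    # Deflated residual G(x) = m(x) * F(x) with
    # m(x) = prod_r (1 / ||x - r||^p + shift), so that a Newton-type method
    # applied to G can no longer converge to any root listed in `roots`.
    def G(x):
        m = 1.0
        for r in roots:
            m *= 1.0 / np.linalg.norm(x - r) ** p + shift
        return m * F(x)
    return G

def newton(F, x0, tol=1e-10, maxit=100, h=1e-7):
    # Plain Newton with a finite-difference Jacobian (a stand-in for the
    # semismooth Newton method the paper analyzes).
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(maxit):
        Fx = np.atleast_1d(F(x))
        if np.linalg.norm(Fx) < tol:
            break
        J = np.column_stack([(np.atleast_1d(F(x + h * e)) - Fx) / h
                             for e in np.eye(len(x))])
        x = x - np.linalg.solve(J, Fx)
    return x

# Example: F(x) = x^2 - 1 has the roots +1 and -1.  From the SAME initial
# guess, plain Newton finds +1; after deflating +1 it finds -1 instead.
F = lambda x: x**2 - 1.0
r1 = newton(F, 2.0)                  # approximately [ 1.]
r2 = newton(deflate(F, [r1]), 2.0)   # approximately [-1.]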
An L1 Penalty Method for General Obstacle Problems
We construct an efficient numerical scheme for solving obstacle problems in
divergence form. The numerical method is based on a reformulation of the
obstacle constraint as an L1-like penalty added to the variational problem. The
reformulation is an exact regularizer in the sense that for large (but finite)
penalty parameter, we recover the exact solution. Our formulation is applied to
classical elliptic obstacle problems as well as some related free boundary
problems, for example the two-phase membrane problem and the Hele-Shaw model.
One advantage of the proposed method is that the free boundary inherent in the
obstacle problem arises naturally in our energy minimization without any need
for problem-specific or complicated discretization. In addition, our scheme
also works for nonlinear variational inequalities arising from convex
minimization problems.
Comment: 20 pages, 18 figures
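A minimal sketch of the penalty idea on a 1D membrane with homogeneous Dirichlet conditions (an illustrative discretization and solver, not the paper's scheme; all names are hypothetical): the constraint u >= psi is replaced by the term mu * integral of max(psi - u, 0), which is handled by a cheap componentwise proximal step.

import numpy as np

def solve_obstacle_l1(psi, f, h, mu=100.0, n_iter=20000):
    # Penalized energy: E(u) = 0.5*u'Au - f'u + mu * sum_i h*max(psi_i - u_i, 0),
    # where A is the 1D finite-difference Laplacian with Dirichlet conditions.
    n = len(psi)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    t = 1.0 / np.linalg.norm(A, 2)                 # step size <= 1/Lipschitz
    u = np.zeros(n)
    for _ in range(n_iter):
        v = u - t * (A @ u - f)                    # gradient step on smooth part
        c = t * mu * h
        # closed-form proximal step for the L1-like penalty (componentwise)
        u = v + np.minimum(c, np.maximum(psi - v, 0.0))
    return u

# toy example: zero load and a parabolic obstacle on (0, 1)
n = 99
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
h = x[1] - x[0]
psi = 0.2 - 2.0 * (x - 0.5) ** 2                   # membrane must stay above psi
u = solve_obstacle_l1(psi, np.zeros(n), h)
# the contact set {u == psi} and its free boundary emerge from the minimization

For a large enough (but finite) penalty weight mu, this penalized problem is claimed to recover the constrained solution exactly, which is the exact-regularizer property the abstract refers to.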
Differential-Algebraic Equations and Beyond: From Smooth to Nonsmooth Constrained Dynamical Systems
This article presents a summarizing view of differential-algebraic
equations (DAEs) and analyzes how new application fields and corresponding
mathematical models lead to innovations both in theory and in numerical
analysis for this problem class. Recent numerical methods for nonsmooth
dynamical systems subject to unilateral contact and friction illustrate the
topicality of this development.
Comment: Preprint of Book Chapter
A Smooth Primal-Dual Optimization Framework for Nonsmooth Composite Convex Minimization
We propose a new first-order primal-dual optimization framework for a convex
optimization template with broad applications. Our optimization algorithms
feature optimal convergence guarantees under a variety of common structure
assumptions on the problem template. Our analysis relies on a novel combination
of three classic ideas applied to the primal-dual gap function: smoothing,
acceleration, and homotopy. The algorithms resulting from the new approach achieve the
best known convergence rate results, in particular when the template consists
of only non-smooth functions. We also outline a restart strategy for the
acceleration to significantly enhance the practical performance. We demonstrate
relations with the augmented Lagrangian method and show how to exploit
strongly convex objectives with rigorous convergence rate guarantees. We
provide numerical evidence with two examples and illustrate that the new
methods can outperform the state of the art, including the Chambolle-Pock and
alternating direction method of multipliers algorithms.
Comment: 35 pages, accepted for publication in SIAM J. Optimization. Tech.
Report, Oct. 2015 (last update Sept. 2016).
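A toy sketch of the three classic ideas named above, applied to the simple nonsmooth model problem min_x ||Ax - b||_1 (this only illustrates smoothing, acceleration, and homotopy in general; it is not the paper's framework, template, or convergence analysis, and all names are hypothetical):

import numpy as np

def smoothed_l1_grad(r, beta):
    # gradient of the Huber (Moreau) smoothing of ||r||_1 with parameter beta
    return np.clip(r / beta, -1.0, 1.0)

def accelerated_homotopy(A, b, n_iter=500, beta0=1.0):
    # smoothing + Nesterov-style acceleration + homotopy on the smoothing parameter
    L_A = np.linalg.norm(A, 2) ** 2                # ||A||^2
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for k in range(1, n_iter + 1):
        beta = beta0 / k                           # homotopy: beta_k -> 0
        grad = A.T @ smoothed_l1_grad(A @ y - b, beta)
        x_new = y - (beta / L_A) * grad            # step = 1 / (||A||^2 / beta)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum
        x, t = x_new, t_new
    return x

# toy usage: least-absolute-deviations regression
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = A @ rng.standard_normal(50) + 0.01 * rng.standard_normal(200)
x_hat = accelerated_homotopy(A, b)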
- …