Sensitivity Analysis for Mirror-Stratifiable Convex Functions
This paper provides a set of sensitivity analysis and activity identification
results for a class of convex functions with a strong geometric structure,
which we coin "mirror-stratifiable". These functions are such that there is a
bijection between a primal and a dual stratification of the space into
partitioning sets, called strata. This pairing is crucial to track the strata
that are identifiable by solutions of parametrized optimization problems or by
iterates of optimization algorithms. This class of functions encompasses all
regularizers routinely used in signal and image processing, machine learning,
and statistics. We show that this "mirror-stratifiable" structure enjoys a nice
sensitivity theory, allowing us to study stability of solutions of optimization
problems to small perturbations, as well as activity identification of
first-order proximal splitting-type algorithms. Existing results in the
literature typically assume that, under a non-degeneracy condition, the active
set associated to a minimizer is stable to small perturbations and is
identified in finite time by optimization schemes. In contrast, our results do
not require any non-degeneracy assumption: in consequence, the optimal active
set is not necessarily stable anymore, but we are able to track precisely the
set of identifiable strata. We show that these results have crucial implications
when solving challenging ill-posed inverse problems via regularization, a
typical scenario where the non-degeneracy condition is not fulfilled. Our
theoretical results, illustrated by numerical simulations, allow us to
characterize the instability behaviour of the regularized solutions by
locating the set of all low-dimensional strata that can potentially be
identified by these solutions.
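As a minimal illustration of activity identification (a sketch not taken from the paper, using the Lasso as a stand-in regularized problem): the strata of the $\ell_1$ norm are the support/sign patterns, and one can watch which stratum the iterates of a proximal-gradient (ISTA) scheme land on along the way.

```python
import numpy as np

# Hypothetical small Lasso instance: min_x 0.5*||Ax - y||^2 + lam*||x||_1.
# The strata of the l1 norm are support patterns; we track the support
# (active stratum) of the proximal-gradient iterates.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[:2] = [3.0, -2.0]
y = A @ x_true
lam = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(8)
supports = []
for _ in range(300):
    # One ISTA step: gradient step on the smooth term, then prox of lam*||.||_1.
    x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    supports.append(tuple(np.flatnonzero(np.abs(x) > 0)))

# After finitely many iterations the support stops changing: the iterates
# have identified a stratum containing the solution.
print("identified support:", supports[-1])
```

The point of the paper's theory is precisely to locate which strata can be identified this way even when the usual non-degeneracy condition fails.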
GMRES-Accelerated ADMM for Quadratic Objectives
We consider the sequence acceleration problem for the alternating direction
method-of-multipliers (ADMM) applied to a class of equality-constrained
problems with strongly convex quadratic objectives, which frequently arise as
the Newton subproblem of interior-point methods. Within this context, the ADMM
update equations are linear, the iterates are confined within a Krylov
subspace, and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its
ability to accelerate convergence. The basic ADMM method solves a
$\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give
theoretical justification and numerical evidence that the GMRES-accelerated
variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations
for an order-of-magnitude reduction in iterations, despite a worst-case bound
of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive against
standard preconditioned Krylov subspace methods for saddle-point problems. The
method is embedded within SeDuMi, a popular open-source solver for conic
optimization written in MATLAB, and used to solve many large-scale semidefinite
programs with error that decreases like $O(1/k^2)$, instead of $O(1/k)$,
where $k$ is the iteration index.

Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on
Optimization (SIOPT).
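The key structural fact exploited above is that, for a quadratic objective, the ADMM update is an affine map $z_{k+1} = M z_k + c$, so its fixed point solves the linear system $(I - M)z = c$ and a Krylov method can be applied directly. The sketch below (a hypothetical contractive $M$ stands in for the actual ADMM operator) contrasts plain fixed-point iteration with GMRES on $(I - M)z = c$.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Hypothetical affine iteration z_{k+1} = M z + c with ||M||_2 < 1,
# standing in for the linear ADMM update on a quadratic objective.
rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
M *= 0.9 / np.linalg.norm(M, 2)          # spectral norm 0.9 => contraction
c = rng.standard_normal(n)
z_star = np.linalg.solve(np.eye(n) - M, c)   # the fixed point

# Plain fixed-point iteration (the role of basic ADMM here):
z = np.zeros(n)
for _ in range(30):
    z = M @ z + c
err_fp = np.linalg.norm(z - z_star)

# GMRES applied to (I - M) z = c, using only matrix-vector products with M:
Aop = LinearOperator((n, n), matvec=lambda v: v - M @ v)
z_gm, info = gmres(Aop, c)
err_gm = np.linalg.norm(z_gm - z_star)
print("fixed-point error:", err_fp, " GMRES error:", err_gm)
```

GMRES is optimal over the Krylov subspace generated by these matrix-vector products, which is the sense in which it is the ideal accelerator for the linear ADMM iteration.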
Optimization algorithms for the solution of the frictionless normal contact between rough surfaces
This paper revisits the fundamental equations for the solution of the
frictionless unilateral normal contact problem between a rough rigid surface
and a linear elastic half-plane using the boundary element method (BEM). After
recasting the resulting Linear Complementarity Problem (LCP) as a convex
quadratic program (QP) with nonnegative constraints, different optimization
algorithms are compared for its solution: (i) a Greedy method, based on
different solvers for the unconstrained linear system (Conjugate Gradient CG,
Gauss-Seidel, Cholesky factorization), (ii) a constrained CG algorithm, (iii)
the Alternating Direction Method of Multipliers (ADMM), and (iv) the
Non-Negative Least Squares (NNLS) algorithm, possibly warm-started by
accelerated gradient projection steps or taking advantage of a loading history.
The latter method is two orders of magnitude faster than the Greedy CG method
and one order of magnitude faster than the constrained CG algorithm. Finally,
we propose another type of warm start based on a refined criterion for the
identification of the initial trial contact domain that can be used in
conjunction with all the previous optimization algorithms. This method, called
Cascade Multi-Resolution (CMR), takes advantage of physical considerations
regarding the scaling of the contact predictions by changing the surface
resolution. The method is very efficient and accurate when applied to real or
numerically generated rough surfaces, provided that their power spectral
density function is of power-law type, as in the case of self-similar fractal
surfaces.

Comment: 38 pages, 11 figures.
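To see how the NNLS route applies to the contact QP, note that for a symmetric positive definite influence matrix $K = LL^T$, the program $\min_p \tfrac12 p^T K p - g^T p$ subject to $p \ge 0$ is, up to a constant, the non-negative least-squares problem $\min_{p \ge 0} \|L^T p - L^{-1} g\|^2$. A minimal sketch (the tiny SPD matrix below is a stand-in assumption, not the paper's BEM matrix):

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular
from scipy.optimize import nnls

# Hypothetical contact-like QP: min_p 0.5 p^T K p - g^T p  s.t. p >= 0,
# with K symmetric positive definite (stand-in for the BEM influence matrix).
rng = np.random.default_rng(2)
n = 30
B = rng.standard_normal((n, n))
K = B @ B.T + n * np.eye(n)              # SPD stand-in matrix
g = rng.standard_normal(n)

# With K = L L^T, the QP equals (up to a constant) the NNLS problem
#   min_{p>=0} 0.5 * || L^T p - w ||^2,   w = L^{-1} g.
L = cholesky(K, lower=True)
w = solve_triangular(L, g, lower=True)
p, _ = nnls(L.T, w)

# KKT / complementarity check for the unilateral contact conditions:
# pressures p >= 0, gaps (K p - g) >= 0, and p * (K p - g) = 0.
grad = K @ p - g
print("max |p * gap|:", np.abs(p * grad).max())
```

The complementarity conditions checked at the end are exactly the LCP form of the frictionless unilateral contact problem that the paper recasts as a QP.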