Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method, for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
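To make the per-iteration cost reduction concrete, here is a minimal sketch of the block mirror descent idea, assuming a Euclidean distance-generating function (so the mirror step reduces to a plain subgradient step on the sampled block); the oracle name `stoch_block_subgrad` and the averaging weights are illustrative, not the paper's exact scheme:

```python
import numpy as np

def sbmd(stoch_block_subgrad, x0, blocks, stepsizes, rng=None):
    """Sketch of a stochastic block mirror descent iteration with a
    Euclidean prox-function. stoch_block_subgrad(x, blk) is a hypothetical
    oracle returning a stochastic subgradient restricted to indices blk."""
    rng = rng or np.random.default_rng(0)
    x = x0.astype(float).copy()
    x_avg, weight = np.zeros_like(x), 0.0
    for gamma in stepsizes:
        blk = blocks[rng.integers(len(blocks))]  # sample one block uniformly
        g = stoch_block_subgrad(x, blk)          # stochastic subgradient on that block
        x[blk] -= gamma * g                      # mirror step (Euclidean case) on the block only
        # incremental averaging of the iterates, weighted by the stepsizes
        x_avg = (weight * x_avg + gamma * x) / (weight + gamma)
        weight += gamma
    return x_avg
```

With a non-Euclidean distance-generating function, the per-block update would instead be a prox step with respect to that block's Bregman divergence; only the cheap single-block update is essential to the cost argument above.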
A successive difference-of-convex approximation method for a class of nonconvex nonsmooth optimization problems
We consider a class of nonconvex nonsmooth optimization problems whose
objective is the sum of a smooth function and a finite number of nonnegative,
proper, closed, possibly nonsmooth functions (whose proximal mappings are easy
to compute), some of which are further composed with linear maps. This kind of
problem arises naturally in various applications when different regularizers
are introduced for inducing simultaneous structures in the solutions. Solving
these problems, however, can be challenging because of the coupled nonsmooth
functions: the corresponding proximal mapping can be hard to compute so that
standard first-order methods such as the proximal gradient algorithm cannot be
applied efficiently. In this paper, we propose a successive
difference-of-convex approximation method for solving this kind of problem. In
this algorithm, we approximate the nonsmooth functions by their Moreau
envelopes in each iteration. Making use of the simple observation that Moreau
envelopes of nonnegative proper closed functions are continuous {\em
difference-of-convex} functions, we can then approximately minimize the
approximation function by first-order methods with suitable majorization
techniques. These first-order methods can be implemented efficiently thanks to
the fact that the proximal mapping of {\em each} nonsmooth function is easy to
compute. Under suitable assumptions, we prove that the sequence generated by
our method is bounded and any accumulation point is a stationary point of the
objective. We also discuss how our method can be applied to concrete
applications such as nonconvex fused regularized optimization problems and
simultaneously structured matrix optimization problems, and illustrate the
performance numerically for these two specific applications.
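The computational primitive here is the Moreau envelope, which is available whenever the proximal mapping is. A small sketch of this smoothing step for an l1 penalty composed with a linear map (an assumed stand-in for the coupled nonsmooth terms; this shows only the envelope-smoothing step and the standard gradient identity, not the paper's difference-of-convex majorization scheme):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal mapping of lam*||.||_1: soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def moreau_env_grad(x, prox, lam):
    """Gradient of the Moreau envelope e_lam f(x) = min_y f(y) + ||x-y||^2/(2*lam),
    via the identity  grad e_lam f(x) = (x - prox_{lam f}(x)) / lam."""
    return (x - prox(x, lam)) / lam

def smoothed_gradient_step(x, A, b, D, mu, lam, step):
    """One gradient step on the surrogate of
    F(x) = 0.5*||A x - b||^2 + mu*||D x||_1
    where mu*||.||_1 is replaced by its Moreau envelope (illustrative setup)."""
    g_smooth = A.T @ (A @ x - b)  # gradient of the smooth part
    # chain rule through the linear map D; prox of lam*(mu*||.||_1) is soft-thresholding
    g_env = D.T @ moreau_env_grad(D @ x, lambda v, t: prox_l1(v, t * mu), lam)
    return x - step * (g_smooth + g_env)
```

The paper's key observation is that for nonnegative proper closed f, the envelope e_lam f is a continuous difference-of-convex function (the quadratic ||x||^2/(2*lam) minus a convex conjugate-type term), which is what licenses the majorized first-order steps; the sketch above only exercises the envelope gradient.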
Lagrange optimality system for a class of nonsmooth convex optimization
In this paper, we revisit the augmented Lagrangian method for a class of
nonsmooth convex optimization problems. We present the Lagrange optimality
system of the
augmented Lagrangian associated with the problems, and establish its
connections with the standard optimality condition and the saddle point
condition of the augmented Lagrangian, which provides a powerful tool for
developing numerical algorithms. We apply a linear Newton method to the
Lagrange optimality system to obtain a novel algorithm applicable to a variety
of nonsmooth convex optimization problems arising in practical applications.
Under suitable conditions, we prove the nonsingularity of the Newton system and
the local convergence of the algorithm.
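The paper's optimality system is tied to the augmented Lagrangian, but the flavor of applying a generalized (linear/semismooth) Newton method to a nonsmooth optimality system can be illustrated on a simpler stand-in: a semismooth Newton iteration on the proximal fixed-point residual of a lasso problem. All names and the problem choice below are illustrative, not the authors':

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal mapping of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def semismooth_newton_lasso(A, b, lam, t=None, tol=1e-10, max_iter=50):
    """Newton method on the nonsmooth residual
        F(x) = x - soft(x - t*A^T(Ax - b), t*lam) = 0,
    whose roots are lasso minimizers. An element of the generalized
    Jacobian of soft(.) at u is the 0/1 diagonal matrix D below."""
    m, n = A.shape
    t = t or 1.0 / np.linalg.norm(A, 2) ** 2  # step inside the prox residual
    x = np.zeros(n)
    H = np.eye(n) - t * A.T @ A               # Jacobian of the inner affine map
    for _ in range(max_iter):
        u = x - t * A.T @ (A @ x - b)
        F = x - soft(u, t * lam)
        if np.linalg.norm(F) < tol:
            break
        d = (np.abs(u) > t * lam).astype(float)  # generalized derivative of soft at u
        J = np.eye(n) - d[:, None] * H           # element of the Clarke Jacobian of F
        # in degenerate cases a damped or regularized solve would be needed
        x = x - np.linalg.solve(J, F)
    return x
```

As in the abstract, the practical questions are exactly the ones the paper addresses for its own system: nonsingularity of the Newton matrix and local convergence of the resulting iteration.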
Barrier subgradient method
In this paper, we develop a new primal-dual subgradient method for nonsmooth convex optimization problems. This scheme is based on a self-concordant barrier for the basic feasible set. It is suitable for finding approximate solutions with certain relative accuracy. We discuss some applications of this technique, including the fractional covering problem, the maximal concurrent flow problem, semidefinite relaxations, and nonlinear online optimization.
Keywords: convex optimization, subgradient methods, nonsmooth optimization, minimax problems, saddle points, variational inequalities, stochastic optimization, black-box methods, lower complexity bounds.
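A loose sketch of the barrier idea on the box (0,1)^n, in a dual-averaging style: the running sum of subgradients is mapped back to the feasible set by minimizing a barrier-penalized linear model instead of projecting. The paper works with a general self-concordant barrier and relative-accuracy guarantees; the log-barrier, the `subgrad` oracle, and the sqrt(k) weight schedule here are assumptions for illustration only:

```python
import numpy as np

def barrier_step(s, beta):
    """argmin_x <s, x> + beta*B(x) over (0,1)^n with the log-barrier
    B(x) = -sum(log x + log(1 - x)). Per coordinate this is the root
    in (0,1) of  s*x^2 - (s + 2*beta)*x + beta = 0."""
    disc = np.sqrt(s**2 + 4.0 * beta**2)
    mask = np.abs(s) > 1e-12
    safe = np.where(mask, 2.0 * s, 1.0)          # avoid division by zero
    return np.where(mask, ((s + 2.0 * beta) - disc) / safe, 0.5)

def barrier_subgradient(subgrad, n, n_iters, beta0=1.0):
    """Dual-averaging sketch: accumulate subgradients and map the running
    sum through the barrier with a growing weight beta_k = beta0*sqrt(k)."""
    s = np.zeros(n)
    x = barrier_step(s, beta0)                   # initial point: box center
    iterates = []
    for k in range(1, n_iters + 1):
        s += subgrad(x)                          # accumulate subgradient information
        x = barrier_step(s, beta0 * np.sqrt(k))  # barrier-penalized minimization
        iterates.append(x)
    return np.mean(iterates, axis=0)             # averaged iterate as the output
```

The design point the abstract emphasizes survives even in this toy version: the barrier replaces the projection step, so each iteration stays strictly feasible and the same machinery extends to saddle-point and variational-inequality formulations.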