Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms
Lipschitz continuity of the gradient mapping of a continuously differentiable
function plays a crucial role in designing various optimization algorithms.
However, many functions arising in practical applications such as low rank
matrix factorization or deep neural network problems do not have a Lipschitz
continuous gradient. This led to the development of a generalized notion known
as the L-smad property, which is based on generalized proximity measures
called Bregman distances. However, the L-smad property cannot handle
nonsmooth functions: even simple nonsmooth functions like |x^4 - 1| lie
outside its scope, as do many practical composite problems. We fix this issue
by proposing the MAP property, which generalizes the L-smad property and is
also valid for a large class of nonconvex nonsmooth composite problems. Based
on the proposed MAP property, we propose a globally convergent algorithm called
Model BPG, which unifies several existing algorithms. The convergence analysis
is based on a new Lyapunov function. We also numerically illustrate the
superior performance of Model BPG on standard phase retrieval problems, robust
phase retrieval problems, and Poisson linear inverse problems, when compared to
a state-of-the-art optimization method that is valid for generic nonconvex
nonsmooth optimization problems.
Comment: 44 pages, 22 figures
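The Bregman proximal gradient idea behind Model BPG can be illustrated with a small sketch. This is not the paper's algorithm, just a minimal example of a plain BPG step, assuming the common kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2 (a standard choice for objectives with quartic growth, such as phase retrieval); all function names here are illustrative.

```python
import numpy as np

# Kernel h(x) = 0.25*||x||^4 + 0.5*||x||^2, with gradient (||x||^2 + 1) x.
def grad_h(x):
    return (x @ x + 1.0) * x

def bpg_step(x, grad_f, lam):
    """One Bregman proximal gradient step:
    solve grad_h(x_next) = grad_h(x) - lam * grad_f(x) for x_next."""
    g = grad_h(x) - lam * grad_f(x)
    gn = np.linalg.norm(g)
    if gn == 0.0:
        return np.zeros_like(x)
    # x_next = theta * g, where theta solves ||g||^2 * theta^3 + theta = 1.
    # The left-hand side is strictly increasing, so bisection on (0, 1) works.
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if gn**2 * mid**3 + mid - 1.0 < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi) * g

# Example: minimize f(x) = 0.25*(||x||^2 - 1)^2, whose gradient
# (||x||^2 - 1) x is not globally Lipschitz, but f is smooth relative to h.
grad_f = lambda x: (x @ x - 1.0) * x
x = np.array([2.0, 0.0])
for _ in range(200):
    x = bpg_step(x, grad_f, lam=0.5)
print(abs(np.linalg.norm(x) - 1.0))  # should be near 0
```

With the Euclidean kernel h(x) = 0.5*||x||^2 the same step reduces to ordinary gradient descent; the non-Euclidean kernel is what lets the method handle gradients that grow polynomially.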
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
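The core SBMD idea — update one randomly sampled coordinate block per iteration using a stochastic gradient — can be sketched as follows. This is a simplified illustration, assuming the Euclidean distance-generating function (under which the mirror step on a block reduces to a plain gradient step); the function names and stepsize schedule are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbmd(stoch_grad, x0, n_blocks, steps, gamma):
    """Stochastic block mirror descent sketch with Euclidean prox:
    each iteration updates one uniformly sampled coordinate block."""
    x = x0.copy()
    blocks = np.array_split(np.arange(x.size), n_blocks)
    for k in range(steps):
        b = blocks[rng.integers(n_blocks)]       # sample a block uniformly
        g = stoch_grad(x)                        # noisy gradient; only block b is used
        x[b] -= gamma / np.sqrt(k + 1) * g[b]    # diminishing stepsize
    return x

# Example: minimize E[0.5*||x - xi||^2] with xi ~ N(mu, 0.1^2 I);
# the unique minimizer is mu.
mu = np.array([1.0, -2.0, 3.0, 0.5])
stoch_grad = lambda x: x - (mu + 0.1 * rng.standard_normal(mu.size))
x = sbmd(stoch_grad, np.zeros(4), n_blocks=2, steps=5000, gamma=0.5)
print(np.linalg.norm(x - mu))  # should be small
```

Touching only one block per iteration is what reduces the per-iteration cost relative to full mirror descent; a non-Euclidean distance-generating function per block would replace the subtraction with a block-wise prox step.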