Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
In this paper, we present a new stochastic algorithm, namely the stochastic
block mirror descent (SBMD) method for solving large-scale nonsmooth and
stochastic optimization problems. The basic idea of this algorithm is to
incorporate the block-coordinate decomposition and an incremental block
averaging scheme into the classic (stochastic) mirror-descent method, in order
to significantly reduce the cost per iteration of the latter algorithm. We
establish the rate of convergence of the SBMD method along with its associated
large-deviation results for solving general nonsmooth and stochastic
optimization problems. We also introduce different variants of this method and
establish their rate of convergence for solving strongly convex, smooth, and
composite optimization problems, as well as certain nonconvex optimization
problems. To the best of our knowledge, all these developments related to the
SBMD methods are new in the stochastic optimization literature. Moreover, some
of our results also seem to be new for block coordinate descent methods for
deterministic optimization.
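A minimal sketch of the basic idea described above, not the authors' exact method: at each iteration, draw a stochastic gradient, pick one coordinate block uniformly at random, and update only that block. The Euclidean squared norm is assumed as the distance-generating function (so the mirror/prox step reduces to a plain gradient step), a simple running average stands in for the incremental block averaging scheme, and all names are illustrative.

import numpy as np

def sbmd_sketch(stoch_grad, x0, block_slices, step, n_iters, rng=None):
    """Hedged sketch of a stochastic block mirror descent loop.

    stoch_grad(x) -- unbiased stochastic (sub)gradient oracle (assumed interface)
    block_slices  -- list of slices partitioning the coordinates into blocks
    step(k)       -- step-size schedule, e.g. lambda k: c / sqrt(k + 1)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float)
    x_avg = x.copy()                                         # running average of the iterates
    for k in range(n_iters):
        g = stoch_grad(x)                                    # stochastic gradient
        blk = block_slices[rng.integers(len(block_slices))]  # random block
        x[blk] = x[blk] - step(k) * g[blk]                   # Euclidean prox step on one block only
        x_avg += (x - x_avg) / (k + 2)                       # incremental averaging
    return x_avg

# Illustrative use on a toy least-squares problem with noisy gradients.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((200, 50)), rng.standard_normal(200)
noisy_grad = lambda x: A.T @ (A @ x - b) / len(b) + 0.01 * rng.standard_normal(50)
blocks = [slice(i, i + 10) for i in range(0, 50, 10)]
x_hat = sbmd_sketch(noisy_grad, np.zeros(50), blocks, lambda k: 0.5 / np.sqrt(k + 1), 2000, rng)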
DEA Problems under Geometrical or Probability Uncertainties of Sample Data
This paper discusses the theoretical and practical aspects of new methods for solving DEA problems under real-life geometrical and probability uncertainty of sample data. The proposed minimax approach to problems with geometrical uncertainty of sample data relies on linear programming or minimax optimization, whereas problems with probability uncertainty of sample data are solved by applying econometric and new stochastic optimization methods, using stochastic frontier function estimation.
Keywords: DEA, Sample data uncertainty, Linear programming, Minimax optimization, Stochastic optimization, Stochastic frontier functions
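The abstract does not spell out the minimax formulation, so as a grounded baseline only, here is a sketch of the standard input-oriented CCR DEA envelopment LP (the nominal problem on which uncertainty-robust variants build), written with scipy.optimize.linprog. The data arrays and DMU index are illustrative; this is not the paper's method.

import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o` (nominal DEA, no uncertainty).

    X: (m, n) inputs and Y: (s, n) outputs for n DMUs.
    Decision vector z = [theta, lambda_1, ..., lambda_n];
    minimize theta s.t. X @ lam <= theta * X[:, o], Y @ lam >= Y[:, o], lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # objective: minimize theta
    A_in = np.hstack([-X[:, [o]], X])             # X @ lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -Y @ lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0, None)] * n     # theta free, lambda >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun                                # efficiency score in (0, 1]

# Toy example: 2 inputs, 1 output, 4 DMUs (made-up numbers).
X = np.array([[2.0, 4.0, 3.0, 5.0], [3.0, 2.0, 4.0, 3.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]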
A Unified Analysis of Stochastic Optimization Methods Using Jump System Theory and Quadratic Constraints
We develop a simple routine unifying the analysis of several important
recently-developed stochastic optimization methods including SAGA, Finito, and
stochastic dual coordinate ascent (SDCA). First, we show an intrinsic
connection between stochastic optimization methods and dynamic jump systems,
and propose a general jump system model for stochastic optimization methods.
Our proposed model recovers SAGA, SDCA, Finito, and SAG as special cases. Then
we combine jump system theory with several simple quadratic inequalities to
derive sufficient conditions for convergence rate certifications of the
proposed jump system model under various assumptions (with or without
individual convexity, etc.). The derived conditions are linear matrix
inequalities (LMIs) whose sizes roughly scale with the size of the training
set. We make use of the symmetry in the stochastic optimization methods and
reduce these LMIs to some equivalent small LMIs whose sizes are at most 3 by 3.
We solve these small LMIs to provide analytical proofs of new convergence rates
for SAGA, Finito and SDCA (with or without individual convexity). We also
explain why our proposed LMI fails in analyzing SAG. We reveal a key difference
between SAG and other methods, and briefly discuss how to extend our LMI
analysis for SAG. An advantage of our approach is that the proposed analysis
can be automated for a large class of stochastic methods under various
assumptions (with or without individual convexity, etc.).
Comment: To Appear in Proceedings of the Annual Conference on Learning Theory (COLT) 201
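The paper's LMIs couple the jump-system model with quadratic constraints on the gradients and are not reproduced here. To illustrate only the generic jump-system ingredient: for an i.i.d. jump-linear system x_{k+1} = A_{i_k} x_k with modes drawn uniformly from {A_1, ..., A_n}, a mean-square rate rho is certified by feasibility of the small Lyapunov LMI (1/n) * sum_i A_i' P A_i <= rho^2 * P with P > 0, which is equivalent to a spectral-radius condition on the second-moment operator that the NumPy sketch below checks directly. The mode matrices are made-up placeholders, not the SAGA/SDCA/Finito jump models.

import numpy as np

def mean_square_rate(A_list):
    """Smallest rho for which the Lyapunov LMI above is (essentially) feasible.

    The second moment S_k = E[x_k x_k'] of the uniform i.i.d. jump system evolves as
    vec(S_{k+1}) = M vec(S_k) with M = (1/n) * sum_i kron(A_i, A_i), so
    sqrt(spectral_radius(M)) is the per-step contraction factor of sqrt(E||x_k||^2).
    """
    M = sum(np.kron(A, A) for A in A_list) / len(A_list)
    return float(np.sqrt(np.max(np.abs(np.linalg.eigvals(M)))))

# Toy two-mode example.
A_list = [np.array([[0.9, 0.1], [0.0, 0.5]]),
          np.array([[0.6, 0.0], [0.2, 0.8]])]
print("certified mean-square rate:", mean_square_rate(A_list))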
Stochastic Frank-Wolfe Methods for Nonconvex Optimization
We study Frank-Wolfe methods for nonconvex stochastic and finite-sum
optimization problems. Frank-Wolfe methods (in the convex case) have gained
tremendous recent interest in machine learning and optimization communities due
to their projection-free property and their ability to exploit structured
constraints. However, our understanding of these algorithms in the nonconvex
setting is fairly limited. In this paper, we propose nonconvex stochastic
Frank-Wolfe methods and analyze their convergence properties. For objective
functions that decompose into a finite-sum, we leverage ideas from variance
reduction techniques for convex optimization to obtain new variance reduced
nonconvex Frank-Wolfe methods that have provably faster convergence than the
classical Frank-Wolfe method. Finally, we show that the faster convergence
rates of our variance reduced methods also translate into improved convergence
rates for the stochastic setting.
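A minimal sketch of a plain nonconvex stochastic Frank-Wolfe step, not the variance-reduced variants analyzed above: estimate the gradient on a minibatch, call a linear minimization oracle over the constraint set, and move toward the returned extreme point. An l1-ball constraint is assumed because its oracle has a closed form, and the step-size schedule and all names are illustrative.

import numpy as np

def lmo_l1_ball(grad, radius):
    """Linear minimization oracle for the l1-ball: argmin_{||v||_1 <= radius} <grad, v>
    is a signed, scaled coordinate vertex."""
    v = np.zeros_like(grad)
    i = int(np.argmax(np.abs(grad)))
    v[i] = -radius * np.sign(grad[i])
    return v

def stochastic_frank_wolfe(stoch_grad, x0, radius, n_iters, batch=16, rng=None):
    """Projection-free stochastic Frank-Wolfe sketch for min f(x) s.t. ||x||_1 <= radius.

    stoch_grad(x, batch, rng) -- minibatch gradient estimator (assumed interface).
    The update x <- x + gamma * (v - x) stays feasible because it is a convex
    combination of feasible points, so no projection is ever needed.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float)
    for k in range(n_iters):
        g = stoch_grad(x, batch, rng)   # minibatch gradient estimate
        v = lmo_l1_ball(g, radius)      # linear minimization oracle call
        gamma = 2.0 / (k + 2)           # classical diminishing step size
        x = x + gamma * (v - x)
    return x

# Toy usage: least-squares objective with noisy minibatch gradients.
rng = np.random.default_rng(0)
A, y = rng.standard_normal((500, 100)), rng.standard_normal(500)
def mb_grad(x, batch, rng):
    idx = rng.integers(0, A.shape[0], size=batch)
    return A[idx].T @ (A[idx] @ x - y[idx]) / batch
x_hat = stochastic_frank_wolfe(mb_grad, np.zeros(100), radius=5.0, n_iters=500)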
