An Inexact Primal-Dual Smoothing Framework for Large-Scale Non-Bilinear Saddle Point Problems
We develop an inexact primal-dual first-order smoothing framework to solve a
class of non-bilinear saddle point problems with primal strong convexity.
Compared with existing methods, our framework significantly improves the
primal oracle complexity while maintaining a competitive dual oracle
complexity. In addition, we consider the situation where the primal-dual
coupling term has a large number of component functions. To efficiently handle
this situation, we develop a randomized version of our smoothing framework,
which allows the primal and dual sub-problems in each iteration to be solved by
randomized algorithms inexactly in expectation. The convergence of this
framework is analyzed both in expectation and with high probability. In terms
of the primal and dual oracle complexities, this framework significantly
improves over its deterministic counterpart. As an important application, we
adapt both frameworks for solving convex optimization problems with many
functional constraints. To obtain an \epsilon-optimal and \epsilon-feasible
solution, both frameworks achieve the best-known oracle complexities (in terms
of their dependence on \epsilon).
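The abstract does not reproduce the problem template; a generic saddle point
form consistent with its description (an assumption on our part, not
necessarily the paper's exact setting) is
\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} f(x) + \Phi(x, y) - g(y),
where f is strongly convex, g is convex, and the coupling term \Phi(x, y) is
smooth but not necessarily bilinear; in the randomized variant, \Phi(x, y) =
\frac{1}{n} \sum_{i=1}^{n} \Phi_i(x, y) is a finite sum with many components.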
Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
In this paper, we study a class of stochastic optimization problems, referred
to as \emph{Conditional Stochastic Optimization} (CSO), of the form
\min_{x \in \mathcal{X}} \mathbb{E}_{\xi} f_\xi\big(\mathbb{E}_{\eta|\xi}[g_\eta(x,\xi)]\big),
which finds a wide
spectrum of applications including portfolio selection, reinforcement learning,
robust learning, causal inference and so on. Assuming availability of samples
from the distribution \mathbb{P}(\xi) and samples from the conditional
distribution \mathbb{P}(\eta|\xi), we establish the sample complexity of the
sample average
approximation (SAA) for CSO, under a variety of structural assumptions, such as
Lipschitz continuity, smoothness, and error bound conditions. We show that the
total sample complexity improves from \mathcal{O}(d/\epsilon^4) to
\mathcal{O}(d/\epsilon^3) when assuming smoothness of the outer function, and
further to \mathcal{O}(1/\epsilon^2) when
the empirical function satisfies the quadratic growth condition. We also
establish the sample complexity of a modified SAA when \xi and \eta are
independent. Several numerical experiments further support our theoretical
findings.
Keywords: stochastic optimization, sample average approximation, large
deviations theory
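As a concrete illustration of the nested SAA construction analyzed above, the
following minimal Python sketch builds the empirical CSO objective from n
outer samples \xi_i and a batch of inner samples per outer sample; the
callables f_outer and g_inner are hypothetical placeholders, not names from
the paper.

import numpy as np

def saa_cso_objective(x, xis, etas, f_outer, g_inner):
    # Nested SAA estimate of E_xi[ f_xi( E_{eta|xi}[ g_eta(x, xi) ] ) ].
    # xis  : sequence of n outer samples xi_1, ..., xi_n
    # etas : list of n arrays; etas[i] holds samples drawn from P(eta | xi_i)
    total = 0.0
    for xi, eta_samples in zip(xis, etas):
        # inner empirical mean approximates E_{eta|xi}[ g_eta(x, xi) ]
        inner_mean = np.mean([g_inner(eta, x, xi) for eta in eta_samples])
        # apply the outer function to the inner empirical mean
        total += f_outer(inner_mean, xi)
    return total / len(xis)

Minimizing this empirical objective over x \in \mathcal{X} yields the SAA
solution whose sample complexity the paper bounds.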
Bounding Optimality Gap in Stochastic Optimization via Bagging: Statistical Efficiency and Stability
We study a statistical method to estimate the optimal value and the
optimality gap of a given solution to a stochastic optimization problem, as
an assessment of solution quality. Our approach is based on bootstrap
aggregating (bagging) applied to resampled sample average approximation
(SAA). We show how this
approach leads to valid statistical confidence bounds for non-smooth
optimization. We also demonstrate its statistical efficiency and stability,
which are especially desirable in limited-data situations, and compare these
properties with those of existing methods. Our theory views SAA as
a kernel in an infinite-order symmetric statistic, which can be approximated
via bagging. We substantiate our theoretical findings with numerical results
- …
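As a rough sketch of the bagging idea described above (under our own
simplifying assumptions, not the paper's exact procedure), one can average
the optimal values of SAA problems built on resampled subsets and compare
the result with the candidate solution's empirical objective value:

import numpy as np

def bagged_gap_estimate(x_hat, data, solve_saa, cost, k, B, seed=None):
    # data      : numpy array of n i.i.d. scenarios
    # solve_saa : hypothetical solver returning the optimal value of the
    #             SAA problem built on a subsample of scenarios
    # cost      : per-scenario objective, cost(x, scenario)
    # k, B      : subsample size and number of bagged resamples
    rng = np.random.default_rng(seed)
    saa_vals = []
    for _ in range(B):
        sub = data[rng.choice(len(data), size=k, replace=True)]  # resample
        saa_vals.append(solve_saa(sub))  # optimal value of subsample SAA
    bagged_opt = np.mean(saa_vals)  # bagged estimate of the optimal value
    obj_at_xhat = np.mean([cost(x_hat, s) for s in data])  # candidate's value
    return obj_at_xhat - bagged_opt  # point estimate of the optimality gap

Turning this point estimate into a valid confidence bound additionally
requires a variance estimate across the bagged replications, which this
sketch omits.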