Distributionally Robust Learning with Weakly Convex Losses: Convergence Rates and Finite-Sample Guarantees
We consider a distributionally robust stochastic optimization problem and
formulate it as a stochastic two-level composition optimization problem with
the use of the mean--semideviation risk measure. In this setting, we consider a
single time-scale algorithm, involving two versions of the inner function value
tracking: linearized tracking of a continuously differentiable loss function,
and SPIDER tracking of a weakly convex loss function. We adopt the norm of the
gradient of the Moreau envelope as our measure of stationarity and show that
a sample complexity of the same order is achievable in both
cases, with only a larger constant in the second case. Finally, we
demonstrate the performance of our algorithm with a robust learning example and
a weakly convex, non-smooth regression example.
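The mean--semideviation risk measure central to the abstract above has a direct empirical form, E[X] + c * E[(X - E[X])_+]. The following is a minimal numerical sketch, not the authors' implementation; the trade-off weight `c` and the sample values are illustrative assumptions:

```python
import numpy as np

def mean_semideviation(losses, c=0.5):
    """Empirical mean--upper-semideviation risk of a loss sample:
    mean(X) + c * mean((X - mean(X))_+), with 0 <= c <= 1
    (the coherent range for the first-order semideviation)."""
    mu = losses.mean()
    return mu + c * np.maximum(losses - mu, 0.0).mean()

# Penalizes only upside deviations of the loss above its mean.
sample = np.array([0.0, 0.0, 2.0, 2.0])
print(mean_semideviation(sample, c=0.5))  # 1.0 + 0.5 * 0.5 = 1.25
```

In the paper's two-level composition view, the inner expectation (the mean) and the outer expectation of the positive part are exactly the two function levels that the tracking schemes estimate.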
Data-driven satisficing measure and ranking
We propose a computational framework for real-time risk assessment and
prioritization of random outcomes without prior information on probability
distributions. The basic model is built on the satisficing measure (SM), which
yields a single index for risk comparison. Since the SM is a dual representation
of a family of risk measures, we consider problems constrained by general
convex risk measures and, specifically, by the conditional value-at-risk (CVaR). Starting
from offline optimization, we apply the sample average approximation (SAA) technique and
analyze the convergence rate and validation of optimal solutions. In the online
stochastic optimization case, we develop primal-dual stochastic approximation
algorithms for general risk-constrained problems and derive their
regret bounds. For both the offline and online cases, we illustrate the
relationship between risk-ranking accuracy and sample size (or the number of iterations).
Comment: 26 pages, 6 figures
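The CVaR constraint discussed above can be evaluated from samples through the Rockafellar--Uryasev representation, CVaR_alpha(X) = min_t { t + E[(X - t)_+] / (1 - alpha) }, whose minimizer is the alpha-quantile. A minimal sketch of this empirical evaluation (not the paper's primal-dual algorithm), with the confidence level `alpha` as an assumed parameter:

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Empirical CVaR_alpha via the Rockafellar--Uryasev formula:
    plug t = VaR_alpha (the alpha-quantile) into
    t + mean((X - t)_+) / (1 - alpha)."""
    t = np.quantile(losses, alpha)  # empirical VaR_alpha
    return t + np.maximum(losses - t, 0.0).mean() / (1.0 - alpha)

# Roughly the average of the worst (1 - alpha) fraction of losses.
sample = np.arange(1.0, 11.0)  # losses 1, 2, ..., 10
print(empirical_cvar(sample, alpha=0.9))
```

Because this formula is a convex minimization in the auxiliary variable t, it is also the standard way to embed a CVaR constraint into an SAA problem while keeping it convex.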