Contextual Stochastic Bilevel Optimization
We introduce contextual stochastic bilevel optimization (CSBO) -- a
stochastic bilevel optimization framework with the lower-level problem
minimizing an expectation conditioned on some contextual information and the
upper-level decision variable. This framework extends classical stochastic
bilevel optimization to settings where the lower-level decision maker responds
optimally not only to the decision of the upper-level decision maker but also
to some side information, and where there are multiple or even infinitely many
followers. It
captures important applications such as meta-learning, personalized federated
learning, end-to-end learning, and Wasserstein distributionally robust
optimization with side information (WDRO-SI). Due to the presence of contextual
information, existing single-loop methods for classical stochastic bilevel
optimization are unable to converge. To overcome this challenge, we introduce
an efficient double-loop gradient method based on the Multilevel Monte-Carlo
(MLMC) technique and establish its sample and computational complexities. When
specialized to stochastic nonconvex optimization, our method matches existing
lower bounds. For meta-learning, the complexity of our method does not depend
on the number of tasks. Numerical experiments further validate our theoretical
results.
Comment: The paper is accepted at NeurIPS 202
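The multilevel Monte-Carlo idea underlying the method can be pictured on a toy conditional-expectation problem. The following sketch is illustrative only, not the paper's algorithm: it estimates E[(E[Y|X])^2] for a hypothetical model Y|X ~ N(X, 1), X ~ N(0, 1), by telescoping over levels that double the inner sample size, with the coarse term at each level reusing half of the fine samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlmc_estimate(num_levels=6, outer_samples=2000):
    # MLMC estimate of E[(E[Y | X])**2] with X ~ N(0, 1); the true value is E[X**2] = 1.
    # Level l uses 2**l inner samples; the coarse term at each level reuses half of
    # the fine samples, so level differences shrink and the bias decays geometrically.
    total = 0.0
    for level in range(num_levels):
        n_inner = 2 ** level
        xs = rng.normal(size=outer_samples)
        diffs = []
        for x in xs:
            ys = rng.normal(loc=x, scale=1.0, size=n_inner)
            fine = ys.mean() ** 2
            if level == 0:
                diffs.append(fine)
            else:
                coarse = ys[: n_inner // 2].mean() ** 2
                diffs.append(fine - coarse)
        total += float(np.mean(diffs))
    return total

est = mlmc_estimate()  # should land near the true value 1, up to a small residual bias
```

The point of the construction is that most samples are spent on the cheap coarse levels, while the expensive fine levels only correct small differences.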
Robust optimization of control parameters for WEC arrays using stochastic methods
This work presents a new computational optimization framework for the robust
control of parks of Wave Energy Converters (WEC) in irregular waves. The power
of WEC parks is maximized with respect to the individual control damping and
stiffness coefficients of each device. The results are robust with respect to
the incident wave direction, which is treated as a random variable.
Hydrodynamic properties are computed using the linear potential model, and the
dynamics of the system are computed in the frequency domain. A slamming
constraint is enforced to ensure that the results are physically realistic. We
show that the stochastic optimization problem is well posed. Two optimization
approaches for dealing with stochasticity are then considered: stochastic
approximation and sample average approximation. The outcomes of these two
methods in terms of accuracy and computational time are presented.
The results of the optimization for complex and realistic array configurations
of possible engineering interest are then discussed. Results of extensive
numerical experiments demonstrate the efficiency of the proposed computational
framework.
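The stochastic approximation route can be illustrated with a one-dimensional toy. The quadratic surrogate objective and the uniform distribution of the incident wave direction below are assumptions for the sketch, not the paper's WEC model: a Robbins-Monro iteration tunes a single control coefficient against randomly sampled wave directions.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(c, theta):
    # gradient of a toy surrogate loss (c - cos(theta))**2; its expected
    # minimizer over theta ~ Uniform(0, pi/2) is E[cos(theta)] = 2 / pi
    return 2.0 * (c - np.cos(theta))

c = 0.0
for k in range(1, 5001):
    theta = rng.uniform(0.0, np.pi / 2)  # random incident wave direction
    c -= (0.5 / k) * grad(c, theta)      # Robbins-Monro diminishing step size
# c approaches the expected minimizer 2 / pi
```

Each iteration needs only one sampled direction, which is why stochastic approximation is attractive when evaluating the full expectation is expensive.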
When can we improve on sample average approximation for stochastic optimization?
We explore the performance of sample average approximation in comparison with several other methods for stochastic optimization. The methods we evaluate are (a) bagging; (b) kernel density estimation; (c) maximum likelihood estimation; and (d) a Bayesian approach. We use two test sets: first a set of quadratic objective functions allowing different types of interaction between the random component and the univariate decision variable; and second a set of portfolio optimization problems. We make recommendations for effective approaches.
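For a quadratic objective of the kind in the first test set, sample average approximation has a closed-form solution, which the following minimal sketch (our toy example, not the paper's test set) exploits: replacing the expectation in min_x E[(x - W)^2] by an empirical average yields the sample mean as minimizer.

```python
import numpy as np

rng = np.random.default_rng(2)

def saa_minimizer(samples):
    # SAA for min_x E[(x - W)**2]: replace the expectation with the empirical
    # average over the samples; the minimizer of the average is the sample mean
    return float(samples.mean())

w = rng.normal(loc=3.0, scale=1.0, size=5000)
x_hat = saa_minimizer(w)  # converges to the true minimizer E[W] = 3 as n grows
```

The alternatives compared in the abstract (bagging, density estimation, and so on) differ in how they turn the same samples into a surrogate for the expectation before minimizing.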
Online Kernel Sliced Inverse Regression
Online dimension reduction is a common method for high-dimensional streaming
data processing. Online principal component analysis, online sliced inverse
regression, online kernel principal component analysis and other methods have
been studied in depth, but as far as we know, online supervised nonlinear
dimension reduction methods have not been fully explored. In this article, an
online kernel sliced inverse regression method is proposed. By introducing the
approximate linear dependence condition and dictionary variable sets, we
address the problem of increasing variable dimensions with the sample size in
the online kernel sliced inverse regression method, and propose a reduced-order
method for updating variables online. We then transform the problem into an
online generalized eigen-decomposition problem, and use the stochastic
optimization method to update the centered dimension reduction directions.
Simulations and real data analysis show that our method achieves performance
close to that of batch-processing kernel sliced inverse regression.
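The online eigen-update can be pictured with a plain (non-kernel) stand-in. The diagonal covariance below is an assumption for the sketch, and the paper's online generalized eigen-decomposition is more involved; the sketch uses Oja's rule, a classical stochastic iteration that tracks the leading eigenvector of a streaming covariance one observation at a time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Oja's rule: a streaming stand-in for the online eigen-decomposition step,
# tracking the top eigenvector of Cov(x) = diag(3, 1, 0.5); true answer: e_1
L = np.diag(np.sqrt([3.0, 1.0, 0.5]))    # factor so that x = L z has this covariance
w = rng.normal(size=3)
w /= np.linalg.norm(w)
for k in range(1, 20001):
    x = L @ rng.normal(size=3)               # one streaming observation
    w += (1.0 / (100.0 + k)) * x * (x @ w)   # rank-one stochastic update
    w /= np.linalg.norm(w)                   # renormalize to unit length
```

No covariance matrix is ever formed or stored, which is the property that makes such updates suitable for streaming data.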
Data-driven satisficing measure and ranking
We propose a computational framework for real-time risk assessment and
prioritization of random outcomes without prior information on probability
distributions. The basic model is built on the satisficing measure (SM), which
yields a single index for risk comparison. Since SM is a dual representation
for a family of risk measures, we consider problems constrained by general
convex risk measures and specifically by conditional value-at-risk. Starting
from offline optimization, we apply the sample average approximation technique
and analyze the convergence rate and the validation of optimal solutions. In
the online stochastic optimization case, we develop primal-dual stochastic
approximation algorithms for general risk-constrained problems and derive
their regret bounds. For both offline and online cases, we illustrate the
relationship between risk ranking accuracy and sample size (or number of
iterations).
Comment: 26 Pages, 6 Figures
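The conditional value-at-risk appearing in the constraints has a convenient sample form through the Rockafellar-Uryasev representation, CVaR_a(L) = min_t { t + E[(L - t)+] / (1 - a) }, with the a-quantile (the VaR) attaining the minimum. A sketch on simulated losses, unrelated to the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_cvar(losses, alpha=0.95):
    # Rockafellar-Uryasev form: CVaR_a(L) = min_t t + E[(L - t)+] / (1 - a);
    # the minimizing t is the alpha-quantile (the VaR) of the losses
    t = float(np.quantile(losses, alpha))
    return t + float(np.maximum(losses - t, 0.0).mean()) / (1.0 - alpha)

losses = rng.standard_normal(100_000)
c = sample_cvar(losses)  # for N(0, 1) losses, CVaR_0.95 is about 2.06
```

Because this form is a minimization over a single scalar, CVaR constraints fold naturally into sample average approximation and stochastic approximation schemes like those in the abstract.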