2,251 research outputs found
Sample-path solutions for simulation optimization problems and stochastic variational inequalities
Keywords: variational inequality; simulation; optimization
Subsampling Algorithms for Semidefinite Programming
We derive a stochastic gradient algorithm for semidefinite optimization using
randomization techniques. The algorithm uses subsampling to reduce the
computational cost of each iteration, and the subsampling ratio explicitly
controls the granularity, i.e., the trade-off between the cost per iteration and the total
number of iterations. Furthermore, the total computational cost is directly
proportional to the complexity (i.e. rank) of the solution. We study numerical
performance on some large-scale problems arising in statistical learning.
Comment: Final version, to appear in Stochastic Systems.
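As a rough, hypothetical illustration of the subsampling idea in this abstract (not the paper's exact scheme), the Python sketch below estimates the (sub)gradient of a maximum-eigenvalue objective from a uniformly subsampled, rescaled copy of the matrix. The subsampling ratio rho plays the granularity role described above; the toy objective lambda_max(C + diag(y)) - b @ y and all names are assumptions made for this sketch.

    import numpy as np

    def subsampled_lmax_grad(A, rho, rng):
        # Keep each upper-triangular entry of A independently with
        # probability rho and rescale by 1/rho, so the sparse matrix S
        # is an unbiased estimate of A; then return the (sub)gradient
        # of lambda_max at S, i.e. v v^T for the leading eigenvector v.
        # Smaller rho gives cheaper but noisier iterations.
        n = A.shape[0]
        keep = rng.random((n, n)) < rho
        S = np.triu(np.where(keep, A, 0.0)) / rho
        S = S + np.triu(S, 1).T  # symmetrize
        # A sparse eigensolver (e.g. scipy.sparse.linalg.eigsh) would
        # exploit the sparsity of S; dense eigh keeps the sketch short.
        _, V = np.linalg.eigh(S)
        v = V[:, -1]
        return np.outer(v, v)

    # Toy usage: stochastic gradient descent on y for the (assumed)
    # objective lambda_max(C + diag(y)) - b @ y.
    rng = np.random.default_rng(0)
    n = 200
    C = rng.standard_normal((n, n))
    C = (C + C.T) / 2
    b, y = np.ones(n), np.zeros(n)
    for t in range(1, 201):
        G = subsampled_lmax_grad(C + np.diag(y), rho=0.2, rng=rng)
        y -= (np.diag(G) - b) / np.sqrt(t)  # d(lambda_max)/dy = v**2

The cost-versus-noise dial is exactly rho: each iteration touches only about rho * n^2 / 2 matrix entries, at the price of a noisier gradient and hence more iterations.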
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains the scientific program, both in overview form and in full detail, together with information on the social program, the venue, special meetings, and more.
State-constrained Optimization Problems under Uncertainty: A Tensor Train Approach
We propose an algorithm to solve optimization problems constrained by partial
(or ordinary) differential equations under uncertainty, with almost-sure
constraints on the state variable. To alleviate the computational burden of
high-dimensional random variables, we approximate all random fields by the
tensor-train decomposition. To enable efficient tensor-train approximation of
the state constraints, the latter are handled using the Moreau-Yosida penalty,
with an additional smoothing of the positive part (plus/ReLU) function by a
softplus function. We derive theoretical bounds on the constraint violation in
terms of the Moreau-Yosida regularization parameter and smoothing width of the
softplus function. This bound also yields a practical recipe for selecting
these two parameters. When the optimization problem is strongly convex, we
establish strong convergence of the regularized solution to the optimal
control. We develop a second-order Newton-type method with a fast matrix-free
action of the approximate Hessian to solve the smoothed Moreau-Yosida problem.
This algorithm is tested on benchmark elliptic problems with random
coefficients, optimization problems constrained by random elliptic variational
inequalities, and a real-world epidemiological model with 20 random variables.
These examples demonstrate mild (at most polynomial) scaling with respect to
the dimension and the regularization parameters.
Comment: 29 pages.
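To make the smoothing step concrete, here is a minimal Python sketch of a softplus-smoothed Moreau-Yosida penalty and its gradient. The pointwise constraint form y <= psi, the 1/(2*gamma) scaling, and all names are assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def softplus(x, beta):
        # Smooth surrogate for max(x, 0): beta * log(1 + exp(x / beta)).
        # Tends to the plus/ReLU function as beta -> 0; evaluated in a
        # form that cannot overflow for large |x| / beta.
        z = x / beta
        return beta * (np.maximum(z, 0.0) + np.log1p(np.exp(-np.abs(z))))

    def smoothed_my_penalty(y, psi, gamma, beta):
        # Softplus-smoothed Moreau-Yosida penalty for the (assumed)
        # pointwise state constraint y <= psi:
        #     P(y) = (1 / (2 * gamma)) * sum(softplus(y - psi, beta)**2)
        r = softplus(y - psi, beta)
        value = 0.5 / gamma * np.sum(r ** 2)
        # d softplus(x, beta)/dx = sigmoid(x / beta), so the penalty is
        # smooth -- the property a Newton-type method relies on.
        z = (y - psi) / beta
        e = np.exp(-np.abs(z))
        sig = np.where(z >= 0.0, 1.0, e) / (1.0 + e)  # stable sigmoid
        return value, (r / gamma) * sig

    # Shrinking beta tightens the approximation (softplus(0, beta) equals
    # beta * log(2)), which is how a constraint-violation bound in beta
    # and gamma can arise; the paper derives how to balance the two.
    y, psi = np.linspace(-1.0, 1.0, 5), np.zeros(5)
    val, grad = smoothed_my_penalty(y, psi, gamma=1e-2, beta=1e-3)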