Solving variational inequalities with Stochastic Mirror-Prox algorithm
In this paper we consider iterative methods for stochastic variational
inequalities (s.v.i.) with monotone operators. Our basic assumption is that the
operator possesses both smooth and nonsmooth components. Further, only noisy
observations of the problem data are available. We develop a novel Stochastic
Mirror-Prox (SMP) algorithm for solving s.v.i. and show that with a
convenient stepsize strategy it attains the optimal rates of convergence with
respect to the problem parameters. We apply the SMP algorithm to stochastic
composite minimization and describe particular applications to the stochastic
semidefinite feasibility problem and eigenvalue minimization.
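Since the abstract describes the SMP iteration only at a high level, a minimal sketch may help. The version below is the Euclidean (extragradient) special case with step-size-weighted averaging; `F_noisy`, `project`, and the step-size schedule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def stochastic_mirror_prox(F_noisy, project, x0, gammas):
    """Euclidean extragradient form of a stochastic mirror-prox method.

    F_noisy(x): noisy evaluation of the monotone operator F at x.
    project(x): Euclidean projection onto the feasible convex set.
    gammas:     sequence of positive step sizes.
    """
    x = x0
    total_w, x_avg = 0.0, np.zeros_like(x0, dtype=float)
    for gamma in gammas:
        y = project(x - gamma * F_noisy(x))  # extrapolation (leader) step
        x = project(x - gamma * F_noisy(y))  # correction step at the leader
        total_w += gamma                     # step-size-weighted averaging
        x_avg = x_avg + gamma * (y - x_avg) / total_w
    return x_avg                             # averaged leader point
```

For a convex-concave saddle-point problem, `F_noisy` would stack the noisy partial gradients (the gradient in the minimization variable and the negative gradient in the maximization variable), which is the standard variational-inequality reformulation.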
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.
Riemannian Stochastic Gradient Method for Nested Composition Optimization
This work considers optimization of composition of functions in a nested form
over Riemannian manifolds, where each function contains an expectation. This
type of problem is gaining popularity in applications such as policy
evaluation in reinforcement learning or model customization in meta-learning.
The standard Riemannian stochastic gradient methods for non-compositional
optimization cannot be directly applied, as the stochastic approximation of the
inner functions creates bias in the gradients of the outer functions. For two-level
composition optimization, we present a Riemannian Stochastic Composition
Gradient Descent (R-SCGD) method that finds an approximate stationary point,
with expected squared Riemannian gradient smaller than ε, in O(ε⁻²)
calls to the stochastic gradient oracle of the outer
function and stochastic function and gradient oracles of the inner function.
Furthermore, we generalize the R-SCGD algorithms for problems with multi-level
nested compositional structures, with the same complexity of O(ε⁻²)
for the first-order stochastic oracle. Finally, the performance of the R-SCGD
method is numerically evaluated over a policy evaluation problem in
reinforcement learning.
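As a concrete illustration of the two-level scheme, here is a minimal sketch on the unit sphere (one of the simplest Riemannian manifolds). The moving-average tracker `u` for the inner function value is the standard device for reducing compositional bias; all oracle names, step sizes, and the retraction are assumptions for illustration rather than the paper's R-SCGD code.

```python
import numpy as np

def sphere_retract(x, v):
    """Retraction on the unit sphere: move in the tangent direction, renormalize."""
    y = x + v
    return y / np.linalg.norm(y)

def r_scgd(inner_val, inner_jac, outer_grad, x0, T, eta=0.01, beta=0.1):
    """Two-level compositional minimization of f(g(x)) on the unit sphere.

    inner_val(x):  noisy sample of g(x)                    (shape (m,))
    inner_jac(x):  noisy sample of the Jacobian of g at x  (shape (m, n))
    outer_grad(u): noisy sample of the gradient of f at u  (shape (m,))
    """
    x = x0 / np.linalg.norm(x0)
    u = inner_val(x)                              # tracker for g(x)
    for _ in range(T):
        u = (1 - beta) * u + beta * inner_val(x)  # moving average fights bias
        egrad = inner_jac(x).T @ outer_grad(u)    # chain-rule estimate in R^n
        rgrad = egrad - (x @ egrad) * x           # project onto tangent space
        x = sphere_retract(x, -eta * rgrad)       # Riemannian gradient step
    return x
```

Using the tracker `u` instead of a fresh sample of g(x) inside the outer gradient is exactly what distinguishes compositional methods from naive stochastic gradient descent, where plugging a single noisy inner sample into the outer gradient yields a biased estimate.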
Inertial Stochastic PALM (iSPALM) and Applications in Machine Learning
Inertial algorithms for minimizing nonsmooth and nonconvex functions, such as
the inertial proximal alternating linearized minimization algorithm (iPALM),
have demonstrated their superiority in computation time over their
non-inertial variants. In many problems in imaging and machine learning, the
objective functions have a special form involving large amounts of data, which
encourages the use of stochastic algorithms. While algorithms based on
stochastic gradient descent are still used in the majority of applications,
stochastic algorithms for minimizing nonsmooth and nonconvex functions have
recently been proposed as well. In this paper, we derive an inertial variant of a stochastic PALM
algorithm with a variance-reduced gradient estimator, called iSPALM, and prove
linear convergence of the algorithm under certain assumptions. Our inertial
approach can be seen as a generalization, to nonsmooth problems, of the
momentum methods widely used to speed up and stabilize optimization
algorithms, in particular in machine learning. Numerical experiments for learning the weights of a
so-called proximal neural network and the parameters of Student-t mixture
models show that our new algorithm outperforms both stochastic PALM and its
deterministic counterparts.
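To make the ingredients concrete, the sketch below combines one PALM-style block update with inertial extrapolation and an SVRG-style variance-reduced gradient estimate. The paper's actual estimator and parameter choices may differ, so treat `prox_f`, the step size, and the inertia parameters as illustrative assumptions.

```python
import numpy as np

def svrg_grad(grad_i, x, x_snap, full_grad_snap, batch):
    """SVRG-style estimate: mini-batch difference plus a stored full gradient."""
    diff = sum(grad_i(x, i) - grad_i(x_snap, i) for i in batch) / len(batch)
    return diff + full_grad_snap

def ispalm_block_step(x, x_prev, grad_i, x_snap, full_grad_snap, batch,
                      prox_f, tau=0.1, alpha=0.5, beta=0.5):
    """One inertial, variance-reduced PALM update for a single block.

    As in iPALM, two extrapolation points are used: one where the gradient
    is evaluated and one that anchors the proximal step.
    prox_f(v, tau) should return argmin_w f(w) + ||w - v||^2 / (2 * tau).
    """
    y = x + alpha * (x - x_prev)   # inertial point for the gradient
    z = x + beta * (x - x_prev)    # inertial point for the prox anchor
    g = svrg_grad(grad_i, y, x_snap, full_grad_snap, batch)
    return prox_f(z - tau * g, tau)
```

A full iSPALM iteration would alternate such steps over the blocks (e.g., network weights and model parameters) and periodically refresh the snapshot point `x_snap` together with its full gradient.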
Gradient Free Methods for Non-Smooth Convex Optimization with Heavy Tails on Convex Compact
Optimization problems, in which only the realization of a function or a
zeroth-order oracle is available, have many applications in practice. An
effective method for solving such problems is the approximation of the gradient
using sampling and finite differences of the function values. However, some
noise can be present in the zeroth-order oracle, preventing the exact
evaluation of the function value; this noise can be stochastic or
adversarial. In this paper, we propose and study new, easy-to-implement
algorithms that are optimal in terms of the number of oracle calls for solving
non-smooth optimization problems on a convex compact set with heavy-tailed
stochastic noise (the random noise has a bounded (1+κ)-th moment) and
adversarial noise. The first algorithm is based on the heavy-tail-resistant
mirror descent and uses special transformation functions that allow controlling
the tails of the noise distribution. The second algorithm is based on the
gradient clipping technique. The paper provides proofs of the algorithms'
convergence, both in high probability and in expectation, when a convex
function is minimized. For functions satisfying an r-growth condition, a
faster algorithm is proposed using the restart technique.
Particular attention is paid to the question of how large the adversarial
noise can be while the optimality and convergence of the algorithms are still
guaranteed.
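The two building blocks named for the clipping-based algorithm, a randomized finite-difference gradient estimate and norm clipping to tame heavy-tailed noise, can be sketched as follows. The two-point smoothing estimator, the clipping level `lam`, and the Euclidean projection stand in for the paper's mirror-descent machinery and are assumptions for illustration.

```python
import numpy as np

def two_point_grad(f_noisy, x, h, rng):
    """Randomized two-point finite-difference estimate of the gradient."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)                 # random direction on the unit sphere
    d = x.size
    return d * (f_noisy(x + h * e) - f_noisy(x - h * e)) / (2 * h) * e

def clip(g, lam):
    """Rescale g to norm lam when it is larger; tames heavy-tailed estimates."""
    n = np.linalg.norm(g)
    return g if n <= lam else (lam / n) * g

def clipped_zo_descent(f_noisy, project, x0, T, eta, h, lam, seed=0):
    """Projected descent on a convex compact set with clipped ZO gradients."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(T):
        g = clip(two_point_grad(f_noisy, x, h, rng), lam)
        x = project(x - eta * g)           # Euclidean mirror-descent step
    return x
```

The smoothing radius `h` trades bias for variance in the estimator, while `lam` bounds the influence of any single heavy-tailed sample, which is what enables high-probability guarantees without light-tail assumptions.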