Online Optimization with Memory and Competitive Control
This paper presents competitive algorithms for a novel class of online optimization problems with memory. We consider a setting where the learner seeks to minimize the sum of a hitting cost and a switching cost that depends on the previous p decisions. This setting generalizes Smoothed Online Convex Optimization. The proposed approach, Optimistic Regularized Online Balanced Descent, achieves a constant, dimension-free competitive ratio. Further, we show a connection between online optimization with memory and online control with adversarial disturbances. This connection, in turn, leads to a new constant-competitive policy for a rich class of online control problems.
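As a rough illustration of the setting described in this abstract (the notation below is assumed, not taken from the paper), the learner's cumulative cost can be written as

$$\mathrm{cost}(x_1,\dots,x_T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;+\; \sum_{t=1}^{T} c(x_t, x_{t-1}, \dots, x_{t-p}),$$

where $f_t$ is the hitting cost revealed at round $t$ and $c$ is a switching cost coupling the current decision to the previous $p$ decisions; with $p = 1$ and, say, $c(x_t, x_{t-1}) = \tfrac{1}{2}\|x_t - x_{t-1}\|^2$, this reduces to the Smoothed Online Convex Optimization setting the abstract mentions.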
Online optimization of storage ring nonlinear beam dynamics
We propose to optimize the nonlinear beam dynamics of existing and future storage rings with direct online optimization techniques. This approach may be crucial for the implementation of diffraction-limited storage rings. In this paper we discuss considerations and algorithms for the online optimization approach. We have applied this approach to experimentally improve the dynamic aperture of the SPEAR3 storage ring with the robust conjugate direction search method and the particle swarm optimization method. The dynamic aperture was improved by more than 5 mm within a short period of time. The experimental setup and results are presented.
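As a rough sketch of how a particle-swarm-based black-box optimizer of this kind can be structured (generic Python, not the SPEAR3 implementation; the function names, bounds, and hyperparameters below are illustrative placeholders):

import numpy as np

def particle_swarm_minimize(objective, bounds, n_particles=20, n_iters=50,
                            w=0.7, c1=1.5, c2=1.5, rng=None):
    """Generic particle swarm minimizer for a noisy black-box objective.

    `objective` maps a knob-setting vector to a scalar penalty (e.g. the
    negative of a measured dynamic-aperture proxy); `bounds` is a list of
    (low, high) pairs, one per knob. Placeholder sketch only.
    """
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)

    pos = rng.uniform(lo, hi, size=(n_particles, dim))    # particle positions
    vel = np.zeros_like(pos)                              # particle velocities
    pbest = pos.copy()                                    # per-particle best positions
    pbest_val = np.array([objective(p) for p in pos])     # per-particle best values
    gbest = pbest[np.argmin(pbest_val)].copy()            # global best position
    gbest_val = pbest_val.min()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia plus pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)                  # keep knobs within limits
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < gbest_val:
            gbest_val = pbest_val.min()
            gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, gbest_val

On a real machine, each call to the objective would correspond to setting the magnet knobs and taking a live measurement used as a proxy for the dynamic aperture; the robust conjugate direction search mentioned in the abstract would play the same black-box role as this swarm loop.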
Highly-Smooth Zero-th Order Online Optimization
Vianney Perchet
The minimization of convex functions which are only available through partial and noisy information is a key methodological problem in many disciplines. In this paper we consider convex optimization with noisy zero-th order information, that is, noisy function evaluations at any desired point. We focus on problems with high degrees of smoothness, such as logistic regression. We show that, as opposed to gradient-based algorithms, high-order smoothness may be used to improve estimation rates, with a precise dependence of our upper bounds on the degree of smoothness. In particular, we show that for infinitely differentiable functions, we recover the same dependence on sample size as gradient-based algorithms, with an extra dimension-dependent factor. This is done for both convex and strongly-convex functions, with finite-horizon and anytime algorithms. Finally, we also recover similar results in the online optimization setting.
Comment: Conference on Learning Theory (COLT), Jun 2016, New York, United States. 201
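For concreteness, here is a minimal sketch of optimizing from noisy zero-th order information using the standard randomized two-point gradient estimator, rather than the kernel-based estimator that exploits higher-order smoothness in the paper (all names, data, and parameters below are illustrative):

import numpy as np

def zeroth_order_gradient_descent(f, x0, n_iters=200, step=0.1, delta=1e-2, rng=None):
    """Minimize f using only (possibly noisy) function evaluations.

    Uses the two-point estimator g = d/(2*delta) * (f(x + delta*u) - f(x - delta*u)) * u
    with u uniform on the unit sphere; a basic sketch, not the paper's method.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    d = x.size
    for _ in range(n_iters):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                     # random direction on the unit sphere
        g = d / (2 * delta) * (f(x + delta * u) - f(x - delta * u)) * u
        x = x - step * g                           # plain gradient step on the estimate
    return x

# Example: a noisy logistic-type loss in 5 dimensions (synthetic, illustrative only).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((50, 5)), rng.integers(0, 2, 50) * 2 - 1
def noisy_logistic_loss(w):
    clean = np.mean(np.log1p(np.exp(-b * (A @ w))))
    return clean + 0.01 * rng.standard_normal()    # zero-mean evaluation noise
w_hat = zeroth_order_gradient_descent(noisy_logistic_loss, np.zeros(5))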