Adaptive Bound Optimization for Online Convex Optimization
We introduce a new online convex optimization algorithm that adaptively
chooses its regularization function based on the loss functions observed so
far. This is in contrast to previous algorithms that use a fixed regularization
function such as L2-squared, and modify it only via a single time-dependent
parameter. Our algorithm's regret bounds are worst-case optimal, and for
certain realistic classes of loss functions they are much better than existing
bounds. These bounds are problem-dependent, which means they can exploit the
structure of the actual problem instance. Critically, however, our algorithm
does not need to know this structure in advance. Rather, we prove competitive
guarantees that show the algorithm provides a bound within a constant factor of
the best possible bound (of a certain functional form) in hindsight.
Comment: Updates to match final COLT version
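The adaptive-regularization idea can be illustrated with a minimal AdaGrad-style sketch: the learner accumulates the squared gradients it has observed and scales each coordinate's step accordingly, so the effective regularizer is shaped by the data rather than fixed in advance. This is a stand-in illustration under assumed losses and constants, not the paper's algorithm or its competitive-guarantee machinery.

```python
import numpy as np

def adagrad_step(x, grad, accum, eta=0.5, eps=1e-8):
    """One AdaGrad-style update: accumulate grad**2, then take a
    per-coordinate step scaled by the accumulated gradient history."""
    accum = accum + grad ** 2
    x = x - eta * grad / (np.sqrt(accum) + eps)
    return x, accum

# Toy run on quadratic losses f_t(x) = 0.5 * ||x - c_t||^2, where the
# targets c_t are noisy draws around (1, -1, 0.5). Purely illustrative.
rng = np.random.default_rng(0)
x = np.zeros(3)
accum = np.zeros(3)
for t in range(200):
    c = np.array([1.0, -1.0, 0.5]) + rng.normal(0.0, 0.1, size=3)
    grad = x - c  # gradient of the quadratic loss at x
    x, accum = adagrad_step(x, grad, accum)
```

Coordinates with larger accumulated gradients get smaller steps, which is the per-coordinate form of choosing the regularizer from the observed losses.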
Efficient Online Convex Optimization with Adaptively Minimax Optimal Dynamic Regret
We introduce an online convex optimization algorithm using projected
sub-gradient descent with ideal adaptive learning rates, where each computation
is efficiently done in a sequential manner. For the first time in the
literature, this algorithm provides an adaptively minimax optimal dynamic
regret guarantee for a sequence of convex functions without any restrictions --
such as strong convexity, smoothness or even Lipschitz continuity -- against a
comparator decision sequence with bounded total successive changes. We show optimality by constructing an adaptive worst-case dynamic regret lower bound that is built from the actual sub-gradient norms and matches our guarantees. We discuss the advantages of our algorithm over adaptive projection with sub-gradient self outer products, and derive an extension for independent learning in each decision coordinate. Additionally, we demonstrate how to preserve our guarantees, in a truly online manner, when the bound on the total successive changes of the dynamic comparator sequence grows over time.
Comment: 10 pages, 1 figure, preprint
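The core primitive here, projected sub-gradient descent with a learning rate that adapts to the observed sub-gradient norms (roughly eta_t = D / sqrt of the accumulated squared norms), can be sketched as follows. The losses, ball radius, and iterate averaging below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball {x : ||x|| <= radius}."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def adaptive_pgd(subgrad, x0, radius, T):
    """Projected sub-gradient descent with eta_t = radius / sqrt(sum ||g_s||^2),
    so the step size is built from the sub-gradient norms actually observed."""
    x = x0.copy()
    sq_norm_sum = 0.0
    iterates = [x.copy()]
    for t in range(T):
        g = subgrad(x, t)
        sq_norm_sum += float(g @ g)
        eta = radius / (np.sqrt(sq_norm_sum) + 1e-12)
        x = project_ball(x - eta * g, radius)
        iterates.append(x.copy())
    return np.mean(iterates, axis=0)  # averaged iterate, for the toy check

# Toy sequence: L1-style sub-gradients pulling toward a fixed comparator.
target = np.array([0.7, -0.2])
avg = adaptive_pgd(lambda x, t: np.sign(x - target), np.zeros(2), 1.0, 500)
```

Because the step size shrinks with the accumulated squared norms rather than a preset Lipschitz constant, no a priori bound on the sub-gradients is needed, which mirrors the "no restrictions" setting of the abstract.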
Efficient online algorithms for fast-rate regret bounds under sparsity
We consider the online convex optimization problem. In the setting of arbitrary sequences and a finite set of parameters, we establish a new fast-rate quantile regret bound. We then investigate optimization over the L1-ball by discretizing the parameter space. Our algorithm is projection-free, and we propose an efficient solution by restarting the algorithm on adaptive discretization grids. In the adversarial setting, we develop an algorithm that achieves several rates of convergence with different dependences on the sparsity of the objective. In the i.i.d. setting, we establish new risk bounds that are adaptive to the sparsity of the problem and to the regularity of the risk (ranging from a rate of 1/√T for general convex risk to 1/T for strongly convex risk). These results generalize previous work on sparse online learning. They are obtained under a weak assumption on the risk (the Łojasiewicz assumption) that allows multiple optima, which is crucial when dealing with degenerate situations.
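The finite-parameter setting in which the quantile regret bound is stated can be illustrated with plain exponentially weighted averaging over a discretization grid. This sketch only shows that setting; it is not the paper's restart-on-adaptive-grids algorithm, and every concrete choice below (grid, losses, learning rate) is an assumption.

```python
import numpy as np

def exp_weights(losses, eta=2.0):
    """Exponentially weighted averaging over K fixed parameters.
    losses: (T, K) array of per-round losses; returns the final weights."""
    T, K = losses.shape
    w = np.full(K, 1.0 / K)  # uniform prior over the grid
    for t in range(T):
        w = w * np.exp(-eta * losses[t])
        w = w / w.sum()  # renormalize each round for numerical stability
    return w

# Toy problem: a grid on [-1, 1] and squared losses against noisy
# targets drawn around 0.3, so the weights should concentrate near 0.3.
grid = np.linspace(-1.0, 1.0, 21)
rng = np.random.default_rng(1)
targets = 0.3 + 0.05 * rng.standard_normal(100)
losses = (grid[None, :] - targets[:, None]) ** 2
w = exp_weights(losses)
best = grid[np.argmax(w)]
```

In this finite-expert view, the weight vector concentrates on the grid points with small cumulative loss; the paper's contribution is obtaining fast quantile-type rates in this setting and extending it to the L1-ball via adaptive discretization.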