Lipschitz and comparator-norm adaptivity in online learning

Abstract

We study Online Convex Optimization in the unbounded setting where neither predictions nor gradients are constrained. The goal is to simultaneously adapt to both the sequence of gradients and the comparator. We first develop parameter-free and scale-free algorithms for a simplified setting with hints. We present two versions: the first adapts to the squared norms of both comparator and gradients separately using O(d) time per round; the second adapts to their squared inner products (which measure variance only in the comparator direction) in O(d^3) time per round. We then generalize two prior reductions to the unbounded setting.
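For orientation, the protocol and regret notion behind these statements can be sketched as follows. This is an illustrative summary of the standard unbounded OCO setting; the notation w_t, g_t, u is assumed here, and the displayed bound only indicates the general shape of a Lipschitz- and comparator-adaptive guarantee, not the paper's exact results.

At each round t = 1, ..., T the learner plays w_t \in \mathbb{R}^d, a convex loss f_t is revealed, and the learner observes a subgradient g_t \in \partial f_t(w_t). The (linearized) regret against an arbitrary comparator u \in \mathbb{R}^d is

    R_T(u) = \sum_{t=1}^{T} \langle g_t, w_t - u \rangle,

which upper-bounds \sum_{t=1}^{T} \bigl( f_t(w_t) - f_t(u) \bigr) by convexity. A guarantee that is adaptive in both quantities scales, up to logarithmic factors, as

    R_T(u) \lesssim \|u\| \sqrt{\sum_{t=1}^{T} \|g_t\|^2},

simultaneously for all u, without prior knowledge of either \|u\| or a Lipschitz bound on the gradients.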