
    Temporal aspects of adaptive online learning: continuity and representation

    Adaptive online learning, broadly construed, is the study of sequential decision making beyond the worst case. Compared to their classical minimax counterparts, adaptive algorithms typically require less manual tuning, while provably performing better in benign environments or when prior knowledge is available. This dissertation presents new techniques for designing these algorithms. The central theme is an emphasis on the temporal nature of the problem, which has not received enough attention in the literature.

    The first part of the dissertation focuses on temporal continuity. While modern online learning almost exclusively studies a discrete-time repeated game, it is shown that algorithm design can be simplified, and in certain cases optimized, by scaling the game towards a continuous-time limit and solving the resulting differential equation. Concretely, we develop comparator-adaptive algorithms for Online Convex Optimization, achieving optimal static regret bounds in the vanilla setting and in its variant with switching costs. The benefits extend to another classical online learning problem, Learning with Expert Advice.

    The second part of the dissertation focuses on temporal representation. In contrast to the first part, here we consider the general objective of dynamic regret minimization, which forms the foundation of time series forecasting. It is shown that by introducing temporal features, the task can be transformed into static regret minimization on a user-specified representation space with growing dimension. Drawing novel connections to wavelet features, we develop a simple algorithm that improves the state-of-the-art dynamic regret bound achieved by more sophisticated approaches. An application is the online fine-tuning of a black-box time series forecaster.
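
    The following is a hedged, illustrative sketch of the reduction described in the second part, not the dissertation's algorithm: fixing a temporal feature map phi(t) turns dynamic regret against comparators of the form t -> <phi(t), w> into static regret over a single weight vector w, which a standard learner such as online gradient descent can then minimize; the same template additively fine-tunes a black-box forecaster. The Haar-style features, the squared loss, and the names haar_features and online_finetune are assumptions made for this sketch only.

    # Hedged sketch (not the dissertation's algorithm): reduce dynamic regret to
    # static regret by learning weights over a fixed temporal feature map, used
    # here to additively correct a black-box forecaster online.
    import numpy as np

    def haar_features(t, horizon, levels=4):
        """Haar-wavelet-style features of the time index t in [0, horizon).
        The comparator class is {t -> <phi(t), w>} for fixed weights w."""
        x = t / horizon  # rescale time to [0, 1)
        feats = [1.0]    # constant (scaling-function) feature
        for j in range(levels):
            for k in range(2 ** j):
                lo, mid, hi = k / 2 ** j, (k + 0.5) / 2 ** j, (k + 1) / 2 ** j
                if lo <= x < mid:
                    feats.append(2 ** (j / 2))
                elif mid <= x < hi:
                    feats.append(-2 ** (j / 2))
                else:
                    feats.append(0.0)
        return np.array(feats)

    def online_finetune(black_box, targets, horizon, lr=0.02, levels=4):
        """Online additive correction of a black-box forecaster.
        Prediction: yhat_t = black_box(t) + <phi(t), w_t>, with w_t updated by
        online gradient descent on squared loss, i.e. static-regret learning of w."""
        dim = 1 + sum(2 ** j for j in range(levels))
        w = np.zeros(dim)
        losses = []
        for t in range(horizon):
            phi = haar_features(t, horizon, levels)
            yhat = black_box(t) + phi @ w
            y = targets[t]
            losses.append(0.5 * (yhat - y) ** 2)
            grad = (yhat - y) * phi  # gradient of the squared loss w.r.t. w
            w -= lr * grad           # OGD step in the representation space
        return np.array(losses)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        T = 256
        truth = np.sin(np.arange(T) / 20.0) + 0.5 * (np.arange(T) > T // 2)  # drifting signal
        noisy_targets = truth + 0.1 * rng.standard_normal(T)
        biased_forecaster = lambda t: np.sin(t / 20.0)  # black box missing the level shift
        corrected = online_finetune(biased_forecaster, noisy_targets, T)
        baseline = 0.5 * (np.array([biased_forecaster(t) for t in range(T)]) - noisy_targets) ** 2
        print(f"mean loss, black box alone:      {baseline.mean():.4f}")
        print(f"mean loss, with online features: {corrected.mean():.4f}")

    In this sketch the feature weights play the role of the static comparator: any comparator sequence expressible as t -> <phi(t), w*> is tracked by learning w* directly, so guarantees for static regret on the lifted weight space transfer to dynamic regret on the original sequence.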