Universality of Bayesian mixture predictors
The problem is that of sequential probability forecasting for finite-valued
time series. The data is generated by an unknown probability distribution over
the space of all one-way infinite sequences. It is known that this measure
belongs to a given set C, but the latter is completely arbitrary (uncountably
infinite, without any structure given). The performance is measured with
asymptotic average log loss. In this work it is shown that the minimax
asymptotic performance is always attainable, and it is attained by a convex
combination of countably many measures from the set C (a Bayesian mixture).
This was previously only known for the case when the best achievable asymptotic
error is 0. It also contrasts with previous results showing that, in the
non-realizable case, all Bayesian mixtures may be suboptimal while some other
predictor achieves the optimal performance.
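As an illustration of the mixture-predictor idea (not the paper's construction), the sketch below runs a Bayesian mixture over a finite family of Bernoulli measures and tracks its average log loss; the parameter grid and the uniform prior are assumptions made for the example.

```python
import numpy as np

# Mixture over a finite grid of Bernoulli measures; the grid and the
# uniform prior are illustrative assumptions, not the paper's setup.
thetas = np.linspace(0.05, 0.95, 19)                # candidate measures
log_w = np.full(len(thetas), -np.log(len(thetas)))  # log prior weights

def predict_next(log_w):
    """Mixture probability that the next bit is 1."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return float(w @ thetas)

def update(log_w, bit):
    """Bayesian posterior update after observing one bit."""
    lik = thetas if bit == 1 else 1.0 - thetas
    return log_w + np.log(lik)

rng = np.random.default_rng(0)
seq = rng.binomial(1, 0.7, size=500)   # data from an "unknown" source
loss = 0.0
for x in seq:
    p = predict_next(log_w)
    loss += -np.log(p if x == 1 else 1.0 - p)
    log_w = update(log_w, x)
print(f"average log loss: {loss / len(seq):.4f}")
```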
Online Learning of k-CNF Boolean Functions
This paper revisits the problem of learning a k-CNF Boolean function from
examples in the context of online learning under the logarithmic loss. In doing
so, we give a Bayesian interpretation to one of Valiant's celebrated PAC
learning algorithms, which we then build upon to derive two efficient, online,
probabilistic, supervised learning algorithms for predicting the output of an
unknown k-CNF Boolean function. We analyze the loss of our methods, and show
that the cumulative log-loss can be upper bounded, ignoring logarithmic
factors, by a polynomial function of the size of each example.
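The deterministic core that such online learners build on is Valiant's clause-elimination procedure: keep every clause of at most k literals, and discard a clause whenever a positive example falsifies it. Below is a minimal sketch of that procedure with a crude probability smoothing bolted on; it is an illustration, not the paper's Bayesian predictor.

```python
from itertools import combinations, product

def all_clauses(n, k):
    """All disjunctions of at most k literals over n variables;
    a literal is (index, sign), sign=True meaning the positive literal."""
    lits = [(i, s) for i in range(n) for s in (True, False)]
    return [c for size in range(1, k + 1)
            for c in combinations(lits, size)
            if len({i for i, _ in c}) == size]   # no repeated variable

def satisfies(x, clause):
    return any(bool(x[i]) == s for i, s in clause)

n, k = 4, 2
hypothesis = all_clauses(n, k)   # start with every admissible clause

def predict(x, eps=0.01):
    """Crude smoothed estimate of P(f(x) = 1); eps is an assumption."""
    return 1 - eps if all(satisfies(x, c) for c in hypothesis) else eps

def update(x, label):
    """Valiant's step: a positive example kills the clauses it falsifies."""
    global hypothesis
    if label == 1:
        hypothesis = [c for c in hypothesis if satisfies(x, c)]

target = lambda x: (x[0] or x[1]) and not x[2]   # a hidden 2-CNF
for x in product((0, 1), repeat=n):
    p = predict(x)               # predict before seeing the label
    update(x, int(target(x)))
print(f"{len(hypothesis)} clauses survive elimination")
```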
Online Learning for Changing Environments using Coin Betting
A key challenge in online learning is that classical algorithms can be slow
to adapt to changing environments. Recent studies have proposed "meta"
algorithms that convert any online learning algorithm to one that is adaptive
to changing environments, where the adaptivity is analyzed in a quantity called
the strongly-adaptive regret. This paper describes a new meta algorithm that
has a strongly-adaptive regret bound that is a factor of $\sqrt{\log T}$
better than other algorithms with the same time complexity, where $T$ is the
time horizon. We also extend our algorithm to achieve a first-order (i.e.,
dependent on the observed losses) strongly-adaptive regret bound for the first
time, to our knowledge. At its heart is a new parameter-free algorithm for the
learning with expert advice (LEA) problem in which experts sometimes do not
output advice for consecutive time steps (i.e., \emph{sleeping} experts). This
algorithm is derived by a reduction from optimal algorithms for the so-called
coin betting problem. Empirical results show that our algorithm outperforms
state-of-the-art methods in both learning with expert advice and metric
learning scenarios.
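For intuition about the coin-betting reduction, the sketch below implements the Krichevsky-Trofimov (KT) bettor that such reductions build on: the fraction wagered is the running average of past outcomes, and wealth growth on a biased coin corresponds to low regret. The aggregation into sleeping experts and a strongly-adaptive meta algorithm is omitted.

```python
import random

def kt_bettor(outcomes):
    """Bet on coin outcomes g_t in [-1, 1] using the KT estimate:
    the fraction wagered is the running mean of past outcomes."""
    wealth, g_sum, trajectory = 1.0, 0.0, []
    for t, g in enumerate(outcomes, start=1):
        bet = (g_sum / t) * wealth   # signed stake on outcome +1
        wealth += g * bet
        g_sum += g
        trajectory.append(wealth)
    return trajectory

# On a coin biased towards +1 the wealth grows roughly exponentially,
# which the betting reduction converts into a low-regret guarantee.
random.seed(0)
coins = [1 if random.random() < 0.7 else -1 for _ in range(1000)]
print(f"final wealth: {kt_bettor(coins)[-1]:.2f}")
```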
Universal Codes from Switching Strategies
We discuss algorithms for combining sequential prediction strategies, a task
which can be viewed as a natural generalisation of the concept of universal
coding. We describe a graphical language based on Hidden Markov Models for
defining prediction strategies, and we provide both existing and new models as
examples. The models include efficient, parameterless models for switching
between the input strategies over time, including a model for the case where
switches tend to occur in clusters, and finally a new model for the scenario
where the prediction strategies have a known relationship, and where jumps are
typically between strongly related ones. This last model is relevant for coding
time series data where parameter drift is expected. As theoretical contributions
we introduce an interpolation construction that is useful in the development
and analysis of new algorithms, and we establish a new, sophisticated lemma for
analysing the individual-sequence regret of parameterised models.
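A classical instance of such a switching model is Fixed-Share, which the HMM language above subsumes: a Bayesian mixture over strategies whose posterior mass is partially redistributed each step, so weight can flow back to a strategy that becomes good again. The sketch below is that baseline, with the switch rate alpha as an assumed fixed parameter (the paper's parameterless models refine this construction).

```python
import numpy as np

def switching_mix(strategy_probs, alpha=0.05):
    """strategy_probs[t, k] = probability strategy k assigned to the
    symbol actually observed at time t. Returns total log loss of the
    Fixed-Share mixture with switch rate alpha (an assumed constant)."""
    T, K = strategy_probs.shape
    w, total = np.full(K, 1.0 / K), 0.0
    for t in range(T):
        p = w @ strategy_probs[t]          # mixture probability of x_t
        total -= np.log(p)
        w = w * strategy_probs[t] / p      # Bayes (HMM forward) update
        w = (1 - alpha) * w + alpha / K    # share step: allow a switch
    return total

# Two Bernoulli experts; the source switches regimes halfway through.
rng = np.random.default_rng(1)
x = np.concatenate([rng.binomial(1, 0.9, 100), rng.binomial(1, 0.1, 100)])
experts = np.stack([np.where(x == 1, 0.9, 0.1),   # tuned to the first regime
                    np.where(x == 1, 0.1, 0.9)],  # tuned to the second
                   axis=1)
print(f"switching mixture log loss: {switching_mix(experts):.1f}")
```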