    Generalised Mixability, Constant Regret, and Bayesian Updating

    Mixability of a loss is known to characterise when constant regret bounds are achievable in games of prediction with expert advice through the use of Vovk's aggregating algorithm. We provide a new interpretation of mixability via convex analysis that highlights the role of the Kullback-Leibler divergence in its definition. This naturally generalises to what we call $\Phi$-mixability, where the Bregman divergence $D_\Phi$ replaces the KL divergence. We prove that losses that are $\Phi$-mixable also enjoy constant regret bounds via a generalised aggregating algorithm that is similar to mirror descent. (Comment: 12 pages.)
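    A concrete reading of the definitions above (notation mine, offered as a sketch rather than the paper's exact statement): a loss $\ell$ is classically $\eta$-mixable when, for every distribution $\pi$ over expert predictions $\{a_\theta\}$, there is a single prediction $a^*$ satisfying, for every outcome $y$,

        \ell(a^*, y) \;\le\; \inf_{q \in \Delta} \Big( \mathbb{E}_{\theta \sim q}\, \ell(a_\theta, y) + \tfrac{1}{\eta}\, \mathrm{KL}(q \,\|\, \pi) \Big),

    where the right-hand side is the exponential mix loss $-\tfrac{1}{\eta} \log \mathbb{E}_{\theta \sim \pi}\, e^{-\eta \ell(a_\theta, y)}$ rewritten via the Gibbs variational formula. $\Phi$-mixability, as the abstract describes it, replaces $\tfrac{1}{\eta}\,\mathrm{KL}(q \,\|\, \pi)$ with the Bregman divergence $D_\Phi(q, \pi)$.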

    Proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering

    These are the online proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering (WITMSE), which was held in the Trippenhuis, Amsterdam, in August 2012.

    Better predictions when models are wrong or underspecified

    Many statistical methods rely on models of reality in order to learn from data and to make predictions about future data. By necessity, these models usually do not match reality exactly, but are either wrong (none of the hypotheses in the model provides an accurate description of reality) or underspecified (the hypotheses in the model describe only part of the data). In this thesis, we discuss three scenarios involving models that are wrong or underspecified. In each case, we find that standard statistical methods may fail, sometimes dramatically, and present different methods that continue to perform well even if the models are wrong or underspecified. The first two of these scenarios involve regression problems and investigate AIC (Akaike's Information Criterion) and Bayesian statistics. The third scenario has the famous Monty Hall problem as a special case, and considers the question of how we can update our belief about an unknown outcome given new evidence when the precise relation between outcome and evidence is unknown.
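    To make the third scenario concrete, here is a toy illustration (my own sketch, not the thesis's method): in Monty Hall, the unknown "relation between outcome and evidence" is the host's door-opening strategy, and the Bayesian answer depends on it.

        from fractions import Fraction

        # We picked door 1 and the host opened door 3. Let q be the host's
        # unknown tie-breaking rule: q = P(host opens door 3 | car behind door 1).
        def posterior_car_behind_1(q):
            """P(car behind door 1 | host opened door 3) as a function of q."""
            prior = Fraction(1, 3)  # uniform prior over the three doors
            # If the car is behind door 2 the host must open door 3;
            # if it is behind door 3 he cannot open it.
            p_evidence = prior * q + prior * 1 + prior * 0
            return prior * q / p_evidence

        for q in (Fraction(1, 2), Fraction(0), Fraction(1)):
            print(q, posterior_car_behind_1(q))
        # q = 1/2 recovers the classical answer 1/3; q = 0 gives 0; q = 1 gives 1/2.
        # The probability that staying wins ranges from 0 to 1/2 with the host's
        # strategy, so naive conditioning without knowing q can mislead.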

    Prediction Markets for Machine Learning: Equilibrium Behaviour through Sequential Markets

    Prediction markets, which trade contracts representing unknown future outcomes, are designed specifically to aggregate expert predictions via the market price. While there are some existing machine learning interpretations of the market price, and connections to Bayesian updating under the equilibrium analysis of such markets, there is less of an understanding of what the instantaneous price in sequentially traded markets means. In this thesis I show that the prices generated in sequentially traded prediction markets are stochastic approximations to the price given by an equilibrium analysis. This is done by showing that the equilibrium price is the solution to a stochastic optimisation problem, which a class of sequential pricing mechanisms solves by stochastic mirror descent (SMD). This connection leads me to propose a scheme called "mini-trading", which introduces a parameter related to the learning rate in SMD; I prove several properties of this scheme and show that it can improve the stability of prices in sequentially traded prediction markets. I also analyse two popular trading models (the Maximum Expected Utility model and the Risk-measure model) with respect to an assumption on the class of traders that I require in order to interpret sequential markets as SMD. I derive a sufficient condition for when Maximum Expected Utility traders satisfy this assumption, and show that risk-measure-based traders naturally satisfy it for the type of markets I consider. I then show that the "regret" of mini-trading markets (with respect to equilibrium markets) depends on the mini-trade parameter. Finally, I compare the wealth updates of traders in sequential markets to those in equilibrium markets, since this would help extend the interpretation of equilibrium markets as performing Bayesian updates to sequential markets; for this I present preliminary results.
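    As a rough illustration of the SMD connection (a minimal sketch under my own assumptions: an entropic mirror map and traders' beliefs acting as noisy gradient signals; this is not the thesis's pricing mechanism), each trade nudges the simplex-constrained price vector by an exponentiated-gradient step, with the step size playing the role of the mini-trade parameter:

        import numpy as np

        rng = np.random.default_rng(0)
        true_belief = np.array([0.5, 0.3, 0.2])       # traders' average belief
        prices = np.full(3, 1.0 / 3.0)                # initial uniform prices
        eta = 0.05                                    # step size ~ mini-trade parameter

        for t in range(5000):
            belief = rng.dirichlet(50 * true_belief)  # one trader's noisy belief
            grad = prices - belief                    # stochastic gradient signal
            prices = prices * np.exp(-eta * grad)     # entropic mirror descent step...
            prices /= prices.sum()                    # ...renormalised onto the simplex

        print(prices)  # hovers near the average belief [0.5, 0.3, 0.2]

    The fixed point of the averaged update is exactly prices = true_belief, and a smaller eta trades convergence speed for price stability, which is the flavour of the mini-trading result described above.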

    Proceedings of the Fifth Workshop on Information Theoretic Methods in Science and Engineering (WITMSE-2012)

    Peer reviewed.

    Bayesian Learning: Challenges, Limitations and Pragmatics

    Get PDF
    This dissertation is about Bayesian learning from data. How can humans and computers learn from data? This question is at the core of both statistics and, as its name already suggests, machine learning. Bayesian methods are widely used in these fields, yet they have certain limitations and problems of interpretation. In two chapters of this dissertation, we examine such a limitation, and overcome it by extending the standard Bayesian framework. In two other chapters, we discuss how different philosophical interpretations of Bayesianism affect mathematical definitions and theorems about Bayesian methods and their use in practice. While some researchers see the Bayesian framework as normative (all statistics should be based on Bayesian methods), in the two remaining chapters we apply Bayesian methods in a pragmatic way: merely as a tool for interesting learning problems (that could also have been addressed by non-Bayesian methods). The author's PhD position at the Mathematical Institute was supported by the Leiden IBM-SPSS Fund. The research was performed at the Centrum Wiskunde & Informatica (CWI). Part of the work was done while the author was visiting Inria Lille, partly funded by a Leids Universiteits Fonds / Drs. J.R.D. Kuikenga Fonds voor Mathematici travel grant (number W19204-1-35).