    Bayesian Learning: Challenges, Limitations and Pragmatics

    This dissertation is about Bayesian learning from data. How can humans and computers learn from data? This question is at the core of both statistics and, as its name already suggests, machine learning. Bayesian methods are widely used in these fields, yet they have certain limitations and problems of interpretation. In two chapters of this dissertation, we examine such a limitation and overcome it by extending the standard Bayesian framework. In two other chapters, we discuss how different philosophical interpretations of Bayesianism affect mathematical definitions and theorems about Bayesian methods and their use in practice. While some researchers see the Bayesian framework as normative (all statistics should be based on Bayesian methods), in the two remaining chapters we apply Bayesian methods in a pragmatic way: merely as a tool for interesting learning problems (that could also have been addressed by non-Bayesian methods).
    The author's PhD position at the Mathematical Institute was supported by the Leiden IBM-SPSS Fund. The research was performed at the Centrum Wiskunde & Informatica (CWI). Part of the work was done while the author was visiting Inria Lille, partly funded by a Leids Universiteits Fonds / Drs. J.R.D. Kuikenga Fonds voor Mathematici travel grant, number W19204-1-35.

    On the truth-convergence of open-minded Bayesianism

    Wenmackers and Romeijn [38] formalize ideas going back to Shimony [33] and Putnam [28] into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn's proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem and offer a forward-looking open-minded Bayesianism that does preserve a version of this guarantee.
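    The "merger with the true hypothesis" guarantee at stake here is the classical Bayesian consistency result. A minimal sketch in standard notation (ours, not the paper's), assuming a countable hypothesis set that contains the true hypothesis with positive prior mass:

        % Posterior over hypotheses H_1, H_2, ... after data x^n = (x_1, ..., x_n):
        \[
          \pi(H_j \mid x^n)
          = \frac{\pi(H_j)\, p_{H_j}(x^n)}
                 {\sum_i \pi(H_i)\, p_{H_i}(x^n)} .
        \]
        % If the true hypothesis H^* has \pi(H^*) > 0 and the hypotheses are
        % suitably distinguishable, then with H^*-probability one
        \[
          \pi(H^* \mid x^n) \;\longrightarrow\; 1 \qquad (n \to \infty) ,
        \]
        % which is the guarantee that open-minded variants may lose when new
        % hypotheses (with reassigned prior mass) enter during learning.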

    Why optional stopping can be a problem for Bayesians

    Recently, optional stopping has been a subject of debate in the Bayesian psychology community. Rouder (Psychonomic Bulletin & Review, 21(2), 301–308, 2014) argues that optional stopping is no problem for Bayesians, and even recommends the use of optional stopping in practice, as do Wagenmakers, Wetzels, Borsboom, van der Maas and Kievit (Perspectives on Psychological Science, 7, 627–633, 2012). This article addresses the question of whether optional stopping is problematic for Bayesian methods, and specifies under which circumstances and in which sense it is and is not. By slightly varying and extending Rouder's (2014) experiments, we illustrate that, as soon as the parameters of interest are equipped with default or pragmatic priors (as they are in most practical applications of Bayes factor hypothesis testing), resilience to optional stopping can break down. We distinguish between three types of default priors, each with its own specific issues with optional stopping, ranging from no problem at all (type 0 priors) to quite severe (type II priors).
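    To make the setting concrete, here is a toy optional-stopping simulation in Python. It is our own construction, not one of the experiments from the article: the normal-location model, the proper N(0, g) prior, and the names bf10 and one_run are all assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def bf10(s, n, g=1.0):
            # Bayes factor for H1: mu ~ N(0, g) against H0: mu = 0,
            # for N(mu, 1) data with sufficient statistic s = x_1 + ... + x_n
            # (closed form via normal conjugacy).
            return np.exp(g * s**2 / (2 * (1 + n * g))) / np.sqrt(1 + n * g)

        def one_run(n_max=1000, threshold=10.0):
            # Sample under H0 and stop as soon as BF10 crosses the threshold.
            s = 0.0
            for n in range(1, n_max + 1):
                s += rng.normal()
                if bf10(s, n) >= threshold:
                    return True  # stopped early with 'strong' evidence for H1
            return False

        runs = 2000
        hits = sum(one_run() for _ in range(runs))
        print(f"rate of misleading evidence under optional stopping: {hits / runs:.3f}")

    With this proper prior the rate stays below 1/threshold = 0.10, because the Bayes factor is a nonnegative martingale with mean 1 under H0; the article's point is that default or pragmatic priors can break exactly this kind of safeguard.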

    Safe Testing

    We present a new theory of hypothesis testing. The main concept is the S-value, a notion of evidence which, unlike p-values, allows for effortlessly combining evidence from several tests, even in the common scenario where the decision to perform a new test depends on the previous test outcome: safe tests based on S-values generally preserve Type-I error guarantees under such optional continuation.
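    Why S-values combine so easily can be seen from their defining property; the following is a standard sketch of the idea in our own notation, not text from the paper:

        % An S-value for H_0 is a nonnegative statistic S with
        %   E_P[S] <= 1  for every  P in H_0.
        % Markov's inequality immediately yields a Type-I error guarantee:
        \[
          \Pr_P\!\left( S \ge \tfrac{1}{\alpha} \right) \le \alpha
          \qquad \text{for all } P \in H_0 ,
        \]
        % and a product of sequentially collected S-values is again an S-value:
        \[
          \mathbb{E}_P[S_1 S_2]
          = \mathbb{E}_P\!\bigl[ S_1 \, \mathbb{E}_P[S_2 \mid S_1] \bigr] \le 1
          \qquad \text{if } \mathbb{E}_P[S_2 \mid S_1] \le 1 ,
        \]
        % which licenses optional continuation: whether the second test is run
        % may depend on the outcome of the first without voiding the guarantee.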

    God, the beautiful and mathematics: A response

    Volker Kessler ('God becomes beautiful … in mathematics', HTS 2018) argues two points: (1) that Rudolf Bohren's list of four areas where God becomes beautiful should be extended with a fifth one, mathematics, and (2) that mathematics can be argued to be a place where God becomes beautiful. In this response, we would like to argue (1) that the extension of Bohren's list that Kessler argues in favour of is superfluous and (2) that Kessler makes a number of questionable assumptions about (the philosophy of) mathematics. By arguing against Kessler, we intend to make an interdisciplinary contribution to the discussion about the relationship between mathematics and theology, pushing the debate in the direction of a more careful consideration of mathematics as an area in which God's beauty may become apparent. Contribution: Contributing to the interdisciplinary exploration of theology in HTS Teologiese Studies/Theological Studies, this article further develops the consideration of the fundamental theological topic of God, the beautiful and mathematics, as proposed in this journal by Volker Kessler, by discussing it from a systematic theological and mathematical perspective.

    Optional stopping with Bayes factors: A categorization and extension of folklore results, with an application to invariant situations

    It is often claimed that Bayesian methods, in particular Bayes factor methods for hypothesis testing, can deal with optional stopping. We first give an overview, using elementary probability theory, of three different mathematical meanings that various authors give to this claim: (1) stopping rule independence, (2) posterior calibration and (3) (semi-)frequentist robustness to optional stopping. We then prove theorems to the effect that these claims do indeed hold in a general measure-theoretic setting. For claims of type (2) and (3), such results are new. By allowing for non-integrable measures based on improper priors, we obtain particularly strong results for the practically important case of models with nuisance parameters satisfying a group invariance (such as location or scale). We also discuss …
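    Sense (1), stopping rule independence, is the easiest of the three to state; a standard sketch in our own notation (not the paper's measure-theoretic formulation):

        % Under a (data-dependent) stopping rule, the probability of observing
        % x^n = (x_1, ..., x_n) and stopping at time n factors as
        %   pi(x^n) p_theta(x^n),
        % where the stopping factor pi(x^n) does not involve theta.
        % It therefore cancels from the Bayes factor:
        \[
          \mathrm{BF}_{10}
          = \frac{\pi(x^n) \int p_\theta(x^n)\, w_1(\theta)\, d\theta}
                 {\pi(x^n)\, p_0(x^n)}
          = \frac{\int p_\theta(x^n)\, w_1(\theta)\, d\theta}{p_0(x^n)} ,
        \]
        % so the Bayes factor is identical whether n was fixed in advance or
        % chosen by optional stopping.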

    Safe-Bayesian Generalized Linear Regression

    We study generalized Bayesian inference under misspecification, i.e. when the model is 'wrong but useful'. Generalized Bayes equips the likelihood with a learning rate η. We show that for generalized linear models (GLMs), η-generalized Bayes concentrates around the best approximation of the truth within the model for specific η ≠ 1, even under severely misspecified noise, as long as the tails of the true distribution are exponential. We derive MCMC samplers for generalized Bayesian lasso and logistic regression, and give examples of both simulated and real-world data in which generalized Bayes substantially outperforms standard Bayes.
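    The η-generalized posterior mentioned here has a standard form; a sketch in our own notation, assuming i.i.d. data and a likelihood p_θ:

        % Generalized (tempered) Bayes with learning rate eta:
        \[
          \pi_\eta(\theta \mid x_1, \dots, x_n)
          \;\propto\;
          \pi(\theta) \prod_{i=1}^{n} p_\theta(x_i)^{\eta} .
        \]
        % eta = 1 recovers standard Bayes; eta < 1 tempers the likelihood,
        % which is what can restore concentration around the best
        % approximation within the model when the model is misspecified.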