113 research outputs found

    Learning from samples using coherent lower previsions

    This thesis's main subject is deriving, proposing, and studying predictive and parametric inference models that are based on the theory of coherent lower previsions. One important side subject also appears: obtaining and discussing extreme lower probabilities.
    In the chapter ‘Modeling uncertainty’, I give an introductory overview of the theory of coherent lower previsions ─ also called the theory of imprecise probabilities ─ and its underlying ideas. This theory allows us to give a more expressive ─ and more cautious ─ description of uncertainty. The overview is original in the sense that ─ more than other introductions ─ it is based on the intuitive theory of coherent sets of desirable gambles. In the chapter ‘Extreme lower probabilities’, I show how to obtain the most extreme forms of uncertainty that can be modeled using lower probabilities; every other state of uncertainty describable by lower probabilities can be formulated in terms of these extreme ones. The importance of the results I obtain and extensively discuss in this area is, for now, mostly theoretical. The chapter ‘Inference models’ treats learning from samples drawn from a finite, categorical space. My most basic assumption about the sampling process is that it is exchangeable, for which I give a novel definition in terms of desirable gambles. Investigating the consequences of this assumption leads to some important representation theorems: uncertainty about (in)finite sample sequences can be modeled entirely in terms of category counts (frequencies). I build on this to give an elucidating derivation from first principles for two popular inference models for categorical data ─ the predictive imprecise Dirichlet-multinomial model and the parametric imprecise Dirichlet model ─ and I apply these models to game theory and to learning Markov chains. In the last chapter, ‘Inference models for exponential families’, I enlarge the scope to non-categorical exponential-family sampling models, such as normal sampling and Poisson sampling. I first thoroughly investigate exponential families and the related conjugate parametric and predictive previsions used in classical Bayesian inference models based on conjugate updating. These previsions serve as the basis for the new imprecise-probabilistic inference models I propose. Compared to the classical Bayesian approach, mine allows us to be much more cautious when expressing what we know about the sampling model; this caution is reflected in the behavior (conclusions drawn, predictions made, decisions taken) based on these models. Lastly, I show how the proposed inference models can be used for classification with the naive credal classifier.
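
    The predictive bounds of the imprecise Dirichlet-multinomial model referred to above have a well-known closed form: given observed counts n_k out of N observations and a hyperparameter s > 0, the probability that the next observation falls in category k lies between n_k/(N + s) and (n_k + s)/(N + s). The sketch below is my own illustration of that formula rather than the thesis's derivation; the function name, the choice s = 2, and the toy counts are assumptions made for the example.

        # Minimal sketch of the imprecise Dirichlet-multinomial predictive bounds.
        def idm_predictive(counts, s=2.0):
            """Return {category: (lower, upper)} probabilities for the next observation."""
            N = sum(counts.values())
            return {k: (n / (N + s), (n + s) / (N + s)) for k, n in counts.items()}

        # Example: ten rolls of a die. The never-observed category 'three' still
        # gets a strictly positive upper probability of s / (N + s).
        print(idm_predictive({"one": 3, "two": 1, "three": 0, "four": 2, "five": 2, "six": 2}))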

    Quantifying Degrees of E-admissibility in Decision Making with Imprecise Probabilities

    This paper is concerned with decision making using imprecise probabilities. In the first part, we introduce a new decision criterion that allows for explicitly modeling how far decisions that are optimal in terms of Walley’s maximality are accepted to deviate from being optimal in the sense of Levi’s E-admissibility. For this criterion, we also provide an efficient and simple algorithm based on linear programming theory. In the second part of the paper, we propose two new measures for quantifying the extent of E-admissibility of an E-admissible act, i.e., the size of the set of measures for which the corresponding act maximizes expected utility. The first measure is the maximal diameter of this set, while the second relates to the maximal barycentric cube that can be inscribed in it. Here too, we give linear programming algorithms for computing both measures. Finally, we discuss some ideas in the context of ordinal decision theory. The paper concludes with stylized application examples illustrating all introduced concepts.
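
    The E-admissibility check behind the paper's linear-programming algorithm can be phrased as a plain feasibility problem: an act is E-admissible if some probability vector in the credal set gives it at least the expected utility of every rival act. The sketch below is a generic illustration of that feasibility check with scipy, not the authors' algorithm; the utility matrix and the interval constraints defining the credal set are assumptions chosen for the example.

        # Generic E-admissibility check over a credal set given by linear constraints.
        import numpy as np
        from scipy.optimize import linprog

        def is_e_admissible(utilities, act, A_ub=None, b_ub=None):
            """True if `act` maximizes expected utility for some p with
            A_ub @ p <= b_ub, p >= 0 and sum(p) == 1 (the credal set)."""
            U = np.asarray(utilities, dtype=float)   # shape (n_acts, n_states)
            n_acts, n_states = U.shape
            # Dominance constraints: (U[k] - U[act]) @ p <= 0 for every rival act k.
            rivals = [k for k in range(n_acts) if k != act]
            A = U[rivals] - U[act]
            b = np.zeros(len(rivals))
            if A_ub is not None:                     # append the credal-set constraints
                A = np.vstack([A, A_ub])
                b = np.concatenate([b, np.asarray(b_ub, dtype=float)])
            # Pure feasibility: any probability vector meeting the constraints will do.
            res = linprog(c=np.zeros(n_states), A_ub=A, b_ub=b,
                          A_eq=np.ones((1, n_states)), b_eq=[1.0],
                          bounds=[(0.0, None)] * n_states, method="highs")
            return res.success

        # Example: two states, three acts; credal set 0.3 <= p(state 0) <= 0.7.
        U = [[1.0, 0.0],     # pays off only in state 0
             [0.0, 1.0],     # pays off only in state 1
             [0.25, 0.25]]   # small constant payoff
        A_ub = [[1.0, 0.0], [-1.0, 0.0]]             # p0 <= 0.7 and p0 >= 0.3
        b_ub = [0.7, -0.3]
        print([is_e_admissible(U, a, A_ub, b_ub) for a in range(3)])  # [True, True, False]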

    Imprecise probability in epistemology

    There is a growing interest in the foundations as well as the application of imprecise probability in contemporary epistemology. This dissertation is concerned with the application. In particular, the research presented concerns ways in which imprecise probability, i.e., sets of probability measures, may helpfully address certain philosophical problems pertaining to rational belief. The issues I consider are disagreement among epistemic peers, complete ignorance, and inductive reasoning with imprecise priors. For each of these topics, it is assumed that belief can be modeled with imprecise probability, and thus that each problem admits a non-classical solution. I argue that this is the case for peer disagreement and complete ignorance. However, I find that the approach also has its shortcomings, specifically with regard to inductive reasoning with imprecise priors. Nevertheless, the dissertation ultimately illustrates that imprecise probability as a model of rational belief holds a lot of promise, though one should also be aware of its limitations.
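
    The shortcoming noted for inductive reasoning with imprecise priors is often illustrated by belief inertia: when the prior credal set is (near-)vacuous, updating each prior by Bayes' rule leaves the predictive probabilities almost as wide as before, however much data arrives. The toy computation below is my own illustration of that phenomenon, not an argument taken from the dissertation; the grid of coin biases and the observed counts are assumptions.

        # Belief inertia with a near-vacuous set of priors over a coin's bias.
        import numpy as np

        thetas = np.linspace(0.01, 0.99, 99)       # candidate biases
        point_mass_priors = np.eye(len(thetas))    # one extreme (Dirac) prior per bias

        def predictive_heads(prior, heads, tails):
            """P(next toss lands heads) after Bayes-updating `prior` on the data."""
            likelihood = thetas**heads * (1 - thetas)**tails
            posterior = prior * likelihood
            posterior /= posterior.sum()
            return float(posterior @ thetas)

        preds = [predictive_heads(p, heads=8, tails=2) for p in point_mass_priors]
        print(min(preds), max(preds))  # still spans [0.01, 0.99]: the data taught us nothing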

    Some contributions to decision making in complex information settings with imprecise probabilities and incomplete preferences

    • …