
    Representation insensitivity in immediate prediction under exchangeability

    We consider immediate predictive inference, where a subject, using a number of observations of a finite number of exchangeable random variables, is asked to coherently model his beliefs about the next observation in terms of a predictive lower prevision. We study when such predictive lower previsions are representation insensitive, meaning that they are essentially independent of the choice of the (finite) set of possible values for the random variables. We establish that such representation-insensitive predictive models have very interesting properties, and show that among such models, the ones produced by the Imprecise Dirichlet-Multinomial Model are quite special in a number of ways. In the Conclusion, we discuss the open question of how unique the predictive lower previsions of the Imprecise Dirichlet-Multinomial Model are in being representation insensitive.
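    The IDMM's predictive lower and upper probabilities have a well-known closed form; the sketch below (our own illustration, not taken from the paper) shows the sense in which they are representation insensitive: the bounds for an event depend only on the event's total count, not on how its categories are partitioned.

```python
# Predictive lower/upper probabilities of the Imprecise Dirichlet-
# Multinomial Model (IDMM) for the next observation, given observed
# category counts and a hyperparameter s > 0.
def idmm_bounds(counts, event, s=2.0):
    """Lower/upper predictive probability that the next observation
    falls in `event` (a set of category indices)."""
    n = sum(counts)
    n_event = sum(counts[c] for c in event)
    return n_event / (n + s), (n_event + s) / (n + s)

# Representation insensitivity: splitting the event's category into two
# finer categories with the same total count leaves the bounds unchanged.
lo_coarse, up_coarse = idmm_bounds([4, 6], {0}, s=2.0)    # one category, count 4
lo_fine, up_fine = idmm_bounds([1, 3, 6], {0, 1}, s=2.0)  # same event, split in two
assert (lo_coarse, up_coarse) == (lo_fine, up_fine)
```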

    Learning from samples using coherent lower previsions

    This thesis's main subject is deriving, proposing, and studying predictive and parametric inference models based on the theory of coherent lower previsions. An important side subject also appears: obtaining and discussing extreme lower probabilities. In the chapter ‘Modeling uncertainty’, I give an introductory overview of the theory of coherent lower previsions (also called the theory of imprecise probabilities) and its underlying ideas. This theory allows us to describe uncertainty more expressively, and more cautiously. The overview is original in the sense that, more than other introductions, it is based on the intuitive theory of coherent sets of desirable gambles.
    I show in the chapter ‘Extreme lower probabilities’ how to obtain the most extreme forms of uncertainty that can be modeled using lower probabilities. Every other state of uncertainty describable by lower probabilities can be formulated in terms of these extreme ones. The importance of the results obtained and extensively discussed in this area is currently mostly theoretical. The chapter ‘Inference models’ treats learning from samples drawn from a finite, categorical space. My most basic assumption about the sampling process is that it is exchangeable, for which I give a novel definition in terms of desirable gambles. My investigation of the consequences of this assumption leads to some important representation theorems: uncertainty about (in)finite sample sequences can be modeled entirely in terms of category counts (frequencies). I build on this to give an elucidating derivation from first principles of two popular inference models for categorical data, the predictive imprecise Dirichlet-multinomial model and the parametric imprecise Dirichlet model; I apply these models to game theory and to learning Markov chains.
    In the last chapter, ‘Inference models for exponential families’, I enlarge the scope to exponential-family sampling models; examples are normal sampling and Poisson sampling. I first thoroughly investigate exponential families and the related conjugate parametric and predictive previsions used in classical Bayesian inference models based on conjugate updating. These previsions serve as a basis for the new imprecise-probabilistic inference models I propose. Compared to the classical Bayesian approach, mine allows us to be much more cautious in expressing what we know about the sampling model; this caution is reflected in the behavior (conclusions drawn, predictions made, decisions taken) based on these models. Lastly, I show how the proposed inference models can be used for classification with the naive credal classifier.
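    The kind of imprecise conjugate updating described above can be illustrated for the simplest exponential family, Bernoulli sampling, with a near-vacuous set of Beta priors of fixed prior strength s. This is a minimal sketch under assumed notation, not the thesis's actual construction.

```python
# Classical conjugate updating: posterior predictive probability of
# success under a single Beta(a, b) prior after observed counts.
def beta_predictive(a, b, successes, failures):
    return (a + successes) / (a + b + successes + failures)

# Imprecise analogue: take ALL Beta priors with total prior strength
# a + b = s, and report the resulting lower/upper predictive bounds.
def imprecise_bounds(s, successes, failures):
    lower = beta_predictive(0.0, s, successes, failures)  # prior mean -> 0
    upper = beta_predictive(s, 0.0, successes, failures)  # prior mean -> 1
    return lower, upper

lo, up = imprecise_bounds(s=2.0, successes=7, failures=3)
# As data accumulate, the interval [lo, up] tightens around the
# sample frequency, reflecting cautious learning.
```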

    Blurring Out Cosmic Puzzles

    The Doomsday argument and anthropic reasoning are two puzzling examples of probabilistic confirmation. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, they constitute a challenge to Bayesianism. Several attempts, some successful, have been made to avoid these conclusions, but some versions of these arguments cannot be dissolved within the framework of orthodox Bayesianism. I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of ignorance in Bayesian reasoning and explains away these puzzles. Comment: 15 pages, 1 figure. To appear in Philosophy of Science (PSA 2014).

    Decisions from Experience and from Description: Beliefs and Probability Weighting

    In the literature on decision making, decisions from description typically concern risk: a case in which outcome probabilities are objectively known. Decisions from experience, on the other hand, represent a case of ambiguity: the outcome probabilities are not known objectively but are inferred subjectively from observations. As in many real-life situations, probabilistic inference and information search are integral parts of decisions from experience. This dissertation explores behavioral differences between decisions from experience and from description by focusing on the role of (1) probability weighting and (2) subjective beliefs. _Chapter 2_ investigates the impact of experience on probability weighting. _Chapter 3_ points out the role of prior beliefs in accounting for decisions from experience. _Chapter 4_ introduces a non-Bayesian model of updating which accommodates common biases in probabilistic inference. _Chapter 5_ reports the results of a laboratory experiment testing Prelec's (1998) theory of probability weighting.
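    Prelec's (1998) weighting function, the object of the experiment in Chapter 5, has a standard closed form, w(p) = exp(-beta * (-ln p)^alpha). The sketch below is a minimal illustration; the parameter values are our own, not the dissertation's estimates.

```python
import math

# Prelec's (1998) probability weighting function. With alpha < 1 it has
# the familiar inverse-S shape: small probabilities are overweighted and
# large probabilities underweighted.
def prelec_w(p, alpha=0.65, beta=1.0):
    if p <= 0.0:
        return 0.0
    if p >= 1.0:
        return 1.0
    return math.exp(-beta * (-math.log(p)) ** alpha)

# With beta = 1 the function crosses the diagonal at p = 1/e,
# independently of alpha.
fixed_point = prelec_w(1 / math.e, alpha=0.65, beta=1.0)
```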

    Essays on Decision Making: Intertemporal Choice and Uncertainty

    Being labeled a social science, much of economics is about understanding human behavior, be it in the face of uncertainty, of payoffs delayed through time, or of strategic situations such as auctions, bargaining, and so on. This thesis is concerned with the first two, namely uncertainty and time preferences. Its main focus can be summarized under two broad titles: "irrationalities" in human behavior and an alternative perspective on "rational behavior". My claim requires a clarification of what is meant by rational or irrational behavior. In one of the early discussions of this topic, Richter (1966) defined a rational consumer as someone for whom there exists a total, reflexive, and transitive binary relation on the set of commodities such that his choice data consist of maximal elements of this binary relation. In this respect, Richter (1966) imposed only minimal consistency conditions on behavior for it to be labeled rational. Although his setting involves no uncertainty or time dimension, analogues of these conditions exist for the models we consider here as well, so one can extend Richter's (1966) notion of rationality to our models too. Yet the essence of his approach to rationality differs from the one we take up in this thesis. Richter's minimalistic approach would leave little space for discussions of rational behavior, because almost all behavior would count as rational except for a few cleverly constructed counterexamples. Instead, we consider more widely accepted norms of rationality and analyze them in the framework of uncertainty and time preferences. These widely accepted norms are understood to be axioms that lead to decision rules describing people's behavior. In the case of decision making under risk and uncertainty, the most commonly used decision model is expected utility; in the case of dynamic decision making, it is the constant discounted utility model. Although there are models that combine both to explain decision making in dynamic stochastic settings, in this thesis we study them in isolation to assess the nature of the models in more detail.
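    The two benchmark decision rules named above can be written in a few lines. This is a minimal sketch; the utility function and parameter values are illustrative assumptions, not taken from the thesis.

```python
# Expected utility for choice under risk: a lottery is a list of
# (probability, outcome) pairs, evaluated as sum_i p_i * u(x_i).
def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# Constant (exponential) discounted utility for intertemporal choice:
# a consumption stream c_0, c_1, ... is evaluated as sum_t delta^t * u(c_t).
def discounted_utility(consumption, delta, u):
    return sum(delta ** t * u(c) for t, c in enumerate(consumption))

u = lambda x: x ** 0.5  # a concave (risk-averse) utility, for illustration
eu = expected_utility([(0.5, 0.0), (0.5, 100.0)], u)      # 0.5 * 10 = 5.0
du = discounted_utility([1.0, 1.0, 1.0], delta=0.9, u=u)  # 1 + 0.9 + 0.81
```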

    Bounded rationality in individual decision making


    Ecotoxicological Risk Assessment: Developments in PNEC Estimation

    Ecotoxicological risk assessment must be undertaken before a chemical can be deemed safe for application. The assessment is based on three components: hazard assessment, exposure assessment, and risk characterisation, the latter being a combination of the former two. One standard approach is based on the deterministic comparison of exposure concentration estimates to the concentration of the toxicant below which adverse effects on the potentially exposed ecological assemblage are unlikely to occur. This concentration is known as the ‘predicted no effect concentration’ (PNEC). At the level of hazard assessment we are concerned with, procedures are required to be straightforward and efficient, as well as transparent. The PNEC is in general currently determined either by applying a fixed assessment factor to a summary statistic of observed laboratory-derived toxicity data, or as a percentile of a distribution over the ecological community's sensitivity. Often, a hazard assessment will be based on very small samples of data. In this thesis we evaluate proposals for determining a PNEC according to regulatory guidance and the scientific literature. In particular, we explore these methods in the context of alternative probabilistic models. We also focus on the determination of conservative probabilistic estimators, which may be appropriate for this level of risk assessment. Additionally, we discuss the detection of species non-exchangeability, a concept which is recognised by scientists and risk assessors, yet typically discounted in practice. A proposal for incorporating knowledge of a non-exchangeable species into probabilistic estimators is discussed and evaluated. The final topic of research examines a generalised deterministic estimator proposed in a recent European Food Safety Agency report. In particular, we analyse the robustness and analytical properties of some cases of this estimator which (at least) maintain the expected level of protection currently attributed. Proposals made within this thesis, many of which extend what is currently scientifically accepted, satisfy the requirements of being straightforward to apply and scientifically defensible. This will appeal to end users and increase the chances of gaining regulatory acceptance. All developments are fully illustrated with real-life examples.
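    The two standard PNEC routes described above can be sketched as follows. The toxicity values and the assessment factor of 1000 are illustrative assumptions, and this is not the thesis's proposed estimator.

```python
import math
import statistics

# Route (i): a fixed assessment factor applied to the most sensitive
# laboratory toxicity value.
def pnec_assessment_factor(toxicity_values, factor=1000.0):
    return min(toxicity_values) / factor

# Route (ii): a percentile (here the 5th, HC5) of a log-normal species
# sensitivity distribution fitted to the toxicity data.
def hc5_lognormal(toxicity_values):
    logs = [math.log(x) for x in toxicity_values]
    mu, sigma = statistics.mean(logs), statistics.stdev(logs)
    return math.exp(statistics.NormalDist(mu, sigma).inv_cdf(0.05))

data = [1.2, 4.8, 9.5, 22.0, 150.0]      # hypothetical EC50 values, mg/L
pnec_af = pnec_assessment_factor(data)   # 1.2 / 1000 = 0.0012 mg/L
hc5 = hc5_lognormal(data)
```

    Note how little data the fitted-distribution route has to work with here; the thesis's concern with conservative estimators for small samples starts from exactly this situation.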

    Best practices for the provision of prior information for Bayesian stock assessment

    This manual represents a review of the potential sources and methods to be applied when providing prior information to Bayesian stock assessments and marine risk analysis. The manual is compiled as a product of the EC Framework 7 ECOKNOWS project (www.ecoknows.eu). It begins by introducing the basic concepts of Bayesian inference and the role of prior information in the inference. Bayesian analysis is a mathematical formalization, in probabilistic terms, of a sequential learning process. Prior information (also called ‘prior knowledge’, ‘prior belief’, or simply a ‘prior’) refers to any existing relevant knowledge available before the analysis of the newest observations (data) and the information included in them. Prior information is input to a Bayesian statistical analysis in the form of a probability distribution (a prior distribution) that summarizes beliefs about the parameter concerned in terms of relative support for different values. Apart from specifying probable parameter values, prior information also defines how the data are related to the phenomenon being studied, i.e. the model structure. Prior information should reflect the different degrees of knowledge about different parameters and the interrelationships among them. Different sources of prior information are described, as well as the particularities important for their successful utilization. The sources of prior information are classified into four main categories: (i) primary data, (ii) literature, (iii) online databases, and (iv) experts. This categorization is somewhat synthetic, but it is useful for structuring the process of deriving a prior and for acknowledging its different aspects. A hierarchy is proposed in which sources of prior information are ranked according to their proximity to the primary observations, so that use of raw data is preferred where possible.
    This hierarchy is reflected in the types of methods that might be suitable; for example, hierarchical analysis and meta-analysis approaches are powerful, but typically require larger numbers of observations than other methods. In establishing an informative prior distribution for a variable or parameter from ancillary raw data, several steps should be followed. These include the choice of the frequency distribution of observations, which also determines the shape of the prior distribution; the choice of the way in which a dataset is used to construct a prior; and considerations related to whether one or several datasets are used. Explicitly modelling correlations between parameters in a hierarchical model can allow more effective use of the available information, extracting more knowledge from the same data. Checking the literature is advised as the next approach. Stock assessment would gain much from the inclusion of prior information derived from the literature and from literature compilers such as FishBase (www.fishbase.org), especially in data-limited situations. The reader is guided through the process of obtaining priors for length–weight, growth, and mortality parameters from FishBase. Expert opinion lends itself to data-limited situations and can be used even in cases where observations are not available. Several expert elicitation tools are introduced for guiding experts through the process of expressing their beliefs and for extracting numerical priors about variables of interest, such as stock–recruitment dynamics, natural mortality, maturation, and the selectivity of fishing gears. Elicitation of parameter values is not the only task where experts play an important role; they can also describe the process to be modelled as a whole. Information sources and methods are not mutually exclusive, so some combination may be used in deriving a prior distribution.
    Whichever source(s) and method(s) are chosen, it is important to remember that the same data should not be used twice: if the plan is to use the data in the analysis for which the prior distribution is needed, then the same data cannot be used in formulating the prior. The techniques studied and proposed in this manual can be further elaborated and fine-tuned. New developments in technology can potentially be explored to find novel ways of forming prior distributions from different sources of information. Future research efforts should also be targeted at the philosophy and practices of model building based on existing prior information. Stock assessments that explicitly account for model uncertainty are still rare, and improving the methodology in this direction is an important avenue for future research. More research is also needed to make Bayesian analysis of non-parametric models more accessible in practice. Since Bayesian stock assessment models (like all other assessment models) are built from existing knowledge held by human beings, prior distributions for parameters and model structures may play a key role in the processes of collectively building and reviewing those models with stakeholders. Research on the theory and practice of these processes will be needed in the future.
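    The sequential-learning view of Bayesian inference described above can be illustrated with a minimal conjugate-updating sketch. The prior parameters and tag-recovery data here are hypothetical, not taken from the manual.

```python
# A Beta prior for an annual survival rate (e.g. elicited from experts
# or derived from the literature), updated conjugately with new
# binomial observations: Beta(a, b) -> Beta(a + survived, b + died).
def update_beta(a, b, survived, died):
    return a + survived, b + died

prior = (8.0, 2.0)                                    # prior mean 0.8
posterior = update_beta(*prior, survived=30, died=20)  # (38.0, 22.0)
post_mean = posterior[0] / sum(posterior)              # pulled toward the data
```

    The same machinery runs in reverse order of importance: with abundant data the prior matters little, while in the data-limited situations the manual emphasizes, the prior largely drives the posterior.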
