
    New Foundations for Imprecise Bayesianism.

    My dissertation examines two kinds of statistical tools for taking prior information into account, and investigates what reasons we have for using one or the other in different sorts of inference and decision problems. Chapter 1 describes a new objective Bayesian method for constructing `precise priors'. Precise prior probability distributions are statistical tools for taking account of your `prior evidence' in an inference or decision problem. `Prior evidence' is the woolly hodgepodge of information that you come to the table with; `experimental evidence' is the new data that you gather to facilitate inference and decision-making. I leverage this method to provide the seeds of a solution to `the problem of the priors': the problem of providing a compelling epistemic rationale for using some `objective' method or other for constructing priors. You ought to use the proposed method, at least in certain contexts, I argue, because it minimizes your need for epistemic luck in securing accurate `posterior' (post-experiment) beliefs. Chapter 2 addresses a pressing concern about precise priors. Precise priors, some Bayesians say, fail to adequately summarize certain kinds of evidence: as a class, they capture improper responses to unspecific and equivocal evidence. This motivates the introduction of imprecise priors, or sets of distributions, to summarize such evidence. I argue that, despite appearances to the contrary, precise priors are in fact flexible enough to capture proper responses to unspecific and equivocal evidence. The proper motivation for introducing imprecise priors, then, is not that they are required to summarize such evidence; we ought to search for new epistemic reasons to introduce them. Chapter 3 explores two new kinds of reasons for employing imprecise priors.
We ought to adopt imprecise priors in certain contexts because they put us in an unequivocally better position to secure epistemically valuable posterior beliefs than precise priors do. We ought to adopt imprecise priors in various other contexts because they minimize our need for epistemic luck in securing such posteriors. This points the way toward a new, potentially promising epistemic foundation for imprecise Bayesianism.
    PhD, Philosophy, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/99960/1/jpkonek_1.pd
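The contrast between precise and imprecise priors can be illustrated with a minimal sketch in a Beta-Bernoulli setting (the prior parameters and data below are hypothetical, chosen only for illustration): a precise prior yields a single posterior mean, while an imprecise prior, a set of Beta priors updated element-wise, yields an interval of posterior means.

```python
# Sketch: precise vs. imprecise priors for a Bernoulli parameter theta.
# A precise prior is a single Beta(a, b); an imprecise prior is a *set*
# of Beta distributions, each updated by conjugacy. All parameter
# choices below are made up for illustration.

def posterior_mean(a, b, successes, n):
    """Posterior mean of theta under a Beta(a, b) prior after
    observing `successes` in `n` Bernoulli trials."""
    return (a + successes) / (a + b + n)

successes, n = 7, 10

# Precise prior: one distribution, hence one posterior mean.
precise = posterior_mean(1, 1, successes, n)

# Imprecise prior: a set of priors yields an interval of posterior means.
prior_set = [(a, b) for a in (0.5, 1, 2) for b in (0.5, 1, 2)]
means = [posterior_mean(a, b, successes, n) for (a, b) in prior_set]
lower, upper = min(means), max(means)

print(f"precise posterior mean: {precise:.3f}")
print(f"imprecise posterior means span [{lower:.3f}, {upper:.3f}]")
```

As the data accumulate, the interval of posterior means typically narrows, which is one way the "epistemic luck" trade-off between precise and imprecise priors can be studied.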

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    A statistical inference method for the stochastic reachability analysis.

    The main contribution of this paper is the characterization of the reachability problem associated with stochastic hybrid systems in terms of imprecise probabilities. This provides a connection between the reachability problem and Bayesian statistics. Using generalised Bayesian statistical inference, a new concept of conditional reach set probabilities is defined. Possible algorithms to compute the reach set probabilities are then derived.
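As a hedged illustration of the general idea (not the paper's algorithm), a reach probability for a simple stochastic system can be estimated by Monte Carlo simulation; the random-walk dynamics, target set, and parameters below are invented for this sketch.

```python
import random

# Crude Monte Carlo estimate of a reach probability: the probability
# that a stochastic trajectory enters a target set within a time
# horizon. System and parameters are made up for illustration.

def reaches_target(steps=50, threshold=2.0, start=0.0):
    """Simulate a 1-D random walk with drift; return True if the
    trajectory ever enters the target set [threshold, inf)."""
    x = start
    for _ in range(steps):
        x += 0.05 + random.gauss(0.0, 0.3)  # drift + Gaussian noise
        if x >= threshold:
            return True
    return False

random.seed(42)
trials = 10_000
estimate = sum(reaches_target() for _ in range(trials)) / trials
print(f"estimated reach probability: {estimate:.3f}")
```

An imprecise-probability treatment, as in the paper, would replace this single estimate with lower and upper reach probabilities reflecting ambiguity in the system's stochastic model.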

    A nonparametric predictive alternative to the Imprecise Dirichlet Model: the case of a known number of categories

    Nonparametric Predictive Inference (NPI) is a general methodology for learning from data in the absence of prior knowledge and without adding unjustified assumptions. This paper develops NPI for multinomial data where the total number of possible categories is known. We present the general upper and lower probabilities and several of their properties. We also comment on differences between this NPI approach and the corresponding inferences based on Walley's Imprecise Dirichlet Model.
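For comparison, Walley's Imprecise Dirichlet Model (IDM) mentioned above gives simple closed-form bounds: with hyperparameter s > 0, n observations, and n_j of them in category j, the lower and upper probabilities that the next observation falls in category j are n_j/(n+s) and (n_j+s)/(n+s). A minimal sketch (the counts below are made up):

```python
# Walley's Imprecise Dirichlet Model (IDM) for multinomial data.
# Lower/upper probabilities for "next observation is in category j":
#   lower = n_j / (n + s),  upper = (n_j + s) / (n + s),
# where s > 0 is the IDM hyperparameter (commonly s = 1 or s = 2).

def idm_bounds(counts, j, s=1.0):
    """IDM lower/upper probability that the next observation falls
    in category j, given observed category counts."""
    n = sum(counts)
    return counts[j] / (n + s), (counts[j] + s) / (n + s)

counts = [3, 5, 2]  # hypothetical counts over 3 known categories
lower, upper = idm_bounds(counts, j=1, s=1.0)
print(f"IDM bounds for category 1: [{lower:.3f}, {upper:.3f}]")
```

The NPI approach of the paper yields different bounds derived from exchangeability-style assumptions rather than a set of Dirichlet priors; the IDM is shown here only because its formulas are standard and it is the comparison point the abstract names.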

    On Sharp Identification Regions for Regression Under Interval Data

    The reliable analysis of interval data (coarsened data) is one of the most promising applications of imprecise probabilities in statistics. If one refrains from making untestable, and often materially unjustified, strong assumptions on the coarsening process, then the empirical distribution of the data is imprecise, and statistical models are, in Manski's terms, partially identified. We first elaborate some subtle differences between two natural ways of handling interval data in the dependent variable of regression models, distinguishing between two different types of identification regions, called the Sharp Marrow Region (SMR) and the Sharp Collection Region (SCR) here. Focusing on the case of linear regression analysis, we then derive some fundamental geometrical properties of the SMR and SCR, allowing a comparison of the regions and providing some guidelines for their canonical construction. Relying on the algebraic framework of adjunctions of two mappings between partially ordered sets, we characterize the SMR as a right adjoint and as the monotone kernel of a criterion-function-based mapping, while the SCR is interpretable as the corresponding monotone hull. Finally, we sketch some ideas on a compromise between SMR and SCR based on a set-domained loss function. This paper is an extended version of a shorter paper with the same title that is conditionally accepted for publication in the Proceedings of the Eighth International Symposium on Imprecise Probability: Theories and Applications. In the present paper we have added proofs and a seventh chapter with a small Monte Carlo illustration, which would have made the original paper too long.
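The paper's regression setting is involved, but the core idea of partial identification under interval data can be sketched in the simplest case, the mean: if each outcome is only known to lie in an interval [lo_i, hi_i], the identification region for the mean runs from the mean of the lower bounds to the mean of the upper bounds. The data below are invented for illustration.

```python
# Minimal instance of partial identification under interval data
# (far simpler than the paper's SMR/SCR regression regions): each
# outcome y_i is only known to satisfy lo_i <= y_i <= hi_i, so the
# mean is only identified up to an interval.

intervals = [(1.0, 2.0), (0.5, 3.0), (2.0, 2.5), (1.5, 4.0)]

n = len(intervals)
lower = sum(lo for lo, _ in intervals) / n
upper = sum(hi for _, hi in intervals) / n
print(f"identification region for the mean: [{lower:.3f}, {upper:.3f}]")
```

In the regression case the analogous regions are sets of coefficient vectors rather than intervals, and the SMR/SCR distinction concerns exactly how the interval-valued outcomes enter the defining criterion.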