65 research outputs found

    Bell's Inequality Violations: Relation with de Finetti's Coherence Principle and Inferential Analysis of Experimental Data

    It is often believed that de Finetti's coherence principle naturally leads, in the finite case, to Kolmogorov's probability theory of random phenomena, which in turn implies Bell's inequality. Thus a violation of Bell's inequality not only looks paradoxical in the Kolmogorovian framework, but should also violate de Finetti's coherence principle. First, we show that this is not the case: the typical theoretical violations of Bell's inequality in quantum physics are in agreement with de Finetti's coherence principle. Second, we look for statistical evidence of such violations: we consider the experimental data from measurements of photon polarization, performed to verify empirically violations of Bell's inequality, and, on the basis of the estimated violation, we test the null hypothesis of Kolmogorovianity for the observed phenomenon. By standard inferential techniques we compute the p-value for the test and reach a clear, strong conclusion against the Kolmogorovian hypothesis.
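    The kind of test the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration (the counts, setting labels and normal approximation are assumptions, not the paper's actual data or method): estimate the four CHSH correlations from coincidence counts, form the CHSH statistic S, and compute a one-sided p-value for the Kolmogorovian null hypothesis S <= 2.

```python
import math

def correlation(n_pp, n_pm, n_mp, n_mm):
    """Estimated correlation E(a, b) from the four coincidence counts."""
    n = n_pp + n_pm + n_mp + n_mm
    return (n_pp + n_mm - n_pm - n_mp) / n

# Hypothetical coincidence counts (++, +-, -+, --) for the four
# analyser-setting pairs of a CHSH-type polarization experiment.
counts = {
    ("a", "b"):   (3200,  800,  850, 3150),
    ("a", "b'"):  (3100,  900,  870, 3130),
    ("a'", "b"):  (3150,  850,  900, 3100),
    ("a'", "b'"): ( 900, 3100, 3150,  850),
}

E = {pair: correlation(*c) for pair, c in counts.items()}
S = E[("a", "b")] + E[("a", "b'")] + E[("a'", "b")] - E[("a'", "b'")]

# Any Kolmogorovian (local hidden-variable) model requires S <= 2.
# Test that bound with a normal approximation to the estimator of S.
var_S = sum((1 - e ** 2) / sum(counts[pair]) for pair, e in E.items())
z = (S - 2) / math.sqrt(var_S)
p_value = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided upper tail
```

    With these made-up counts S comes out around 2.27, well above the classical bound of 2, and the p-value is astronomically small; real data and the exact inferential procedure are in the paper itself.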

    Towards an epistemic theory of probability.

    The main concern of this thesis is to develop an epistemic conception of probability. In chapter one we look at Ramsey's work. In addition to his claim that the axioms of probability are laws of consistency for partial beliefs, we focus attention on his view that the reasonableness of our probability statements does not consist merely in such coherence, but is to be assessed through the vindication of the habits which give rise to them. In chapter two we examine de Finetti's account, and compare it with Ramsey's. One significant point of divergence is de Finetti's claim that coherence is the only valid form of appraisal for probability statements. His arguments for this position depend heavily on the implementation of a Bayesian model for belief change; we argue that such an approach fails to give a satisfactory account of the relation between probabilities and objective facts. In chapter three we stake out the ground for our own positive proposals - for an account which is non-objective in so far as it does not require the postulation of probabilistic facts, but non-subjective in the sense that probability statements are open to objective forms of appraisal. We suggest that a certain class of probability statements are best interpreted as recommendations of partial belief, these being measurable by the betting quotients that one judges to be fair. Moreover, we argue that these probability statements are open to three main forms of appraisal (each quantifiable through the use of proper scoring rules), namely: (i) coherence; (ii) calibration; (iii) refinement. The latter two forms of appraisal are applicable both in an ex ante sense (relative to the information known by the forecaster) and an ex post one (relative to the results of the events forecast).
In chapters four and five we consider certain problems which confront theories of partial belief; in particular, (1) difficulties surrounding the justification of the rule to maximise one's information, and (2) problems with the ascription of probabilities to mathematical propositions. Both of these issues seem resolvable: the first through the principle of maximising subjective expected utility (SEU), and the second either by amending the axioms of probability, or by making use of the notion that probabilities are appraisable via scoring rules. There do remain, however, various difficulties with SEU, in particular with respect to its application in real-life situations. These are discussed, but no final conclusion is reached, except that an epistemic theory such as ours is not undermined by the inapplicability of SEU in certain situations.
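    The appraisal of forecasts by calibration and refinement via a proper scoring rule, as the abstract describes, can be made concrete with the DeGroot-Fienberg decomposition of the Brier score. The sketch below is an illustration of that standard decomposition, not code from the thesis; function and variable names are our own.

```python
from collections import defaultdict

def brier_decomposition(forecasts, outcomes):
    """DeGroot-Fienberg decomposition of the Brier score into
    calibration + refinement, both quantified by this proper scoring rule.
    `forecasts` are announced probabilities in [0, 1]; `outcomes` are 0/1."""
    n = len(forecasts)
    groups = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        groups[p].append(o)
    # Calibration: squared gap between each announced probability and
    # the observed frequency among cases given that probability.
    calibration = sum(len(os) * (p - sum(os) / len(os)) ** 2
                      for p, os in groups.items()) / n
    # Refinement: residual outcome variance within each forecast group
    # (smaller means the forecaster sorts cases more sharply).
    refinement = sum(len(os) * (sum(os) / len(os)) * (1 - sum(os) / len(os))
                     for os in groups.values()) / n
    brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / n
    return brier, calibration, refinement
```

    For a perfectly calibrated forecaster (e.g. "0.75" events happen 3 times in 4), the calibration term is zero and the entire Brier score is the refinement term.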

    Objective and Subjective Rationality in a Multiple Prior Model

    A decision maker is characterized by two binary relations. The first reflects decisions that are rational in an “objective” sense: the decision maker can convince others that she is right in making them. The second relation models decisions that are rational in a “subjective” sense: the decision maker cannot be convinced that she is wrong in making them. We impose axioms on these relations that allow a joint representation by a single set of prior probabilities. It is “objectively rational” to choose f in the presence of g if and only if the expected utility of f is at least as high as that of g given each and every prior in the set. It is “subjectively rational” to choose f rather than g if and only if the minimal expected utility of f (relative to all priors in the set) is at least as high as that of g. Keywords: Rationality, Multiple Priors.
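    The two decision rules in this abstract are simple to state computationally. The sketch below is an illustrative toy (acts as state-contingent payoff vectors, linear utility, two hypothetical priors), not the paper's formal model: the objective relation demands dominance under every prior, the subjective one compares worst-case (maxmin) expected utilities.

```python
def objectively_preferred(f, g, priors, utility=lambda x: x):
    """f over g in the 'objective' sense: expected utility of f is at
    least that of g under each and every prior in the set."""
    def eu(act, p):
        return sum(pi * utility(x) for pi, x in zip(p, act))
    return all(eu(f, p) >= eu(g, p) for p in priors)

def subjectively_preferred(f, g, priors, utility=lambda x: x):
    """f over g in the 'subjective' sense: the minimal expected utility
    of f over the set of priors is at least that of g."""
    def eu(act, p):
        return sum(pi * utility(x) for pi, x in zip(p, act))
    return min(eu(f, p) for p in priors) >= min(eu(g, p) for p in priors)
```

    With priors {(0.3, 0.7), (0.7, 0.3)} over two states, the risky act f = (10, 0) and the safe act g = (4, 4) are objectively incomparable (neither dominates under both priors), while the subjective maxmin criterion ranks g above f. This shows how the objective relation can be incomplete where the subjective one always decides.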

    The Fundamental Theorem of Prevision

    1 online resource (PDF, 43 pages)

    Webs of Things in the Mind: A New Science of Evidence

    A Review of Evidence and Inference for the Intelligence Analyst by David Schu

    Computability, inference and modeling in probabilistic programming

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 135-144). We investigate the class of computable probability distributions and explore the fundamental limitations of using this class to describe and compute conditional distributions. In addition to proving the existence of noncomputable conditional distributions, and thus ruling out the possibility of generic probabilistic inference algorithms (even inefficient ones), we highlight some positive results showing that posterior inference is possible in the presence of additional structure like exchangeability and noise, both of which are common in Bayesian hierarchical modeling. This theoretical work bears on the development of probabilistic programming languages (which enable the specification of complex probabilistic models) and their implementations (which can be used to perform Bayesian reasoning). The probabilistic programming approach is particularly well suited for defining infinite-dimensional, recursively defined stochastic processes of the sort used in nonparametric Bayesian statistics. We present a new construction of the Mondrian process as a partition-valued Markov process in continuous time, which can be viewed as placing a distribution on an infinite kd-tree data structure. By Daniel M. Roy. Ph.D.
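    The Mondrian process mentioned at the end can be sketched as a recursive sampler. This is the standard lifetime-budget formulation, written as a minimal illustration rather than the thesis's continuous-time Markov construction: a box is cut at an exponentially distributed time with rate equal to its total side length, the cut dimension is chosen proportionally to length, and the remaining budget is passed to both children, yielding a kd-tree partition.

```python
import random

def mondrian(box, budget, rng=random.Random(0)):
    """Sample a Mondrian partition of `box` (a list of (lo, hi) intervals),
    returning the leaf boxes of the induced kd-tree partition."""
    lengths = [hi - lo for lo, hi in box]
    rate = sum(lengths)
    t = rng.expovariate(rate)       # waiting time until the next cut
    if t > budget:
        return [box]                # lifetime exhausted: this box is a leaf
    # Choose a dimension with probability proportional to its length,
    # then cut uniformly within that dimension.
    d = rng.choices(range(len(box)), weights=lengths)[0]
    lo, hi = box[d]
    cut = rng.uniform(lo, hi)
    left = box[:d] + [(lo, cut)] + box[d + 1:]
    right = box[:d] + [(cut, hi)] + box[d + 1:]
    remaining = budget - t
    return mondrian(left, remaining, rng) + mondrian(right, remaining, rng)
```

    Sampling `mondrian([(0.0, 1.0), (0.0, 1.0)], 2.0)` produces axis-aligned rectangles that tile the unit square, the "Mondrian painting" pattern that gives the process its name.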

    Why We Still Need the Logic of Decision


    Probability models for information retrieval based on divergence from randomness

    This thesis devises a novel methodology, based on probability theory, suitable for the construction of term-weighting models of Information Retrieval. Our term-weighting functions are created within a general framework made up of three components. Each of the three components is built independently from the others. We obtain the term-weighting functions from the general model in a purely theoretical way, instantiating each component with different probability distribution forms. The thesis begins by investigating the nature of the statistical inference involved in Information Retrieval. We explore the estimation problem underlying the process of sampling. De Finetti’s theorem is used to show how to convert the frequentist approach into Bayesian inference, and we display and employ the derived estimation techniques in the context of Information Retrieval. We initially pay great attention to the construction of the basic sample spaces of Information Retrieval. The notion of single or multiple sampling from different populations in the context of Information Retrieval is extensively discussed and used throughout the thesis. The language modelling approach and the standard probabilistic model are studied under the same foundational view and are experimentally compared to the divergence-from-randomness approach. In revisiting the main information retrieval models in the literature, we show that even the language modelling approach can be exploited to assign term-frequency normalization to the models of divergence from randomness. We finally introduce a novel framework for query expansion. This framework is based on the models of divergence from randomness and it can be applied to arbitrary models of IR, divergence-based, language modelling and probabilistic models included. We have run a very large number of experiments, and the results show that the framework generates highly effective Information Retrieval models.
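    The divergence-from-randomness idea of composing independent components can be illustrated with one simple, hypothetical instantiation (not necessarily one of the thesis's actual models): the informative content of a term frequency under a Poisson randomness model, tempered by a first-normalization factor. All names below are our own.

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def dfr_weight(tf, collection_freq, n_docs):
    """Illustrative divergence-from-randomness term weight:
    Inf1 = -log2 P(tf | randomness) under a Poisson model whose mean is
    the term's average frequency across the collection, composed with a
    simple after-effect normalization Inf2 = 1 / (tf + 1)."""
    lam = collection_freq / n_docs            # expected tf under pure randomness
    inf1 = -math.log2(poisson_pmf(tf, lam))   # divergence from randomness
    inf2 = 1.0 / (tf + 1.0)                   # first normalization (risk)
    return inf1 * inf2
```

    A term that occurs five times in a document, while appearing only 100 times across 10,000 documents, diverges far more from the randomness model than a single occurrence does, so it receives a substantially higher weight; swapping the Poisson component for another distribution yields a different model of the same family.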