17 research outputs found

    Quantum randomness and value indefiniteness

    As computability implies value definiteness, certain sequences of quantum outcomes cannot be computable.
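
    A hedged restatement of the one logical step this claim rests on, written as a contrapositive (a sketch only; the formal definition of value definiteness is the paper's and is not reproduced here):

        % Contrapositive form of "computability implies value definiteness":
        \[
          \bigl(\text{computable}(x) \Rightarrow \text{definite}(x)\bigr)
          \;\Longleftrightarrow\;
          \bigl(\neg\,\text{definite}(x) \Rightarrow \neg\,\text{computable}(x)\bigr)
        \]
        % so an outcome sequence x certified as value indefinite cannot be computable.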

    Decision Making and Trade without Probabilities

    This paper studies trade in a first-price sealed-bid auction where agents know only a range of possible payoffs. The setting is one in which a lemons problem arises, so that if agents have common risk preferences and common priors, expected utility theory predicts no trade. In contrast, we develop a model of rational non-probabilistic decision making, under which trade can occur because not bidding is a weakly dominated strategy. We use a laboratory experiment to test the predictions of both models, as well as models of expected utility with heterogeneous priors and risk preferences. We find strong support for the rational non-probabilistic model.
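
    A minimal sketch of the weak-dominance point, under assumptions that are mine rather than the paper's: the winner of the first-price auction pays their bid b and receives the item's value v, a non-bidder earns 0, and v is known only to lie in a range [V_LO, V_HI] (the numbers below are hypothetical). Any bid at or below V_LO then never pays less than not bidding, and pays strictly more whenever it wins at a higher value.

        # Illustrative sketch, not the paper's experimental design: why "no bid"
        # can be weakly dominated when only a payoff range is known.
        V_LO, V_HI = 10.0, 100.0          # assumed payoff range (hypothetical)

        def payoff(bid, value, wins):
            """Ex post payoff of a bidder: value - bid if they win, else 0."""
            return value - bid if wins else 0.0

        def weakly_dominates_no_bid(bid, values):
            """Bidding `bid` never pays less than not bidding (payoff 0) and
            pays strictly more in at least one win/lose scenario."""
            payoffs = [payoff(bid, v, w) for v in values for w in (True, False)]
            return min(payoffs) >= 0.0 and max(payoffs) > 0.0

        grid = [V_LO + i * (V_HI - V_LO) / 10 for i in range(11)]  # candidate values
        print(weakly_dominates_no_bid(bid=V_LO, values=grid))       # True
        print(weakly_dominates_no_bid(bid=V_HI + 1, values=grid))   # False: can lose money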

    Generating ambiguity in the laboratory

    This article develops a method for drawing samples from which it is impossible to infer any quantile or moment of the underlying distribution. The method provides researchers with a way to give subjects the experience of ambiguity. In any experiment, learning the distribution from experience is impossible for the subjects, essentially because it is impossible for the experimenter. We describe our method mathematically, illustrate it in simulations, and then test it in a laboratory experiment. Our technique does not withhold sampling information, does not assume that the subject is incapable of making statistical inferences, is replicable across experiments, and requires no special apparatus. We compare our method to the techniques used in related experiments that attempt to produce an ambiguous experience for the subjects.
    Keywords: ambiguity; Ellsberg; Knightian uncertainty; laboratory experiments; ignorance; vagueness. JEL classifications: C90; C91; C92; D80; D81.
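
    The abstract does not spell out the construction, so the sketch below is a loose illustration rather than the authors' method: it only shows the milder point that sample moments can be uninformative, by drawing from a standard Cauchy distribution, whose running sample means never settle down (Cauchy quantiles, unlike the paper's target, are still well defined).

        # Hedged illustration only; NOT the paper's sampling method. For a
        # distribution with no finite mean (standard Cauchy), running sample
        # means fail to converge, so the mean cannot be learned from experience.
        import math
        import random

        random.seed(1)

        def cauchy_draw():
            """Standard Cauchy via inverse transform: tan(pi * (U - 1/2))."""
            return math.tan(math.pi * (random.random() - 0.5))

        total = 0.0
        for n in range(1, 100_001):
            total += cauchy_draw()
            if n in (10, 1_000, 100_000):
                print(f"n = {n:>6}: running sample mean = {total / n:10.2f}")
        # The printed means jump around instead of converging, unlike the
        # law-of-large-numbers behaviour of distributions with a finite mean.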

    La thèse de l’hyper-calcul : enjeux et problèmes philosophiques

    In this paper I answer two philosophical questions raised by the following thesis, called the "hypercomputation thesis": it is possible to physically build a model of hypercomputation. The first question concerns what is at stake in this thesis. Since the physical construction of a computational model goes beyond the original mathematical framework of computability theory, I explain why it is necessary to physically build a model of hypercomputation. The second question concerns the verification problem raised against the hypercomputation thesis: even if we had a physically built model of hypercomputation, it would be impossible to verify that it computes a function that is not computable by a Turing machine. I propose an analysis of this problem in order to show that it does not explicitly call the hypercomputation thesis into question.
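
    A hedged aside on the verification problem (a standard observation, not taken from the paper, and the helper names below are illustrative): an alleged halting oracle can only ever be partially audited by simulation, because a "halts" verdict can be confirmed by running the program long enough, whereas a "does not halt" verdict cannot be confirmed within any finite budget.

        # Illustrative sketch, not from the paper: auditing a claimed halting
        # oracle by bounded simulation. Only "halts" claims are ever confirmable;
        # "does not halt" claims stay unverified for every finite budget.

        def run_for(program, steps):
            """Stand-in for stepping a universal machine: `program` is a callable
            reporting whether it halts within `steps` steps."""
            return program(steps)

        def audit(oracle_verdict, program, budget):
            """Return 'confirmed', 'refuted', or 'unverified' for the oracle's claim."""
            halted = run_for(program, budget)
            if oracle_verdict == "halts":
                return "confirmed" if halted else "unverified"  # it may halt later
            return "refuted" if halted else "unverified"        # never confirmable

        halts_quickly = lambda steps: True    # toy program that halts at once
        loops_forever = lambda steps: False   # toy program that never halts

        print(audit("halts", halts_quickly, budget=1000))          # confirmed
        print(audit("does not halt", loops_forever, budget=1000))  # unverified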

    Indeterminism and Undecidability

    The aim of this paper is to argue that the (alleged) indeterminism of quantum mechanics, claimed by adherents of the Copenhagen interpretation since Born (1926), can be proved from Chaitin's follow-up to Gödel's (first) incompleteness theorem. In comparison, Bell's (1964) theorem as well as the so-called free will theorem (originally due to Heywood and Redhead, 1983) left two loopholes for deterministic hidden variable theories, namely giving up either locality (more precisely, local contextuality, as in Bohmian mechanics) or free choice (i.e. uncorrelated measurement settings, as in 't Hooft's cellular automaton interpretation of quantum mechanics). The main point is that Bell and others did not exploit the full empirical content of quantum mechanics, which consists of long series of outcomes of repeated measurements (idealized as infinite binary sequences): their arguments only used the long-run relative frequencies derived from such series, and hence merely asked hidden variable theories to reproduce single-case Born probabilities defined by certain entangled bipartite states. If we idealize the binary outcome strings of a fair quantum coin flip as infinite sequences, quantum mechanics predicts that these typically (i.e. almost surely) have a property called 1-randomness in logic, which is much stronger than uncomputability. This is the key to my claim, which is admittedly based on a stronger (yet compelling) notion of determinism than what is common in the literature on hidden variable theories.
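
    A hedged summary of the shape of the argument, in standard algorithmic-randomness notation (a sketch under the abstract's own strong reading of determinism, not the paper's precise statement):

        % mu is the fair-coin (Lebesgue) measure on infinite binary sequences.
        % "Strong determinism" is read here as: outcomes are generated by a computable process.
        \begin{align*}
        &\text{QM (Born rule, fair quantum coin)} \;\Longrightarrow\;
          \mu\{\,x \in \{0,1\}^{\omega} : x \text{ is 1-random}\,\} = 1,\\
        &x \text{ 1-random} \;\Longrightarrow\; x \text{ not computable},\\
        &\text{strong determinism} \;\Longrightarrow\; \text{outcome sequences are computable},
        \end{align*}
        % so strong determinism is incompatible with the typical (measure-one) predictions of QM.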