
    Monotonous betting strategies in warped casinos

    Suppose that the outcomes of a roulette table are not entirely random, in the sense that there exists a successful betting strategy. Is there a successful "separable" strategy, in the sense that it does not use the winnings from betting on red in order to bet on black, and vice versa? We study this question from an algorithmic point of view and observe that every strategy M can be replaced by a separable strategy which is computable from M and successful on any outcome-sequence where M is successful. We then consider the case of mixtures and show: (a) there exists an effective mixture of separable strategies which succeeds on every casino sequence with effective Hausdorff dimension less than 1/2; (b) there exists a casino sequence of effective Hausdorff dimension 1/2 on which no effective mixture of separable strategies succeeds. Finally, we extend (b) to a more general class of strategies.
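    For context, the standard algorithmic formalisation behind this abstract (our recap, not part of the original text) represents a strategy as a martingale, a function M : \{0,1\}^* \to [0,\infty) satisfying the fairness condition

        M(\sigma) = \frac{M(\sigma 0) + M(\sigma 1)}{2},

    which succeeds on X \in \{0,1\}^\omega if \limsup_n M(X \upharpoonright n) = \infty. The effective Hausdorff dimension in (a) and (b) can be characterised via prefix-free Kolmogorov complexity K as

        \dim(X) = \liminf_{n \to \infty} \frac{K(X \upharpoonright n)}{n}.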

    The Bolzano-Weierstrass Theorem is the Jump of Weak König's Lemma

    We classify the computational content of the Bolzano-Weierstrass Theorem and variants thereof in the Weihrauch lattice. For this purpose we first introduce the concept of a derivative or jump in this lattice and we show that it has some properties similar to the Turing jump. Using this concept we prove that the derivative of closed choice of a computable metric space is the cluster point problem of that space. By specialization to sequences with a relatively compact range we obtain a characterization of the Bolzano-Weierstrass Theorem as the derivative of compact choice. In particular, this shows that the Bolzano-Weierstrass Theorem on the real numbers is the jump of Weak König's Lemma. Likewise, the Bolzano-Weierstrass Theorem on the binary space is the jump of the lesser limited principle of omniscience LLPO, and the Bolzano-Weierstrass Theorem on the natural numbers can be characterized as the jump of the idempotent closure of LLPO. We also introduce the compositional product of two Weihrauch degrees f and g as the supremum of the composition of any two functions below f and g, respectively. We can express the main result by saying that the Bolzano-Weierstrass Theorem is the compositional product of Weak König's Lemma and the Monotone Convergence Theorem. We also study the class of weakly limit computable functions, which are functions that can be obtained by composition of weakly computable functions with limit computable functions. We prove that the Bolzano-Weierstrass Theorem on the real numbers is complete for this class. Likewise, the unique cluster point problem on the real numbers is complete for the class of functions that are limit computable with finitely many mind changes. We also prove that the Bolzano-Weierstrass Theorem on the real numbers and, more generally, the unbounded cluster point problem on the real numbers are uniformly low limit computable. Finally, we discuss separation techniques.
    Comment: This version includes an addendum by Andrea Cettolo, Matthias Schröder, and the authors of the original paper. The addendum closes a gap in the proof of Theorem 11.2, which characterizes the computational content of the Bolzano-Weierstraß Theorem for arbitrary computable metric spaces.
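    For readers outside the area, the following standard definitions may help (our recap; the notation is assumed, not taken from the paper). Weihrauch reducibility between multi-valued functions on represented spaces is defined by

        f \le_W g \iff \exists \text{ computable } H, K \text{ such that } H\langle \mathrm{id}, G \circ K \rangle \text{ is a realizer of } f \text{ whenever } G \text{ is a realizer of } g,

    and the compositional product described above is

        f \star g = \sup\{ f_0 \circ g_0 : f_0 \le_W f, \; g_0 \le_W g \}.

    In this notation the main result reads \mathrm{BWT}_{\mathbb{R}} \equiv_W \mathrm{WKL}' \equiv_W \mathrm{WKL} \star \mathrm{MCT}.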

    Computing Measure as a Primitive Operation in Real Number Computation

    We study the power of BSS-machines enhanced with abilities such as computing the measure of a BSS-decidable set or computing limits of BSS-computable converging sequences. Our variations coalesce into just two equivalence classes, each of which can also be described as a lower cone in the Weihrauch degrees. We then classify computational tasks such as computing the measure of a Δ⁰₂-set of reals, integrating piece-wise continuous functions, and recovering a continuous function from an L₁([0,1])-description. All these share the Weihrauch degree lim.
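    The degree lim in the last sentence is that of the limit operator; as a standard recap (not part of the abstract),

        \lim :\subseteq X^{\mathbb{N}} \to X, \quad (x_n)_{n \in \mathbb{N}} \mapsto \lim_{n \to \infty} x_n,

    defined on the convergent sequences of a computable metric space X. On Baire space, lim is Weihrauch-complete for the limit computable functions.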

    An Algorithmic Interpretation of Quantum Probability

    The Everett (or relative-state, or many-worlds) interpretation of quantum mechanics has come under fire for inadequately dealing with the Born rule (the formula for calculating quantum probabilities). Numerous attempts have been made to derive this rule from the perspective of observers within the quantum wavefunction. These are not really analytic proofs, but rather attempts to derive the Born rule as a synthetic a priori necessity, given the nature of human observers (a fact not fully appreciated even by all of those who have attempted such proofs). I show why existing attempts are unsuccessful or only partly successful, and postulate that Solomonoff's algorithmic approach to the interpretation of probability theory could clarify the problems with these approaches. The Sleeping Beauty probability puzzle is used as a springboard from which to deduce an objectivist, yet synthetic a priori, framework for quantum probabilities that properly frames the role of self-location and self-selection (anthropic) principles in probability theory. I call this framework "algorithmic synthetic unity" (ASU). I offer no new formal proof of the Born rule, largely because I feel that existing proofs (particularly that of Gleason) are already adequate, and as close to being a formal proof as one should expect or want. Gleason's one unjustified assumption, known as noncontextuality, is, I will argue, completely benign when considered within the algorithmic framework that I propose. I will also argue that, to the extent the Born rule can be derived within ASU, there is no reason to suppose that we could not also derive all the other fundamental postulates of quantum theory. There is nothing special here about the Born rule, and I suggest that a completely successful Born rule proof might only be possible once all the other postulates become part of the derivation. As a start towards this end, I show how we can already derive the essential content of the fundamental postulates of quantum mechanics, at least in outline, especially if we allow some educated and well-motivated guesswork along the way. The result is a set of first steps towards a coherent and consistent algorithmic interpretation of quantum mechanics.
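    For reference (our gloss, not the author's text): the Born rule assigns to outcome i of a measurement in an orthonormal basis \{|a_i\rangle\}, performed on a system in state |\psi\rangle, the probability

        P(i) = |\langle a_i | \psi \rangle|^2,

    and Gleason's theorem states that on a Hilbert space of dimension at least 3, every countably additive probability measure \mu on the lattice of projections arises from a density operator \rho via

        \mu(P) = \mathrm{Tr}(\rho P).

    Noncontextuality is the assumption, built into this notion of measure, that \mu(P) is independent of which other commuting projections are measured alongside P.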

    Computably Based Locally Compact Spaces

    ASD (Abstract Stone Duality) is a re-axiomatisation of general topology in which the topology on a space is treated, not as an infinitary lattice, but as an exponential object of the same category as the original space, with an associated lambda-calculus. In this paper, this is shown to be equivalent to a notion of computable basis for locally compact sober spaces or locales, involving a family of open subspaces and an accompanying family of compact ones. This generalises Smyth's effectively given domains and Jung's strong proximity lattices. Part of the data for a basis is the inclusion relation of compact subspaces within open ones, which is formulated in locale theory as the way-below relation on a continuous lattice. The finitary properties of this relation are characterised here, including the Wilker condition for the cover of a compact space by two open ones. The real line is used as a running example, being closely related to Scott's domain of intervals. ASD does not use the category of sets, but the full subcategory of overt discrete objects plays this role; it is an arithmetic universe (pretopos with lists). In particular, we use this subcategory to translate computable bases for classical spaces into objects in the ASD calculus.
    Comment: 70pp, LaTeX2e, uses diagrams.sty; accepted for "Logical Methods in Computer Science", LMCS-2004-19; see http://www.cs.man.ac.uk/~pt/ASD for related papers. ACM-class: F.4.
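    For orientation, the way-below relation mentioned above is the standard domain-theoretic one (our recap): in a continuous lattice L,

        a \ll b \iff \text{for every directed } D \subseteq L \text{ with } b \sqsubseteq \bigvee D \text{ there is some } d \in D \text{ with } a \sqsubseteq d.

    On the open-set lattice of a locally compact space this unwinds to U \ll V iff U \subseteq K \subseteq V for some compact K, which is precisely the "compact subspace inside an open one" data that the bases in this paper axiomatise.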

    Hierarchy and Expansiveness in Two-Dimensional Subshifts of Finite Type

    Subshifts are sets of configurations over an infinite grid defined by a set of forbidden patterns. In this thesis, we study two-dimensional subshifts of finite type (2D SFTs), where the underlying grid is ℤ² and the set of forbidden patterns is finite. We are mainly interested in the interplay between the computational power of 2D SFTs and their geometry, examined through the concept of expansive subdynamics. 2D SFTs with expansive directions form an interesting and natural class of subshifts that lie between dimensions 1 and 2. An SFT that has only one non-expansive direction is called extremely expansive. We prove that in many aspects, extremely expansive 2D SFTs display the totality of behaviours of general 2D SFTs. For example, we construct an aperiodic extremely expansive 2D SFT and we prove that the emptiness problem is undecidable even when restricted to the class of extremely expansive 2D SFTs. We also prove that every Medvedev class contains an extremely expansive 2D SFT and we provide a characterization of the sets of directions that can be the set of non-expansive directions of a 2D SFT. Finally, we prove that for every computable sequence of 2D SFTs with an expansive direction, there exists a universal object that simulates all of the elements of the sequence. We use the so-called hierarchical, self-simulating or fixed-point method for constructing 2D SFTs, which has been previously used by Gács, Durand, Romashchenko and Shen.
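    To fix notation (a standard recap, not quoted from the thesis): given a finite alphabet A and a finite set F of forbidden patterns, the associated 2D SFT is

        X_F = \{ x \in A^{\mathbb{Z}^2} : \text{no translate of any } p \in F \text{ occurs in } x \},

    and a direction \ell is expansive for X if there is a width r > 0 such that any two configurations of X that agree on the strip \{ v \in \mathbb{Z}^2 : \mathrm{dist}(v, \ell) \le r \} are equal; the non-expansive directions are those admitting no such r.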

    On New Notions of Algorithmic Dimension, Immunity, and Medvedev Degree

    Ph.D. thesis.

    Aspects Topologiques des Représentations en Analyse Calculable (Topological Aspects of Representations in Computable Analysis)

    Computable analysis provides a formalization of algorithmic computations over infinite mathematical objects. The central notion of this theory is the symbolic representation of objects, which determines the computational power of the machine and has a direct impact on the difficulty of solving any given problem. The friction between the discrete nature of computations and the continuous nature of mathematical objects is captured by topology, which expresses the idea of finite approximations of infinite objects. We thoroughly study the multiple interactions between computations and topology, analysing the information that can be algorithmically extracted from a representation. In particular, we focus on the comparison between two representations of a single family of objects, on the precise relationship between the algorithmic and topological complexity of problems, and on the relationship between finite and infinite representations.
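    Concretely (a standard definition, recalled here for convenience), a representation of a set X is a partial surjection \delta :\subseteq \mathbb{N}^{\mathbb{N}} \to X from Baire space onto X, and any p with \delta(p) = x is a name of x. A function f : X \to Y is computable with respect to representations \delta_X and \delta_Y if some computable F :\subseteq \mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}} translates names into names:

        \delta_Y(F(p)) = f(\delta_X(p)) \quad \text{for all } p \in \mathrm{dom}(f \circ \delta_X).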

    Computability in constructive type theory

    We give a formalised and machine-checked account of computability theory in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. We first develop synthetic computability theory, pioneered by Richman, Bridges, and Bauer, where one treats all functions as computable, eliminating the need for a model of computation. We assume a novel parametric axiom for synthetic computability and give proofs of results like Rice's theorem, the Myhill isomorphism theorem, and the existence of Post's simple and hypersimple predicates, relying on no other axioms such as Markov's principle or choice axioms. As a second step, we introduce models of computation. We give a concise overview of definitions of various standard models and contribute machine-checked simulation proofs, a non-trivial engineering effort. We identify a notion of synthetic undecidability relative to a fixed halting problem, allowing axiom-free machine-checked proofs of undecidability. We contribute such undecidability proofs for the historical foundational problems of computability theory; these require the identification of invariants left out in the literature and now form the basis of the Coq Library of Undecidability Proofs. We then identify the weak call-by-value λ-calculus L as a sweet spot for programming in a model of computation. We introduce a certifying extraction framework and analyse an axiom stating that every function of type ℕ → ℕ is L-computable.
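    As a rough illustration of the synthetic style (our sketch; the precise axiomatic setup lives in CIC in the thesis), decidability and many-one reducibility of predicates are defined without reference to any machine model:

        \mathrm{decidable}(p) := \exists f : X \to \mathbb{B}.\; \forall x.\; p\,x \leftrightarrow f\,x = \mathsf{true},
        p \preceq_m q := \exists f : X \to Y.\; \forall x.\; p\,x \leftrightarrow q\,(f\,x).

    An undecidability proof relative to a fixed halting problem H is then simply a reduction H \preceq_m p; no further axioms are needed because, on the synthetic reading, every definable function is computable.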