
    Effective Choice and Boundedness Principles in Computable Analysis

    In this paper we study a new approach to classifying mathematical theorems according to their computational content. Essentially, we ask which theorems can be continuously or computably transferred into each other. For this purpose, theorems are considered via their realizers, which are operations with certain input and output data. The technical tool to express continuous or computable relations between such operations is Weihrauch reducibility and the partially ordered degree structure induced by it. We identify certain choice principles which are cornerstones among Weihrauch degrees, and it turns out that certain core theorems in analysis can be classified naturally in this structure. In particular, we study theorems such as the Intermediate Value Theorem, the Baire Category Theorem, the Banach Inverse Mapping Theorem and others. We also explore how existing classifications of the Hahn-Banach Theorem and Weak K\"onig's Lemma fit into this picture. We compare the results of our classification with existing classifications in constructive and reverse mathematics, and we claim that in a certain sense our classification is finer and sheds new light on the computational content of the respective theorems. We develop a number of separation techniques based on a new parallelization principle, on certain invariance properties of Weihrauch reducibility, on the Low Basis Theorem of Jockusch and Soare, and on the Baire Category Theorem. Finally, we present a number of metatheorems that allow one to derive upper bounds on the Weihrauch degree of many theorems, and we discuss the Brouwer Fixed Point Theorem as an example.
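    For orientation, a standard formulation of Weihrauch reducibility between multi-valued functions on represented spaces, and of the parallelization operation the abstract mentions, is sketched below; the notation is the usual one from the literature, not quoted from the paper itself.

        % F \vdash f means "F is a realizer of f", i.e. F maps every name of an
        % input x to a name of some element of f(x).
        f \le_W g \;:\Longleftrightarrow\;
            \exists\, \text{computable } H, K :\subseteq \mathbb{N}^{\mathbb{N}} \to \mathbb{N}^{\mathbb{N}}
            \ \forall\, G \vdash g :\; H\langle \mathrm{id}, G \circ K \rangle \vdash f .
        % Parallelization: apply f to countably many inputs simultaneously.
        \widehat{f}\,(x_0, x_1, x_2, \dots) \;:=\; f(x_0) \times f(x_1) \times f(x_2) \times \cdots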

    On algorithm and robustness in a non-standard sense

    In this paper, we investigate the invariance properties, i.e. robustness, of phenomena related to the notions of algorithm, finite procedure and explicit construction. First of all, we provide two examples of objects for which small changes completely change their (non)computational behavior. We then isolate robust phenomena in two disciplines related to computability

    On the Invariance of G\"odel's Second Theorem with regard to Numberings

    The prevalent interpretation of G\"odel's Second Theorem states that a sufficiently adequate and consistent theory does not prove its own consistency. It is, however, not entirely clear how to justify this informal reading, as the formulation of the underlying mathematical theorem depends on several arbitrary formalisation choices. In this paper I examine the theorem's dependency on G\"odel numberings. I introduce deviant numberings, yielding provability predicates satisfying L\"ob's conditions, which result in provable consistency sentences. According to the main result of this paper, however, these "counterexamples" do not refute the theorem's prevalent interpretation: once a natural class of admissible numberings is singled out, invariance is maintained. Comment: Forthcoming in The Review of Symbolic Logic.
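    For reference, the L\"ob (Hilbert-Bernays-L\"ob) derivability conditions alluded to in the abstract, together with the usual consistency sentence, are stated below in their standard textbook form (not taken from the paper); the choice of numbering is exactly what is hidden inside the corner quotes.

        % Pr_T is a provability predicate for the theory T and
        % \ulcorner \varphi \urcorner the numeral of the G\"odel number of \varphi.
        (D1)\quad T \vdash \varphi \;\Longrightarrow\; T \vdash \mathrm{Pr}_T(\ulcorner \varphi \urcorner)
        (D2)\quad T \vdash \mathrm{Pr}_T(\ulcorner \varphi \rightarrow \psi \urcorner) \rightarrow \bigl(\mathrm{Pr}_T(\ulcorner \varphi \urcorner) \rightarrow \mathrm{Pr}_T(\ulcorner \psi \urcorner)\bigr)
        (D3)\quad T \vdash \mathrm{Pr}_T(\ulcorner \varphi \urcorner) \rightarrow \mathrm{Pr}_T(\ulcorner \mathrm{Pr}_T(\ulcorner \varphi \urcorner) \urcorner)
        % Consistency sentence; G\"odel's Second Theorem: if T is consistent and
        % Pr_T satisfies (D1)-(D3), then T does not prove Con(T).
        \mathrm{Con}(T) \;:=\; \neg\, \mathrm{Pr}_T(\ulcorner 0 = 1 \urcorner)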

    Kolmogorov Complexity in perspective. Part I: Information Theory and Randomness

    We survey diverse approaches to the notion of information: from Shannon entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov complexity are presented: randomness and classification. The survey is divided into two parts in the same volume. Part I is dedicated to information theory and the mathematical formalization of randomness based on Kolmogorov complexity. This last application goes back to the 1960s and 1970s with the work of Martin-L\"of, Schnorr, Chaitin and Levin, and has gained new impetus in recent years. Comment: 40 pages.
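    The complexity notion underlying this formalization of randomness is well defined only up to an additive constant; the invariance theorem and the Schnorr-Levin characterization of Martin-L\"of randomness, given below in their standard form (not quoted from the survey), make this precise.

        % K_U(x) is the prefix-free Kolmogorov complexity of x relative to the
        % universal prefix machine U; |p| is the length of the program p.
        K_U(x) \;:=\; \min \{\, |p| : U(p) = x \,\}, \qquad
        \forall\, \text{universal } U, V \;\exists\, c \;\forall x :\; |K_U(x) - K_V(x)| \le c .
        % A sequence is Martin-L\"of random iff its prefixes are incompressible:
        x \in 2^{\mathbb{N}} \text{ is ML-random} \;\Longleftrightarrow\; \exists\, c \;\forall n :\; K(x {\restriction} n) \ge n - c .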

    On Universal Prediction and Bayesian Confirmation

    The Bayesian framework is a well-studied and successful framework for inductive reasoning, which includes hypothesis testing and confirmation, parameter estimation, sequence prediction, classification, and regression. But standard statistical guidelines for choosing the model class and prior are not always available, or they fail, in particular in complex situations. Solomonoff completed the Bayesian framework by providing a rigorous, unique, formal, and universal choice for the model class and the prior. We discuss in breadth how, and in which sense, universal (non-i.i.d.) sequence prediction solves various (philosophical) problems of traditional Bayesian sequence prediction. We show that Solomonoff's model possesses many desirable properties: it has strong total and weak instantaneous bounds; in contrast to most classical continuous prior densities it has no zero p(oste)rior problem, i.e. it can confirm universal hypotheses; it is reparametrization and regrouping invariant; and it avoids the old-evidence and updating problems. It even performs well (actually better) in non-computable environments. Comment: 24 pages.
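    Solomonoff's choice of prior, mentioned in the abstract, is usually written as the universal a-priori semimeasure below; this is the standard formulation under a fixed universal monotone machine, sketched here for orientation rather than quoted from the paper.

        % U is a universal monotone Turing machine; U(p) = x* means that U on
        % program p outputs a string starting with x; the sum is over minimal such p.
        M(x) \;:=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}, \qquad
        M(x_t \mid x_{<t}) \;=\; \frac{M(x_{<t} x_t)}{M(x_{<t})} .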

    Bad Universal Priors and Notions of Optimality

    A big open question of algorithmic information theory is the choice of the universal Turing machine (UTM). For Kolmogorov complexity and Solomonoff induction we have invariance theorems: the choice of the UTM changes bounds only by a constant. For the universally intelligent agent AIXI (Hutter, 2005) no invariance theorem is known. Our results are entirely negative: we discuss cases in which unlucky or adversarial choices of the UTM cause AIXI to misbehave drastically. We show that Legg-Hutter intelligence, and thus balanced Pareto optimality, is entirely subjective, and that every policy is Pareto optimal in the class of all computable environments. This undermines all existing optimality properties for AIXI. While it may still serve as a gold standard for AI, our results imply that AIXI is a relative theory, dependent on the choice of the UTM. Comment: COLT 2015.
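    The UTM-dependence at issue is visible directly in the Legg-Hutter intelligence measure, whose usual form is sketched below; the notation is the standard one from the literature rather than taken from this paper, and the reference machine U enters through the complexity term.

        % E is the class of computable environments, V^\pi_\mu the expected total
        % reward of policy \pi in environment \mu, and K_U(\mu) the Kolmogorov
        % complexity of \mu relative to the chosen UTM U.
        \Upsilon_U(\pi) \;:=\; \sum_{\mu \in E} 2^{-K_U(\mu)}\, V^{\pi}_{\mu} .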