    On sparseness and Turing reducibility over the reals

    We prove some results about the existence of NP-complete and NP-hard (for Turing reductions) sparse sets in different settings over the real numbers.
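
    For orientation, "sparse" in the classical discrete setting means that the set contains at most polynomially many strings up to each length; the paper studies analogues of this notion in computational settings over the real numbers. A minimal sketch of the standard discrete definition (the real-number variants require adaptations not spelled out in this abstract):

    \[
      S \subseteq \Sigma^{*} \text{ is sparse} \iff \exists \text{ a polynomial } p \ \forall n \in \mathbb{N}: \ |S \cap \Sigma^{\le n}| \le p(n).
    \]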

    The Computability-Theoretic Content of Emergence

    In dealing with emergent phenomena, a common task is to identify useful descriptions of them in terms of the underlying atomic processes, and to extract enough computational content from these descriptions to enable predictions to be made. Generally, the underlying atomic processes are quite well understood, and (with important exceptions) captured by mathematics from which it is relatively easy to extract algorithmic content. A widespread view is that the difficulty in describing transitions from algorithmic activity to the emergence associated with chaotic situations is a simple case of complexity outstripping computational resources and human ingenuity. Or, on the other hand, that phenomena transcending the standard Turing model of computation, if they exist, must necessarily lie outside the domain of classical computability theory. In this article we suggest that much of the current confusion arises from conceptual gaps and the lack of a suitably fundamental model within which to situate emergence. We examine the potential for placing emergent relations in a familiar context based on Turing's 1939 model for interactive computation over structures described in terms of reals. The explanatory power of this model is explored, formalising informal descriptions in terms of mathematical definability and invariance, and relating a range of basic scientific puzzles to results and intractable problems in computability theory.

    Kolmogorov complexity and the Recursion Theorem

    Several classes of DNR functions are characterized in terms of Kolmogorov complexity. In particular, a set of natural numbers A can wtt-compute a DNR function iff there is a nontrivial recursive lower bound on the Kolmogorov complexity of the initial segments of A. Furthermore, A can Turing compute a DNR function iff there is a nontrivial A-recursive lower bound on the Kolmogorov complexity of the initial segments of A. A is PA-complete, that is, A can compute a {0,1}-valued DNR function, iff A can compute a function F such that F(n) is a string of length n and maximal C-complexity among the strings of length n. A solves the halting problem iff A can compute a function F such that F(n) is a string of length n and maximal H-complexity among the strings of length n. Further characterizations for these classes are given. The existence of a DNR function in a Turing degree is equivalent to the failure of the Recursion Theorem for this degree; thus the provided results characterize, in terms of Kolmogorov complexity, those Turing degrees which no longer permit the use of the Recursion Theorem. Comment: Full version of the paper presented at STACS 2006, Lecture Notes in Computer Science 3884 (2006), 149--16
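
    As a reminder of the notions involved (standard conventions, not notation fixed by this abstract): a function f is diagonally non-recursive (DNR) if it avoids the diagonal of the partial recursive functions, and one natural reading of the first characterization, with C denoting plain Kolmogorov complexity and A restricted to its first n bits, is:

    \[
      f \text{ is DNR} \iff \forall e \ \bigl(\varphi_e(e)\!\downarrow \ \implies \ f(e) \neq \varphi_e(e)\bigr),
    \]
    \[
      A \text{ wtt-computes a DNR function} \iff \exists \text{ an unbounded nondecreasing recursive } g \ \forall n: \ C(A \upharpoonright n) \ge g(n).
    \]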

    Genericity and measure for exponential time

    Recently, Lutz [14, 15] introduced a polynomial-time bounded version of Lebesgue measure. He and others (see e.g. [11, 13–18, 20]) used this concept to investigate the quantitative structure of Exponential Time (E = DTIME(2^lin)). Previously, Ambos-Spies et al. [2, 3] introduced polynomial-time bounded genericity concepts and used them for the investigation of structural properties of NP (under appropriate assumptions) and E. Here we relate these concepts to each other. We show that, for any c ⩾ 1, the class of n^c-generic sets has p-measure 1. This allows us to simplify and extend certain p-measure-1 results. To illustrate the power of generic sets we take the Small Span Theorem of Juedes and Lutz [11] as an example and prove a generalization for bounded query reductions.
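
    In symbols, with E denoting linear-exponential time and μ_p the polynomial-time bounded (p-)measure of Lutz, the quoted result reads as follows (a restatement under standard definitions, not notation taken verbatim from the paper):

    \[
      \mathrm{E} = \bigcup_{c \ge 1} \mathrm{DTIME}(2^{cn}), \qquad
      \mu_p\bigl(\{A : A \text{ is } n^c\text{-generic}\}\bigr) = 1 \quad \text{for every } c \ge 1.
    \]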

    The Machine as Data: A Computational View of Emergence and Definability

    Turing’s (Proceedings of the London Mathematical Society 42:230–265, 1936) paper on computable numbers has played its role in underpinning different perspectives on the world of information. On the one hand, it encourages a digital ontology, with a perceived flatness of computational structure comprehensively hosting causality at the physical level and beyond. On the other (the main point of Turing’s paper), it can give an insight into the way in which higher order information arises and leads to loss of computational control, while demonstrating how the control can be re-established, in special circumstances, via suitable type reductions. We examine the classical computational framework more closely than is usual, drawing out lessons for the wider application of information-theoretical approaches to characterizing the real world. The problem which arises across a range of contexts is the characterizing of the balance of power between the complexity of informational structure (with emergence, chaos, randomness and ‘big data’ prominently on the scene) and the means available (simulation, codes, statistical sampling, human intuition, semantic constructs) to bring this information back into the computational fold. We proceed via appropriate mathematical modelling to a more coherent view of the computational structure of information, relevant to a wide spectrum of areas of investigation.

    Degrees of Computability and Randomness

    Randomness and Computability

    This thesis establishes significant new results in the area of algorithmic randomness. These results elucidate the deep relationship between randomness and computability. A number of results focus on randomness for finite strings. Levin introduced two functions which measure the randomness of finite strings. One function is derived from a universal monotone machine and the other function is derived from an optimal computably enumerable semimeasure. Gacs proved that infinitely often, the gap between these two functions exceeds the inverse Ackermann function (applied to string length). This thesis improves this result to show that infinitely often the difference between these two functions exceeds the double logarithm of the string length. Another separation result is proved for two different kinds of process machine. Information about the randomness of finite strings can be used as a computational resource. This information is contained in the overgraph. Muchnik and Positselsky asked whether there exists an optimal monotone machine whose overgraph is not truth-table complete. This question is answered in the negative. Related results are also established. This thesis makes advances in the theory of randomness for infinite binary sequences. A variant of process machines is used to characterise computable randomness, Schnorr randomness and weak randomness. This result is extended to give characterisations of these types of randomness using truth-table reducibility. The computable Lipschitz reducibility measures both the relative randomness and the relative computational power of real numbers. It is proved that the computable Lipschitz degrees of computably enumerable sets are not dense. Infinite binary sequences can be regarded as elements of Cantor space. Most research in randomness for Cantor space has been conducted using the uniform measure. However, the study of non-computable measures has led to interesting results. This thesis shows that the two approaches that have been used to define randomness on Cantor space for non-computable measures (that of Reimann and Slaman, and the uniform test approach first introduced by Levin and also used by Gacs, Hoyrup and Rojas) are equivalent. Levin established the existence of probability measures for which all infinite sequences are random. These measures are termed neutral measures. It is shown that every PA degree computes a neutral measure. Work of Miller is used to show that the set of atoms of a neutral measure is a countable Scott set, and in fact any countable Scott set is the set of atoms of some neutral measure. Neutral measures are used to prove new results in computability theory. For example, it is shown that the low computably enumerable sets are precisely the computably enumerable sets bounded by PA degrees strictly below the halting problem. This thesis applies ideas developed in the study of randomness to computability theory by examining indifferent sets for comeager classes in Cantor space. A number of results are proved. For example, it is shown that there exist 1-generic sets that can compute their own indifferent sets.
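
    The two string-complexity functions mentioned are usually written Km (monotone complexity, from a universal monotone machine) and KM (the negative logarithm of an optimal computably enumerable semimeasure); under that reading, and with additive constants glossed over (the notation is an assumption, not taken from the thesis itself), the quoted improvement says:

    \[
      Km(x) - KM(x) > \log\log |x| \quad \text{for infinitely many strings } x.
    \]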

    Learning Efficient Disambiguation

    This dissertation analyses the computational properties of current performance models of natural language parsing, in particular Data Oriented Parsing (DOP), points out some of their major shortcomings and suggests suitable solutions. It provides proofs that various problems of probabilistic disambiguation are NP-complete under instances of these performance models, and it argues that none of these models accounts for attractive efficiency properties of human language processing in limited domains, e.g. that frequent inputs are usually processed faster than infrequent ones. The central hypothesis of this dissertation is that these shortcomings can be eliminated by specializing the performance models to the limited domains. The dissertation addresses "grammar and model specialization" and presents a new framework, the Ambiguity-Reduction Specialization (ARS) framework, that formulates the necessary and sufficient conditions for successful specialization. The framework is instantiated into specialization algorithms and applied to specializing DOP. Novelties of these learning algorithms are that 1) they limit the hypothesis space to include only "safe" models, 2) they are expressed as constrained optimization formulae that minimize the entropy of the training tree-bank given the specialized grammar, under the constraint that the size of the specialized model does not exceed a predefined maximum, and 3) they enable integrating the specialized model with the original one in a complementary manner. The dissertation provides experiments with initial implementations and compares the resulting Specialized DOP (SDOP) models to the original DOP models, with encouraging results. Comment: 222 pages
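
    The constrained-optimization formulation described in point 2) can be written schematically as follows, with H for entropy, T for the training tree-bank, G_s ranging over specialized models in the "safe" hypothesis space, and M for the size budget (the symbols are illustrative, not taken from the dissertation):

    \[
      \min_{G_s} \ H(T \mid G_s) \qquad \text{subject to} \qquad |G_s| \le M.
    \]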