
    Randomness and Computability

    This thesis establishes significant new results in the area of algorithmic randomness. These results elucidate the deep relationship between randomness and computability. A number of results focus on randomness for finite strings. Levin introduced two functions which measure the randomness of finite strings. One function is derived from a universal monotone machine and the other from an optimal computably enumerable semimeasure. Gács proved that infinitely often, the gap between these two functions exceeds the inverse Ackermann function (applied to string length). This thesis improves this result to show that infinitely often the difference between these two functions exceeds the double logarithm of the string length. Another separation result is proved for two different kinds of process machine. Information about the randomness of finite strings can be used as a computational resource. This information is contained in the overgraph. Muchnik and Positselsky asked whether there exists an optimal monotone machine whose overgraph is not truth-table complete. This question is answered in the negative. Related results are also established. This thesis makes advances in the theory of randomness for infinite binary sequences. A variant of process machines is used to characterise computable randomness, Schnorr randomness and weak randomness. This result is extended to give characterisations of these types of randomness using truth-table reducibility. The computable Lipschitz reducibility measures both the relative randomness and the relative computational power of real numbers. It is proved that the computable Lipschitz degrees of computably enumerable sets are not dense. Infinite binary sequences can be regarded as elements of Cantor space. Most research in randomness for Cantor space has been conducted using the uniform measure. However, the study of non-computable measures has led to interesting results. This thesis shows that the two approaches that have been used to define randomness on Cantor space for non-computable measures are equivalent: that of Reimann and Slaman, and the uniform test approach first introduced by Levin and also used by Gács, Hoyrup and Rojas. Levin established the existence of probability measures for which all infinite sequences are random; these measures are termed neutral measures. It is shown that every PA degree computes a neutral measure. Work of Miller is used to show that the set of atoms of a neutral measure is a countable Scott set, and in fact any countable Scott set is the set of atoms of some neutral measure. Neutral measures are used to prove new results in computability theory. For example, it is shown that the low computably enumerable sets are precisely the computably enumerable sets bounded by PA degrees strictly below the halting problem. This thesis applies ideas developed in the study of randomness to computability theory by examining indifferent sets for comeager classes in Cantor space. A number of results are proved. For example, it is shown that there exist 1-generic sets that can compute their own indifferent sets.
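
    In symbols (standard notation, assumed here rather than quoted from the thesis): writing Km for the complexity arising from a universal monotone machine U, and KM for the negative logarithm of an optimal computably enumerable semimeasure M, the two functions and the improved separation read

        \mathrm{Km}(x) = \min\{\, |p| : x \preceq U(p) \,\}, \qquad
        \mathrm{KM}(x) = -\log_2 \mathbf{M}(x),
        \qquad
        \mathrm{Km}(x) - \mathrm{KM}(x) > \log\log|x| \ \text{ for infinitely many } x .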

    Square Span Programs with Applications to Succinct NIZK Arguments

    We use square span programs (SSPs) to construct succinct non-interactive zero-knowledge arguments of knowledge. For performance, our proof system is defined over Type III bilinear groups; proofs consist of just 4 group elements, verified in just 6 pairings. Concretely, using the Pinocchio libraries, we estimate that proofs will consist of 160 bytes verified in less than 6 ms.
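
    As a sketch of the underlying object (paraphrasing the standard SSP definition; the notation is ours, not quoted from the paper): a square span program over a field F consists of polynomials v_0, ..., v_m and a target polynomial t, and it accepts an assignment (a_1, ..., a_m) in {0,1}^m exactly when

        t(X) \;\Big|\; \Big( v_0(X) + \sum_{i=1}^{m} a_i\, v_i(X) \Big)^{2} - 1 .

    Roughly, the succinct argument then proves knowledge of a quotient polynomial h with (v_0 + Σ a_i v_i)^2 − 1 = h · t, committed in the 4 group elements mentioned above.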

    Optimal asymptotic bounds on the oracle use in computations from Chaitin’s Omega

    Chaitin’s number Ω is the halting probability of a universal prefix-free machine, and although it depends on the underlying enumeration of prefix-free machines, it is always Turing-complete. It can be observed, in fact, that for every computably enumerable (c.e.) real α, there exists a Turing functional via which Ω computes α, and such that the number of bits of Ω that are needed for the computation of the first n bits of α (i.e. the use on argument n) is bounded above by a computable function h(n) = n + o(n). We characterise the asymptotic upper bounds on the use of Chaitin’s Ω in oracle computations of halting probabilities (i.e. c.e. reals). We show that the following two conditions are equivalent for any computable function h such that h(n) − n is non-decreasing: (1) h(n) − n is an information content measure, i.e. the sum of 2^(n−h(n)) over all n is bounded; (2) for every c.e. real α there exists a Turing functional via which Ω computes α with use bounded by h.
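
    For reference, the underlying definitions in standard notation (assumed here, not quoted from the paper): for a universal prefix-free machine U,

        \Omega_U \;=\; \sum_{U(\sigma)\downarrow} 2^{-|\sigma|},
        \qquad\text{and condition (1) reads}\qquad
        \sum_{n} 2^{\,n - h(n)} \;<\; \infty .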

    Stretching demi-bits and nondeterministic-secure pseudorandomness

    We develop the theory of cryptographic nondeterministic-secure pseudorandomness beyond the point reached by Rudich's original work [25], and apply it to draw new consequences in average-case complexity and proof complexity. Specifically, we show the following:

    Demi-bit stretch: Super-bits and demi-bits are variants of cryptographic pseudorandom generators which are secure against nondeterministic statistical tests [25]. They were introduced to rule out certain approaches to proving strong complexity lower bounds beyond the limitations set out by the Natural Proofs barrier of Razborov and Rudich [23]. Whether demi-bits are stretchable at all had been an open problem since their introduction. We answer this question affirmatively by showing that every demi-bit b : {0,1}^n → {0,1}^{n+1} can be stretched into sublinearly many demi-bits b′ : {0,1}^n → {0,1}^{n+n^c}, for every constant 0 < c < 1.

    Average-case hardness: Using work by Santhanam [26], we apply our results to obtain new average-case Kolmogorov complexity results: we show that K^poly[n−O(1)] is zero-error average-case hard against NP/poly machines iff K^poly[n−o(n)] is, where for a function s(n) : N → N, K^poly[s(n)] denotes the language of all strings x ∈ {0,1}^n for which there is a (fixed) polytime Turing machine of description-length at most s(n) that outputs x.

    Characterising super-bits by nondeterministic unpredictability: In the deterministic setting, Yao [31] proved that super-polynomial hardness of pseudorandom generators is equivalent to ("next-bit") unpredictability. Unpredictability roughly means that given any strict prefix of a random string, it is infeasible to predict the next bit. We initiate the study of unpredictability beyond the deterministic setting (in the cryptographic regime), and characterise the nondeterministic hardness of generators from an unpredictability perspective. Specifically, we propose four stronger notions of unpredictability: NP/poly-unpredictability, coNP/poly-unpredictability, ∩-unpredictability and ∪-unpredictability, and show that super-polynomial nondeterministic hardness of generators lies between ∩-unpredictability and ∪-unpredictability.

    Characterising super-bits by nondeterministic hard-core predicates: We introduce a nondeterministic variant of hard-core predicates, called super-core predicates. We show that the existence of a super-bit is equivalent to the existence of a super-core of some non-shrinking function. This serves as an analogue of the equivalence between the existence of a strong pseudorandom generator and the existence of a hard-core of some one-way function [8, 12], and provides a first alternative characterisation of super-bits. We also prove that a certain class of functions, which may have hard-cores, cannot possess any super-core.
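
    For context, the deterministic-setting notion being generalised is Yao's next-bit unpredictability; in its standard formulation (assumed here, not quoted from the paper), a generator G : {0,1}^n → {0,1}^{l(n)} is unpredictable if for every probabilistic polynomial-time predictor A and every position i,

        \Pr_{s \leftarrow \{0,1\}^n} \big[\, A\big(G(s)_1 \cdots G(s)_{i-1}\big) = G(s)_i \,\big] \;\le\; \frac{1}{2} + \mathrm{negl}(n) .

    The four notions above strengthen, roughly speaking, the predictor A to nondeterministic (NP/poly and coNP/poly) machines.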

    The Machine as Data: A Computational View of Emergence and Definability

    Turing’s (Proceedings of the London Mathematical Society 42:230–265, 1936) paper on computable numbers has played its role in underpinning different perspectives on the world of information. On the one hand, it encourages a digital ontology, with a perceived flatness of computational structure comprehensively hosting causality at the physical level and beyond. On the other (the main point of Turing’s paper), it can give an insight into the way in which higher order information arises and leads to loss of computational control—while demonstrating how the control can be re-established, in special circumstances, via suitable type reductions. We examine the classical computational framework more closely than is usual, drawing out lessons for the wider application of information-theoretical approaches to characterizing the real world. The problem which arises across a range of contexts is that of characterizing the balance of power between the complexity of informational structure (with emergence, chaos, randomness and ‘big data’ prominently on the scene) and the means available (simulation, codes, statistical sampling, human intuition, semantic constructs) to bring this information back into the computational fold. We proceed via appropriate mathematical modelling to a more coherent view of the computational structure of information, relevant to a wide spectrum of areas of investigation.

    Stochastic Model Updating with Uncertainty Quantification: An Overview and Tutorial

    This paper presents an overview of the theoretical framework of stochastic model updating, including critical aspects of model parameterisation, sensitivity analysis, surrogate modelling, test-analysis correlation, parameter calibration, etc. Special attention is paid to uncertainty analysis, which extends model updating from the deterministic domain to the stochastic domain. This extension is significantly promoted by uncertainty quantification metrics, which no longer describe the model parameters as unknown-but-fixed constants but as random variables with uncertain distributions, i.e. imprecise probabilities. As a result, stochastic model updating no longer aims at a single model prediction with maximum fidelity to a single experiment, but rather at a reduced uncertainty space of the simulation enveloping the complete scatter of multiple experimental data sets. Quantification of such an imprecise probability requires a dedicated uncertainty propagation process to investigate how the uncertainty space of the input is propagated via the model to the uncertainty space of the output. The two key aspects, forward uncertainty propagation and inverse parameter calibration, along with key techniques such as P-box propagation, statistical distance-based metrics, Markov chain Monte Carlo sampling, and Bayesian updating, are elaborated in this tutorial. The overall technical framework is demonstrated by solving the NASA Multidisciplinary UQ Challenge 2014, with the purpose of encouraging readers to reproduce the results by following this tutorial. The second practical demonstration is performed on a newly designed benchmark testbed, where a series of lab-scale aeroplane models are manufactured with varying geometric sizes, following pre-defined probability distributions, and tested in terms of their natural frequencies and mode shapes. Such a measurement database naturally contains not only measurement errors but also, more importantly, controllable uncertainties from the pre-defined distributions of the structure geometry. Finally, open questions are discussed to fulfil the motivation of this tutorial: providing researchers, especially beginners, with further directions on stochastic model updating from an uncertainty treatment perspective.
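
    To make the Bayesian updating step concrete, the following is a minimal sketch of random-walk Metropolis calibration of a single stiffness parameter from noisy natural-frequency measurements; the one-degree-of-freedom model, the prior bounds and the noise level are illustrative assumptions, not taken from the paper.

        import numpy as np

        def natural_freq(k, m=1.0):
            # Natural frequency (Hz) of a single-DOF spring-mass oscillator.
            return np.sqrt(k / m) / (2.0 * np.pi)

        def log_posterior(k, data, sigma=0.05, k_lo=10.0, k_hi=200.0):
            # Uniform prior on [k_lo, k_hi] plus an independent Gaussian likelihood.
            if not (k_lo <= k <= k_hi):
                return -np.inf
            resid = data - natural_freq(k)
            return -0.5 * np.dot(resid, resid) / sigma**2

        def metropolis(data, k0=50.0, steps=5000, prop_sd=2.0, seed=0):
            # Random-walk Metropolis sampler over the stiffness parameter k.
            rng = np.random.default_rng(seed)
            k, logp = k0, log_posterior(k0, data)
            samples = np.empty(steps)
            for i in range(steps):
                k_new = k + rng.normal(0.0, prop_sd)
                logp_new = log_posterior(k_new, data)
                if np.log(rng.uniform()) < logp_new - logp:  # accept/reject
                    k, logp = k_new, logp_new
                samples[i] = k
            return samples

        # Synthetic "measurements": true stiffness 100.0, scatter 0.05 Hz.
        rng = np.random.default_rng(1)
        data = natural_freq(100.0) + rng.normal(0.0, 0.05, size=20)
        post = metropolis(data)
        print("posterior mean of k:", post[1000:].mean())  # discard burn-in

    The posterior sample scatter here plays the role of the calibrated parameter uncertainty; the P-box propagation and statistical distance-based metrics discussed in the tutorial would operate on parameter distributions of this kind.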