
    Pseudo-Deterministic Proofs

    We introduce pseudo-deterministic interactive proofs (psdIP): interactive proof systems for search problems where the verifier is guaranteed, with high probability, to produce the same output on different executions. As in the case of classical interactive proofs, the verifier is a probabilistic polynomial-time algorithm interacting with an untrusted powerful prover. We view pseudo-deterministic interactive proofs as an extension of the study of pseudo-deterministic randomized polynomial-time algorithms: the goal of the latter is to find canonical solutions to search problems, whereas the goal of the former is to prove to a probabilistic polynomial-time verifier that a solution to a search problem is canonical. Alternatively, one may think of the powerful prover as aiding the probabilistic polynomial-time verifier to find canonical solutions to search problems, with high probability over the randomness of the verifier. The challenge is that pseudo-determinism should hold not only with respect to the randomness, but also with respect to the prover: a malicious prover should not be able to cause the verifier to output a solution other than the unique canonical one. The IP = PSPACE characterization implies that psdIP = IP. The challenge is to find constant-round pseudo-deterministic interactive proofs for hard search problems. We show a constant-round pseudo-deterministic interactive proof for the graph isomorphism problem: on any input pair of isomorphic graphs (G_0, G_1), there exists a unique isomorphism phi from G_0 to G_1 (although many isomorphisms may exist) which will be output by the verifier with high probability, regardless of any dishonest prover strategy. In contrast, we show that it is unlikely that constant-round psdIP proofs exist for NP-complete problems: if any NP-complete problem has a constant-round psdIP protocol, then the polynomial hierarchy collapses.
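
    To make the notion of a canonical solution concrete, here is a toy Python sketch (ours, not the paper's protocol): a brute-force search that always returns the lexicographically smallest isomorphism, so every correct run produces the same unique answer. A psdIP protocol targets the same uniqueness, but with a probabilistic polynomial-time verifier and the prover doing the heavy lifting.

        from itertools import permutations

        def canonical_isomorphism(g0, g1, n):
            # Graphs are sets of directed edge pairs over vertices 0..n-1.
            # Brute force (exponential), but it fixes a *canonical* answer:
            # the lexicographically smallest isomorphism. Pseudo-determinism
            # demands that every correct run output this same mapping.
            for phi in permutations(range(n)):  # enumerated in lex order
                if all(((phi[u], phi[v]) in g1) == ((u, v) in g0)
                       for u in range(n) for v in range(n)):
                    return phi
            return None  # not isomorphic

        g0 = {(0, 1), (1, 0), (1, 2), (2, 1)}  # undirected path 0-1-2
        g1 = {(0, 1), (1, 0), (0, 2), (2, 0)}  # undirected path 1-0-2
        print(canonical_isomorphism(g0, g1, 3))  # always (1, 0, 2)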

    Do evolutionary algorithms indeed require random numbers? Extended study

    Random processes are an inherent part of evolutionary algorithms, which are based on Darwin's theory of evolution and Mendel's theory of genetic inheritance. In this paper, we discuss whether random processes are really needed in evolutionary algorithms. We use n periodic deterministic processes instead of random number generators and compare the performance of evolutionary algorithms powered by those processes and by pseudo-random number generators. The deterministic processes used here are based on deterministic chaos and are used to generate periodic series of different lengths. The results presented here are a numerical demonstration rather than a mathematical proof. We propose that a certain class of deterministic processes can be used instead of random number generators without lowering the performance of evolutionary algorithms. © Springer International Publishing Switzerland 2013
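
    A minimal sketch of the idea (our toy code, not the paper's experimental setup): a (1+1) evolutionary algorithm whose only source of "randomness" is the chaotic logistic map rather than a pseudo-random number generator. Swapping logistic_stream() for a PRNG-backed stream is the comparison the paper carries out numerically.

        def logistic_stream(x=0.1234, r=4.0):
            # Deterministic chaos: iterate the logistic map x <- r*x*(1-x).
            # For r = 4 the orbit on [0, 1] looks statistically random.
            while True:
                x = r * x * (1.0 - x)
                yield x

        def one_plus_one_ea(fitness, dim, steps, rnd):
            # Minimal (1+1) EA: mutate each coordinate with prob. 1/dim,
            # keep the child if it is no worse. 'rnd' is any number stream.
            parent = [next(rnd) for _ in range(dim)]
            best = fitness(parent)
            for _ in range(steps):
                child = [next(rnd) if next(rnd) < 1.0 / dim else x
                         for x in parent]
                f = fitness(child)
                if f <= best:
                    parent, best = child, f
            return best

        sphere = lambda xs: sum(x * x for x in xs)  # toy objective
        print(one_plus_one_ea(sphere, 5, 2000, logistic_stream()))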

    On equivalences and standardization in a non-deterministic call-by-need lambda calculus

    The goal of this report is to prove the correctness of a considerable subset of transformations, with respect to contextual equivalence, in an extended lambda calculus with case, constructors, seq, let, and choice, with a simple set of reduction rules. Unfortunately, a direct proof appears to be impossible. The correctness proof proceeds by defining another calculus comprising the complex variants of copy, case-reduction and seq-reductions that use variable-binding chains. This complex calculus has well-behaved diagrams and allows a proof of the correctness of the transformations, and also that the simple calculus defines an equivalent contextual order.
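
    For reference, the standard notion the abstract relies on, in our notation and simplified to may-convergence only (the report's non-deterministic calculus may additionally use must-convergence):

        % Contextual preorder and equivalence via may-convergence:
        \[
          s \le_c t \;:\Longleftrightarrow\; \forall C.\; C[s]\!\downarrow \;\Rightarrow\; C[t]\!\downarrow
          \qquad
          s \sim_c t \;:\Longleftrightarrow\; s \le_c t \;\wedge\; t \le_c s
        \]
        % A transformation s -> t is correct iff s ~_c t: no program
        % context C can observe a difference in convergence behaviour.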

    How to prove similarity a precongruence in non-deterministic call-by-need lambda calculi

    Extending the method of Howe, we establish a large class of untyped higher-order calculi, in particular ones with call-by-need evaluation, in which similarity, also called applicative simulation, can be used as a proof tool for showing contextual preorder. The paper also demonstrates that Mann's approach using an intermediate "approximation" calculus scales up well from a basic non-deterministic call-by-need lambda calculus to more expressive lambda calculi. That is, we demonstrate that after transferring the contextual preorder of a non-deterministic call-by-need lambda calculus to its corresponding approximation calculus, it is possible to apply Howe's method to show that similarity is a precongruence. The transfer itself is not treated in this paper. The paper also proposes an optimization of the similarity test by cutting off redundant computations. Our results also apply to deterministic or non-deterministic call-by-value lambda calculi, and improve upon previous work insofar as we prove that only closed values are required as arguments for similarity testing instead of all closed expressions.
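
    As a sketch in our notation (the co-inductive definition below is the standard one, restricted, as the paper's improvement permits, to closed values v as test arguments):

        % Applicative similarity, tested on closed values only:
        \[
          s \preceq t \;:\Longleftrightarrow\;
          \big( s \Downarrow \lambda x.s' \;\Rightarrow\;
          \exists t'.\; t \Downarrow \lambda x.t' \;\wedge\;
          \forall v \in \mathit{ClosedVal}.\; s'[v/x] \preceq t'[v/x] \big)
        \]
        % Precongruence: \preceq is compatible with all term constructors,
        % hence usable in place of the contextual preorder.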

    A formally verified compiler back-end

    This article describes the development and formal verification (proof of semantic preservation) of a compiler back-end from Cminor (a simple imperative intermediate language) to PowerPC assembly code, using the Coq proof assistant both for programming the compiler and for proving its correctness. Such a verified compiler is useful in the context of formal methods applied to the certification of critical software: the verification of the compiler guarantees that the safety properties proved on the source code hold for the executable compiled code as well.
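
    The shape of such a semantic-preservation theorem, as a simplified sketch rather than the article's exact statement:

        \[
          \forall S\,C.\;\; \mathrm{compile}(S) = \mathrm{OK}(C)
          \;\Rightarrow\;
          \forall b.\; C \Downarrow b \;\Rightarrow\; S \Downarrow b
        \]
        % Every observable behaviour b of the compiled code C is among the
        % behaviours of the source program S, so safety properties proved
        % of S carry over to C.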

    Pseudorandomness for Approximate Counting and Sampling

    We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions. Our main technical contribution allows one to "boost" a given hardness assumption: we show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent. We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the "boosting" theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM. We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence, they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
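
    One standard hashing idea behind such approximate counting, as a self-contained Python sketch (ours, not the paper's construction): estimate the size of a witness set by finding the output length k at which a random pairwise-independent hash still usually sends some witness to the all-zeros cell.

        import random

        def pairwise_hash(bits_in, bits_out):
            # Random affine map h(x) = Ax + b over GF(2): a standard
            # pairwise-independent hash family (not from the paper).
            A = [[random.randrange(2) for _ in range(bits_in)]
                 for _ in range(bits_out)]
            b = [random.randrange(2) for _ in range(bits_out)]
            def h(x):
                xs = [(x >> i) & 1 for i in range(bits_in)]
                return tuple((sum(a * v for a, v in zip(row, xs)) + c) % 2
                             for row, c in zip(A, b))
            return h

        def estimate_log_size(S, bits_in, trials=50):
            # Largest k at which the all-zeros cell is usually occupied:
            # then |S| is roughly 2^k.
            for k in range(bits_in, 0, -1):
                hits = sum(any(h(x) == (0,) * k for x in S)
                           for h in (pairwise_hash(bits_in, k)
                                     for _ in range(trials)))
                if hits >= trials // 2:
                    return k
            return 0

        S = set(random.sample(range(1 << 12), 300))  # hidden set, |S| = 300
        print(estimate_log_size(S, 12))              # roughly log2(300) ~ 8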