
    On the Complexity of Simulating Auxiliary Input

    We construct a simulator for the problem of simulating auxiliary input whose complexity improves on all previous results, and we prove its optimality up to logarithmic factors by establishing a black-box lower bound. Specifically, let $\ell$ be the length of the auxiliary input and $\epsilon$ be the indistinguishability parameter. Our simulator is $\tilde{O}(2^{\ell}\epsilon^{-2})$ more complicated than the distinguisher family. For the lower bound, we show that the complexity of any simulator relative to the distinguisher is at least $\Omega(2^{\ell}\epsilon^{-2})$, assuming the simulator uses the distinguishers in a black-box way and satisfies a mild restriction.
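
    For a sense of scale, here is a minimal Python sketch that evaluates the $2^{\ell}\epsilon^{-2}$ factor from the bound above for a few hypothetical parameter choices (the values of $\ell$ and $\epsilon$ are made up for illustration, and constants and logarithmic factors are dropped):

        # Illustrative arithmetic only: the relative-complexity factor 2^l / eps^2,
        # ignoring constants and logarithmic factors.
        def simulator_overhead(l_bits: int, eps: float) -> float:
            return (2 ** l_bits) / eps ** 2

        for l_bits, eps in [(8, 0.1), (16, 0.01), (32, 0.001)]:
            print(f"l = {l_bits:2d}, eps = {eps}: overhead ~ {simulator_overhead(l_bits, eps):.2e}")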

    Computations on Nondeterministic Cellular Automata

    The work is concerned with the trade-offs between the dimension and the time and space complexity of computations on nondeterministic cellular automata (NCA). It is proved that: 1) every NCA $\mathcal{A}$ of dimension $r$, computing a predicate $P$ with time complexity $T(n)$ and space complexity $S(n)$, can be simulated by an $r$-dimensional NCA with time and space complexity $O(T^{\frac{1}{r+1}} S^{\frac{r}{r+1}})$ and by an $(r+1)$-dimensional NCA with time and space complexity $O(T^{1/2} + S)$; 2) for any predicate $P$ and integer $r>1$, if $\mathcal{A}$ is a fastest $r$-dimensional NCA computing $P$ with time complexity $T(n)$ and space complexity $S(n)$, then $T = O(S)$; 3) if $T_{r,P}$ is the time complexity of a fastest $r$-dimensional NCA computing a predicate $P$, then $T_{r+1,P} = O((T_{r,P})^{1-r/(r+1)^2})$ and $T_{r-1,P} = O((T_{r,P})^{1+2/r})$. Similar problems for deterministic CA are discussed. Comment: 18 pages in AmS-TeX, 3 figures in PostScript
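
    To make the exponents concrete, instantiating these bounds at small $r$ (a worked special case, not stated in the abstract) gives:

        \begin{align*}
        r = 1 \text{ in (1)}: &\quad O\bigl(T^{1/2} S^{1/2}\bigr) \text{ in the same dimension},\\
        r = 1 \text{ in (3)}: &\quad T_{2,P} = O\bigl((T_{1,P})^{1 - 1/4}\bigr) = O\bigl((T_{1,P})^{3/4}\bigr),\\
        r = 2 \text{ in (3)}: &\quad T_{1,P} = O\bigl((T_{2,P})^{1 + 2/2}\bigr) = O\bigl((T_{2,P})^{2}\bigr).
        \end{align*}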

    Simulating Auxiliary Inputs, Revisited

    For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. Provided that $Z$ is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) we discuss and compare the efficiency of known results, finding a flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs"; (b) we present a novel boosting algorithm for constructing the simulator; our technique essentially fixes the flaw, and the boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms; (c) our bounds are much better than the bounds known so far: to make the simulator $(s,\epsilon)$-indistinguishable we need complexity $O(s \cdot 2^{5\ell}\epsilon^{-2})$ in time/circuit size, which is better by a factor $\epsilon^{-2}$ than previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like $\mathsf{AES256}$. Comment: Some typos present in the previous version have been corrected
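
    The boosting idea can be illustrated with a short sketch. The Python below is a hedged illustration of a generic descent loop of this kind, not the paper's actual algorithm; the oracle find_distinguisher is hypothetical:

        # Sketch of a boosting-style simulator construction: while some distinguisher
        # has advantage > eps against the candidate distribution, take a descent step
        # toward fooling it. The step can push entries below zero ("negative mass");
        # clipping and renormalizing projects back onto the probability simplex.
        import numpy as np

        def boost_simulator(p, find_distinguisher, eps, step):
            """p: candidate distribution over Z-values (numpy array, sums to 1)."""
            while True:
                hit = find_distinguisher(p)     # hypothetical: (scores in [0,1], advantage) or None
                if hit is None or hit[1] <= eps:
                    return p                    # eps-indistinguishable: done
                d, _ = hit
                p = p + step * (d - d @ p)      # descent step; may create negative mass
                p = np.clip(p, 0.0, None)       # drop the negative mass ...
                p = p / p.sum()                 # ... and renormalize to a probability measure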

    A New Approximate Min-Max Theorem with Applications in Cryptography

    We propose a novel proof technique that can be applied to a broad class of problems in computational complexity in which switching the order of universal and existential quantifiers is helpful. Our approach combines the standard min-max theorem with convex approximation techniques, offering quantitative improvements over the standard way of using min-max theorems, as well as more concise and elegant proofs.
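
    The quantifier switch referred to here is, at bottom, the classical min-max principle. As background (this is the standard statement, not the paper's new theorem), for a payoff $f$ bilinear over probability simplices $\Delta(A)$ and $\Delta(B)$:

        \min_{x \in \Delta(A)} \max_{y \in \Delta(B)} f(x, y)
        \;=\;
        \max_{y \in \Delta(B)} \min_{x \in \Delta(A)} f(x, y)

    Approximate versions of such arguments replace the optimal mixed strategy by a low-complexity convex combination of pure strategies, which is where convex approximation techniques enter.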

    Models to Reduce the Complexity of Simulating a Quantum Computer

    Recently, quantum computation has generated a lot of interest due to the discovery of a quantum algorithm which can factor large numbers in polynomial time. The usefulness of a quantum computer is limited by the effect of errors. Simulation is a useful tool for determining the feasibility of quantum computers in the presence of errors. The size of a quantum computer that can be simulated is small, because faithfully modeling a quantum computer requires an exponential amount of storage and number of operations. In this paper we define simulation models to study the feasibility of quantum computers. The most detailed of these models is based directly on a proposed implementation. We also define less detailed models which are exponentially less complex but still produce accurate results. Finally, we show that the two different types of errors, decoherence and inaccuracies, are uncorrelated. This decreases the number of simulations which must be performed. Comment: 25 pages
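
    The exponential cost is easy to make concrete: an $n$-qubit state vector holds $2^n$ complex amplitudes. A minimal Python sketch (assuming 16 bytes per double-precision complex amplitude; the qubit counts are arbitrary):

        # Memory needed for a full n-qubit state vector: 2^n amplitudes at
        # 16 bytes each (double-precision complex).
        def statevector_bytes(n_qubits: int) -> int:
            return (2 ** n_qubits) * 16

        for n in (10, 20, 30, 40):
            print(f"{n} qubits: {statevector_bytes(n) / 2**30:.3g} GiB")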

    Reversible Simulation of Irreversible Computation by Pebble Games

    Reversible simulation of irreversible algorithms is analyzed in the stylized form of a "reversible" pebble game. While such simulations incur little overhead in additional computation time, they use a large amount of additional memory space during the computation. The reachable reversible-simulation instantaneous descriptions (pebble configurations) are characterized completely. As a corollary we obtain Bennett's reversible simulation and show that, among all simulations that can be modelled by the pebble game, Bennett's simulation is optimal in that it uses the least auxiliary space for the greatest number of simulated steps. One can reduce the auxiliary storage overhead incurred by the reversible simulation at the cost of allowing limited erasing, leading to an irreversibility-space trade-off. We show that in this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. We also show that the reversible simulation can be modified so that it is applicable even when the simulated computation time is unknown. Comment: 11 pages, LaTeX, submitted to Physica
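
    A minimal Python sketch of the recursion behind Bennett's strategy, as it appears in the pebble game (an illustration of the standard technique, not code from the paper): pebble the midpoint of a segment, recurse on the second half, then remove the midpoint pebble by running the first recursion again, which undoes it. Advancing $2^k$ steps uses $O(k)$ simultaneous pebbles (auxiliary space) at the cost of roughly $3^k$ moves (extra time):

        # Reversible pebble game. A pebble on node i is a saved configuration after
        # i simulated steps; a pebble may be placed or removed at node i+1 only
        # while node i carries a pebble. The base case is a self-inverse toggle,
        # so the same routine serves as its own reversal.
        def pebble(start: int, k: int, pebbles: set) -> None:
            """(Un)pebble node start + 2^k, assuming node `start` is pebbled."""
            if k == 0:
                pebbles.symmetric_difference_update({start + 1})  # toggle pebble
                return
            mid = start + 2 ** (k - 1)
            pebble(start, k - 1, pebbles)   # pebble the midpoint
            pebble(mid, k - 1, pebbles)     # (un)pebble the endpoint
            pebble(start, k - 1, pebbles)   # run again to remove the midpoint pebble

        pebbles = {0}
        pebble(0, 4, pebbles)
        print(sorted(pebbles))  # [0, 16]: 16 steps advanced, few pebbles ever in play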

    An in-between "implicit" and "explicit" complexity: Automata

    Implicit Computational Complexity makes two aspects implicit: it manipulates programming languages rather than models of computation, and it internalizes the bounds rather than using external measures. We survey how automata theory has contributed to complexity with a machine-dependent model with implicit bounds.

    Sublogarithmic uniform Boolean proof nets

    Using a proofs-as-programs correspondence, Terui was able to compare two models of parallel computation: Boolean circuits and proof nets for multiplicative linear logic. Mogbil et al. gave a logspace translation allowing us to compare their computational power as uniform complexity classes. This paper presents a novel translation in $AC^0$ and focuses on a simpler, restricted notion of uniform Boolean proof nets. We can then encode constant-depth circuits and compare complexity classes below logspace, which were out of reach with the previous translations. Comment: In Proceedings DICE 2011, arXiv:1201.034