    Simulating Auxiliary Inputs, Revisited

    For any pair $(X,Z)$ of correlated random variables we can think of $Z$ as a randomized function of $X$. Provided that $Z$ is short, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as \emph{simulating auxiliary inputs}. This idea of simulating auxiliary information turns out to be a powerful tool in computer science, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results: (a) We discuss and compare the efficiency of known results, finding the flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs". (b) We present a novel boosting algorithm for constructing the simulator. Our technique essentially fixes the flaw. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms. (c) Our bounds are much better than bounds known so far. To make the simulator $(s,\epsilon)$-indistinguishable we need complexity $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ in time/circuit size, which is better by a factor $\epsilon^{-2}$ compared to previous bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like $\mathsf{AES256}$.
    Comment: Some typos present in the previous version have been corrected
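    As a rough illustration of the claimed savings (a reading of the abstract, not a calculation taken from the paper): if the new simulator complexity is $O\left(s\cdot 2^{5\ell}\epsilon^{-2}\right)$ and this improves on the previous bounds by a factor $\epsilon^{-2}$, then those earlier bounds scale as
    \[
      s_{\mathrm{old}} = O\!\left(s\cdot 2^{5\ell}\,\epsilon^{-4}\right)
      \quad\Longrightarrow\quad
      \frac{s_{\mathrm{old}}}{s_{\mathrm{new}}} = \Theta\!\left(\epsilon^{-2}\right),
    \]
    so for an illustrative distinguishing advantage of $\epsilon = 2^{-20}$ the simulator becomes cheaper by roughly a factor $2^{40}$; a gap of this size is what separates a vacuous concrete-security bound from a meaningful one for a leakage-resilient construction.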

    Non-Malleable Codes for Small-Depth Circuits

    We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. $\mathsf{AC^0}$ tampering functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n\leq k^{1+\epsilon}$), and extending our result beyond this would require separating $\mathsf{P}$ from $\mathsf{NC^1}$. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction.
    Comment: 26 pages, 4 figures
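    To see why the improvement is exponential, consider illustrative (not paper-supplied) parameters; the constant $c$ and the exponent $1.1$ below are purely hypothetical stand-ins for the hidden $O(\cdot)$ and $1+o(1)$ terms:
    \[
      2^{O(\sqrt{k})} \;\text{versus}\; k^{1+o(1)}:
      \qquad k = 10^{6} \;\Longrightarrow\;
      2^{c\sqrt{k}} = 2^{1000\,c}
      \quad\text{while}\quad
      k^{1.1} \approx 4\times 10^{6}.
    \]
    Even for modest $c$, the old codeword length is astronomically larger than the message, whereas the new one stays within a near-linear blow-up.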

    An Atypical Survey of Typical-Case Heuristic Algorithms

    Heuristic approaches often do so well that they seem to pretty much always give the right answer. How close can heuristic algorithms get to always giving the right answer, without inducing seismic complexity-theoretic consequences? This article first discusses how a series of results by Berman, Buhrman, Hartmanis, Homer, Longpr\'{e}, Ogiwara, Sch\"{o}ning, and Watanabe, from the early 1970s through the early 1990s, explicitly or implicitly limited how well heuristic algorithms can do on NP-hard problems. In particular, many desirable levels of heuristic success cannot be obtained unless severe, highly unlikely complexity class collapses occur. Second, we survey work initiated by Goldreich and Wigderson, who showed how, under plausible assumptions, deterministic heuristics for randomized computation can achieve a very high frequency of correctness. Finally, we consider formal ways in which theory can help explain the effectiveness of heuristics that solve NP-hard problems in practice.
    Comment: This article is currently scheduled to appear in the December 2012 issue of SIGACT News

    Evaluation of A Resilience Embedded System Using Probabilistic Model-Checking

    If a Micro Processor Unit (MPU) receives an external electric signal as noise, the system function can easily freeze or malfunction. A new resilience strategy is implemented in order to reset the MPU automatically and stop the MPU from freezing or malfunctioning. The technique is useful for embedded systems which work in non-human environments. However, evaluating resilience strategies is difficult because their effectiveness depends on numerous, complex, interacting factors. In this paper, we use probabilistic model checking to evaluate embedded systems equipped with the above-mentioned resilience strategy. Qualitative evaluations are carried out with 6 PCTL formulas, and quantitative evaluations use two measures: one is system failure reduction, and the other is ADT (Average Down Time), the industry standard. Our work demonstrates the benefits brought by the resilience strategy. Experimental results indicate that our evaluation is cost-effective and reliable.
    Comment: In Proceedings ESSS 2014, arXiv:1405.055
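    The abstract does not list its 6 PCTL formulas; as a hedged sketch of the two kinds of property involved, a qualitative reachability check and a quantitative time-bounded query might look like
    \[
      P_{\geq 1}\,[\, \Diamond\, \mathit{reset} \,]
      \qquad\text{and}\qquad
      P_{=?}\,[\, \Diamond^{\leq T}\, \mathit{failure} \,],
    \]
    read as "with probability 1 the MPU is eventually reset" and "what is the probability that a failure occurs within $T$ time steps". Here $\mathit{reset}$, $\mathit{failure}$ and the bound $T$ are hypothetical labels chosen for illustration, not taken from the paper.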

    On the Complexity of Decomposable Randomized Encodings, Or: How Friendly Can a Garbling-Friendly PRF Be?
