Optimal Short-Circuit Resilient Formulas
We consider fault-tolerant boolean formulas in which the output of a faulty gate is short-circuited to one of the gate's inputs. A recent result by Kalai et al. [FOCS 2012] converts any boolean formula into a resilient formula of polynomial size that works correctly if less than a fraction 1/6 of the gates (on every input-to-output path) are faulty. We improve the result of Kalai et al., and show how to efficiently fortify any boolean formula against a fraction 1/5 of short-circuit gates per path, with only a polynomial blowup in size. We additionally show that it is impossible to obtain formulas with higher resilience and sub-exponential growth in size.
Towards our results, we consider interactive coding schemes when noiseless feedback is present; these produce resilient boolean formulas via a Karchmer-Wigderson relation. We develop a coding scheme that resists up to a fraction 1/5 of corrupted transmissions in each direction of the interactive channel. We further show that such a level of noise is maximal for coding schemes with sub-exponential blowup in communication. Our coding scheme takes surprising inspiration from Blockchain technology.
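To make the fault model concrete, the following minimal sketch (ours, not taken from the paper; the tree encoding and helper names are hypothetical) evaluates a formula in which a short-circuited gate simply forwards one of its inputs instead of computing its function:

# Minimal sketch of the short-circuit fault model (illustration only).
from itertools import product

def eval_formula(node, assignment, faulty_choice):
    """node is ('var', name) or (op, left, right) with op in {'AND', 'OR'}.
    faulty_choice maps a gate's id() to 0 or 1, the input it is short-circuited
    to; gates absent from the map are healthy."""
    if node[0] == "var":
        return assignment[node[1]]
    op, left, right = node
    lv = eval_formula(left, assignment, faulty_choice)
    rv = eval_formula(right, assignment, faulty_choice)
    choice = faulty_choice.get(id(node))
    if choice is not None:
        return lv if choice == 0 else rv      # faulty gate copies one of its inputs
    return (lv and rv) if op == "AND" else (lv or rv)

# Example: f = (x AND y) OR z, with the root OR gate short-circuited to its left input.
x_and_y = ("AND", ("var", "x"), ("var", "y"))
f = ("OR", x_and_y, ("var", "z"))
faults = {id(f): 0}
for x, y, z in product([False, True], repeat=3):
    a = {"x": x, "y": y, "z": z}
    print(a, "healthy:", eval_formula(f, a, {}), "faulty:", eval_formula(f, a, faults))

Resilience in the sense above asks that the fortified formula still compute the correct value as long as fewer than the stated fraction of gates on each input-to-output path behave this way.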
Simulating Auxiliary Inputs, Revisited
For any pair $(X, Z)$ of correlated random variables we can think of $Z$ as a
randomized function of $X$. Provided that $Z$ is short, one can make this
function computationally efficient by allowing it to be only approximately
correct. In folklore this problem is known as \emph{simulating auxiliary
inputs}. This idea of simulating auxiliary information turns out to be a
powerful tool in computer science, finding applications in complexity theory,
cryptography, pseudorandomness and zero-knowledge. In this paper we revisit
this problem, achieving the following results:
(a) We discuss and compare the efficiency of known results, finding the flaw in the best known bound claimed in the TCC'14 paper "How to Fake Auxiliary Inputs".
(b) We present a novel boosting algorithm for constructing the simulator. Our technique essentially fixes the flaw. This boosting proof is of independent interest, as it shows how to handle "negative mass" issues when constructing probability measures in descent algorithms.
(c) Our bounds are much better than the bounds known so far: to make the simulator $(s,\epsilon)$-indistinguishable, the time/circuit-size complexity we require improves substantially on previously known bounds. In particular, with our technique we (finally) get meaningful provable security for the EUROCRYPT'09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like AES-256.
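As background, one standard way to state the simulation goal (our paraphrase in generic notation, not a quotation from the paper) is that a simulator $h$ is $(s,\epsilon)$-indistinguishable if no distinguisher of size $s$ can tell the real pair from the simulated one:
\[
  \Bigl|\,\Pr\bigl[D(X,Z)=1\bigr] - \Pr\bigl[D(X,h(X))=1\bigr]\,\Bigr| \;\le\; \epsilon
  \qquad \text{for every circuit } D \text{ of size at most } s.
\]
Item (c) above measures the circuit size needed to achieve this guarantee.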
Non-Malleable Codes for Small-Depth Circuits
We construct efficient, unconditional non-malleable codes that are secure
against tampering functions computed by small-depth circuits. For
constant-depth circuits of polynomial size (i.e. $\mathsf{AC}^0$ tampering
functions), our codes have codeword length polynomial in the length $k$ of the
message. This is an exponential improvement over the previous best construction
due to Chattopadhyay and Li (STOC 2017). Our construction remains efficient for
circuit depths nearly logarithmic in the input length (indeed, the codeword
length remains polynomial in $k$), and extending our result beyond this regime
would require separating $\mathsf{P}$ from $\mathsf{NC}^1$.
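For context, the standard non-malleability experiment (in the usual Dziembowski-Pietrzak-Wichs style; the notation here is ours, not the paper's) encodes a message, applies the tampering function, and decodes:
\[
  \mathsf{Tamper}^{f}_{m}:\quad c \leftarrow \mathsf{Enc}(m),\qquad \tilde{m} \leftarrow \mathsf{Dec}\bigl(f(c)\bigr),\qquad \text{output } \tilde{m},
\]
and the code is non-malleable against a class of tampering functions if, for every $f$ in the class, the distribution of $\tilde{m}$ can be simulated without knowledge of $m$, up to a special symbol meaning "the message was left unchanged".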
We obtain our codes via a new efficient non-malleable reduction from
small-depth tampering to split-state tampering. A novel aspect of our work is
the incorporation of techniques from unconditional derandomization into the
framework of non-malleable reductions. In particular, a key ingredient in our
analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC
2013), a derandomization of the influential switching lemma from circuit
complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.
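Since the reduction targets the split-state model, here is a toy sketch (our illustration, not the paper's construction) of what a split-state tampering function is allowed to do: each half of the codeword is modified without seeing the other half. The XOR-sharing encoder below is deliberately naive and is itself malleable.

# Toy illustration of split-state tampering (not a non-malleable code).
import secrets

def encode(msg: bytes) -> tuple[bytes, bytes]:
    # Split msg into two states by XOR secret sharing.
    left = secrets.token_bytes(len(msg))
    right = bytes(a ^ b for a, b in zip(left, msg))
    return left, right

def decode(left: bytes, right: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(left, right))

def split_state_tamper(left, right, f, g):
    # A split-state adversary applies f to the left state and g to the right state,
    # each without access to the other half.
    return f(left), g(right)

msg = b"hello"
L, R = encode(msg)
L2, R2 = split_state_tamper(L, R, lambda s: s, lambda s: bytes(b ^ 1 for b in s))
print(decode(L, R))    # original message
print(decode(L2, R2))  # a related message: this toy code is malleable; real split-state NMCs are not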
An Atypical Survey of Typical-Case Heuristic Algorithms
Heuristic approaches often do so well that they seem to pretty much always
give the right answer. How close can heuristic algorithms get to always giving
the right answer, without inducing seismic complexity-theoretic consequences?
This article first discusses how a series of results by Berman, Buhrman,
Hartmanis, Homer, Longpré, Ogiwara, Schöning, and Watanabe, from the
early 1970s through the early 1990s, explicitly or implicitly limited how well
heuristic algorithms can do on NP-hard problems. In particular, many desirable
levels of heuristic success cannot be obtained unless severe, highly unlikely
complexity class collapses occur. Second, we survey work initiated by Goldreich
and Wigderson, who showed how under plausible assumptions deterministic
heuristics for randomized computation can achieve a very high frequency of
correctness. Finally, we consider formal ways in which theory can help explain
the effectiveness of heuristics that solve NP-hard problems in practice.
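One common way to make "frequency of correctness" precise (our illustration, not a definition quoted from the article) is to bound, at every input length, the number of inputs on which a heuristic $H$ disagrees with the set $L$ it is meant to decide:
\[
  \bigl|\{\, x \in \{0,1\}^{n} : H(x) \neq L(x) \,\}\bigr| \;\le\; \delta(n)\cdot 2^{n} \qquad \text{for all } n,
\]
with smaller $\delta$ corresponding to a more reliable heuristic; the limitative results surveyed above say that, for NP-hard problems, sufficiently small $\delta$ would force unlikely complexity class collapses.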
Complexity Theory
Computational Complexity Theory is the mathematical study of the intrinsic power and limitations of computational resources like time, space, or randomness. The current workshop focused on recent developments in various sub-areas including arithmetic complexity, Boolean complexity, communication complexity, cryptography, probabilistic proof systems, pseudorandomness and randomness extraction. Many of the developments are related to diverse mathematical fields such as algebraic geometry, combinatorial number theory, probability theory, representation theory, and the theory of error-correcting codes.
Evaluation of A Resilience Embedded System Using Probabilistic Model-Checking
If a Micro Processor Unit (MPU) receives an external electrical signal as
noise, the system function can easily freeze or malfunction. A new resilience
strategy is implemented in order to reset the MPU automatically and stop the
MPU from freezing or malfunctioning. The technique is useful for embedded
systems that work in environments without human operators. However, evaluating
resilience strategies is difficult because their effectiveness depends on
numerous, complex, interacting factors.
In this paper, we use probabilistic model checking to evaluate embedded
systems equipped with the above-mentioned resilience strategy. Qualitative
evaluation is carried out with six PCTL formulas, and quantitative evaluation
uses two measures: the reduction in system failures, and ADT (Average Down
Time), the industry standard. Our work demonstrates the benefits brought by the
resilience strategy, and experimental results indicate that our evaluation is
cost-effective and reliable.
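To give a flavor of the property language, here are two illustrative PCTL-style properties of the kind such an evaluation checks (our examples, not the six formulas used in the paper):
\[
  \mathrm{P}_{\geq 1}\bigl[\, \mathbf{F}\ \mathit{reset} \,\bigr]
  \qquad \text{(with probability 1 the MPU is eventually reset after a fault), and}
\]
\[
  \mathrm{P}_{=?}\bigl[\, \mathbf{F}^{\leq T}\ \mathit{freeze} \,\bigr]
  \qquad \text{(what is the probability that the MPU freezes within the first } T \text{ time steps?).}
\]
Queries of the first kind support qualitative evaluation, while quantitative queries of the second kind underlie measures such as the failure reduction and ADT reported in the paper.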