
    Algebraic Methods in Computational Complexity

    Computational Complexity is concerned with the resources required for algorithms to detect properties of combinatorial objects and structures. It has often proven true that the best way to argue about these combinatorial objects is to establish a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The Razborov-Smolensky polynomial-approximation method for proving constant-depth circuit lower bounds, the PCP characterization of NP, and the Agrawal-Kayal-Saxena polynomial-time primality test are some of the most prominent examples. In some of the most exciting recent progress in Computational Complexity, the algebraic theme still plays a central role. There have been significant recent advances in algebraic circuit lower bounds, and the so-called chasm at depth 4 suggests that the restricted models now being considered are not so far from ones that would lead to a general result. There have been similar successes concerning the related problems of polynomial identity testing and circuit reconstruction in the algebraic model (and these are tied to central questions regarding the power of randomness in computation). The areas of derandomization and coding theory have also seen important advances. The seminar aimed to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic methods in a variety of settings. Researchers in these areas are relying on ever more sophisticated and specialized mathematics, and the goal of the seminar was to play an important role in educating a diverse community about the latest techniques.
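
    For context, the three results named above can be stated in their standard textbook form (these formulations are not quoted from the seminar report):

        Razborov-Smolensky: constant-depth circuits with AND, OR, NOT and MOD_p gates (p prime) require size 2^{n^{Ω(1)}} to compute MOD_q when q is not a power of p; the proof approximates each gate by a low-degree polynomial over F_p.
        PCP theorem: NP = PCP[O(log n), O(1)], i.e., every NP statement has proofs checkable with O(log n) random bits and O(1) queries.
        AKS primality test: PRIMES ∈ P, via a deterministic test of congruences of the form (X + a)^n ≡ X^n + a (mod n, X^r - 1).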

    Derandomization with Minimal Memory Footprint

    Existing proofs that deduce BPL = L from circuit lower bounds convert randomized algorithms into deterministic algorithms with a large constant overhead in space. We study space-bounded derandomization with minimal footprint, and ask what is the minimal possible space overhead for derandomization. We show that BPSPACE[S] ⊆ DSPACE[c · S] for c ≈ 2, assuming space-efficient cryptographic PRGs and either: (1) lower bounds against bounded-space algorithms with advice, or (2) lower bounds against certain uniform compression algorithms. Under additional assumptions regarding the power of catalytic computation, in a new setting of parameters that was not studied before, we are even able to get c ≈ 1. Our results are constructive: given a candidate hard function (and a candidate cryptographic PRG), we show how to transform the randomized algorithm into an efficient deterministic one. This follows from new PRGs and targeted PRGs for space-bounded algorithms, which we combine with novel space-efficient evaluation methods. A central ingredient in all our constructions is hardness amplification reductions in logspace-uniform TC⁰, which were not known before.
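
    For orientation, the two regimes above can be laid out side by side (a schematic reading of the abstract; the model conventions in the gloss are mine, not the paper's formal theorem statements):

        BPSPACE[S] ⊆ DSPACE[c · S] with c ≈ 2,  given space-efficient cryptographic PRGs plus lower bound (1) or (2);
        BPSPACE[S] ⊆ DSPACE[c · S] with c ≈ 1,  given, in addition, the stated assumptions on catalytic computation.

    Here BPSPACE[S] denotes two-sided-error randomized space S (with the usual one-way access to the random tape), and the point is to minimize the constant c itself rather than only showing c = O(1).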

    On Hardness Assumptions Needed for "Extreme High-End" PRGs and Fast Derandomization

    The hardness vs. randomness paradigm aims to explicitly construct pseudorandom generators G: {0,1}^r → {0,1}^m that fool circuits of size m, assuming the existence of explicit hard functions. A "high-end PRG" with seed length r = O(log m) (implying BPP = P) was achieved in a seminal work of Impagliazzo and Wigderson (STOC 1997), assuming the high-end hardness assumption: there exist constants 0 < β < 1 < B, and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n}. Recently, motivated by fast derandomization of randomized algorithms, Doron et al. (FOCS 2020) and Chen and Tell (STOC 2021) constructed "extreme high-end PRGs" with seed length r = (1+o(1)) · log m, under qualitatively stronger assumptions. We study whether extreme high-end PRGs can be constructed from the corresponding hardness assumption in which β = 1-o(1) and B = 1+o(1), which we call the extreme high-end hardness assumption. We give a partial negative answer:
    - The construction of Doron et al. composes a PEG (pseudo-entropy generator) with an extractor. The PEG is constructed starting from a function that is hard for MA-type circuits. We show that black-box PEG constructions from the extreme high-end hardness assumption must have large seed length (and so cannot be used to obtain extreme high-end PRGs by applying an extractor). To prove this, we establish a new property of (general) black-box PRG constructions from hard functions: it is possible to fix many output bits of the construction while fixing few bits of the hard function. This property distinguishes PRG constructions from typical extractor constructions, and this may explain why it is difficult to design PRG constructions.
    - The construction of Chen and Tell composes two PRGs: G₁: {0,1}^{(1+o(1)) · log m} → {0,1}^{r₂ = m^{Ω(1)}} and G₂: {0,1}^{r₂} → {0,1}^m. The first PRG is constructed from the extreme high-end hardness assumption, and the second PRG needs to run in time m^{1+o(1)} and is constructed assuming one-way functions. We show that in black-box proofs of hardness amplification to 1/2 + 1/m, reductions must make Ω(m) queries, even in the extreme high-end. Known PRG constructions from hard functions are black-box and use (or imply) hardness amplification, and so cannot be used to construct a PRG G₁ from the extreme high-end hardness assumption.
    The new feature of our hardness amplification result is that it applies even to the extreme high-end setting of parameters, whereas past work does not. Our techniques also improve recent lower bounds of Ron-Zewi, Shaltiel and Varma (ITCS 2021) on the number of queries of local list-decoding algorithms.
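
    For orientation, the two parameter regimes discussed above can be laid out side by side (a schematic summary of the statements in the abstract, not a quotation):

        High-end assumption:          there exist constants 0 < β < 1 < B and functions computable in time 2^{B·n} that cannot be computed by circuits of size 2^{β·n};  yields a PRG G: {0,1}^r → {0,1}^m fooling size-m circuits with seed length r = O(log m).
        Extreme high-end assumption:  the same statement with β = 1 - o(1) and B = 1 + o(1);  the target is a PRG with seed length r = (1 + o(1)) · log m.

    The negative results above rule out the two black-box routes described in the abstract (a PEG composed with an extractor, and hardness amplification inside the PRG construction) for going from the extreme high-end assumption to an extreme high-end PRG.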

    Query Complexity Lower Bounds for Local List-Decoding and Hard-Core Predicates (Even for Small Rate and Huge Lists)

    Universal Reductions: Reductions Relative to Stateful Oracles

    We define a framework for analyzing the security of cryptographic protocols that makes minimal assumptions about what a realistic model of computation is. In particular, whereas classical models assume that the attacker is a (perhaps non-uniform) probabilistic polynomial-time algorithm, and more recent definitional approaches also consider quantum polynomial-time algorithms, we consider an approach that is more agnostic to what computational model is physically realizable. Our notion of universal reductions models attackers as PPT algorithms having access to some arbitrary unbounded stateful Nature that cannot be rewound or restarted when queried multiple times. We also consider a more relaxed notion of universal reductions w.r.t. time-evolving, k-window Natures that places restrictions on Nature: roughly speaking, Nature's behavior may depend on the number of messages it has received and on the content of the last k(λ) messages (but not on older messages). We present both impossibility results and general feasibility results for our notions, indicating to what extent the extended Church-Turing hypotheses are needed for a well-founded theory of Cryptography.
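
    A schematic way to read the k-window restriction (the notation N, q_i, a_i is mine, for illustration): a time-evolving, k-window Nature answers its i-th query q_i with

        a_i = N(i, q_{i-k(λ)+1}, …, q_i),

    i.e., the answer may depend on the round number and on the last k(λ) queries, but not on anything older. In the fully general notion, by contrast, Nature is an arbitrary unbounded stateful entity whose answers may depend on its entire history and which can never be rewound or restarted.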

    Average-Case Complexity

    We survey the average-case complexity of problems in NP. We discuss various notions of good-on-average algorithms, and present completeness results due to Impagliazzo and Levin. Such completeness results establish the fact that if a certain specific (but somewhat artificial) NP problem is easy-on-average with respect to the uniform distribution, then all problems in NP are easy-on-average with respect to all samplable distributions. Applying the theory to natural distributional problems remains an outstanding open question. We review some natural distributional problems whose average-case complexity is of particular interest and that do not yet fit into this theory. A major open question is whether the existence of hard-on-average problems in NP can be based on the P ≠ NP assumption or on related worst-case assumptions. We review negative results showing that certain proof techniques cannot prove such a result. While the relation between worst-case and average-case complexity for general NP problems remains open, there has been progress in understanding the relation between different "degrees" of average-case complexity. We discuss some of these "hardness amplification" results.
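
    For readers new to the area, the basic objects in this theory are distributional problems and heuristic algorithms; a standard formulation (the usual conventions of the area, with notation mine) is:

        A distributional problem is a pair (L, D) with L ⊆ {0,1}* and D = {D_n} an ensemble of input distributions.
        An errorless heuristic scheme A for (L, D) takes (x, δ), runs in time poly(|x|, 1/δ), always outputs either the correct answer L(x) or the failure symbol ⊥, and satisfies Pr_{x ← D_n}[A(x, δ) = ⊥] ≤ δ for every n and every δ > 0.

    The completeness results mentioned above relate such "easy on average" notions for one specific problem under the uniform distribution to all of NP under all polynomial-time samplable distributions.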

    On Efficient Zero-Knowledge Arguments

    Non-Malleable Codes for Bounded Polynomial-Depth Tampering

    Non-malleable codes allow one to encode data in such a way that, after tampering, the modified codeword is guaranteed to decode to either the original message or a completely unrelated one. Since the introduction of the notion by Dziembowski, Pietrzak, and Wichs (ICS '10 and J. ACM '18), a large body of work has focused on realizing such coding schemes secure against various classes of tampering functions. It is well known that there is no efficient non-malleable code secure against all polynomial size tampering functions. Nevertheless, non-malleable codes in the plain model (i.e., no trusted setup) secure against bounded polynomial size tampering are not known, and obtaining such a code has been a major open problem. We present the first construction of a non-malleable code secure against all polynomial size tampering functions that have bounded polynomial depth. This is an even larger class than all bounded polynomial size functions and, in particular, we capture all functions in non-uniform NC (and much more). Our construction is in the plain model (i.e., no trusted setup) and relies on several cryptographic assumptions such as keyless hash functions, time-lock puzzles, as well as other standard assumptions. Additionally, our construction has several appealing properties: the complexity of encoding is independent of the class of tampering functions, and we obtain sub-exponentially small error.
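
    As background, the guarantee described in the first sentence is typically formalized by a tampering experiment of roughly the following shape (a standard sketch with illustrative notation; this paper's exact definition may differ in details):

        Tamper_f(m):  c ← Enc(m);  m' := Dec(f(c));  output same* if m' = m, and m' otherwise.
        Non-malleability against a class F: for every f ∈ F there is a distribution D_f, independent of m, such that Tamper_f(m) is indistinguishable from D_f (with same* interpreted as "the original message").

    The result above achieves this, in the plain model, for F being all polynomial-size tampering functions of bounded polynomial depth.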

    Non-Malleable Codes Against Bounded Polynomial Time Tampering

    We construct efficient non-malleable codes (NMC) that are (computationally) secure against tampering by functions computable in any fixed polynomial time. Our construction is in the plain (no-CRS) model and requires the assumptions that (1) E is hard for NP circuits of some exponential size 2^{βn} (β > 0) (widely used in the derandomization literature), (2) sub-exponential trapdoor permutations exist, and (3) P-certificates with sub-exponential soundness exist. While it is impossible to construct NMC secure against arbitrary polynomial-time tampering (Dziembowski, Pietrzak, Wichs, ICS '10), the existence of NMC secure against O(n^c)-time tampering functions (for any fixed c) was shown (Cheraghchi and Guruswami, ITCS '14) via a probabilistic construction. An explicit construction was given (Faust, Mukherjee, Venturi, Wichs, Eurocrypt '14) assuming an untamperable CRS with length longer than the runtime of the tampering function. In this work, we show that under computational assumptions, we can bypass these limitations. Specifically, under the assumptions listed above, we obtain non-malleable codes in the plain model against O(n^c)-time tampering functions (for any fixed c), with codeword length independent of the tampering time bound. Our new construction of NMC draws a connection with non-interactive non-malleable commitments. In fact, we show that in the NMC setting, it suffices to have a much weaker notion called quasi non-malleable commitments: these are non-interactive, non-malleable commitments in the plain model in which the adversary runs in O(n^c) time, whereas the honest parties may run in longer (polynomial) time. We then construct a 4-tag quasi non-malleable commitment from any sub-exponential OWF and the assumption that E is hard for some exponential size NP-circuits, and use tag amplification techniques to support an exponential number of tags.
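
    Assumption (1) above, spelled out in the standard form used in the derandomization literature (my paraphrase; "NP circuits" is usually read as non-deterministic circuits or, in some formulations, circuits with SAT gates):

        There is a language in E = DTIME[2^{O(n)}] and a constant β > 0 such that, for all sufficiently large n, no NP circuit of size 2^{βn} decides the language on inputs of length n.

    As the abstract notes, this is the kind of worst-case hardness assumption that drives derandomization; here it is one of the three ingredients used to replace the untamperable CRS or the purely probabilistic constructions of prior work.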