
    Revisiting the Sanders-Freiman-Ruzsa Theorem in $\mathbb{F}_p^n$ and its Application to Non-malleable Codes

    Non-malleable codes (NMCs) protect sensitive data against degrees of corruption that prohibit error detection, ensuring instead that a corrupted codeword decodes correctly or to something that bears little relation to the original message. The split-state model, in which codewords consist of two blocks, considers adversaries who tamper with either block arbitrarily but independently of the other. The simplest construction in this model, due to Aggarwal, Dodis, and Lovett (STOC'14), was shown to give NMCs sending $k$-bit messages to $O(k^7)$-bit codewords. It is conjectured, however, that the construction allows linear-length codewords. Towards resolving this conjecture, we show that the construction allows for code-length $O(k^5)$. This is achieved by analysing a special case of Sanders's Bogolyubov-Ruzsa theorem for general Abelian groups. Closely following the excellent exposition of this result for the group $\mathbb{F}_2^n$ by Lovett, we expose its dependence on $p$ for the group $\mathbb{F}_p^n$, where $p$ is a prime.
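    For context, the following is a minimal sketch of the split-state tampering experiment underlying this abstract, in the standard formulation from the non-malleability literature; the notation $(\mathrm{Enc}, \mathrm{Dec})$ and $\mathsf{same}^*$ is the usual one and is not taken verbatim from the paper.

```latex
% Split-state tampering experiment (standard formulation, sketched for context).
% A codeword is a pair (L, R); the adversary picks functions f, g acting on each
% half independently. Non-malleability requires the tampered decoding to depend
% on the message only through "same or unrelated".
\[
\mathrm{Tamper}^{f,g}_m:\quad (L, R) \leftarrow \mathrm{Enc}(m), \qquad
\tilde{m} = \mathrm{Dec}\bigl(f(L),\, g(R)\bigr).
\]
\[
\text{NMC requirement:}\quad \mathrm{Tamper}^{f,g}_m \approx_{\varepsilon} D_{f,g},
\quad \text{where } D_{f,g} \text{ is a distribution over }
\{0,1\}^k \cup \{\mathsf{same}^*, \perp\} \text{ independent of } m.
\]
```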

    Private and Secure Post-Quantum Verifiable Random Function with NIZK Proof and Ring-LWE Encryption in Blockchain

    We present a secure and private blockchain-based Verifiable Random Function (VRF) scheme addressing some limitations of classical VRF constructions. Given the imminent prospect of quantum-computing adversaries, conventional cryptographic methods face vulnerabilities. To enhance our VRF's secure randomness, we adopt post-quantum Ring-LWE encryption for synthesizing pseudo-random sequences. Considering computational costs and the resulting on-chain gas costs, we suggest a bifurcated architecture for VRF design, optimizing interactions between on-chain and off-chain components. Our approach employs a secure ring signature supported by a NIZK proof and a delegated key generation method, inspired by the Chaum-Pedersen equality proof and the Fiat-Shamir heuristic. Our VRF scheme integrates multi-party computation (MPC) with blockchain-based decentralized identifiers (DID), ensuring both security and randomness. We elucidate the security and privacy aspects of our VRF scheme, analyzing its temporal and spatial complexities. We also approximate the entropy of the VRF scheme and detail its implementation in a Solidity contract. Further, we delineate a method for validating the VRF's proof, suited to contexts requiring both randomness and verification. Finally, using the NIST SP800-22 statistical randomness test suite, our results exhibit a 98.86% pass rate over 11 test cases, with an average p-value of 0.5459 from 176 total tests. (Comment: 21 pages, 5 figures; in the 2023 Proceedings of the International Conference on Cryptography and Blockchain.)
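    As background for the proof techniques named above, here is a minimal, illustrative Python sketch of a Chaum-Pedersen equality-of-discrete-log proof made non-interactive via the Fiat-Shamir heuristic. The group parameters are toy values chosen for readability; nothing here reflects the paper's actual Ring-LWE, ring-signature, or on-chain components.

```python
import hashlib
import secrets

# Toy Schnorr group parameters (ILLUSTRATIVE ONLY -- far too small for security).
# q is prime, p = 2q + 1 is prime; g and h generate the order-q subgroup of Z_p*.
q = 1019
p = 2 * q + 1  # 2039, also prime
g = 4          # 2^2 mod p: a square, hence in the order-q subgroup
h = 9          # 3^2 mod p: likewise

def fiat_shamir_challenge(*elems) -> int:
    """Derive the challenge by hashing the transcript (Fiat-Shamir heuristic)."""
    data = b"|".join(str(e).encode() for e in elems)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_equal_dlog(x: int):
    """Prove that u = g^x and v = h^x share the same exponent x."""
    u, v = pow(g, x, p), pow(h, x, p)
    r = secrets.randbelow(q)               # prover's ephemeral randomness
    a, b = pow(g, r, p), pow(h, r, p)      # commitments
    c = fiat_shamir_challenge(g, h, u, v, a, b)
    z = (r + c * x) % q                    # response
    return (u, v), (a, b, z)

def verify_equal_dlog(stmt, proof) -> bool:
    """Check g^z = a * u^c and h^z = b * v^c with the recomputed challenge c."""
    (u, v), (a, b, z) = stmt, proof
    c = fiat_shamir_challenge(g, h, u, v, a, b)
    return (pow(g, z, p) == a * pow(u, c, p) % p and
            pow(h, z, p) == b * pow(v, c, p) % p)

x = secrets.randbelow(q)
stmt, proof = prove_equal_dlog(x)
assert verify_equal_dlog(stmt, proof)
```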

    On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions

    Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that a nonadaptive black-box (BB) reduction is insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step? In this paper, we turn our eyes from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take an auxiliary input, and continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following:
    - if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses;
    - if we base an auxiliary-input one-way function (AIOWF) or auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction).
    These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-case-to-average-case framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result yields a weaker but still surprising consequence of nonadaptive BB reductions, namely a one-way function based on NP-hardness. In fact, the second result can be interpreted in two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is at least as hard as constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view enhances the significance of further investigation into constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.
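    For readers unfamiliar with the auxiliary-input variants, the following is a standard-style definition of an auxiliary-input one-way function, sketched from the usual formulation in the literature; the paper's exact quantifiers may differ.

```latex
% Auxiliary-input one-way function (a sketch of the standard-style definition,
% not necessarily the paper's exact formulation). The key difference from an
% ordinary OWF is that hardness is required relative to each auxiliary input z,
% for adversaries that receive z.
\[
f = \bigl\{ f_z : \{0,1\}^{p(|z|)} \to \{0,1\}^{q(|z|)} \bigr\}_{z \in \{0,1\}^*},
\qquad f_z(x) \text{ computable in time } \mathrm{poly}(|z|),
\]
\[
\forall \text{ PPT } A\ \ \exists \text{ negligible } \mu:\quad
\Pr_{x}\bigl[\, A(z, f_z(x)) \in f_z^{-1}(f_z(x)) \,\bigr] \le \mu(|z|)
\quad \text{for all sufficiently long } z.
\]
```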

    Non-Malleable Codes for Small-Depth Circuits

    We construct efficient, unconditional non-malleable codes that are secure against tampering functions computed by small-depth circuits. For constant-depth circuits of polynomial size (i.e. $\mathsf{AC}^0$ tampering functions), our codes have codeword length $n = k^{1+o(1)}$ for a $k$-bit message. This is an exponential improvement of the previous best construction due to Chattopadhyay and Li (STOC 2017), which had codeword length $2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains $n \leq k^{1+\epsilon}$), and extending our result beyond this would require separating $\mathsf{P}$ from $\mathsf{NC}^1$. We obtain our codes via a new efficient non-malleable reduction from small-depth tampering to split-state tampering. A novel aspect of our work is the incorporation of techniques from unconditional derandomization into the framework of non-malleable reductions. In particular, a key ingredient in our analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC 2013), a derandomization of the influential switching lemma from circuit complexity; the randomness-efficiency of this switching lemma translates into the rate-efficiency of our codes via our non-malleable reduction. (Comment: 26 pages, 4 figures.)
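    The non-malleable reduction framework referenced here (due to Aggarwal, Dodis, Kazana, and Obremski) can be summarized roughly as follows; this is a paraphrase of the standard definition, not the paper's exact statement.

```latex
% Non-malleable reduction (rough sketch). A tampering family F reduces to a
% family G, written F => G, if there is an encoding scheme (E, D) such that
% tampering a codeword with any f in F looks like tampering the message
% directly with some distribution over functions in G:
\[
F \Rightarrow G \ :\iff\ \exists\, (E, D)\ \text{s.t.}\ \forall f \in F\
\exists\ \text{distribution}\ \mathcal{G}_f\ \text{over}\ G:\quad
D\bigl(f(E(x))\bigr) \approx_{\varepsilon} g(x),\quad g \leftarrow \mathcal{G}_f,
\quad \text{for all } x.
\]
% Composition then yields NMCs: if F => G and G admits a non-malleable code,
% so does F. Here F is small-depth tampering and G is split-state tampering.
```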

    Transparent Error Correcting in a Computationally Bounded World

    We construct uniquely decodable codes against channels that are computationally bounded. Our construction requires only a public-coin (transparent) setup. All prior work for such channels either required a setup with secret keys and states, could not achieve unique decoding, or achieved worse rates (for a given bound on codeword corruptions). On the other hand, our construction relies on a strong cryptographic hash function with security properties that we only instantiate in the random oracle model.

    On Basing Search SIVP on NP-Hardness

    The possibility of basing cryptography on the minimal assumption $\mathsf{NP} \nsubseteq \mathsf{BPP}$ is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the $\tilde{O}(n)$-approximate shortest independent vector problem $\text{SIVP}_{\tilde{O}(n)}$. Although SIVP is NP-hard in its exact version, Guruswami et al. (CCC 2004) showed that $\text{gapSIVP}_{\sqrt{n/\log n}}$ is in $\mathsf{NP} \cap \mathsf{coAM}$ and thus unlikely to be NP-hard. Indeed, any language that can be reduced to $\text{gapSIVP}_{\tilde{O}(\sqrt{n})}$ (under general probabilistic polynomial-time adaptive reductions) is in $\mathsf{AM} \cap \mathsf{coAM}$ by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results applies to reductions to search problems, still leaving open a ray of hope: can NP be reduced to solving search SIVP with approximation factor $\tilde{O}(n)$? We eliminate such a possibility, by showing that any language that can be reduced to solving search $\text{SIVP}_{\gamma}$ with any approximation factor $\gamma(n) = \omega(n \log n)$ lies in $\mathsf{AM} \cap \mathsf{coAM}$. As a side product, we show that any language that can be reduced to discrete Gaussian sampling with parameter $\tilde{O}(\sqrt{n}) \cdot \lambda_n$ lies in $\mathsf{AM} \cap \mathsf{coAM}$.
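    For reference, the search problem in question can be stated as follows (a standard definition, sketched here for context):

```latex
% Search SIVP_gamma (standard definition). Given a basis B of a full-rank
% lattice L = L(B) in R^n, output n linearly independent lattice vectors of
% length at most gamma(n) * lambda_n(L), where lambda_n(L), the n-th successive
% minimum, is the smallest r such that L contains n linearly independent
% vectors of length at most r.
\[
\text{SIVP}_{\gamma}:\quad \text{given } B,\ \text{find linearly independent }
v_1, \dots, v_n \in \mathcal{L}(B)\ \text{with}\
\max_i \|v_i\| \le \gamma(n) \cdot \lambda_n\bigl(\mathcal{L}(B)\bigr).
\]
```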

    Dimension-Preserving Reductions from LWE to LWR

    The Learning with Rounding (LWR) problem was first introduced by Banerjee, Peikert, and Rosen (Eurocrypt 2012) as a \emph{derandomized} form of the standard Learning with Errors (LWE) problem. The original motivation of LWR was as a building block for constructing efficient, low-depth pseudorandom functions on lattices. It has since been used to construct reusable computational extractors, lossy trapdoor functions, and deterministic encryption. In this work we show two (incomparable) dimension-preserving reductions from LWE to LWR in the case of a \emph{polynomial-size modulus}. Prior works either required a superpolynomial modulus $q$, or lost at least a factor $\log(q)$ in the dimension of the reduction. A direct consequence of our improved reductions is an improvement in parameters (i.e. security and efficiency) for each of the known applications of poly-modulus LWR. Our results directly generalize to the ring setting. Indeed, our formal analysis is performed over ``module lattices,'' as defined by Langlois and Stehlé (DCC 2015), which generalize both the general lattice setting of LWE and the ideal lattice setting of RLWE as the single notion M-LWE. We hope that taking this broader perspective will lead to further insights of independent interest.
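    To make the "derandomized" relationship concrete, here is a small Python sketch contrasting LWE and LWR sample generation; the parameter values are arbitrary toy choices, not ones suggested by the paper.

```python
import random

# Toy parameters (ILLUSTRATIVE ONLY -- not secure, not from the paper).
n, q, p = 8, 257, 16   # dimension, LWE modulus, LWR rounding modulus (p < q)

def lwe_sample(s, sigma=2.0):
    """LWE sample: (a, <a,s> + e mod q), with fresh random noise e each time."""
    a = [random.randrange(q) for _ in range(n)]
    e = round(random.gauss(0, sigma))   # toy stand-in for a discrete Gaussian
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

def lwr_sample(s):
    """LWR sample: (a, round((p/q) * <a,s>) mod p). The "noise" is the
    deterministic rounding error, so no fresh randomness is used beyond a."""
    a = [random.randrange(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(p * inner / q) % p
    return a, b

s = [random.randrange(q) for _ in range(n)]
print(lwe_sample(s))
print(lwr_sample(s))
```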

    From Laconic Zero-Knowledge to Public-Key Cryptography

    Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and, in particular, to base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK. In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers. Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE. We also show several extensions of our result. First, a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.
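    The shape of the assumption can be sketched as follows; this is a rough paraphrase assembled from the abstract's own wording, with the quantitative details (e.g., the exact laconicity bound) deferred to the paper.

```latex
% Rough shape of the hypothesis (a sketch, not the paper's precise statement).
% There is a language L in NP with:
%  (1) an average-case hard instance sampler (L is "cryptographically hard");
%  (2) an honest-verifier zero-knowledge argument (P, V) for L, sound against
%      computationally bounded provers, in which the honest prover runs in
%      polynomial time given the NP witness and sends q(n) bits in total,
%      with q(n) sufficiently sub-logarithmic.
\[
\exists\, L \in \mathsf{NP}:\quad L \text{ crypto-hard} \;\wedge\;
L \text{ has an HVZK argument with prover communication } q(n) = o(\log n)
\;\Longrightarrow\; \text{PKE exists}.
\]
```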