32 research outputs found
Revisiting the Sanders-Bogolyubov-Ruzsa Theorem in $\mathbb{F}_p^n$ and its Application to Non-malleable Codes
Non-malleable codes (NMCs) protect sensitive data against degrees of
corruption that prohibit error detection, ensuring instead that a corrupted
codeword decodes correctly or to something that bears little relation to the
original message. The split-state model, in which codewords consist of two
blocks, considers adversaries who tamper with either block arbitrarily but
independently of the other. The simplest construction in this model, due to
Aggarwal, Dodis, and Lovett (STOC'14), was shown to give NMCs sending k-bit
messages to $O(k^7)$-bit codewords. It is conjectured, however, that the
construction allows for linear-length codewords. Towards resolving this
conjecture, we show that the construction allows for code-length $O(k^6)$.
This is achieved by analysing a special case of Sanders's Bogolyubov-Ruzsa
theorem for general Abelian groups. Closely following the excellent exposition
of this result for the group $\mathbb{F}_2^n$ by Lovett, we expose its
dependence on $p$ for the group $\mathbb{F}_p^n$, where $p$ is a prime.
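For context, the special case in question can be stated as follows (a hedged paraphrase of Sanders's quasi-polynomial Bogolyubov-Ruzsa theorem; the exponent 4 and the hidden constants are standard in this literature but not spelled out in the abstract): if $A \subseteq \mathbb{F}_p^n$ has density $|A| \geq \delta p^n$, then

\[ 2A - 2A \;=\; \{a_1 + a_2 - a_3 - a_4 : a_i \in A\} \]

contains a subspace of codimension $O(\log^4(1/\delta))$, and the dependence of the hidden constant on $p$ is precisely what the analysis above makes explicit.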
Private and Secure Post-Quantum Verifiable Random Function with NIZK Proof and Ring-LWE Encryption in Blockchain
We present a secure and private blockchain-based Verifiable Random Function
(VRF) scheme addressing some limitations of classical VRF constructions. Given
the imminent threat of quantum-capable adversaries, conventional cryptographic
methods face vulnerabilities. To strengthen our VRF's randomness, we adopt
post-quantum Ring-LWE encryption for synthesizing pseudo-random sequences.
Considering computational costs and resultant on-chain gas costs, we suggest a
bifurcated architecture for VRF design, optimizing interactions between
on-chain and off-chain components. Our approach employs a secure ring signature supported
by NIZK proof and a delegated key generation method, inspired by the
Chaum-Pedersen equality proof and the Fiat-Shamir Heuristic. Our VRF scheme
integrates multi-party computation (MPC) with blockchain-based decentralized
identifiers (DID), ensuring both security and randomness. We elucidate the
security and privacy aspects of our VRF scheme, analyzing temporal and spatial
complexities. We also approximate the entropy of the VRF scheme and detail its
implementation in a Solidity contract. We also delineate a method for
validating the VRF's proof, suited to contexts that require both randomness
and verification. Finally, using the NIST SP800-22 statistical randomness test
suite, our results exhibit a 98.86% pass rate over 11 test cases, with an
average p-value of 0.5459 across 176 total tests.

Comment: 21 pages, 5 figures, in the 2023 Proceedings of the International
Conference on Cryptography and Blockchain
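For background on the Chaum-Pedersen/Fiat-Shamir component, the following minimal Python sketch shows a non-interactive discrete-log equality proof over a toy group. All parameters are illustrative and insecure, and this is only the classical pattern the authors cite as inspiration, not their post-quantum (Ring-LWE) construction.

import hashlib
import secrets

# Toy subgroup parameters: P = 2Q + 1, with G and H generating the order-Q
# subgroup of Z_P^*. Illustrative only; real deployments use large groups.
P, Q = 23, 11
G, H = 2, 9

def fs_challenge(*vals):
    # Fiat-Shamir heuristic: derive the challenge by hashing the transcript.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x):
    # Chaum-Pedersen: prove log_G(u) == log_H(v) == x without revealing x.
    u, v = pow(G, x, P), pow(H, x, P)
    r = secrets.randbelow(Q)
    a, b = pow(G, r, P), pow(H, r, P)   # commitments
    c = fs_challenge(G, H, u, v, a, b)  # non-interactive challenge
    z = (r + c * x) % Q                 # response
    return (u, v), (a, b, z)

def verify(u, v, proof):
    a, b, z = proof
    c = fs_challenge(G, H, u, v, a, b)
    return (pow(G, z, P) == a * pow(u, c, P) % P
            and pow(H, z, P) == b * pow(v, c, P) % P)

(u, v), proof = prove(secrets.randbelow(Q))
assert verify(u, v, proof)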
On Basing Auxiliary-Input Cryptography on NP-Hardness via Nonadaptive Black-Box Reductions
Constructing one-way functions based on NP-hardness is a central challenge in theoretical computer science. Unfortunately, Akavia et al. [Akavia et al., 2006] presented strong evidence that a nonadaptive black-box (BB) reduction is insufficient to solve this challenge. However, should we give up such a central proof technique even for an intermediate step?
In this paper, we turn our attention from standard cryptographic primitives to weaker cryptographic primitives that are allowed to take an auxiliary input, and we continue to explore the capability of nonadaptive BB reductions to base auxiliary-input primitives on NP-hardness. Specifically, we prove the following:
- if we base an auxiliary-input pseudorandom generator (AIPRG) on NP-hardness via a nonadaptive BB reduction, then the polynomial hierarchy collapses;
- if we base an auxiliary-input one-way function (AIOWF) or auxiliary-input hitting set generator (AIHSG) on NP-hardness via a nonadaptive BB reduction, then an (i.o.-)one-way function also exists based on NP-hardness (via an adaptive BB reduction).
These theorems extend our knowledge of nonadaptive BB reductions beyond the current worst-case-to-average-case framework. The first result provides new evidence that nonadaptive BB reductions are insufficient to base AIPRG on NP-hardness. The second result yields a weaker but still surprising consequence of nonadaptive BB reductions, namely, a one-way function based on NP-hardness. In fact, the second result can be interpreted in two opposite ways. Pessimistically, it shows that basing AIOWF or AIHSG on NP-hardness via nonadaptive BB reductions is at least as hard as constructing a one-way function based on NP-hardness, which can be regarded as a negative result. Note that AIHSG is a weak primitive implied even by the hardness of learning; thus, this pessimistic view provides conceptually stronger limitations than the currently known limitations on nonadaptive BB reductions. Optimistically, it offers a new hope: a breakthrough construction of auxiliary-input primitives might also yield constructions of standard cryptographic primitives. This optimistic view enhances the significance of further investigation into constructing auxiliary-input or other intermediate cryptographic primitives instead of standard cryptographic primitives.
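For reference, the auxiliary-input relaxation can be stated informally as follows (a standard formulation, not quoted from this paper): a polynomial-time computable family $f = \{f_z\}_{z \in \{0,1\}^*}$ is an auxiliary-input one-way function (AIOWF) if for every probabilistic polynomial-time adversary $\mathcal{A}$ there is an infinite set $Z$ of auxiliary inputs on which inversion fails:

\[ \forall z \in Z: \quad \Pr_{x}\left[\mathcal{A}(z, f_z(x)) \in f_z^{-1}(f_z(x))\right] \leq \mathrm{negl}(|z|). \]

In contrast to a standard one-way function, the hard instances may depend on the adversary, which is what makes auxiliary-input primitives weaker.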
Non-Malleable Codes for Small-Depth Circuits
We construct efficient, unconditional non-malleable codes that are secure
against tampering functions computed by small-depth circuits. For
constant-depth circuits of polynomial size (i.e. $\mathsf{AC}^0$ tampering
functions), our codes have codeword length $k^{1+o(1)}$ for a $k$-bit
message. This is an exponential improvement over the previous best construction
due to Chattopadhyay and Li (STOC 2017), which had codeword length
$2^{O(\sqrt{k})}$. Our construction remains efficient for circuit depths as
large as $\Theta(\log(n)/\log\log(n))$ (indeed, our codeword length remains
$k^{1+o(1)}$), and extending our result beyond this would require
separating $\mathsf{P}$ from $\mathsf{NC}^1$.
We obtain our codes via a new efficient non-malleable reduction from
small-depth tampering to split-state tampering. A novel aspect of our work is
the incorporation of techniques from unconditional derandomization into the
framework of non-malleable reductions. In particular, a key ingredient in our
analysis is a recent pseudorandom switching lemma of Trevisan and Xue (CCC
2013), a derandomization of the influential switching lemma from circuit
complexity; the randomness-efficiency of this switching lemma translates into
the rate-efficiency of our codes via our non-malleable reduction.

Comment: 26 pages, 4 figures
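For orientation, the classical switching lemma being derandomized says, in one common form (constants are illustrative, not claimed by this abstract): if $f$ is a width-$w$ DNF and $\rho$ is a random restriction that leaves each variable free independently with probability $p$, then

\[ \Pr_{\rho}\left[ \mathrm{DT}\!\left(f|_\rho\right) \geq t \right] \leq (5pw)^t, \]

where $\mathrm{DT}$ denotes decision-tree depth. Trevisan and Xue prove a comparable bound when $\rho$ is drawn from a pseudorandom distribution with a short seed, and it is this randomness-efficiency that the reduction converts into rate-efficiency.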
Transparent Error Correcting in a Computationally Bounded World
We construct uniquely decodable codes against channels that are computationally bounded. Our construction requires only a public-coin (transparent) setup. All prior work for such channels either required a setup with secret keys and states, could not achieve unique decoding, or achieved worse rates (for a given bound on codeword corruptions). On the other hand, our construction relies on a strong cryptographic hash function with security properties that we instantiate only in the random oracle model.
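One classical paradigm in this line of work (not necessarily the construction of this paper) is to list-decode the corrupted word and then use a short cryptographic digest to single out the right candidate, on the grounds that a computationally bounded channel cannot produce hash collisions. A hypothetical Python sketch of that disambiguation step:

import hashlib

def seal(msg: bytes) -> bytes:
    # Short digest carried alongside the codeword (assumed recoverable).
    return hashlib.sha256(msg).digest()[:8]

def disambiguate(candidates: list[bytes], tag: bytes) -> bytes | None:
    # After list decoding, keep the unique candidate matching the digest.
    matches = [m for m in candidates if seal(m) == tag]
    return matches[0] if len(matches) == 1 else None

# Toy usage: the list decoder returned three candidates; only one is genuine.
original = b"attack at dawn"
candidates = [b"attack at dusk", original, b"retreat at dawn"]
assert disambiguate(candidates, seal(original)) == original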
On Basing Search SIVP on NP-Hardness
The possibility of basing cryptography on the minimal assumption $\mathsf{NP} \not\subseteq \mathsf{BPP}$ is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state of the art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the $\tilde{O}(n)$-approximate shortest independent vector problem $\mathsf{SIVP}_{\tilde{O}(n)}$.
Although SIVP is NP-hard in its exact version, Guruswami et al. (CCC 2004) showed that $\mathsf{SIVP}_{\tilde{O}(\sqrt{n})}$ is in $\mathsf{NP} \cap \mathsf{coAM}$ and thus unlikely to be NP-hard. Indeed, any language that can be reduced to $\mathsf{SIVP}_{\tilde{O}(\sqrt{n})}$ (under general probabilistic polynomial-time adaptive reductions) is in $\mathsf{AM} \cap \mathsf{coAM}$ by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results applies to reductions to search problems, still leaving open a ray of hope: can NP be reduced to solving search SIVP with approximation factor $\tilde{O}(n)$?
We eliminate this possibility by showing that any language that can be reduced to solving search SIVP with approximation factor $\tilde{O}(n)$ lies in $\mathsf{AM} \cap \mathsf{coAM}$. As a side product, we show that any language that can be reduced to discrete Gaussian sampling with a sufficiently large parameter lies in $\mathsf{AM} \cap \mathsf{coAM}$.
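To fix terminology (a standard definition, not quoted from the paper): for an approximation factor $\gamma = \gamma(n)$, search $\mathsf{SIVP}_\gamma$ asks, given a basis of a full-rank lattice $\mathcal{L} \subseteq \mathbb{R}^n$, to

\[ \text{find linearly independent } v_1, \dots, v_n \in \mathcal{L} \text{ with } \max_i \lVert v_i \rVert \leq \gamma(n) \cdot \lambda_n(\mathcal{L}), \]

where $\lambda_n(\mathcal{L})$, the $n$-th successive minimum, is the smallest $r$ such that $\mathcal{L}$ contains $n$ linearly independent vectors of norm at most $r$.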
Dimension-Preserving Reductions from LWE to LWR
The Learning with Rounding (LWR) problem was first introduced by Banerjee, Peikert, and Rosen (Eurocrypt 2012) as a \emph{derandomized} form of the standard Learning with Errors (LWE) problem. The original motivation of LWR was as a building block for constructing efficient, low-depth pseudorandom functions on lattices. It has since been used to construct reusable computational extractors, lossy trapdoor functions, and deterministic encryption.
In this work we show two (incomparable) dimension-preserving reductions from LWE to LWR in the case of a \emph{polynomial-size modulus}. Prior works either required a superpolynomial modulus $q$ or lost at least a factor of $\log q$ in the dimension of the reduction. A direct consequence of our improved reductions is an improvement in parameters (i.e., security and efficiency) for each of the known applications of poly-modulus LWR.
Our results directly generalize to the ring setting. Indeed, our formal analysis is performed over ``module lattices,'' as defined by Langlois and Stehlé (DCC 2015), which generalize both the general lattice setting of LWE and the ideal lattice setting of RLWE as the single notion M-LWE. We hope that taking this broader perspective will lead to further insights of independent interest.
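To make the ``derandomized form'' concrete, here is a minimal Python/NumPy sketch contrasting one LWE sample with its LWR analogue; the parameters are toy values chosen for readability and carry no security.

import numpy as np

rng = np.random.default_rng(0)
n, q, p = 8, 257, 16              # dimension, LWE modulus, LWR rounding modulus

s = rng.integers(0, q, size=n)    # secret vector in Z_q^n
a = rng.integers(0, q, size=n)    # uniformly random public vector

# LWE: perturb the inner product with fresh small noise e.
e = int(rng.integers(-2, 3))      # noise from a bounded distribution
b_lwe = (int(a @ s) + e) % q

# LWR: no noise; deterministically round the inner product from Z_q to Z_p,
# i.e. compute floor((p/q) * <a, s> + 1/2) mod p in exact integer arithmetic.
x = int(a @ s) % q
b_lwr = ((p * x + q // 2) // q) % p

print(b_lwe, b_lwr)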
From Laconic Zero-Knowledge to Public-Key Cryptography
Since its inception, public-key encryption (PKE) has been one of the main cornerstones of cryptography. A central goal in cryptographic research is to understand the foundations of public-key encryption and in particular, base its existence on a natural and generic complexity-theoretic assumption. An intriguing candidate for such an assumption is the existence of a cryptographically hard language in the intersection of NP and SZK.
In this work we prove that public-key encryption can be based on the foregoing assumption, as long as the (honest) prover in the zero-knowledge protocol is efficient and laconic. That is, messages that the prover sends should be efficiently computable (given the NP witness) and short (i.e., of sufficiently sub-logarithmic length). Actually, our result is stronger and only requires the protocol to be zero-knowledge for an honest verifier and sound against computationally bounded cheating provers.
Languages in NP with such laconic zero-knowledge protocols are known from a variety of computational assumptions (e.g., Quadratic Residuosity, Decisional Diffie-Hellman, Learning with Errors, etc.). Thus, our main result can also be viewed as giving a unifying framework for constructing PKE which, in particular, captures many of the assumptions that were already known to yield PKE.
We also show several extensions of our result. First, that a certain weakening of our assumption on laconic zero-knowledge is actually equivalent to PKE, thereby giving a complexity-theoretic characterization of PKE. Second, a mild strengthening of our assumption also yields a (2-message) oblivious transfer protocol.