On Basing Search SIVP on NP-Hardness
The possibility of basing cryptography on the minimal assumption NP ⊄ BPP is at the very heart of complexity-theoretic cryptography. The closest we have gotten so far is lattice-based cryptography, whose average-case security is based on the worst-case hardness of approximate shortest vector problems on integer lattices. The state-of-the-art is the construction of a one-way function (and collision-resistant hash function) based on the hardness of the γ-approximate shortest independent vector problem SIVP_γ.
Although SIVP is NP-hard in its exact version, Guruswami et al. (CCC 2004) showed that its approximation version SIVP_γ is in NP ∩ coAM and thus unlikely to be NP-hard. Indeed, any language that can be reduced to SIVP_γ (under general probabilistic polynomial-time adaptive reductions) is in AM ∩ coAM by the results of Peikert and Vaikuntanathan (CRYPTO 2008) and Mahmoody and Xiao (CCC 2010). However, none of these results apply to reductions to search problems, still leaving open a ray of hope: can NP be reduced to solving search SIVP with some approximation factor γ?
We eliminate this possibility by showing that any language that can be reduced to solving search SIVP with any approximation factor γ lies in AM ∩ coAM. As a side product, we show that any language that can be reduced to discrete Gaussian sampling with parameter s lies in AM ∩ coAM.
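To make the object of the question concrete: search SIVP asks, given a lattice basis, for n linearly independent lattice vectors that are all short. The toy Python sketch below brute-forces such a set in a 2-dimensional integer lattice; the basis and the search range are arbitrary choices for illustration, not anything from the paper:

```python
# Toy illustration (not an attack): the search-SIVP objective on a small
# 2D integer lattice. SIVP asks for n linearly independent lattice vectors,
# all as short as possible. Basis and enumeration range are illustrative.
import itertools
import math

b1, b2 = (7, 2), (3, 9)  # basis vectors of the lattice

# Enumerate small integer combinations x*b1 + y*b2 and keep nonzero vectors.
cands = []
for x, y in itertools.product(range(-5, 6), repeat=2):
    v = (x * b1[0] + y * b2[0], x * b1[1] + y * b2[1])
    if v != (0, 0):
        cands.append(v)
cands.sort(key=lambda v: math.hypot(*v))

# Greedily pick the shortest vectors that stay linearly independent.
sol = [cands[0]]
for v in cands[1:]:
    # 2D independence test: nonzero determinant with the vector chosen so far
    if sol[0][0] * v[1] - sol[0][1] * v[0] != 0:
        sol.append(v)
        break

lengths = [math.hypot(*v) for v in sol]
print(sol, lengths)
```

In dimension 2 such brute force is trivial; the hardness relevant to cryptography appears only as the dimension grows.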
Average-Case Hardness of NP and PH from Worst-Case Fine-Grained Assumptions
What is a minimal worst-case complexity assumption that implies non-trivial average-case hardness of NP or PH? This question is well motivated by the theory of fine-grained average-case complexity and fine-grained cryptography. In this paper, we show that several standard worst-case complexity assumptions are sufficient to imply non-trivial average-case hardness of NP or PH:
- NTIME[n] cannot be solved in quasi-linear time on average if UP ⊄ DTIME[2^{Õ(√n)}].
- Σ_2TIME[n] cannot be solved in quasi-linear time on average if Σ_kSAT cannot be solved in time 2^{Õ(√n)} for some constant k. Previously, it was not known whether even the average-case hardness of Σ_3SAT implies the average-case hardness of Σ_2TIME[n].
- Under the Exponential-Time Hypothesis (ETH), there is no average-case n^{1+ε}-time algorithm for NTIME[n] whose running time can be estimated in time n^{1+ε} for some constant ε > 0.
Our results are given by generalizing the non-black-box worst-case-to-average-case connections presented by Hirahara (STOC 2021) to the setting of fine-grained complexity. To do so, we construct quite efficient complexity-theoretic pseudorandom generators under the assumption that nondeterministic linear time is easy on average, which may be of independent interest.
Asymptotically Efficient Lattice-Based Digital Signatures
We present a general framework that converts certain types of linear collision-resistant hash
functions into one-time signatures. Our generic construction can be instantiated based on both
general and ideal (e.g. cyclic) lattices, and the resulting signature schemes are provably secure
based on the worst-case hardness of approximating the shortest vector (and other standard
lattice problems) in the corresponding class of lattices to within a polynomial factor. When
instantiated with ideal lattices, the time complexity of the signing and verification algorithms,
as well as the key and signature sizes, is almost linear (up to poly-logarithmic factors) in the dimension
n of the underlying lattice. Since no sub-exponential (in n) time algorithm is known to solve
lattice problems in the worst case, even when restricted to ideal lattices, our construction gives
a digital signature scheme with an essentially optimal performance/security trade-off.
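The paper's construction is specific to linear, lattice-based collision-resistant hash functions, but the general pattern of turning a hash function into a one-time signature can be illustrated with a classic Lamport-style sketch. This is a generic hash-based scheme for illustration only, not the paper's lattice construction:

```python
# Minimal Lamport one-time signature over SHA-256 (illustrative only; this is
# the classic hash-based OTS pattern, NOT the lattice construction in the paper).
import hashlib
import os

H = lambda b: hashlib.sha256(b).digest()
MSG_BITS = 256  # we sign the SHA-256 digest of the message

def keygen():
    # Secret key: one random preimage pair per message bit; public key: their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(MSG_BITS)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(MSG_BITS)]

def sign(sk, msg):
    # Reveal one preimage per bit position, selected by the bit's value.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"hello lattice")
print(verify(pk, b"hello lattice", sig))  # True
```

Each key pair must be used for only one message: revealing signatures on two different messages leaks enough preimages to forge, which is exactly why such schemes are "one-time".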
Information- and Coding-Theoretic Analysis of the RLWE Channel
Several cryptosystems based on the \emph{Ring Learning with Errors} (RLWE)
problem have been proposed within the NIST post-quantum cryptography
standardization process, e.g. NewHope. Furthermore, there are systems like
Kyber which are based on the closely related MLWE assumption. Both previously
mentioned schemes feature a non-zero decryption failure rate (DFR). The
combination of encryption and decryption for these kinds of algorithms can be
interpreted as data transmission over noisy channels. To the best of our
knowledge, this paper is the first work that analyzes the capacity of this
channel. We show how to modify the encryption schemes such that the input
alphabets of the corresponding channels are increased. In particular, we
present lower bounds on their capacities which show that the transmission rate
can be significantly increased compared to standard proposals in the
literature. Furthermore, under the common assumption of stochastically
independent coefficient failures, we give lower bounds on achievable rates
based on both the Gilbert-Varshamov bound and concrete code constructions using
BCH codes. By means of our constructions, we can either increase the total
bitrate for both Kyber and NewHope while guaranteeing the same DFR, or, for
the same bitrate, significantly reduce the DFR for all schemes considered in
this work (e.g., for NewHope).
Investigating Lattice-Based Cryptography
Cryptography is important for data confidentiality, integrity, and authentication. Public-key cryptosystems allow for the encryption and decryption of data using two different keys, one public and one private. This is beneficial because there is no need to securely distribute a secret key. However, the development of quantum computers implies that many public-key cryptosystems whose security depends on the hardness of certain mathematical problems will no longer be secure. It is therefore important to develop systems based on problems that remain hard even for quantum computers.
In this project, two public-key cryptosystems that are candidates for quantum resistance were implemented using Rust. The security of the McEliece system is based on the hardness of decoding a linear code, which is an NP-hard problem, and the security of the Regev system is based on the Learning with Errors problem, which is as hard as several worst-case lattice problems [1], [2]. Tests were run to verify the correctness of the implemented systems, and experiments were run to analyze the cost of replacing pre-quantum systems with post-quantum ones.
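For context, the Regev system mentioned above can be sketched in a few lines. The Python below is an independent textbook-style illustration, not the project's Rust implementation, and the parameters are toy values chosen so the example runs instantly; they are far too small to be secure:

```python
# Textbook single-bit Regev encryption sketch (toy, insecure parameters).
# Encrypts one bit by adding bit*(q//2) to a noisy subset-sum of LWE samples.
import random

q, n, m = 3329, 16, 64  # modulus, secret dimension, number of samples (toy)

def keygen():
    s = [random.randrange(q) for _ in range(n)]                 # secret
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]           # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    S = random.sample(range(m), m // 2)                         # random subset
    u = [sum(A[i][j] for i in S) % q for j in range(n)]
    v = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - sum(u[j] * s[j] for j in range(n))) % q
    # d is (accumulated noise) or (q//2 + accumulated noise): round to 0 or 1.
    return 1 if q // 4 < d < 3 * q // 4 else 0

s, pk = keygen()
print(decrypt(s, encrypt(pk, 1)))  # 1
```

With noise in {-1, 0, 1} and only m//2 = 32 samples summed, the accumulated noise is at most 32, far below q//4, so decryption here is always correct; real parameter sets accept a tiny failure probability instead.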
Worst-Case to Average-Case Reductions for the SIS Problem: Tightness and Security
We present a framework for evaluating the concrete security assurances of cryptographic constructions given by the worst-case SIVP_γ to average-case SIS_{n,m,q,β} reductions. As part of this analysis, we present the tightness gaps for three worst-case SIVP_γ to average-case SIS_{n,m,q,β} reductions. We also analyze the hardness of worst-case SIVP_γ instances.
We apply our methodology to two SIS-based signature schemes and compute the security guarantees that these systems obtain from reductions to worst-case SIVP_γ. We find that most of the presented reductions do not apply to the chosen parameter sets for the signature schemes. We propose modifications to the schemes to make the reductions applicable, and find that the worst-case security assurances of both (modified) signature schemes are significantly lower than the amount of security previously claimed.
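The effect of a tightness gap on concrete security is simple arithmetic: if a reduction loses a multiplicative factor L, then a 2^T-time attack on the scheme only yields a roughly 2^T · L-time SIVP algorithm, so the worst-case guarantee shrinks by log2(L) bits. The numbers below are illustrative magnitudes, not values computed in the paper:

```python
# Back-of-the-envelope tightness-gap accounting (illustrative numbers only):
# a reduction losing factor L costs log2(L) bits of the worst-case guarantee.
from math import log2

sivp_hardness_bits = 128   # assumed worst-case SIVP cost in bits (example)
tightness_gap = 2 ** 70    # multiplicative reduction loss L (example)

guaranteed_bits = sivp_hardness_bits - log2(tightness_gap)
print(guaranteed_bits)  # 58.0
```

This is why a reduction can be asymptotically polynomial yet, as the abstract finds, give far less concrete assurance than the security level claimed for the parameter set.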
Concrete Analysis of Approximate Ideal-SIVP to Decision Ring-LWE Reduction
A seminal 2013 paper by Lyubashevsky, Peikert, and Regev proposed basing post-quantum cryptography on ideal lattices and supported this proposal by giving
a polynomial-time security reduction from the approximate Shortest Independent Vectors Problem (SIVP) to the Decision Learning With Errors (DLWE)
problem in ideal lattices. We
give a concrete analysis of this multi-step reduction. We find that the tightness gap in the reduction is so great as to vitiate any meaningful security guarantee,
and we find reasons to doubt the feasibility in the foreseeable future of the quantum part of the reduction.
In addition, when we make the reduction concrete it appears that the approximation factor in the SIVP problem is far larger than expected, a circumstance that causes
the corresponding approximate-SIVP problem most likely not to be hard for proposed cryptosystem parameters. We also discuss implications for systems such as
Kyber and SABER that are based on module-DLWE.