Some Facets of Complexity Theory and Cryptography: A Five-Lectures Tutorial
In this tutorial, selected topics of cryptology and of computational
complexity theory are presented. We give a brief overview of the history and
the foundations of classical cryptography, and then move on to modern
public-key cryptography. Particular attention is paid to cryptographic
protocols and the problem of constructing the key components of such protocols
such as one-way functions. A function is one-way if it is easy to compute, but
hard to invert. We discuss the notion of one-way functions both in a
cryptographic and in a complexity-theoretic setting. We also consider
interactive proof systems and present some interesting zero-knowledge
protocols. In a zero-knowledge protocol one party can convince the other party
of knowing some secret information without disclosing any bit of this
information. Motivated by these protocols, we survey some complexity-theoretic
results on interactive proof systems and related complexity classes.
Comment: 57 pages, 17 figures, Lecture Notes for the 11th Jyvaskyla Summer School.
Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P != NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
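The easy-on-average notion can be illustrated with a classic toy example (my illustration, not one from the survey): 3-colorability is NP-hard in the worst case, but on dense random graphs G(n, 1/2) a fast scan for a K4 subgraph almost always certifies non-3-colorability, so the exponential fallback is almost never reached and the expected running time under this distribution is polynomial.

```python
# Classic easy-on-average illustration: deciding 3-colorability of G(n, 1/2).
# A K4 subgraph certifies "not 3-colorable" and appears with overwhelming
# probability in a dense random graph.
import itertools
import random

def contains_k4(n, edge):
    # O(n^4) scan for a complete subgraph on 4 vertices
    for quad in itertools.combinations(range(n), 4):
        if all(edge[a][b] for a, b in itertools.combinations(quad, 2)):
            return True
    return False

def three_colorable(n, edge, colors=None, v=0):
    # worst-case exponential backtracking
    if colors is None:
        colors = [-1] * n
    if v == n:
        return True
    for c in range(3):
        if all(not edge[v][u] or colors[u] != c for u in range(v)):
            colors[v] = c
            if three_colorable(n, edge, colors, v + 1):
                return True
    colors[v] = -1
    return False

def decide(n, edge):
    if contains_k4(n, edge):         # fast path, taken almost surely
        return False
    return three_colorable(n, edge)  # rare slow path

rng = random.Random(0)
n = 20
edge = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i):
        edge[i][j] = edge[j][i] = rng.random() < 0.5
print(decide(n, edge))  # almost surely False for a dense random graph
```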
Computational Indistinguishability between Quantum States and Its Cryptographic Application
We introduce a computational problem of distinguishing between two specific
quantum states as a new cryptographic problem to design a quantum cryptographic
scheme that is "secure" against any polynomial-time quantum adversary. Our
problem, QSCDff, is to distinguish between two types of random coset states
with a hidden permutation over the symmetric group of finite degree. This
naturally generalizes the commonly-used distinction problem between two
probability distributions in computational cryptography. As our major
contribution, we show that QSCDff has three properties of cryptographic
interest: (i) QSCDff has a trapdoor; (ii) the average-case hardness of QSCDff
coincides with its worst-case hardness; and (iii) QSCDff is computationally at
least as hard as the graph automorphism problem in the worst case. These
cryptographic properties enable us to construct a quantum public-key
cryptosystem, which is likely to withstand any chosen plaintext attack of a
polynomial-time quantum adversary. We further discuss a generalization of
QSCDff, called QSCDcyc, and introduce a multi-bit encryption scheme that relies
on similar cryptographic properties of QSCDcyc.
Comment: 24 pages, 2 figures. We improved the presentation and added more detailed proofs and follow-up on recent work.
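For intuition about the classical distinction problem that QSCDff generalizes, here is a toy sketch (the two distributions and all parameters are invented for illustration, not taken from the paper): a distinguisher's advantage is the gap between its acceptance probabilities on samples from the two distributions, and "computationally indistinguishable" means every efficient distinguisher has negligible advantage.

```python
# Toy classical distinguishing experiment: estimate a distinguisher's
# advantage |Pr[D(X) = 1] - Pr[D(Y) = 1]| between two sampleable
# distributions (both invented for this demo).
import random

def sample_x(rng):
    return rng.getrandbits(8)          # uniform 8-bit values

def sample_y(rng):
    v = rng.getrandbits(8)
    return v if rng.random() < 0.9 else v & ~1   # slight bias toward even

def advantage(dist, trials=20_000, seed=1):
    rng = random.Random(seed)
    hits_x = sum(dist(sample_x(rng)) for _ in range(trials))
    hits_y = sum(dist(sample_y(rng)) for _ in range(trials))
    return abs(hits_x - hits_y) / trials

adv = advantage(lambda v: v & 1)       # parity distinguisher exploits the bias
print(round(adv, 3))                   # about 0.05 for this toy pair
```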
On the hardness of the shortest vector problem
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998. Includes bibliographical references (p. 77-84).
An n-dimensional lattice is the set of all integral linear combinations of n linearly independent vectors in R^m. One of the most studied algorithmic problems on lattices is the shortest vector problem (SVP): given a lattice, find the shortest non-zero vector in it. We prove that the shortest vector problem is NP-hard (for randomized reductions) to approximate within some constant factor greater than 1 in any l_p norm (p >= 1). In particular, we prove the NP-hardness of approximating SVP in the Euclidean norm l_2 within any factor less than sqrt(2). The same NP-hardness results hold for deterministic non-uniform reductions. A deterministic uniform reduction is also given under a reasonable number-theoretic conjecture concerning the distribution of smooth numbers.
In proving the NP-hardness of SVP we develop a number of technical tools that might be of independent interest. In particular, a lattice packing is constructed with the property that the number of unit spheres contained in an n-dimensional ball of radius greater than 1 + sqrt(2) grows exponentially in n, and a new constructive version of Sauer's lemma (a combinatorial result somehow related to the notion of VC-dimension) is presented, considerably simplifying all previously known constructions.
by Daniele Micciancio. Ph.D.
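In very small dimension, the SVP instance described above can simply be solved by exhaustive search over bounded integer combinations of the basis. The sketch below (the basis and coefficient bound are arbitrary demonstration choices) makes the problem statement concrete; the thesis shows that in high dimension even approximate solutions are NP-hard to find.

```python
# Brute-force shortest nonzero lattice vector in the Euclidean norm --
# viable only in tiny dimension, consistent with the hardness results above.
import itertools

def shortest_vector(basis, bound=5):
    """Search integer coefficient vectors in [-bound, bound]^n (heuristic bound)."""
    n, m = len(basis), len(basis[0])
    best, best_norm2 = None, None
    for coeffs in itertools.product(range(-bound, bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue                 # the zero vector is excluded by definition
        v = [sum(c * basis[i][j] for i, c in enumerate(coeffs)) for j in range(m)]
        norm2 = sum(x * x for x in v)
        if best_norm2 is None or norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2

v, n2 = shortest_vector([[3, 5], [7, 2]])
print(v, n2)  # a lattice vector shorter than either basis vector
```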
Cryptography based on the Hardness of Decoding
This thesis provides progress in the fields of lattice-based and coding-based cryptography. The first contribution consists of constructions of IND-CCA2-secure public-key cryptosystems from both the McEliece assumption and the low-noise learning parity with noise (LPN) assumption. The second contribution is a novel instantiation of the lattice-based learning with errors (LWE) problem which uses uniform errors.
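The learning parity with noise assumption mentioned above can be made concrete with a toy sample generator (dimensions and noise rate below are illustrative assumptions, not the thesis's parameters): each sample pairs a uniform vector a with the noisy inner product b = <a, s> + e (mod 2), where the error bit e is 1 with probability tau.

```python
# Toy LPN sample generator. Without the noise, Gaussian elimination over
# GF(2) would recover the secret s from enough samples; the noise is what
# makes the problem (conjecturally) hard.
import random

def lpn_samples(s, num, tau, rng):
    samples = []
    for _ in range(num):
        a = [rng.randrange(2) for _ in s]             # uniform vector
        e = 1 if rng.random() < tau else 0            # Bernoulli(tau) error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
        samples.append((a, b))
    return samples

rng = random.Random(42)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
for a, b in lpn_samples(secret, 5, 0.125, rng):
    print(a, b)
```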
A survey of the mathematics of cryptology
Herein I cover the basics of cryptology and the mathematical techniques used in the field. Aside from an overview of cryptology, the text provides an in-depth look at block cipher algorithms and the techniques of cryptanalysis applied to block ciphers. The text also includes details of knapsack cryptosystems and pseudo-random number generators.
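As a taste of the knapsack cryptosystems the survey covers, here is a compact sketch of Merkle-Hellman-style encryption, using the standard textbook demonstration parameters. The scheme has long been broken (Shamir's attack) and is shown only to illustrate the mechanism.

```python
# Merkle-Hellman knapsack sketch: a superincreasing private sequence is
# disguised by modular multiplication; ciphertexts are subset sums of the
# disguised (public) sequence.
def keygen():
    w = [2, 7, 11, 21, 42, 89, 180, 354]   # superincreasing private sequence
    q, r = 881, 588                         # q > sum(w), gcd(r, q) = 1
    beta = [(r * wi) % q for wi in w]       # public key: disguised sequence
    return w, q, r, beta

def encrypt(bits, beta):
    # ciphertext is a subset sum of the public sequence
    return sum(b * bi for b, bi in zip(bits, beta))

def decrypt(c, w, q, r):
    c2 = (c * pow(r, -1, q)) % q            # undo the modular disguise
    bits = []
    for wi in reversed(w):                  # greedy solve, valid because w
        if wi <= c2:                        # is superincreasing
            bits.append(1)
            c2 -= wi
        else:
            bits.append(0)
    return bits[::-1]

w, q, r, beta = keygen()
msg = [1, 0, 1, 1, 0, 0, 1, 1]
print(decrypt(encrypt(msg, beta), w, q, r) == msg)  # True
```

The greedy decoding step is what the trapdoor buys: subset sum is hard in general, but trivial for a superincreasing sequence.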
The Hunting of the SNARK
The existence of succinct non-interactive arguments for NP (i.e.,
non-interactive computationally-sound proofs where the verifier's
work is essentially independent of the complexity of the NP
nondeterministic verifier) has been an intriguing question for the
past two decades. Other than CS proofs in the random oracle model
[Micali, FOCS '94], the only existing candidate construction is
based on an elaborate assumption that is tailored to a specific
protocol [Di Crescenzo and Lipmaa, CiE '08].
We formulate a general and relatively natural notion of an
\emph{extractable collision-resistant hash function (ECRH)} and show
that, if ECRHs exist, then a modified version of Di Crescenzo and
Lipmaa's protocol is a succinct non-interactive argument for
NP. Furthermore, the modified protocol is actually a succinct
non-interactive \emph{adaptive argument of knowledge (SNARK).} We
then propose several candidate constructions for ECRHs and
relaxations thereof.
We demonstrate the applicability of SNARKs to various forms of delegation of computation, to succinct non-interactive zero knowledge arguments, and to succinct two-party secure computation. Finally, we show that SNARKs essentially imply the existence of ECRHs, thus demonstrating the necessity of the assumption.
Going beyond ECRHs, we formulate the notion of \emph{extractable
one-way functions (EOWFs)}. Assuming the existence of a natural
variant of EOWFs, we construct a -message
selective-opening-attack secure commitment scheme and a 3-round
zero-knowledge argument of knowledge. Furthermore, if the EOWFs are
concurrently extractable, the 3-round zero-knowledge protocol is also
concurrent zero-knowledge.
Our constructions circumvent previous black-box impossibility
results regarding these protocols by relying on EOWFs as the non-black-box component in the security reductions.
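As loose intuition for the "succinct" in SNARK (a side illustration, not a construction from the abstract above): a Merkle-tree opening already lets a verifier check one committed leaf while touching only O(log n) hashes, however large the committed data is. A SNARK goes much further, compressing an entire NP verification into one short, non-interactive argument.

```python
# Merkle commitment with logarithmic-size openings (leaf count assumed a
# power of two for simplicity).
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def open_leaf(leaves, idx):
    level = [h(leaf) for leaf in leaves]
    path = []                               # sibling hashes plus positions
    while len(level) > 1:
        path.append((level[idx ^ 1], idx & 1))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(root, leaf, path):
    node = h(leaf)
    for sib, node_is_right in path:
        node = h(sib + node) if node_is_right else h(node + sib)
    return node == root

leaves = [bytes([i]) for i in range(8)]
root = merkle_root(leaves)
print(verify(root, leaves[3], open_leaf(leaves, 3)))  # True
```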