Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P $\neq$ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
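For concreteness, here is a brief sketch of the standard objects behind these statements (one common formalization among several; the notation is generic and not taken from the survey text above): a distributional problem is a pair $(L,\mathcal{D})$ with $L \in \mathrm{NP}$ and $\mathcal{D} = \{\mathcal{D}_n\}_{n \ge 1}$ an ensemble of input distributions, called \emph{samplable} if some probabilistic polynomial-time algorithm generates $\mathcal{D}_n$ on input $1^n$. One notion of ``easy on average'' uses errorless heuristic schemes: an algorithm $A(x,\delta)$ running in time $\mathrm{poly}(|x|/\delta)$ that outputs either $L(x)$ or the failure symbol $\bot$ and satisfies
\[
\Pr_{x \sim \mathcal{D}_n}\bigl[A(x,\delta) = \bot\bigr] \;\le\; \delta
\qquad \text{for every } n \text{ and every } \delta > 0 .
\]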
On the Commitment Capacity of Unfair Noisy Channels
Noisy channels are a valuable resource from a cryptographic point of view.
They can be used for exchanging secret-keys as well as realizing other
cryptographic primitives such as commitment and oblivious transfer. To be
really useful, noisy channels have to be considered in the scenario where a
cheating party has some degree of control over the channel characteristics.
Damgård et al. (EUROCRYPT 1999) proposed a more realistic model where such a
level of control is permitted to an adversary, the so-called unfair noisy
channels, and proved that they can be used to obtain commitment and oblivious
transfer protocols. Given that noisy channels are a precious resource for
cryptographic purposes, one important question is determining the optimal rate
at which they can be used. The commitment capacity has already been determined
for the cases of discrete memoryless channels and Gaussian channels. In this
work we address the problem of determining the commitment capacity of unfair
noisy channels. We compute a single-letter characterization of the commitment
capacity of unfair noisy channels. In the case where an adversary has no
control over the channel (the fair case), our characterization reduces to the
well-known commitment capacity of a discrete memoryless binary symmetric channel.
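As a reference point for the fair case mentioned above, the known single-letter commitment capacity of a (non-redundant) discrete memoryless channel, due to Winter, Nascimento and Imai, is recalled here as background; this is not the unfair-channel characterization of this work:
\[
C_{\mathrm{com}} \;=\; \max_{P_X} H(X \mid Y),
\qquad \text{so for a binary symmetric channel with crossover probability } p:\quad
C_{\mathrm{com}} \;=\; h(p) \;=\; -p\log_2 p - (1-p)\log_2(1-p).
\]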
Separating Two-Round Secure Computation From Oblivious Transfer
We consider the question of minimizing the round complexity of protocols for secure multiparty computation (MPC) with security against an arbitrary number of semi-honest parties. Very recently, Garg and Srinivasan (Eurocrypt 2018) and Benhamouda and Lin (Eurocrypt 2018) constructed such 2-round MPC protocols from minimal assumptions. This was done by showing a round preserving reduction to the task of secure 2-party computation of the oblivious transfer functionality (OT). These constructions made a novel non-black-box use of the underlying OT protocol. The question remained whether this can be done by only making black-box use of 2-round OT. This is of theoretical and potentially also practical value as black-box use of primitives tends to lead to more efficient constructions.
Our main result proves that such a black-box construction is impossible, namely that non-black-box use of OT is necessary. As a corollary, a similar separation holds when starting with any 2-party functionality other than OT.
As a secondary contribution, we prove several additional results that further clarify the landscape of black-box MPC with minimal interaction. In particular, we complement the separation from 2-party functionalities by presenting a complete 4-party functionality, give evidence for the difficulty of ruling out a complete 3-party functionality and for the difficulty of ruling out black-box constructions of 3-round MPC from 2-round OT, and separate a relaxed "non-compact" variant of 2-party homomorphic secret sharing from 2-round OT.
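For readers unfamiliar with the 2-party task referenced throughout, here is a minimal Python sketch of the ideal 1-out-of-2 oblivious transfer functionality; it is only the functionality being computed (modeled as a trusted party), not any protocol from this work or its separation argument. The names are illustrative.

    from typing import Tuple

    def ot_functionality(sender_input: Tuple[int, int], receiver_choice: int) -> Tuple[None, int]:
        """Ideal 1-out-of-2 OT: the receiver learns m_b and nothing else;
        the sender learns nothing about the choice bit b."""
        m0, m1 = sender_input
        assert receiver_choice in (0, 1)
        return None, (m1 if receiver_choice else m0)

    # Example: a receiver choosing b = 1 learns 42 and nothing about m0 = 7.
    print(ot_functionality((7, 42), 1))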
Fine-Grained Cryptography
Fine-grained cryptographic primitives are ones that are secure against adversaries with an a-priori bounded polynomial amount of resources (time, space or parallel-time), where the honest algorithms use fewer resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to come up with fine-grained primitives (in the setting of parallel-time-bounded adversaries) and to show unconditional security of these constructions when possible, or base security on widely believed separations of worst-case complexity classes. We show:
1. NC¹-cryptography: Under the assumption that NC¹ ⊊ ⊕L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and, most importantly, public-key encryption schemes, all computable in NC¹ and secure against all NC¹ circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially, make non-black-box use of randomized encodings for logspace classes.
2. AC⁰-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and perhaps most interestingly, collision-resistant hash functions, computable in AC⁰ and secure against all AC⁰ circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC⁰ and secure against AC⁰ circuits were known from the works of Håstad and Braverman.
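As a toy illustration of the time-bounded setting cited above (Merkle, CACM 1978), the following Python sketch implements Merkle-puzzle key agreement: honest parties do roughly O(n) work while an eavesdropper must solve about n/2 puzzles, for roughly n^2 work. It is a classical example only, not a construction from this paper, and the "encryption" is purely illustrative.

    import hashlib, os, random

    MAGIC = b"PZ"  # small redundancy so the solver recognizes a correct decryption

    def _pad(secret: int, length: int) -> bytes:
        # Keystream derived from a weak secret (illustrative, not real encryption).
        return hashlib.sha256(str(secret).encode()).digest()[:length]

    def make_puzzles(n: int):
        """Sender's O(n) work: publish n puzzles, each hiding (puzzle_id, session_key)
        behind a weak secret drawn from a space of size n."""
        puzzles, keys = [], {}
        for pid in range(n):
            weak = random.randrange(n)
            key = os.urandom(8)
            msg = MAGIC + pid.to_bytes(4, "big") + key
            puzzles.append(bytes(a ^ b for a, b in zip(msg, _pad(weak, len(msg)))))
            keys[pid] = key
        return puzzles, keys

    def solve_one(puzzles, n: int):
        """Receiver's O(n) work: brute-force the weak secret of one random puzzle.
        An eavesdropper who only sees the announced pid must repeat this ~n/2 times."""
        ct = random.choice(puzzles)
        for guess in range(n):
            msg = bytes(a ^ b for a, b in zip(ct, _pad(guess, len(ct))))
            pid = int.from_bytes(msg[2:6], "big")
            if msg.startswith(MAGIC) and pid < n:
                return pid, msg[6:]
        raise RuntimeError("puzzle not solved")

    puzzles, sender_keys = make_puzzles(512)
    pid, receiver_key = solve_one(puzzles, 512)
    assert sender_keys[pid] == receiver_key  # shared session key after announcing pid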
On Oblivious Amplification of Coin-Tossing Protocols
We consider the problem of amplifying two-party coin-tossing protocols: given a protocol where it is possible to bias the common output by at most $\varepsilon$, we aim to obtain a new protocol where the output can be biased by at most $\varepsilon^* < \varepsilon$. We rule out the existence of a natural type of amplifiers called oblivious amplifiers for every $\varepsilon^* < \varepsilon$. Such amplifiers ignore the way that the underlying $\varepsilon$-bias protocol works and can only invoke an oracle that provides $\varepsilon$-bias bits.
We provide two proofs of this impossibility. The first is by a reduction to the impossibility of deterministic randomness extraction from Santha-Vazirani sources. The second is a direct proof that is more general and also rules out certain types of asymmetric amplification. In addition, it gives yet another proof for the Santha-Vazirani impossibility.
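For reference, the Santha-Vazirani notion invoked in the first proof (standard definitions, not taken from this paper): a sequence of bits $X_1,\dots,X_n$ is a $\delta$-SV source if for every $i$ and every prefix $x_1,\dots,x_{i-1}$,
\[
\tfrac{1}{2}-\delta \;\le\; \Pr\bigl[X_i = 1 \mid X_1 = x_1,\dots,X_{i-1}=x_{i-1}\bigr] \;\le\; \tfrac{1}{2}+\delta .
\]
Santha and Vazirani showed that every deterministic function of such a source can be forced to output a bit whose bias is at least $\delta$; this is the extraction impossibility the reduction targets.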
Classical Cryptographic Protocols in a Quantum World
Cryptographic protocols, such as protocols for secure function evaluation
(SFE), have played a crucial role in the development of modern cryptography.
The extensive theory of these protocols, however, deals almost exclusively with
classical attackers. If we accept that quantum information processing is the
most realistic model of physically feasible computation, then we must ask: what
classical protocols remain secure against quantum attackers?
Our main contribution is showing the existence of classical two-party
protocols for the secure evaluation of any polynomial-time function under
reasonable computational assumptions (for example, it suffices that the
learning with errors problem be hard for quantum polynomial time). Our result
shows that the basic two-party feasibility picture from classical cryptography
remains unchanged in a quantum world.
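For reference, the learning with errors (LWE) assumption mentioned above, in its standard search form with generic parameters (the concrete parameters used in the paper are not reproduced here): given
\[
\bigl(A,\; b = A s + e \bmod q\bigr), \qquad A \leftarrow \mathbb{Z}_q^{m \times n},\quad s \leftarrow \mathbb{Z}_q^{n},\quad e \leftarrow \chi^{m},
\]
where $\chi$ is a narrow error distribution (e.g. a discrete Gaussian), recover $s$; the assumption is that no quantum polynomial-time algorithm succeeds with noticeable probability.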
Naturally Rehearsing Passwords
We introduce quantitative usability and security models to guide the design
of password management schemes --- systematic strategies to help users create
and remember multiple passwords. In the same way that security proofs in
cryptography are based on complexity-theoretic assumptions (e.g., hardness of
factoring and discrete logarithm), we quantify usability by introducing
usability assumptions. In particular, password management relies on assumptions
about human memory, e.g., that a user who follows a particular rehearsal
schedule will successfully maintain the corresponding memory. These assumptions
are informed by research in cognitive science and validated through empirical
studies. Given rehearsal requirements and a user's visitation schedule for each
account, we use the total number of extra rehearsals that the user would have
to do to remember all of his passwords as a measure of the usability of the
password scheme. Our usability model leads us to a key observation: password
reuse benefits users not only by reducing the number of passwords that the user
has to memorize, but more importantly by increasing the natural rehearsal rate
for each password. We also present a security model which accounts for the
complexity of password management with multiple accounts and associated
threats, including online, offline, and plaintext password leak attacks.
Observing that current password management schemes are either insecure or
unusable, we present Shared Cues --- a new scheme in which the underlying secret
is strategically shared across accounts to ensure that most rehearsal
requirements are satisfied naturally while simultaneously providing strong
security. The construction uses the Chinese Remainder Theorem to achieve these
competing goals.
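Since the construction is said to rest on the Chinese Remainder Theorem, here is a minimal Python sketch of CRT reconstruction as background only; it is the mathematical ingredient, not the Shared Cues scheme itself, and the toy numbers are illustrative.

    from math import prod

    def crt(residues, moduli):
        """Recover x mod prod(moduli) from the residues x mod m_i,
        for pairwise coprime moduli m_i."""
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x = (x + r * Mi * pow(Mi, -1, m)) % M  # pow(Mi, -1, m): modular inverse (Python 3.8+)
        return x

    # Toy example: a secret below 5*7*11 = 385 is recoverable from its residues mod 5, 7 and 11.
    secret = 42
    moduli = [5, 7, 11]
    shares = [secret % m for m in moduli]
    assert crt(shares, moduli) == secret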
Commitment and Oblivious Transfer in the Bounded Storage Model with Errors
The bounded storage model restricts the memory of an adversary in a
cryptographic protocol, rather than restricting its computational power, making
information-theoretically secure protocols feasible. We present the first
protocols for commitment and oblivious transfer in the bounded storage model
with errors, i.e., the model where the public random sources available to the
two parties are not exactly the same, but instead are only required to have a
small Hamming distance between themselves. Commitment and oblivious transfer
protocols were known previously only for the error-free variant of the bounded
storage model, which is harder to realize.
Sum of squares lower bounds for refuting any CSP
Let $P:\{0,1\}^k \to \{0,1\}$ be a nontrivial $k$-ary predicate. Consider a
random instance of the constraint satisfaction problem $\mathrm{CSP}(P)$ on $n$
variables with $\Delta n$ constraints, each being $P$ applied to $k$ randomly
chosen literals. Provided the constraint density satisfies $\Delta \gg 1$, such
an instance is unsatisfiable with high probability. The \emph{refutation}
problem is to efficiently find a proof of unsatisfiability.
We show that whenever the predicate $P$ supports a $t$-\emph{wise uniform}
probability distribution on its satisfying assignments, the sum of squares
(SOS) algorithm of degree $d = \widetilde{\Theta}\bigl(n/\Delta^{2/(t-1)}\bigr)$
(which runs in time $n^{O(d)}$) \emph{cannot} refute a random instance of
$\mathrm{CSP}(P)$. In particular, the polynomial-time SOS algorithm requires
$\widetilde{\Omega}\bigl(n^{(t+1)/2}\bigr)$ constraints to refute random instances of
$\mathrm{CSP}(P)$ when $P$ supports a $t$-wise uniform distribution on its satisfying
assignments. Together with recent work of Lee et al. [LRS15], our result also
implies that \emph{any} polynomial-size semidefinite programming relaxation for
refutation requires at least $\widetilde{\Omega}\bigl(n^{(t+1)/2}\bigr)$ constraints.
Our results (which also extend with no change to CSPs over larger alphabets)
subsume all previously known lower bounds for semialgebraic refutation of
random CSPs. For every constraint predicate~$P$, they give a three-way hardness
tradeoff between the density of constraints, the SOS degree (hence running
time), and the strength of the refutation. By recent algorithmic results of
Allen et al. [AOW15] and Raghavendra et al. [RRS16], this full three-way
tradeoff is \emph{tight}, up to lower-order factors.
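A standard instantiation, for orientation (this worked example is not verbatim from the abstract): the uniform distribution over satisfying assignments of $k$-XOR is $(k-1)$-wise uniform, so taking $t = k-1$ the polynomial-time bound reads
\[
\widetilde{\Omega}\bigl(n^{(t+1)/2}\bigr) \;=\; \widetilde{\Omega}\bigl(n^{k/2}\bigr)
\]
constraints for refuting random $k$-XOR, matching the refutation algorithms of Allen et al. [AOW15] and Raghavendra et al. [RRS16] up to lower-order factors.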