9 research outputs found
Tight Time-Memory Trade-offs for Symmetric Encryption
Concrete security proofs give upper bounds on the attacker's advantage as a function of its time/query complexity. Cryptanalysis suggests, however, that other resource limitations - most notably, the attacker's memory - could make the achievable advantage smaller, and thus these proven bounds too pessimistic. Yet, handling memory limitations has eluded existing security proofs.
This paper initiates the study of time-memory trade-offs for basic symmetric cryptography. We show that schemes like counter-mode encryption, which are affected by the Birthday Bound, become more secure (in terms of time complexity) as the attacker's memory is reduced.
One key step of this work is a generalization of the Switching Lemma: For adversaries with S bits of memory issuing q distinct queries, we prove an n-to-n bit random function indistinguishable from a random permutation as long as q · S ≪ 2^n. This result assumes a combinatorial conjecture, which we discuss, and it immediately implies trade-offs for deterministic, stateful versions of CTR and OFB encryption.
We also show an unconditional time-memory trade-off for the security of randomized CTR based on a secure PRF. Via the aforementioned conjecture, we extend the result to the case where a PRP is assumed instead, provided only one-block messages are encrypted.
Our results solely rely on standard PRF/PRP security of an underlying block cipher. We frame the core of our proofs within a general framework of indistinguishability for streaming algorithms, which may be of independent interest.
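To make the function-vs-permutation distinction concrete, here is a minimal Python sketch (an illustration only, not the paper's proof technique) of the classic collision-based distinguisher underlying the Switching Lemma: a random function's outputs collide around the birthday bound, while a permutation's outputs on distinct inputs never do. All function names and parameters here are assumptions of this sketch.

```python
import random

def sample_outputs(n_bits, q, permutation):
    """Outputs of q distinct-input queries to a random function or random permutation."""
    domain = 2 ** n_bits
    if permutation:
        return random.sample(range(domain), q)           # distinct inputs -> distinct outputs
    return [random.randrange(domain) for _ in range(q)]  # independent outputs may collide

def collision_distinguisher(outputs):
    """Guess "random function" iff any output repeats.

    Note the memory cost: storing all q outputs is exactly what a
    memory-bounded adversary cannot afford, which is why small-memory
    attackers achieve lower advantage."""
    return len(set(outputs)) < len(outputs)
```

With n = 16 and q = 2048 queries (well past the birthday bound of 2^8), the random-function side collides with overwhelming probability, while the permutation side never does.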
An Information-Theoretic Proof of the Streaming Switching Lemma for Symmetric Encryption
Motivated by a fundamental paradigm in cryptography, we consider a recent variant of the classic problem of bounding the distinguishing advantage between a random function and a random permutation. Specifically, we consider the problem of deciding whether a sequence of q values was sampled uniformly with or without replacement from [N], where the decision is made by a streaming algorithm restricted to using at most s bits of internal memory. In this work, the distinguishing advantage of such an algorithm is measured by the KL divergence between the distributions of its output as induced under the two cases. We show that for any q ≤ N the distinguishing advantage is upper bounded by O(q · s / N), and even by O(q · s / (N log N)) when q ≤ N^(1−ε) for any constant ε > 0, where it is nearly tight with respect to the KL divergence.
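The streaming restriction can be illustrated with a small Python sketch (my own illustration, not the paper's algorithm): a one-pass distinguisher that remembers only the first m values it sees and reports whether a later value repeats one of them. With-replacement sampling eventually hits the remembered set; without-replacement sampling never does. The names and parameters are assumptions of this sketch.

```python
import random

def stream_values(N, q, with_replacement):
    """A stream of q values from [N], sampled with or without replacement."""
    if with_replacement:
        return (random.randrange(N) for _ in range(q))
    return iter(random.sample(range(N), q))

def low_memory_repeat_detector(stream, m):
    """One-pass distinguisher storing ~m values instead of all q:
    remember the first m values seen, and report whether any later
    value collides with that remembered set."""
    remembered = set()
    hit = False
    for v in stream:
        if v in remembered:
            hit = True
        elif len(remembered) < m:
            remembered.add(v)
    return hit
```

Shrinking m lowers the collision-detection probability, which is the qualitative trade-off the KL-divergence bound quantifies.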
Memory-Sample Lower Bounds for Learning Parity with Noise
In this work, we show, for the well-studied problem of learning parity under noise, where a learner tries to learn x = (x_1, …, x_n) ∈ {0,1}^n from a stream of random linear equations over F_2 that are correct with probability 1/2 + ε and flipped with probability 1/2 − ε, that any learning algorithm requires either a memory of size Ω(n²/ε) or an exponential number of samples.
In fact, we study memory-sample lower bounds for a large class of learning problems, as characterized by [GRT'18], when the samples are noisy. A matrix M : A × X → {−1, 1} corresponds to the following learning problem with error parameter ε: an unknown element x ∈ X is chosen uniformly at random. A learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∈ A is chosen uniformly at random, and b_i = M(a_i, x) with probability 1/2 + ε and b_i = −M(a_i, x) with probability 1/2 − ε (0 ≤ ε < 1/2). Assume that k, ℓ, r are such that any submatrix of M of at least 2^(−k) · |A| rows and at least 2^(−ℓ) · |X| columns has a bias of at most 2^(−r). We show that any learning algorithm for the learning problem corresponding to M, with error ε, requires either a memory of size at least Ω(kℓ/ε), or at least 2^(Ω(r)) samples. In particular, this shows that for a large class of learning problems, the same as those in [GRT'18], any learning algorithm requires either a memory of size at least Ω((log|X|) · (log|A|) / ε) or an exponential number of noisy samples.
Our proof is based on adapting the arguments in [Raz'17, GRT'18] to the noisy case. (19 pages; to appear in RANDOM 2021.)
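To make the setup concrete, here is a small Python sketch (an illustration, not from the paper) of the noisy-parity sample stream, paired with a deliberately memory-hungry baseline learner. `lpn_stream`, `majority_vote_learner`, and all parameters are assumptions of this sketch; the baseline keeps per-coordinate counters, the kind of state the paper's lower bound says cannot be avoided without exponentially many samples.

```python
import random

def lpn_stream(x, q, eps):
    """Yield q noisy parity samples (a, b): b = <a, x> mod 2, flipped w.p. 1/2 - eps."""
    n = len(x)
    for _ in range(q):
        a = [random.randrange(2) for _ in range(n)]
        b = sum(ai * xi for ai, xi in zip(a, x)) % 2
        if random.random() < 0.5 - eps:
            b ^= 1                      # noise: flip the label
        yield a, b

def majority_vote_learner(samples, n):
    """Memory-hungry baseline: keep a pair of counters per coordinate and
    recover x_i by majority vote over the unit-vector equations a = e_i."""
    votes = [[0, 0] for _ in range(n)]
    for a, b in samples:
        if sum(a) == 1:                 # equation isolating a single coordinate
            votes[a.index(1)][b] += 1
    return [int(ones > zeros) for zeros, ones in votes]
```

For tiny n this recovers x reliably; the point of the lower bound is that no algorithm can do much better than such counter-keeping without paying in samples.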
The Memory-Tightness of Authenticated Encryption
This paper initiates the study of the provable security of authenticated encryption (AE) in the memory-bounded setting. Recent works – Tessaro and Thiruvengadam (TCC '18), Jaeger and Tessaro (EUROCRYPT '19), and Dinur (EUROCRYPT '20) – focus on confidentiality, and look at schemes for which trade-offs between the attacker's memory and its data complexity are inherent. Here, we ask whether these results and techniques can be lifted to the full AE setting, which additionally asks for integrity.
We show both positive and negative results. On the positive side, we provide tight memory-sensitive bounds for the security of GCM and its generalization, CAU (Bellare and Tackmann, CRYPTO '16). Our bounds apply to a restricted case of AE security which abstracts the deployment within protocols like TLS, and rely on a new memory-tight reduction to corresponding restricted notions of confidentiality and integrity. In particular, our reduction uses an amount of memory which linearly depends on that of the given adversary, as opposed to only imposing a constant memory overhead as in earlier works (Auerbach et al., CRYPTO '17).
On the negative side, we show that a large class of black-box reductions cannot generically lift confidentiality and integrity security to a joint definition of AE security in a memory-tight way.
Verifiable Capacity-bound Functions: A New Primitive from Kolmogorov Complexity (Revisiting space-based security in the adaptive setting)
We initiate the study of verifiable capacity-bound functions (VCBFs). The main VCBF property imposes a strict lower bound on the number of bits read from memory during evaluation (referred to as the minimum capacity). No adversary, even with unbounded computational resources, should be able to produce an output without spending this minimum memory capacity. Moreover, a VCBF allows for an efficient public verification process: given a proof of correctness, checking the validity of the output takes significantly fewer memory resources, sublinear in the target minimum capacity. Finally, it achieves soundness, i.e., no computationally bounded adversary can produce a proof that passes verification for a false output. With these properties, we believe a VCBF can be viewed as a “space” analog of a verifiable delay function. We then propose the first VCBF construction, which relies on evaluating a high-degree polynomial over a large prime field at a random point. We leverage ideas from Kolmogorov complexity to prove that sampling the polynomial from a large enough set (i.e., for a high enough degree) ensures that evaluation must entail reading a number of bits proportional to the size of its coefficients. Moreover, our construction benefits from existing verifiable polynomial evaluation schemes to realize our efficient verification requirements. In practice, for a suitably large field, our VCBF achieves a minimum capacity proportional to the description size of the polynomial, whereas verification requires only a sublinear amount of memory. The minimum capacity of our VCBF construction holds against adversaries that perform a constant number of random memory accesses. This poses the natural question of whether a VCBF with high minimum-capacity guarantees exists when dealing with adversaries that perform a non-constant (e.g., polynomial) number of random accesses.
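The core evaluation step can be sketched in Python (an illustration of the underlying idea, not the actual VCBF): Horner evaluation of a polynomial over F_p reads every coefficient, which is the intuition behind the minimum-capacity bound. The `naive_eval` reference below is only a sanity check, not the succinct verifiable polynomial-evaluation scheme the paper relies on; all names here are assumptions of this sketch.

```python
def eval_poly(coeffs, x, p):
    """Horner evaluation of a polynomial over F_p (coeffs[0] = leading coefficient).
    Every coefficient is read, so honest evaluation touches memory proportional
    to the polynomial's description size -- the intuition behind minimum capacity."""
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % p
    return acc

def naive_eval(coeffs, x, p):
    """Reference evaluation summing c_i * x^(d - i); used only as a sanity check."""
    d = len(coeffs) - 1
    return sum(c * pow(x, d - i, p) for i, c in enumerate(coeffs)) % p
```

In the real construction the verifier never re-reads the coefficients; it checks a short proof instead, which is what makes verification sublinear in the minimum capacity.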
Super-Linear Time-Memory Trade-Offs for Symmetric Encryption
We build symmetric encryption schemes from a pseudorandom
function/permutation with domain size N which have very high
security -- in terms of the amount q of messages they can securely
encrypt -- assuming the adversary has S bits of memory. We aim
to minimize the number k of calls we make to the underlying
primitive to achieve a certain q, or equivalently, to maximize the
achievable q for a given k. We target in
particular q ≫ N, in contrast to recent works (Jaeger and
Tessaro, EUROCRYPT '19; Dinur, EUROCRYPT '20) which aim to beat the
birthday barrier with one call when S < √N.
Our first result gives new and explicit bounds for the
Sample-then-Extract paradigm by Tessaro and Thiruvengadam (TCC
'18). We show instantiations for which the achievable q grows
super-linearly in N. Where Tessaro and Thiruvengadam's weaker bounds
only guarantee q > N for a large number of calls, here
we show this is true already for much smaller k.
We also consider a scheme by Bellare, Goldreich and Krawczyk (CRYPTO
'99) which evaluates the primitive on k independent random
strings, and masks the message with the XOR of the outputs. Here, we
show a super-linear bound on q, using new combinatorial bounds
on the list-decodability of XOR codes which are of independent
interest. We also study best-possible attacks against this
construction.
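The Bellare-Goldreich-Krawczyk scheme described above admits a compact Python sketch; this is an illustration under stated assumptions, with SHA-256 standing in for the PRF and all names (`prf`, `bgk_encrypt`, block sizes) being assumptions of the sketch rather than the paper's parameters.

```python
import hashlib
import os

def prf(key, x):
    """SHA-256-based stand-in for the underlying PRF (an assumption of this sketch)."""
    return hashlib.sha256(key + x).digest()

def bgk_encrypt(key, message, k=4):
    """BGK-style encryption: draw k fresh random strings r_1..r_k and mask the
    message with F_K(r_1) xor ... xor F_K(r_k); the r_i travel with the ciphertext."""
    assert len(message) <= 32
    rs = [os.urandom(16) for _ in range(k)]
    pad = bytes(32)
    for r in rs:
        pad = bytes(a ^ b for a, b in zip(pad, prf(key, r)))
    return rs, bytes(m ^ p for m, p in zip(message, pad))

def bgk_decrypt(key, rs, ciphertext):
    pad = bytes(32)
    for r in rs:
        pad = bytes(a ^ b for a, b in zip(pad, prf(key, r)))
    return bytes(c ^ p for c, p in zip(ciphertext, pad))
```

Increasing k is exactly the cost knob the abstract discusses: more primitive calls per message buy a higher securely encryptable message count q.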
Hiding in Plain Sight: Memory-tight Proofs via Randomness Programming
This paper continues the study of memory-tight reductions (Auerbach et al., CRYPTO '17). These are reductions that only incur minimal memory costs over those of the original adversary, allowing precise security statements for memory-bounded adversaries (under appropriate assumptions expressed in terms of adversary time and memory usage). Despite its importance, only a few techniques to achieve memory-tightness are known, and impossibility results in prior works show that even basic, textbook reductions cannot be made memory-tight.
This paper introduces a new class of memory-tight reductions which leverage random strings in the interaction with the adversary to hide state information, thus shifting the memory costs to the adversary.
We exhibit this technique with several examples. We give memory-tight proofs for digital signatures allowing many forgery attempts when considering randomized message distributions, or for probabilistic RSA-FDH signatures specifically. We prove security of the authenticated encryption scheme Encrypt-then-PRF with a memory-tight reduction to the underlying encryption scheme. By considering specific schemes or restricted definitions, we avoid the generic impossibility results of Auerbach et al. (CRYPTO '17) and Ghoshal et al. (CRYPTO '20).
As a further case study, we consider the textbook equivalence of CCA-security for public-key encryption for one or multiple encryption queries. We show two qualitatively different memory-tight versions of this result, depending on the considered notion of CCA security.
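The Encrypt-then-PRF composition analyzed above can be sketched as follows; this is a minimal illustration, with a toy stream cipher standing in for any IND-CPA scheme and HMAC-SHA256 standing in for the PRF, so all names and parameters are assumptions of the sketch.

```python
import hashlib
import hmac
import os

def ae_encrypt(k_enc, k_mac, msg):
    """Encrypt-then-PRF: encrypt (toy IV-keyed mask, standing in for any
    IND-CPA scheme), then tag IV||ciphertext with a PRF (HMAC) for integrity."""
    assert len(msg) <= 32
    iv = os.urandom(16)
    pad = hashlib.sha256(k_enc + iv).digest()
    ct = bytes(m ^ p for m, p in zip(msg, pad))
    tag = hmac.new(k_mac, iv + ct, hashlib.sha256).digest()
    return iv, ct, tag

def ae_decrypt(k_enc, k_mac, iv, ct, tag):
    """Verify the tag first and reject forgeries, then undo the mask."""
    if not hmac.compare_digest(tag, hmac.new(k_mac, iv + ct, hashlib.sha256).digest()):
        return None
    pad = hashlib.sha256(k_enc + iv).digest()
    return bytes(c ^ p for c, p in zip(ct, pad))
```

Tag verification is stateless given the keys, which is one reason this composition is amenable to a memory-tight reduction to the underlying encryption scheme.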
Memory-Tight Multi-Challenge Security of Public-Key Encryption
We give the first examples of public-key encryption schemes which can be proven to achieve multi-challenge, multi-user CCA security via reductions that are tight in time, advantage, and memory. Our constructions are obtained by applying the KEM-DEM paradigm to variants of Hashed ElGamal and the Fujisaki-Okamoto transformation that are augmented by adding uniformly random strings to their ciphertexts and/or keys.
The reductions carefully combine recent proof techniques introduced by Bhattacharyya '20 and Ghoshal-Ghosal-Jaeger-Tessaro '22. Our proofs for the augmented ECIES version of Hashed ElGamal make use of a new computational Diffie-Hellman assumption wherein the adversary is given access to a pairing to a random group, which we believe may be of independent interest.
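As a rough illustration of the KEM half of the KEM-DEM paradigm mentioned above, here is a toy Hashed-ElGamal-style key encapsulation sketch. The Mersenne-prime group, generator choice, and SHA-256 key derivation are all assumptions of this sketch (real ECIES works over an elliptic-curve group, and the paper's augmented variants additionally pad ciphertexts and/or keys with uniformly random strings).

```python
import hashlib
import random

# Toy multiplicative group modulo the Mersenne prime 2^127 - 1
# (an assumption of this sketch, not the paper's instantiation).
P = 2 ** 127 - 1
G = 3

def keygen():
    sk = random.randrange(2, P - 1)
    return sk, pow(G, sk, P)                 # (secret key, public key g^sk)

def encapsulate(pk):
    """Hashed-ElGamal KEM: session key K = H(pk^r), encapsulation c = g^r."""
    r = random.randrange(2, P - 1)
    c = pow(G, r, P)
    key = hashlib.sha256(pow(pk, r, P).to_bytes(16, "big")).digest()
    return c, key

def decapsulate(sk, c):
    """Recover K = H(c^sk) = H(g^(r*sk)); the DEM then encrypts under K."""
    return hashlib.sha256(pow(c, sk, P).to_bytes(16, "big")).digest()
```

Correctness follows from pk^r = c^sk = g^(r·sk); the security analysis, and the pairing-based CDH variant the abstract introduces, are of course beyond this toy sketch.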