Three's Compromised Too: Circular Insecurity for Any Cycle Length from (Ring-)LWE
Informally, a public-key encryption scheme is
\emph{$k$-circular secure} if a cycle of~$k$ encrypted secret keys
$(\pkcenc_{\pk_{1}}(\sk_{2}), \pkcenc_{\pk_{2}}(\sk_{3}), \ldots,
\pkcenc_{\pk_{k}}(\sk_{1}))$
is indistinguishable from encryptions of zeros. Circular security has
applications in a wide variety of settings, ranging from security of
symbolic protocols to fully homomorphic encryption. A fundamental
question is whether standard security notions like IND-CPA/CCA imply
$k$-circular security.
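As a purely structural illustration of such a key cycle, the Python sketch below builds the $k$ encryptions using a deliberately insecure toy XOR "cipher" (the scheme and all names are hypothetical; this says nothing about security, it only makes the cycle concrete):

```python
import hashlib

def keygen(seed: bytes):
    sk = hashlib.sha256(b"sk" + seed).digest()
    pk = hashlib.sha256(b"pk" + sk).digest()   # toy: pk derived from sk
    return pk, sk

def enc(pk: bytes, msg: bytes) -> bytes:
    # "encryption" = XOR with a pad derived from pk (hypothetical, insecure)
    pad = hashlib.sha256(b"pad" + pk).digest()
    return bytes(m ^ p for m, p in zip(msg, pad))

def dec(sk: bytes, ct: bytes) -> bytes:
    pk = hashlib.sha256(b"pk" + sk).digest()
    return enc(pk, ct)                          # the XOR pad is its own inverse

k = 3
keys = [keygen(bytes([i])) for i in range(k)]  # (pk_i, sk_i) for i = 0..k-1
# The k-cycle: Enc_{pk_i}(sk_{(i+1) mod k})
cycle = [enc(keys[i][0], keys[(i + 1) % k][1]) for i in range(k)]
# Holding sk_0, one can unwind the entire cycle key by key:
sk = keys[0][1]
for ct in cycle:
    sk = dec(sk, ct)
assert sk == keys[0][1]   # after k decryptions we are back at sk_0
```

The counterexamples in the abstract exploit exactly this shape: a scheme can be CPA/CCA secure and yet such a cycle of ciphertexts can reveal the keys.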
For the case $k=2$, several works over the past years have constructed
counterexamples---i.e., schemes that are CPA or even CCA secure but
not $2$-circular secure---under a variety of well-studied assumptions
(SXDH, decision linear, and LWE). However, for $k>2$ the only known
counterexamples are based on strong general-purpose obfuscation
assumptions.
In this work we construct $k$-circular security counterexamples for
any $k \geq 2$ based on (ring-)LWE. Specifically:
\begin{itemize}
\item for any constant $k$, we construct a counterexample based on
$n$-dimensional (plain) LWE for $\poly(n)$ approximation factors;
\item for any $k=\poly(\lambda)$, we construct one based on degree-$n$
ring-LWE for at most subexponential factors.
\end{itemize}
Moreover, both schemes are $k'$-circular insecure for
$2 \leq k' \leq k$.
Notably, our ring-LWE construction does not immediately translate to
an LWE-based one, because matrix multiplication is not commutative. To
overcome this, we introduce a new ``tensored'' variant of LWE which
provides the desired commutativity, and which we prove is actually
equivalent to plain LWE.
Small-Box Cryptography
One of the ultimate goals of symmetric-key cryptography is to find a rigorous theoretical framework for building block ciphers from small components, such as cryptographic S-boxes, and then argue why iterating such small components for sufficiently many rounds would yield a secure construction. Unfortunately, a fundamental obstacle towards reaching this goal comes from the fact that traditional security proofs cannot get security beyond 2^{-n}, where n is the size of the corresponding component.
As a result, prior provably secure approaches - which we call "big-box cryptography" - always made n larger than the security parameter, which led to several problems: (a) the design was too coarse to really explain practical constructions, as (arguably) the most interesting design choices, those made when instantiating such "big-boxes", were completely abstracted away; (b) the theoretically predicted number of rounds for the security of this approach was always dramatically smaller than in reality, where the "big-box" building block could not be made as ideal as the proof required. For example, Even-Mansour (and, more generally, key-alternating) ciphers completely ignore the substitution-permutation network (SPN) paradigm which is at the heart of most real-world implementations of such ciphers.
In this work, we introduce a novel paradigm for justifying the security of existing block ciphers, which we call small-box cryptography. Unlike the "big-box" paradigm, it allows one to go much deeper inside existing block cipher constructions, by idealizing only a small (and, hence, realistic!) building block of very small size n, such as an 8-to-32-bit S-box. It then introduces a clean and rigorous mixture of proofs and hardness conjectures which allows one to lift traditional, and seemingly meaningless, "at most 2^{-n}" security proofs for reduced-round idealized variants of the existing block ciphers into meaningful, full-round security justifications of the actual ciphers used in the real world.
We then apply our framework to the analysis of SPN ciphers (e.g., generalizations of AES), obtaining quite reasonable and plausible concrete hardness estimates for the resulting ciphers. We also apply our framework to the design of stream ciphers. Here, however, we focus on the simplicity of the resulting construction, for which we managed to find a direct "big-box"-style security justification under the well-studied and widely believed eXact Linear Parity with Noise (XLPN) assumption.
Overall, we hope that our work will initiate many follow-up results in the area of small-box cryptography.
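To make the SPN structure concrete, here is a minimal Python sketch of a 16-bit key-alternating SPN built from a 4-bit S-box (the S-box table is PRESENT's, reused only as an example of a small component; the bit permutation and round structure are arbitrary toy choices, not taken from the paper):

```python
# 4-bit S-box (PRESENT's, reused here only as an example of a small component)
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
# toy bit permutation on 16 bits (bit i moves to position PERM[i])
PERM = [(4 * i) % 15 if i != 15 else 15 for i in range(16)]

def sub_nibbles(state: int) -> int:
    # apply the small S-box to each 4-bit nibble of the 16-bit state
    return sum(SBOX[(state >> (4 * j)) & 0xF] << (4 * j) for j in range(4))

def permute_bits(state: int) -> int:
    out = 0
    for i in range(16):
        out |= ((state >> i) & 1) << PERM[i]
    return out

def spn_encrypt(block: int, round_keys: list[int]) -> int:
    # key-alternating SPN: add key, substitute, permute, repeat; final whitening
    state = block
    for rk in round_keys[:-1]:
        state = permute_bits(sub_nibbles(state ^ rk))
    return state ^ round_keys[-1]
```

Because the S-box and the bit permutation are bijections, each round (and hence the whole cipher) is a permutation of the 16-bit state space; the design question the abstract raises is how many such rounds are needed for security when only the S-box is idealized.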
One-Message Zero Knowledge and Non-Malleable Commitments
We introduce a new notion of one-message zero-knowledge (1ZK) arguments that satisfy a weak soundness guarantee — the number of false statements that a polynomial-time non-uniform adversary can convince the verifier to accept is not much larger than the size of its non-uniform advice. The zero-knowledge guarantee is given by a simulator that runs in (mildly) super-polynomial time.
We construct such 1ZK arguments based on the notion of multi-collision-resistant keyless hash functions, recently introduced by Bitansky, Kalai, and Paneth (STOC 2018). Relying on the constructed
1ZK arguments, subexponentially-secure time-lock puzzles, and other standard assumptions, we construct one-message fully-concurrent non-malleable commitments. This is the first construction that is based on assumptions that do not already incorporate non-malleability, as well as the first based on (subexponentially) falsifiable assumptions.
KDM Security for Identity-Based Encryption: Constructions and Separations
For encryption schemes, key-dependent message (KDM) security requires that ciphertexts preserve secrecy even when the messages to be encrypted depend on the secret keys.
While KDM security has been extensively studied for public-key encryption (PKE), it has received much less attention in the setting of identity-based encryption (IBE). In this work, we focus on KDM security for IBE. Our results are threefold.
We first propose a generic approach to transfer KDM security results (both positive and negative) from PKE to IBE. At the heart of our approach is a neat structure-mirroring PKE-to-IBE transformation based on indistinguishability obfuscation and puncturable PRFs, which establishes a connection between PKE and IBE in general. However, the obtained results are restricted to the selective-identity sense. We then concentrate on results in the adaptive-identity sense.
On the positive side, we present two constructions that achieve KDM security in the adaptive-identity sense for the first time. One is built from an identity-based hash proof system (IB-HPS) with a homomorphic property, which indicates that the IBE schemes of Gentry (Eurocrypt 2006), Coron (DCC 2009), and Chow et al. (CCS 2010) are actually KDM-secure in the single-key setting. The other is built from indistinguishability obfuscation and a new notion named puncturable unique signature, and is bounded KDM-secure in the single-key setting.
On the negative side, we separate CPA/CCA security from circular security (a prototypical case of KDM security) for IBE by giving a counterexample based on differing-inputs obfuscation and a new notion named puncturable IBE. We further propose a general framework for generating circular-security counterexamples in the identity-based setting, which might be of independent interest.
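The shape of the KDM challenge can be sketched in a few lines, with a deliberately insecure toy cipher and a hypothetical API (illustrative only): the adversary chooses a function f of the secret key, and must distinguish an encryption of f(sk) from an encryption of zeros.

```python
import hashlib, secrets

def toy_enc(pk: bytes, msg: bytes) -> bytes:
    # hypothetical, insecure XOR "cipher", used only to show the game's shape
    pad = hashlib.sha256(pk + b"pad").digest()
    return bytes(m ^ p for m, p in zip(msg, pad))

def kdm_challenge(pk: bytes, sk: bytes, f, b: int) -> bytes:
    # b = 0: encrypt the key-dependent message f(sk); b = 1: encrypt zeros
    msg = f(sk) if b == 0 else bytes(len(f(sk)))
    return toy_enc(pk, msg)

sk = secrets.token_bytes(32)
pk = hashlib.sha256(sk).digest()
# f = identity gives the prototypical circular case: a ciphertext Enc_pk(sk)
ct = kdm_challenge(pk, sk, lambda s: s, b=0)
```

Ordinary CPA security says nothing here, because the challenge message is chosen as a function of the secret key rather than by the adversary directly.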
An efficient quantum parallel repetition theorem and applications
We prove a tight parallel repetition theorem for 3-message computationally-secure quantum interactive protocols between an efficient challenger and an efficient adversary. We also prove, under plausible assumptions, that the security of 4-message computationally secure protocols does not generally decrease under parallel repetition. These results mirror the classical results of Bellare, Impagliazzo, and Naor [BIN97]. Finally, we prove that all quantum argument systems can be generically compiled to an equivalent 3-message argument system, mirroring the transformation for quantum proof systems [KW00, KKMV07].
As immediate applications, we show how to derive hardness amplification theorems for quantum bit commitment schemes (answering a question of Yan [Yan22]), EFI pairs (answering a question of Brakerski, Canetti, and Qian [BCQ23]), public-key quantum money schemes (answering a question of Aaronson and Christiano [AC13]), and quantum zero-knowledge argument systems. We also derive an XOR lemma [Yao82] for quantum predicates as a corollary.
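The direct-product intuition behind parallel repetition can be checked numerically in the easy statistical toy case: a strategy that wins one instance of a guessing game with probability 1/2 wins k independent parallel instances with probability about (1/2)^k. (This is only the information-theoretic analogue; the abstract's results concern the computational setting, where this intuition can fail for 4-message protocols.)

```python
import random

def wins_once(rng) -> bool:
    # a cheating strategy that must guess a fresh random challenge bit
    return rng.randrange(2) == rng.randrange(2)

def wins_parallel(k: int, rng) -> bool:
    # the k-fold parallel game: must win every coordinate simultaneously
    return all(wins_once(rng) for _ in range(k))

rng = random.Random(0)
trials = 200_000
est = sum(wins_parallel(3, rng) for _ in range(trials)) / trials
# est should be close to (1/2)**3 = 0.125
```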
Properly Learning Decision Trees with Queries Is NP-Hard
We prove that it is NP-hard to properly PAC learn decision trees with
queries, resolving a longstanding open problem in learning theory (Bshouty
1993; Guijarro-Lavin-Raghavan 1999; Mehta-Raghavan 2002; Feldman 2016). While
there has been a long line of work, dating back to (Pitt-Valiant 1988),
establishing the hardness of properly learning decision trees from random
examples, the more challenging setting of query learners necessitates different
techniques and there were no previous lower bounds. En route to our main
result, we simplify and strengthen the best known lower bounds for a different
problem of Decision Tree Minimization (Zantema-Bodlaender 2000; Sieling 2003).
On a technical level, we introduce the notion of hardness distillation, which
we study for decision tree complexity but can be considered for any complexity
measure: for a function that requires large decision trees, we give a general
method for identifying a small set of inputs that is responsible for its
complexity. Our technique even rules out query learners that are allowed
constant error. This contrasts with existing lower bounds for the setting of
random examples which only hold for inverse-polynomial error.
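As a toy illustration of the decision-tree complexity measure and of the flavor of hardness distillation, the brute-force sketch below computes the exact decision-tree depth of a small threshold function and exhibits a small set of inputs that already forces the full depth (an illustrative construction of ours, not the paper's):

```python
from itertools import product

n = 4
def f(x):
    return sum(x) >= 2          # threshold: "at least 2 of the 4 bits are 1"

def dt_depth(inputs):
    # exact decision-tree depth of f restricted to `inputs`, by brute force
    if len({f(x) for x in inputs}) <= 1:
        return 0                # f is constant here: a leaf suffices
    best = n
    for i in range(n):
        zero = frozenset(x for x in inputs if x[i] == 0)
        one = frozenset(x for x in inputs if x[i] == 1)
        if zero and one:        # querying bit i actually splits the set
            best = min(best, 1 + max(dt_depth(zero), dt_depth(one)))
    return best

cube = frozenset(product((0, 1), repeat=n))
core = frozenset(x for x in cube if sum(x) in (1, 2))
# dt_depth(cube) == dt_depth(core) == 4: the 10 inputs of Hamming weight
# 1 or 2 are already "responsible" for the full decision-tree depth.
```

Here the weight-1 and weight-2 inputs play the role of a distilled set: restricting attention to them loses none of the function's depth complexity.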
Our result, taken together with a recent almost-polynomial time query
algorithm for properly learning decision trees under the uniform distribution
(Blanc-Lange-Qiao-Tan 2022), demonstrates the dramatic impact of distributional
assumptions on the problem.

Comment: 41 pages, 10 figures, FOCS 2023
List Decoding with Double Samplers
We develop the notion of "double samplers", first introduced by Dinur and
Kaufman [Proc. 58th FOCS, 2017], which are samplers with additional
combinatorial properties, and whose existence we prove using high dimensional
expanders.
We show how double samplers give a generic way of amplifying distance in a
way that enables efficient list-decoding. There are many error correcting code
constructions that achieve large distance by starting with a base code with
moderate distance, and then amplifying the distance using a sampler, e.g., the
ABNNR code construction [IEEE Trans. Inform. Theory, 38(2):509--516, 1992]. We
show that if the sampler is part of a larger double sampler then the
construction has an efficient list-decoding algorithm and the list decoding
algorithm is oblivious to the base code (i.e., it runs the unique decoder
for the base code in a black-box way).
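A toy version of sampler-based distance amplification can be sketched as follows, with a fixed cyclic window standing in for the sampler (illustrative only; the paper's list-decoding guarantees need genuine sampler and double-sampler properties):

```python
n, d = 10, 3

def amplify(word):
    # output symbol i bundles the d input symbols in the window starting at i
    return [tuple(word[(i + j) % n] for j in range(d)) for i in range(n)]

def rel_dist(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

x = [0] * n
y = [1, 1] + [0] * (n - 2)               # base words differing in 2 of 10 places
base = rel_dist(x, y)                    # 0.2
amp = rel_dist(amplify(x), amplify(y))   # every window touching position 0 or 1
                                         # now differs: 0.4
```

Each output symbol differs whenever its window touches any differing input position, so moderate base distance is amplified; the double-sampler structure is what additionally makes this amplification efficiently list-decodable.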
Our list-decoding algorithm works as follows: it uses a local voting scheme
from which it constructs a unique games constraint graph. The constraint graph
is an expander, so we can solve unique games efficiently. These solutions are
the output of the list decoder. This is a novel use of a unique games algorithm
as a subroutine in a decoding procedure, as opposed to the more common
situation in which unique games are used for demonstrating hardness results.
Double samplers and high dimensional expanders are akin to pseudorandom
objects in their utility, but they greatly exceed random objects in their
combinatorial properties. We believe that these objects hold significant
potential for coding theoretic constructions and view this work as
demonstrating the power of double samplers in this context.