Polynomial-Time, Semantically-Secure Encryption Achieving the Secrecy Capacity
In the wiretap channel setting, one aims to achieve information-theoretic privacy
of communicated data based only on the assumption that the channel from sender
to adversary is noisier than the one from sender to receiver. The secrecy
capacity is the optimal (highest possible) rate of a secure scheme, and the
existence of schemes achieving it has been shown. For thirty years the ultimate
and unreached goal has been to achieve this optimal rate with a scheme that is
polynomial-time, meaning both encryption and decryption are proven to run in
polynomial time. This paper finally delivers such a scheme. In fact
it does more. Our scheme not only meets the classical security notion from the
wiretap literature, MIS-R (mutual information security for random messages),
but also achieves the strictly stronger notion of semantic security, thus
delivering more in terms of security without loss of rate.
Optimally Secure Block Ciphers from Ideal Primitives
Recent advances in block-cipher theory deliver security analyses in
models where one or more underlying components (e.g., a function or
a permutation) are {\em ideal} (i.e., randomly chosen). This paper
addresses the question of finding {\em new} constructions achieving
the highest possible security level under minimal assumptions in
such ideal models.
We present a new block-cipher construction, derived from the
Swap-or-Not construction by Hoang et al. (CRYPTO ’12). With an
$n$-bit block length, our construction is a secure pseudorandom permutation
(PRP) against attackers making close to $2^n$ block-cipher
queries and close to $2^n$ queries to the underlying component
(which itself has domain size roughly $2^n$). This security level is
nearly optimal. So far, only key-alternating ciphers have been known
to achieve comparable security levels using independent
random permutations. In contrast, here we only assume that a {\em
single} {\em function} or {\em permutation} is available, while
achieving similar efficiency.
Our second contribution is a generic method to upgrade a block
cipher that is initially only secure as a PRP into one that also
achieves related-key security, with a comparable quantitative security level.
LNCS
HMAC and its variant NMAC are the most popular approaches to deriving a MAC (and, more generally, a PRF) from a cryptographic hash function. Despite nearly two decades of research, their exact security remains far from understood in many contexts. Indeed, recent works have renewed interest in {\em generic} attacks, i.e., attacks that treat the compression function of the underlying hash function as a black box.
Generic security can be proved in a model where the underlying compression function is modeled as a random function -- yet, to date, the question of proving tight, non-trivial bounds on the generic security of HMAC/NMAC even as a PRF remains a challenging open question.
In this paper, we ask whether a small modification to HMAC and NMAC allows us to exactly characterize the security of the resulting constructions, while incurring only a small efficiency penalty. To this end, we present simple variants of NMAC and HMAC for which we prove tight bounds on the generic PRF security, expressed in terms of the numbers of construction and compression-function queries necessary to break the construction. All of our constructions are obtained via a (near) {\em black-box} modification of NMAC and HMAC, which can be interpreted as an initial step of key-dependent message pre-processing.
While our focus is on PRF security, a further attractive feature of our new constructions is that they clearly defeat all recent generic attacks against properties such as state recovery and universal forgery. These attacks exploit properties of the so-called ``functional graph'' which are not directly accessible in our new constructions.
Bounds on inference
Lower bounds for the average probability of error of estimating a hidden
variable X given an observation of a correlated random variable Y, and Fano's
inequality in particular, play a central role in information theory. In this
paper, we present a lower bound for the average estimation error based on the
marginal distribution of X and the principal inertias of the joint distribution
matrix of X and Y. Furthermore, we discuss an information measure based on the
sum of the largest principal inertias, called k-correlation, which generalizes
maximal correlation. We show that k-correlation satisfies the Data Processing
Inequality and is convex in the conditional distribution of Y given X. Finally,
we investigate how to answer a fundamental question in inference and privacy:
given an observation Y, can we estimate a function f(X) of the hidden random
variable X with an average error below a certain threshold? We provide a
general method for answering this question using an approach based on
rate-distortion theory.
Comment: Allerton 2013 version with an extended proof, 10 pages.
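The classical bound referred to above is Fano's inequality: for any estimator $\hat{X}(Y)$ of $X$ over a finite alphabet $\mathcal{X}$, with error probability $P_e = \Pr[\hat{X} \neq X]$ and entropies in bits,

```latex
H(X \mid Y) \le h_b(P_e) + P_e \log\bigl(|\mathcal{X}| - 1\bigr),
\qquad \text{hence} \qquad
P_e \ge \frac{H(X \mid Y) - 1}{\log |\mathcal{X}|},
```

where $h_b(\cdot)$ is the binary entropy function; the rearranged form uses $h_b(P_e) \le 1$ and $\log(|\mathcal{X}|-1) \le \log|\mathcal{X}|$. The bounds in the paper refine this dependence using the principal inertias of the joint distribution.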
Efficient and Optimally Secure Key-Length Extension for Block Ciphers via Randomized Cascading
We consider the question of efficiently extending the key length of block ciphers. To date, the approach providing the highest security is triple encryption (used, e.g., in Triple-DES), which was proved to have roughly k + min{n/2, k/2} bits of security when instantiated with ideal block ciphers with key length k and block length n, at the cost of three block-cipher calls per message block.
This paper presents a new practical key-length extension scheme exhibiting k + n/2 bits of security – hence improving upon the security of triple encryption – solely at the cost of two block-cipher calls and a key of length k + n. We also provide matching generic attacks showing the optimality of the security level achieved by our approach with respect to a general class of two-query constructions.
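As a structural baseline, the triple-encryption (EDE) key-length extension mentioned above can be sketched in a few lines. The "cipher" here is a deterministic toy permutation standing in for an ideal block cipher; this illustrates the three-call baseline, not the paper's two-call scheme:

```python
import random

N = 8  # toy block length in bits

def toy_cipher(key: int, block: int, decrypt: bool = False) -> int:
    """Stand-in for an ideal cipher: each key selects a fixed
    pseudorandom permutation of {0, ..., 2^N - 1}. Toy only, not secure."""
    perm = list(range(2 ** N))
    random.Random(key).shuffle(perm)
    return perm.index(block) if decrypt else perm[block]

def triple_encrypt(k1: int, k2: int, k3: int, m: int) -> int:
    """EDE triple encryption, as in Triple-DES: E_k3(D_k2(E_k1(m)))."""
    return toy_cipher(k3, toy_cipher(k2, toy_cipher(k1, m), decrypt=True))

def triple_decrypt(k1: int, k2: int, k3: int, c: int) -> int:
    """Inverse walk: D_k1(E_k2(D_k3(c)))."""
    return toy_cipher(k1, toy_cipher(k2, toy_cipher(k3, c, decrypt=True)),
                      decrypt=True)
```

The EDE ordering (encrypt, decrypt, encrypt) is what makes the construction collapse to single encryption when all three keys are equal, which is why Triple-DES remains backward compatible with DES.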
Indistinguishability Obfuscation from Trilinear Maps and Block-Wise Local PRGs
We consider the question of finding the lowest degree $L$ for which $L$-linear maps suffice to obtain IO. The current state of the art (Lin, EUROCRYPT ’16, CRYPTO ’17; Lin and Vaikuntanathan, FOCS ’16; Ananth and Sahai, EUROCRYPT ’17) is that $L$-linear maps (under suitable security assumptions) suffice for IO for $L \geq 5$, assuming the existence of pseudo-random generators (PRGs) with output locality $L$. However, these works cannot answer the question of whether $L \leq 4$ suffices, as no polynomial-stretch PRG with locality lower than 5 exists.
In this work, we present a new approach that relies on the existence of PRGs with block-wise locality $L$, i.e., every output bit depends on at most $L$ (disjoint) input blocks, each consisting of up to $O(\log \lambda)$ input bits. We show that the existence of PRGs with block-wise locality is plausible for any $L \geq 3$, and also provide:
* A construction of a general-purpose indistinguishability obfuscator from $L$-linear maps and a subexponentially-secure PRG with block-wise locality $L$ and polynomial stretch.
* A construction of general-purpose functional encryption from $L$-linear maps and any slightly super-polynomially secure PRG with block-wise locality $L$ and polynomial stretch.
All our constructions are based on the SXDH assumption on $L$-linear maps and the subexponential Learning With Errors (LWE) assumption, and follow by instantiating our new generic bootstrapping theorems with Lin's recently proposed FE scheme (CRYPTO ’17). Inherited from Lin's work, our security proof requires algebraic multilinear maps (Boneh and Silverberg, Contemporary Mathematics), whereas security when using noisy multilinear maps is based on a family of more complex assumptions that hold in the generic model.
Our candidate PRGs with block-wise locality are based on Goldreich's local functions, and we show that the security of instantiations with block-wise locality is backed by similar validation as constructions with comparable (conventional) locality. We complement this with hardness amplification techniques that further weaken the pseudorandomness requirements.
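Goldreich's local functions, mentioned above, are simple to sketch: a public random hypergraph fixes, for each output bit, which seed positions it reads, and a fixed small-arity predicate combines them. Below is a toy sketch using the commonly studied locality-5 XOR-AND predicate; the paper's block-wise variant instead draws each input from a distinct block of seed bits, a refinement this sketch omits:

```python
import random

def goldreich_prg(seed_bits, out_len, graph_seed=0):
    """Toy Goldreich-style local PRG with the locality-5 XOR-AND
    predicate P(x1..x5) = x1 ^ x2 ^ x3 ^ (x4 & x5). The hypergraph
    (which 5 seed positions feed each output bit) is public, fixed by
    graph_seed; only seed_bits is secret. Illustrative, not secure
    at these toy parameter sizes."""
    rng = random.Random(graph_seed)  # public randomness for the hypergraph
    n = len(seed_bits)
    out = []
    for _ in range(out_len):
        i1, i2, i3, i4, i5 = rng.sample(range(n), 5)
        out.append(seed_bits[i1] ^ seed_bits[i2] ^ seed_bits[i3]
                   ^ (seed_bits[i4] & seed_bits[i5]))
    return out
```

Each output bit touches exactly 5 seed bits, which is what "locality 5" means; polynomial stretch corresponds to taking out_len polynomially larger than the seed length.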
Threshold and Multi-Signature Schemes from Linear Hash Functions
This paper gives new constructions of two-round multi-signatures and threshold signatures for which security relies solely on either the hardness of the (plain) discrete logarithm problem or the hardness of RSA, in addition to assuming random oracles. Their signing protocol is partially non-interactive, i.e., the first round of the signing protocol is independent of the message being signed.
We obtain our constructions by generalizing the most efficient discrete-logarithm based schemes, MuSig2 (Nick, Ruffing, and Seurin, CRYPTO ’21) and FROST (Komlo and Goldberg, SAC ’20), to work with suitably defined linear hash functions. While the original schemes rely on the stronger and more controversial one-more discrete logarithm (OMDL) assumption, we show that suitable instantiations of the hash functions enable security to be based on either the plain discrete logarithm assumption or on RSA. The signatures produced by our schemes are equivalent to those obtained from Okamoto's identification schemes (CRYPTO ’92).
More abstractly, our results suggest a general framework to transform schemes secure under OMDL into ones secure under the plain DL assumption and, with some restrictions, under RSA.
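The abstraction driving this generalization is a linear hash function: a keyed map that is linear (a module homomorphism) from its domain into a group. A toy instantiation over the integers mod a prime makes the linearity property concrete; this is illustrative only, not one of the paper's secure instantiations (which live in discrete-log or RSA groups):

```python
P = 2 ** 61 - 1  # a prime modulus (illustrative choice)

def linear_hash(key, x):
    """Toy linear hash H_key(x) = sum_i key[i] * x[i] (mod P).

    Linearity means H_key(a*x + b*y) = a*H_key(x) + b*H_key(y),
    which is the structural property such multi-/threshold-signature
    frameworks exploit to aggregate shares and nonces."""
    return sum(k * xi for k, xi in zip(key, x)) % P
```

A discrete-log instantiation of the same abstraction would map $(x_1, x_2) \mapsto g_1^{x_1} g_2^{x_2}$, turning the modular sum into a product of group elements while preserving the homomorphism.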
Revisiting BBS Signatures
BBS signatures were implicitly proposed by Boneh, Boyen, and Shacham (CRYPTO ’04) as part of their group signature scheme, and explicitly cast as stand-alone signatures by Camenisch and Lysyanskaya (CRYPTO ’04). A provably secure version, called BBS+, was then devised by Au, Susilo, and Mu (SCN ’06), and is currently the object of a standardization effort which has led to a recent RFC draft. BBS+ signatures are suitable for use within anonymous credential and DAA systems, as their algebraic structure enables efficient proofs of knowledge of message-signature pairs that support partial disclosure.
BBS+ signatures consist of one group element and two scalars. As our first contribution, we prove that a variant of BBS+ producing shorter signatures, consisting only of one group element and one scalar, is also secure. The resulting scheme is essentially the original BBS proposal, which was lacking a proof of security. Here we show it satisfies, under the q-SDH assumption, the same provable security guarantees as BBS+. We also provide a complementary tight analysis in the algebraic group model, which heuristically justifies instantiations with potentially shorter signatures.
Furthermore, we devise simplified and shorter zero-knowledge proofs of knowledge of a BBS message-signature pair that support partial disclosure of the message. Over the BLS12-381 curve, our proofs are 896 bits shorter than the prior proposal by Camenisch, Drijvers, and Lehmann (TRUST ’16), which is also adopted by the RFC draft.
Finally, we show that BBS satisfies one-more unforgeability in the algebraic group model in a scenario, arising in the context of credentials, where the signer can be asked to sign arbitrary group elements, meant to be commitments, without seeing their openings.
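Concretely, in standard pairing notation (groups $\mathbb{G}_1, \mathbb{G}_2$ with pairing $\hat{e}$, generators $g_1, h_1 \in \mathbb{G}_1$ and $g_2 \in \mathbb{G}_2$, secret key $x$, public key $X = g_2^x$), a single-message BBS signature is a pair $(A, e)$:

```latex
e \xleftarrow{\$} \mathbb{Z}_p, \qquad
A = \left(g_1 \, h_1^{m}\right)^{\frac{1}{x + e}},
\qquad \text{verified via} \qquad
\hat{e}\!\left(A,\; X \cdot g_2^{e}\right)
  = \hat{e}\!\left(g_1 \, h_1^{m},\; g_2\right).
```

Correctness follows since $\hat{e}(A, g_2^{x+e}) = \hat{e}(g_1 h_1^m, g_2)$ by bilinearity. BBS+ additionally carries a second scalar $s$ and base $h_0$, signing $g_1 h_0^s h_1^m$ instead; dropping $s$ yields the shorter one-element, one-scalar signatures discussed above.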