Adversarial Wiretap Channel with Public Discussion
Wyner's elegant wiretap channel model exploits noise in the communication
channel to provide perfect secrecy against a computationally unlimited
eavesdropper without requiring a shared key. We consider an adversarial model
of the wiretap channel proposed in [18,19] where the adversary is active: it
selects a fraction rho_r of the transmitted codeword to eavesdrop and a
fraction rho_w of the codeword to corrupt by "adding" adversarial error. It
was shown that this model also captures network adversaries in the setting of
1-round Secure Message Transmission [8]. It was proved that secure
(1-round) communication is possible if and only if rho_r + rho_w < 1.
In this paper we show that by allowing the communicants access to a
public discussion channel (authentic communication without secrecy), secure
communication becomes possible even if rho_r + rho_w >= 1. We formalize the
model of an AWTP-PD (adversarial wiretap with public discussion) protocol and
derive tight bounds for two efficiency measures, information rate and message
round complexity. We also construct a rate-optimal protocol family with the
minimum number of message rounds.
We show applications of these results to Secure Message Transmission with
Public Discussion (SMT-PD), and in particular show a new lower bound on the
transmission rate of these protocols, together with a new construction of an
optimal SMT-PD protocol.
Near-optimal Assembly for Shotgun Sequencing with Noisy Reads
Recent work identified the fundamental limits on the information requirements
in terms of read length and coverage depth required for successful de novo
genome reconstruction from shotgun sequencing data, based on the idealized
assumption of error-free reads (noiseless reads). In this work, we show
that even when there is noise in the reads, one can successfully reconstruct
with information requirements close to the noiseless fundamental limit. A new
assembly algorithm, X-phased Multibridging, is designed based on a
probabilistic model of the genome. It is shown through analysis to perform well
on the model, and through simulations to perform well on real genomes.
An Entropy Sumset Inequality and Polynomially Fast Convergence to Shannon Capacity Over All Alphabets
We prove a lower estimate on the increase in entropy when two copies of a conditional random variable X | Y, with X supported on Z_q={0,1,...,q-1} for prime q, are summed modulo q. Specifically, given two i.i.d. copies (X_1,Y_1) and (X_2,Y_2) of a pair of random variables (X,Y), with X taking values in Z_q, we show
H(X_1 + X_2 | Y_1, Y_2) - H(X|Y) >= alpha(q) * H(X|Y) * (1 - H(X|Y))
for some alpha(q) > 0, where H(.) is the normalized (by a factor of log_2(q)) entropy. In particular, if X | Y is not close to being fully random or fully deterministic, that is, H(X|Y) in (gamma, 1-gamma), then the entropy of the sum increases by Omega_q(gamma). Our motivation is an effective analysis of the finite-length behavior of polar codes, for which the linear dependence on gamma is quantitatively important. The assumption of q being prime is necessary: for X supported uniformly on a proper subgroup of Z_q we have H(X+X)=H(X). For X supported on infinite groups without a finite subgroup (the torsion-free case) and no conditioning, a sumset inequality for the absolute increase in (unnormalized) entropy was shown by Tao in [Tao, CP&R 2010].
We use our sumset inequality to analyze Arikan's construction of polar codes and prove that for any q-ary source X, where q is any fixed prime, and any epsilon > 0, polar codes allow efficient data compression of N i.i.d. copies of X into (H(X)+epsilon)N q-ary symbols, as soon as N is polynomially large in 1/epsilon. We can get capacity-achieving source codes with similar guarantees for composite alphabets by factoring q into primes and combining different polar codes for each prime in the factorization.
A consequence of our result for noisy channel coding is that for all discrete memoryless channels, there are explicit codes enabling reliable communication within epsilon > 0 of the symmetric Shannon capacity for a block length and decoding complexity bounded by a polynomial in 1/epsilon. The result was previously shown for the special case of binary-input channels [Guruswami/Xia, FOCS '13; Hassani/Alishahi/Urbanke, CoRR 2013], and this work extends the result to channels over any alphabet.
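The unconditional special case of the inequality (Y trivial) is easy to check numerically. The sketch below, with an arbitrarily chosen example distribution, verifies that summing two i.i.d. copies modulo a prime q strictly increases the normalized entropy, and that the subgroup counterexample for composite q shows no increase; it does not compute alpha(q).

```python
import math

def h_norm(p, q):
    """Entropy of distribution p, normalized by log_2(q) (i.e. base-q entropy)."""
    return -sum(x * math.log(x, q) for x in p if x > 0)

def add_mod_q(p, q):
    """Distribution of X_1 + X_2 mod q for i.i.d. X_1, X_2 ~ p."""
    out = [0.0] * q
    for a in range(q):
        for b in range(q):
            out[(a + b) % q] += p[a] * p[b]
    return out

# Prime q: entropy strictly increases for a non-trivial, non-uniform X.
q = 5
p = [0.6, 0.4, 0.0, 0.0, 0.0]
h1 = h_norm(p, q)
h2 = h_norm(add_mod_q(p, q), q)
assert 0 < h1 < h2 < 1

# Composite q: X uniform on the subgroup {0, 2} of Z_4 gains nothing,
# illustrating why primality is necessary.
p4 = [0.5, 0.0, 0.5, 0.0]
assert abs(h_norm(p4, 4) - h_norm(add_mod_q(p4, 4), 4)) < 1e-12
```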
Fooling an Unbounded Adversary with a Short Key, Repeatedly: The Honey Encryption Perspective
This article is motivated by Shannon's classical results that keep the simple and elegant one-time pad away from practice: the key must be as long as the message, and the same key cannot be used more than once. In particular, we consider encryption algorithms defined relative to specific message distributions in order to trade generality for unconditional security. Such a notion, named honey encryption (HE), was originally proposed to achieve the best possible security for password-based encryption, where the secret key may have very little entropy.
Exploring message distributions as in HE indeed helps circumvent the classical restrictions on secret keys. We give a new and very simple honey encryption scheme satisfying unconditional semantic security (for the targeted message distribution) in the standard model (all previous constructions are in the random oracle model, even for message-recovery security only). Our new construction admits an extremely simple yet "tighter" analysis, while all previous analyses (even for message-recovery security only) were fairly complicated and required stronger assumptions. We also show a concrete instantiation that further enables the secret key to be used for encrypting multiple messages.
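The core HE idea can be sketched in a toy form (this is an illustration of the general principle, not the paper's construction): a distribution-transforming encoder maps messages from a known distribution to seeds, and encryption shifts the seed by a short key, so decrypting under any wrong key still yields a plausible-looking message. The message list and names below are hypothetical.

```python
import random

# Hypothetical toy message space, assumed uniformly distributed.
MESSAGES = ["alpha", "bravo", "charlie", "delta"]
N = len(MESSAGES)

def dte_encode(msg):
    """Distribution-transforming encoder: message -> seed in Z_N."""
    return MESSAGES.index(msg)

def dte_decode(seed):
    """Inverse encoder: any seed maps back to some valid message."""
    return MESSAGES[seed % N]

def encrypt(msg, key):
    return (dte_encode(msg) + key) % N

def decrypt(ct, key):
    return dte_decode((ct - key) % N)

key = random.randrange(N)  # short key, far smaller than "message entropy"
ct = encrypt("bravo", key)
assert decrypt(ct, key) == "bravo"
# The "honey" property: every candidate key decrypts to a valid-looking
# message, so an unbounded attacker cannot tell which one is correct.
for k in range(N):
    assert decrypt(ct, k) in MESSAGES
```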
General Strong Polarization
Arikan's exciting discovery of polar codes has provided an altogether new way
to efficiently achieve Shannon capacity. Given a (constant-sized) invertible
matrix M, a family of polar codes can be associated with this matrix, and its
ability to approach capacity follows from the polarization of an
associated [0,1]-bounded martingale, namely its convergence in the limit to
either 0 or 1. Arikan showed polarization of the martingale associated with
the 2x2 matrix [[1,0],[1,1]] to get
capacity-achieving codes. His analysis was later extended to all matrices
that satisfy an obvious necessary condition for polarization.
While Arikan's theorem does not guarantee that the codes achieve capacity at
small blocklengths, it turns out that a "strong" analysis of the polarization
of the underlying martingale would lead to such constructions. Indeed, for the
martingale associated with Arikan's matrix, such a strong polarization was shown in two
independent works ([Guruswami and Xia, IEEE IT '15] and [Hassani et al., IEEE
IT '14]), resolving a major theoretical challenge of the efficient attainment
of Shannon capacity.
In this work we extend the result above to cover martingales associated with
all matrices that satisfy the necessary condition for (weak) polarization. In
addition to being vastly more general, our proofs of strong polarization are
also simpler and modular. Specifically, our result shows strong polarization
over all prime fields and leads to efficient capacity-achieving codes for
arbitrary symmetric memoryless channels. We show how to use our analyses to
achieve exponentially small error probabilities at lengths inverse polynomial
in the gap to capacity. Indeed we show that we can essentially match any error
probability with lengths that are only inverse polynomial in the gap to
capacity.
Comment: 73 pages, 2 figures. The final version appeared in JACM. This paper combines results presented in preliminary form at STOC 2018 and RANDOM 2018.
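For the binary erasure channel, the polarization martingale can be tracked in closed form (a standard fact about Arikan's 2x2 kernel, not this paper's general analysis): one polarization step splits erasure probability z into the pair z^2 and 2z - z^2, which preserves the mean while driving almost all values toward 0 or 1. A minimal simulation:

```python
def polarize(levels, eps=0.5):
    """Erasure probabilities of the 2**levels synthetic channels obtained
    by recursively applying one polarization step to BEC(eps)."""
    zs = [eps]
    for _ in range(levels):
        nxt = []
        for z in zs:
            nxt.append(z * z)          # "better" child channel
            nxt.append(2 * z - z * z)  # "worse" child channel
        zs = nxt
    return zs

zs = polarize(10)
mean = sum(zs) / len(zs)                     # martingale: mean stays at eps
extreme = sum(1 for z in zs if z < 0.01 or z > 0.99) / len(zs)
print(mean, extreme)
```

The fraction `extreme` of near-deterministic channels grows with the number of levels; "strong" polarization quantifies how fast.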
A Side-Channel Assisted Cryptanalytic Attack Against QcBits
QcBits is a code-based public-key algorithm based on a problem thought to be resistant to quantum computer attacks. It is a constant-time implementation of a quasi-cyclic moderate-density parity-check (QC-MDPC) Niederreiter encryption scheme, and it has excellent performance and small key sizes. In this paper, we present a key recovery attack against QcBits. We first use differential power analysis (DPA) against the syndrome computation of the decoding algorithm to recover partial information about one half of the private key. We then use the recovered information to set up a system of noisy binary linear equations; solving this system yields the entire key. Finally, we propose a simple but effective countermeasure against the power analysis used during the syndrome calculation.
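The final step of such an attack pipeline, solving binary linear equations, can be illustrated with plain Gaussian elimination over GF(2). This sketch assumes the noise has already been removed (e.g. by majority-voting repeated measurements); the 3x3 system is an arbitrary invertible toy example, not QcBits data.

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    A: square list of rows (lists of 0/1), assumed invertible;
    b: list of 0/1. Returns the unique solution x."""
    n = len(A)
    # Work on an augmented copy so the caller's data is untouched.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Find a pivot row with a 1 in this column and swap it up.
        pivot = next(r for r in range(col, n) if M[r][col])
        M[col], M[pivot] = M[pivot], M[col]
        # XOR the pivot row into every other row with a 1 in this column.
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ c for a, c in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Toy example: recover a 3-bit "key" x from A x = b.
A = [[1, 0, 0],
     [1, 1, 0],
     [1, 1, 1]]
x = solve_gf2(A, [1, 1, 0])
print(x)  # -> [1, 0, 1]
```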
The Design Space of Lightweight Cryptography
For constrained devices, standard cryptographic algorithms can be too big, too slow or too energy-consuming. The area of lightweight cryptography studies new algorithms to overcome these problems. In this paper, we will focus on symmetric-key encryption, authentication and hashing. Instead of providing a full overview of this area of research, we will highlight three interesting topics. Firstly, we will explore the generic security of lightweight constructions. In particular, we will discuss considerations for key, block and tag sizes, and explore the topic of instantiating a pseudorandom permutation (PRP) with a non-ideal block cipher construction. This is inspired by the increasing prevalence of lightweight designs that are not secure against related-key attacks, such as PRINCE, PRIDE or Chaskey. Secondly, we explore the efficiency of cryptographic primitives. In particular, we investigate the impact on efficiency when the input size of a primitive doubles. Lastly, we provide some considerations for cryptographic design. We observe that applications do not always use cryptographic algorithms as they were intended, which negatively impacts the security and/or efficiency of the resulting implementations.
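One block-size consideration can be made concrete with the textbook birthday bound (a generic fact, not this survey's analysis): with an n-bit block, a collision among ciphertext blocks becomes likely after about 2^(n/2) blocks, which is why 64-bit blocks are risky for long-lived keys.

```python
import math

def collision_prob(q, n):
    """Birthday-bound approximation 1 - exp(-q^2 / 2^(n+1)) for the
    probability of at least one collision among q uniform n-bit values."""
    return 1.0 - math.exp(-(q * q) / 2.0 ** (n + 1))

# 64-bit blocks (typical of many lightweight ciphers): 2^32 blocks
# is only ~32 GiB of data, yet a collision is already likely.
p64 = collision_prob(2 ** 32, 64)
# A 128-bit block cipher with the same amount of data is safe.
p128 = collision_prob(2 ** 32, 128)
print(p64, p128)  # ~0.39 vs. negligibly small
```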