Side Channel Information Set Decoding using Iterative Chunking
This paper presents an attack based on side-channel information and Information Set Decoding (ISD) on the Niederreiter cryptosystem and an evaluation of the practicality of the attack using an electromagnetic side channel. First, we describe a basic plaintext-recovery attack on the decryption algorithm of the Niederreiter cryptosystem. In case the cryptosystem is used as a Key-Encapsulation Mechanism (KEM) in a key exchange, the plaintext corresponds to a session key. Our attack adapts the timing side-channel plaintext-recovery attack by Shoufan et al. from 2010 on the McEliece cryptosystem, which used the non-constant-time Patterson decoding algorithm, to the Niederreiter cryptosystem with the constant-time Berlekamp-Massey decoding algorithm. We then enhance our attack by utilizing an ISD approach to support the basic attack, and we introduce iterative column chunking to further significantly reduce the number of required side-channel measurements. We show theoretically that our attack improvements have a significant impact on reducing the number of required side-channel measurements. Our practical evaluation of the attack targets the FPGA implementation of the Niederreiter cryptosystem in the NIST submission Classic McEliece, which uses a constant-time decoding algorithm, and is feasible for all proposed parameter sets of this submission. For example, for the 256-bit-security parameter set kem/mceliece6960119, we improve the basic attack that requires 5415 measurements to about 560 measurements on average to mount a successful plaintext-recovery attack. Further reductions can be achieved at increasing cost of the ISD computations.
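Information Set Decoding itself is easy to demonstrate in miniature. The sketch below implements plain Prange-style ISD over GF(2) on a toy instance; it is not the paper's chunking-enhanced variant, and the function names (`gf2_solve`, `prange_isd`) are illustrative.

```python
import numpy as np

def gf2_solve(A, b):
    """Solve the square system A x = b over GF(2); return None if A is singular."""
    A, b = A.copy().astype(np.uint8), b.copy().astype(np.uint8)
    m = A.shape[0]
    for col in range(m):
        pivot = next((r for r in range(col, m) if A[r, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]], b[[col, pivot]] = A[[pivot, col]], b[[pivot, col]]
        for r in range(m):
            if r != col and A[r, col]:
                A[r] ^= A[col]   # eliminate column entry by row XOR
                b[r] ^= b[col]
    return b

def prange_isd(H, s, t, rng, max_iters=20_000):
    """Plain Prange ISD: search for e with H e = s (mod 2) and weight(e) <= t
    by guessing that all error positions fall inside a random set of r columns."""
    r, n = H.shape
    for _ in range(max_iters):
        idx = rng.choice(n, size=r, replace=False)
        x = gf2_solve(H[:, idx], s)
        if x is not None and x.sum() <= t:
            e = np.zeros(n, dtype=np.uint8)
            e[idx] = x
            return e
    return None
```

Each iteration succeeds only if the random column set covers the error support and the chosen submatrix is invertible; side-channel information, as in the paper, shrinks the search space and hence the expected number of iterations.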
An upper bound on relaying over capacity based on channel simulation
The upper bound on the capacity of a 3-node discrete memoryless relay channel
is considered, where a source X wants to send information to destination Y with
the help of a relay Z. Y and Z are independent given X, and the link from Z to
Y is lossless with rate R_0. A new inequality is introduced to upper-bound
the capacity when the encoding rate is beyond the capacities of both individual
links XY and XZ. It is based on a generalization of the blowing-up lemma, which
links conditional entropy to decoding error, and of channel simulation, to the case with
side information. The achieved upper bound is strictly better than the
well-known cut-set bound in several cases when the latter is C_XY + R_0, with
C_XY being the channel capacity between X and Y. One particular case is
when the channel is statistically degraded, i.e., either Y is a statistically
degraded version of Z with respect to X, or Z is a statistically degraded
version of Y with respect to X. Moreover in this case, the bound is shown to be
explicitly computable. The binary erasure channel is analyzed in detail and
evaluated numerically.
Comment: Submitted to IEEE Transactions on Information Theory, 21 pages, 6 figures.
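For the binary erasure setting evaluated in the abstract, the classical cut-set bound itself is simple to compute. The sketch below assumes a primitive relay channel whose links X→Y and X→Z are independent binary erasure channels with erasure probabilities p_y and p_z (hypothetical parameter names, chosen here for illustration):

```python
def bec_cutset_bound(p_y, p_z, r0):
    """Cut-set upper bound min(C(X;Y,Z), C(X;Y) + r0) for a primitive relay
    channel with independent BEC links X->Y and X->Z and a lossless relay
    link of rate r0. C(X;Y,Z) = 1 - p_y * p_z, since the input symbol is
    lost only when both links erase simultaneously."""
    broadcast_cut = 1.0 - p_y * p_z   # cut separating X from (Y, Z)
    mac_cut = (1.0 - p_y) + r0        # cut separating (X, Z) from Y
    return min(broadcast_cut, mac_cut)
```

The paper's contribution is an upper bound strictly below this quantity in the regime where the second cut is active, i.e., where the bound equals C_XY + R_0.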
How to Lose Some Weight - A Practical Template Syndrome Decoding Attack
We study the hardness of the Syndrome Decoding problem, the base of most code-based cryptographic schemes, such as Classic McEliece, in the presence of side-channel information. We use ChipWhisperer equipment to perform a template attack on Classic McEliece running on an ARM Cortex-M4, and accurately classify the Hamming weights of consecutive 32-bit blocks of the secret error vector. With these weights at hand, we optimize Information Set Decoding algorithms. Technically, we show how to speed up information set decoding via a dimension reduction, additional parity-check equations, and an improved information set search, all derived from the Hamming weight information.
Consequently, using our template attack, we can practically recover an error vector in dimension n=2197 in a matter of seconds. Without side-channel information, such an instance has a complexity of around 88 bits.
We also estimate how our template attack affects the security of the proposed McEliece parameter sets. Roughly speaking, even an error-prone leak of our Hamming weight information leads, for n=3488, to a security drop of 89 bits.
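One of the speed-ups named above, dimension reduction, can be sketched directly: a 32-bit block of the error vector whose leaked Hamming weight is zero contains no error positions, so the corresponding parity-check columns can be dropped before running ISD. A minimal illustration with hypothetical names, not the authors' implementation:

```python
import numpy as np

def reduce_dimension(H, weights, block=32):
    """Drop parity-check columns belonging to error-vector blocks whose
    leaked Hamming weight is zero: those positions cannot carry errors.
    Returns the reduced matrix and the surviving column indices."""
    n = H.shape[1]
    keep = np.array([j for j in range(n) if weights[j // block] > 0])
    return H[:, keep], keep
```

The surviving instance has the same solutions restricted to the kept positions, but a strictly smaller dimension, which directly lowers the ISD cost.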
Bit-Interleaved Coded Modulation Revisited: A Mismatched Decoding Perspective
We revisit the information-theoretic analysis of bit-interleaved coded
modulation (BICM) by modeling the BICM decoder as a mismatched decoder. The
mismatched decoding model is well-defined for finite, yet arbitrary, block
lengths, and naturally captures the channel memory among the bits belonging to
the same symbol. We give two independent proofs of the achievability of the
BICM capacity calculated by Caire et al. where BICM was modeled as a set of
independent parallel binary-input channels whose output is the bitwise
log-likelihood ratio. Our first achievability proof uses typical sequences, and
shows that due to the random coding construction, the interleaver is not
required. The second proof is based on the random coding error exponents with
mismatched decoding, where the largest achievable rate is the generalized
mutual information. We show that the generalized mutual information of the
mismatched decoder coincides with the infinite-interleaver BICM capacity. We
also show that the error exponent, and hence the cutoff rate, of the BICM
mismatched decoder is upper bounded by that of coded modulation and may thus be
lower than in the infinite-interleaver model. We also consider the mutual
information appearing in the analysis of iterative decoding of BICM with EXIT
charts. We show that the corresponding symbol metric has knowledge of the
transmitted symbol and the EXIT mutual information admits a representation as a
pseudo-generalized mutual information, which is in general not achievable. A
different symbol decoding metric, for which the extrinsic side information
refers to the hypothesized symbol, induces a generalized mutual information
lower than the coded modulation capacity.
Comment: Submitted to the IEEE Transactions on Information Theory. Conference version in 2008 IEEE International Symposium on Information Theory, Toronto, Canada, July 2008.
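The parallel binary-input channel view of the BICM capacity lends itself to a quick numerical check. The sketch below estimates it by Monte Carlo for Gray-labeled 4-PAM over AWGN, a standard textbook setup chosen for illustration rather than taken from the paper:

```python
import numpy as np

def bicm_capacity_mc(snr_db, n=200_000, seed=1):
    """Monte Carlo estimate of the (infinite-interleaver) BICM capacity of
    Gray-labeled 4-PAM over AWGN, computed as the sum of the two bit-level
    mutual informations I(B_i; Y) of the parallel binary-input channels."""
    rng = np.random.default_rng(seed)
    pts = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)  # unit average energy
    labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])    # Gray labeling
    sigma2 = 10.0 ** (-snr_db / 10.0)                      # noise variance
    idx = rng.integers(0, 4, n)                            # uniform symbols
    y = pts[idx] + rng.normal(0.0, np.sqrt(sigma2), n)
    # Unnormalized Gaussian likelihoods p(y | x) for all constellation points.
    lik = np.exp(-(y[:, None] - pts[None, :]) ** 2 / (2.0 * sigma2))
    cap = 0.0
    for i in range(2):
        b = labels[idx, i]                                 # transmitted bit i
        mask = labels[:, i] == 0
        num = np.where(b == 0, lik[:, mask].sum(1), lik[:, ~mask].sum(1)) / 2.0
        den = lik.sum(1) / 4.0
        cap += np.mean(np.log2(num / den))                 # I(B_i; Y) estimate
    return cap
```

Each per-bit term averages log2 of the ratio between the bit-conditional and unconditional output densities, which is exactly the mutual information of one of the independent parallel binary-input channels in the Caire et al. model.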
On Joint Source-Channel Coding for Correlated Sources Over Multiple-Access Relay Channels
We study the transmission of correlated sources over discrete memoryless (DM)
multiple-access-relay channels (MARCs), in which both the relay and the
destination have access to side information arbitrarily correlated with the
sources. As the optimal transmission scheme is an open problem, in this work we
propose a new joint source-channel coding scheme based on a novel combination
of the correlation preserving mapping (CPM) technique with Slepian-Wolf (SW)
source coding, and obtain the corresponding sufficient conditions. The proposed
coding scheme is based on the decode-and-forward strategy, and utilizes CPM for
encoding information simultaneously to the relay and the destination, whereas
the cooperation information from the relay is encoded via SW source coding. It
is shown that there are cases in which the new scheme strictly outperforms the
schemes available in the literature. This is the first instance of a
source-channel code that uses CPM for encoding information to two different
nodes (relay and destination). In addition to sufficient conditions, we present
three different sets of single-letter necessary conditions for reliable
transmission of correlated sources over DM MARCs. The newly derived conditions
are shown to be at least as tight as the previously known necessary conditions.
Comment: Accepted to TI
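Slepian-Wolf coding, used above for the relay's cooperation information, can be illustrated with the classic linear-code binning trick. The toy below uses the [7,4] Hamming code and is a standard construction, not the authors' scheme:

```python
import numpy as np

# Toy Slepian-Wolf compression with the [7,4] Hamming code: the encoder
# observes x; the decoder holds correlated side information y differing
# from x in at most one position. Sending the 3-bit syndrome H x (instead
# of all 7 bits of x) suffices for lossless recovery.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)  # column j is j+1 in binary

def sw_encode(x):
    return H @ x % 2  # 3-bit syndrome in place of the 7-bit source block

def sw_decode(s, y):
    synd = (s + H @ y) % 2            # = H (x XOR y), syndrome of the difference
    x_hat = y.copy()
    if synd.any():
        pos = int(synd[0] + 2 * synd[1] + 4 * synd[2]) - 1  # error position
        x_hat[pos] ^= 1
    return x_hat
```

Because the syndrome of x XOR y pinpoints the single differing position, the decoder corrects y to x while the encoder never sees y — the essence of the binning used in SW source coding.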