Fundamental Limits of "Ankylography" due to Dimensional Deficiency
Single-shot diffractive imaging of truly 3D structures suffers from a
dimensional deficiency and does not scale. The applicability of "ankylography"
is limited to objects that are small in at least one dimension or that are
otherwise essentially 2D.
Comment: 2 pages, no figures
Effective Capacity in Broadcast Channels with Arbitrary Inputs
We consider a broadcast scenario where one transmitter communicates with two
receivers under quality-of-service constraints. The transmitter initially
employs superposition coding strategies with arbitrarily distributed signals
and sends data to both receivers. Depending on the channel state conditions,
the receivers perform successive interference cancellation to decode their own
data. We derive the effective capacity region, which gives the maximum
sustainable data arrival rate region at the transmitter buffer or buffers.
Given an average transmission power limit, we provide a two-step
approach to obtain the optimal power allocation policies that maximize the
effective capacity region. Then, we characterize the optimal decoding regions
at the receivers in the space spanned by the channel fading power values. We
finally substantiate our results with numerical examples.
Comment: This paper will appear in the 14th International Conference on Wired/Wireless Internet Communications (WWIC).
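The superposition-with-SIC setup above can be illustrated numerically. A minimal sketch, assuming Gaussian signaling for the rate expressions (the paper itself treats arbitrarily distributed inputs) and a hypothetical power-splitting parameter `alpha`; receiver 1 is taken as the strong user that cancels user 2's signal, while receiver 2 treats user 1's signal as noise:

```python
import math

def sic_rates(P, alpha, g1, g2, N0=1.0):
    """Two-user superposition coding with SIC (illustrative Gaussian rates).

    P     : total transmit power, split as alpha*P (user 1) and (1-alpha)*P (user 2)
    g1,g2 : channel fading power values of the two receivers
    N0    : noise power
    """
    P1, P2 = alpha * P, (1 - alpha) * P
    R1 = math.log2(1 + P1 * g1 / N0)              # user 1, after cancelling user 2
    R2 = math.log2(1 + P2 * g2 / (P1 * g2 + N0))  # user 2, interference-limited
    return R1, R2

# sweep the power split to trace a boundary of the achievable rate region
points = [sic_rates(P=10.0, alpha=a / 10, g1=1.0, g2=0.25) for a in range(11)]
```

Sweeping `alpha` under the average-power constraint is the spirit of the two-step power-allocation optimization described above, here reduced to a toy illustration.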
Nash Codes for Noisy Channels
This paper studies the stability of communication protocols that deal with
transmission errors. We consider a coordination game between an informed sender
and an uninformed decision maker, the receiver, who communicate over a noisy
channel. The sender's strategy, called a code, maps states of nature to
signals. The receiver's best response is to decode the received channel output
as the state with highest expected receiver payoff. Given this decoding, an
equilibrium or "Nash code" results if the sender encodes every state as
prescribed. We show two theorems that give sufficient conditions for Nash
codes. First, a receiver-optimal code defines a Nash code. A second, more
surprising observation holds for communication over a binary channel which is
used independently a number of times, a basic model of information
transmission: Under a minimal "monotonicity" requirement for breaking ties when
decoding, which holds generically, EVERY code is a Nash code.
Comment: More general main Theorem 6.5 with better proof. New examples and introduction.
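The Nash-code condition can be checked by brute force on a toy instance. A minimal sketch, assuming a pure coordination payoff (the receiver gets payoff 1 exactly when the decoded state is the true state), under which the receiver's best response reduces to MAP decoding; the tie-breaking rule (lowest-indexed state wins) is our illustrative choice, not the paper's general monotonicity condition:

```python
from itertools import product

def bsc_prob(y, z, eps):
    """P(output z | input y) on a BSC used independently per bit."""
    p = 1.0
    for yb, zb in zip(y, z):
        p *= eps if yb != zb else 1.0 - eps
    return p

def decode(z, code, priors, eps):
    """Receiver best response under coordination payoffs: MAP decoding,
    breaking ties in favor of the lowest-indexed state."""
    states = range(len(code))
    return max(states, key=lambda s: (priors[s] * bsc_prob(code[s], z, eps), -s))

def is_nash_code(code, priors, eps):
    """Check that encoding every state as prescribed is a sender best
    response against the receiver's decoding above."""
    n = len(code[0])
    signals = list(product((0, 1), repeat=n))
    for s in range(len(code)):
        def success(y):  # probability the receiver decodes to state s given input y
            return sum(bsc_prob(y, z, eps)
                       for z in signals if decode(z, code, priors, eps) == s)
        if any(success(y) > success(tuple(code[s])) + 1e-12 for y in signals):
            return False  # the sender profitably deviates in state s
    return True

# two states encoded as 00 and 11, BSC with crossover probability 0.1
code = [(0, 0), (1, 1)]
```

With uniform priors, `is_nash_code(code, [0.5, 0.5], 0.1)` confirms the prescribed encoding is an equilibrium, consistent with the paper's theorem for monotone tie-breaking.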
A Homological Approach to Belief Propagation and Bethe Approximations
We introduce a differential complex of local observables given a
decomposition of a global set of random variables into subsets. Its boundary
operator allows us to define a transport equation equivalent to Belief
Propagation. This definition reveals a set of conserved quantities under Belief
Propagation and gives new insight into the relationship between its equilibria
and the critical points of the Bethe free energy.
Comment: 14 pages, submitted for the 2019 Geometric Science of Information colloquium.
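For readers unfamiliar with the baseline being generalized, plain sum-product Belief Propagation is exact on tree-structured graphs. A minimal sketch on a three-variable chain, checking BP marginals against brute-force enumeration; the factor values are arbitrary illustrations:

```python
import itertools

# pairwise factors on a binary chain x0 - x1 - x2 (values chosen arbitrarily)
f01 = [[1.0, 2.0], [3.0, 1.0]]   # f01[x0][x1]
f12 = [[2.0, 1.0], [1.0, 4.0]]   # f12[x1][x2]

def brute_marginal(i):
    """Marginal of variable i by summing the full (unnormalized) joint."""
    p = [0.0, 0.0]
    for x in itertools.product((0, 1), repeat=3):
        p[x[i]] += f01[x[0]][x[1]] * f12[x[1]][x[2]]
    s = sum(p)
    return [v / s for v in p]

def bp_marginals():
    """Sum-product messages along the chain; exact because the graph is a tree."""
    m01 = [sum(f01[x0][x1] for x0 in (0, 1)) for x1 in (0, 1)]            # x0 -> x1
    m21 = [sum(f12[x1][x2] for x2 in (0, 1)) for x1 in (0, 1)]            # x2 -> x1
    m10 = [sum(f01[x0][x1] * m21[x1] for x1 in (0, 1)) for x0 in (0, 1)]  # x1 -> x0
    m12 = [sum(f12[x1][x2] * m01[x1] for x1 in (0, 1)) for x2 in (0, 1)]  # x1 -> x2
    def norm(b):
        s = sum(b)
        return [v / s for v in b]
    return [norm(m10), norm([m01[x] * m21[x] for x in (0, 1)]), norm(m12)]
```

On loopy graphs these same message updates are only approximate, and their fixed points relate to critical points of the Bethe free energy, which is the regime the homological construction above addresses.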
Assessing and countering reaction attacks against post-quantum public-key cryptosystems based on QC-LDPC codes
Code-based public-key cryptosystems based on QC-LDPC and QC-MDPC codes are
promising post-quantum candidates to replace quantum vulnerable classical
alternatives. However, a new type of attack based on Bob's reactions has
recently been introduced and appears to significantly reduce the lifetime of
any keypair used in these systems. In this paper we estimate the
complexity of all known reaction attacks against QC-LDPC and QC-MDPC code-based
variants of the McEliece cryptosystem. We also show how the structure of the
secret key and, in particular, the secret code rate affect the complexity of
these attacks. It follows from our results that QC-LDPC code-based systems can
indeed withstand reaction attacks, on condition that some specific decoding
algorithms are used and the secret code has a sufficiently high rate.
Comment: 21 pages, 2 figures, to be presented at CANS 201
On the Decoding Failure Rate of QC-MDPC Bit-Flipping Decoders
Quasi-cyclic moderate density parity check (QC-MDPC) codes allow the design of McEliece-like public-key encryption schemes with compact keys and a security that provably reduces to hard decoding problems for quasi-cyclic codes. In particular, QC-MDPC codes underlie some of the most promising code-based key encapsulation mechanisms (KEMs) proposed to the NIST call for standardization of quantum-safe cryptography (two proposals, BIKE and QC-MDPC KEM).

The first generation of decoding algorithms suffers from a small but non-negligible decoding failure rate (DFR on the order of 10⁻⁷ to 10⁻¹⁰). This enables the key recovery attack presented by Guo, Johansson, and Stankovski (GJS attack) at Asiacrypt 2016, which exploits a small correlation between the faulty message patterns and the secret key of the scheme, and limits the usage of the scheme to KEMs using ephemeral public keys. It does not impact the interactive establishment of secure communications (e.g. TLS), but it renders the use of static public keys for asynchronous applications (e.g. email) dangerous.

Understanding and improving the decoding of QC-MDPC codes is thus of interest for cryptographic applications. In particular, finding parameters for which the failure rate is provably negligible (typically as low as 2⁻⁶⁴ or 2⁻¹²⁸) would allow static keys and increase the applicability of these cryptosystems.

We study a simple variant of bit-flipping decoding, which we call step-by-step decoding. It has a higher DFR, but its evolution can be modeled by a Markov chain within the theoretical framework of Julia Chaulet's PhD thesis. We also study two other, more efficient decoders: one is the textbook algorithm, the other is (close to) the BIKE decoder. For all these algorithms we provide simulation results and, assuming an evolution similar to the step-by-step decoder, we extrapolate the DFR as a function of the block length. This gives an indication of how much the code parameters must be increased to ensure resistance to the GJS attack.
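As background, the textbook bit-flipping idea that the variants above build on can be sketched in a few lines. This is a toy illustration, not any of the decoders studied in the paper; real QC-MDPC decoders operate on large sparse quasi-cyclic matrices with carefully tuned flipping thresholds:

```python
def bit_flip_decode(H, y, max_iters=50):
    """Toy bit-flipping decoder. Each iteration flips the bits involved in
    the largest number of unsatisfied parity checks; stops when the
    syndrome is zero. Returns the corrected word, or None on failure."""
    y = list(y)
    m, n = len(H), len(y)
    for _ in range(max_iters):
        syndrome = [sum(H[j][i] * y[i] for i in range(n)) % 2 for j in range(m)]
        if not any(syndrome):
            return y  # all parity checks satisfied
        # unsatisfied-parity-check count per bit
        upc = [sum(syndrome[j] for j in range(m) if H[j][i]) for i in range(n)]
        worst = max(upc)
        for i in range(n):
            if upc[i] == worst:
                y[i] ^= 1
    return None  # decoding failure: the event whose rate is the DFR

H = [[1, 1, 0], [0, 1, 1]]  # tiny illustrative parity-check matrix
```

The `None` branch is exactly the decoding-failure event whose frequency (the DFR) the GJS attack exploits, which is why modeling its probability matters.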
Analysis of reaction and timing attacks against cryptosystems based on sparse parity-check codes
In this paper we study reaction and timing attacks against cryptosystems
based on sparse parity-check codes, which encompass low-density parity-check
(LDPC) codes and moderate-density parity-check (MDPC) codes. We show that the
feasibility of these attacks is not strictly associated to the quasi-cyclic
(QC) structure of the code but is related to the intrinsically probabilistic
decoding of any sparse parity-check code. So, these attacks not only work
against QC codes, but can be generalized to broader classes of codes. We
provide a novel algorithm that, in the case of a QC code, allows recovering a
larger amount of information than that retrievable through existing attacks and
we use this algorithm to characterize new side-channel information leakages. We
devise a theoretical model for the decoder that describes and justifies our
results. Numerical simulations are provided that confirm the effectiveness of
our approach.
Distance Properties of Short LDPC Codes and their Impact on the BP, ML and Near-ML Decoding Performance
Parameters of LDPC codes, such as minimum distance, stopping distance,
stopping redundancy, girth of the Tanner graph, and their influence on the
frame error rate performance of the BP, ML and near-ML decoding over a BEC and
an AWGN channel are studied. Both random and structured LDPC codes are
considered. In particular, the BP decoding is applied to the code parity-check
matrices with an increasing number of redundant rows, and the convergence of
the performance to that of the ML decoding is analyzed. A comparison of the
simulated BP, ML, and near-ML performance with the improved theoretical bounds
on the error probability based on the exact weight spectrum coefficients and
the exact stopping size spectrum coefficients is presented. It is observed that
decoding performance very close to the ML decoding performance can be achieved
with a relatively small number of redundant rows for some codes, for both the
BEC and the AWGN channels.
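Over the BEC, BP decoding reduces to a peeling procedure, and a stopping set is precisely a configuration where peeling stalls; redundant parity-check rows can resolve such sets, which is the mechanism behind the redundant-row experiments above. A minimal sketch, with an illustrative parity-check matrix:

```python
def bec_peel(H, y):
    """Peeling (BP) decoder over the binary erasure channel. Entries of y
    are 0, 1, or None (erased). Any check containing exactly one erased
    position determines that bit by parity; iterate until no progress.
    Positions still None at the end form a stopping set."""
    y = list(y)
    progress = True
    while progress:
        progress = False
        for row in H:
            erased = [i for i, h in enumerate(row) if h and y[i] is None]
            if len(erased) == 1:
                i = erased[0]
                y[i] = sum(y[k] for k, h in enumerate(row) if h and k != i) % 2
                progress = True
    return y

H = [[1, 1, 0], [0, 1, 1]]  # illustrative parity-check matrix
```

If every erased position is covered only by checks with two or more erasures, the loop makes no progress: that residual set of `None` positions is a stopping set, the quantity whose spectrum the bounds above use.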
High girth column-weight-two LDPC codes based on distance graphs
Copyright © 2007 G. Malema and M. Liebelt. LDPC codes of column weight two are constructed from minimal distance graphs, or cages. Distance graphs are used to represent LDPC code matrices such that graph vertices represent rows and edges represent columns. The conversion of a distance graph into matrix form produces an adjacency matrix with column weight two and girth double that of the graph. The number of 1's in each row (the row weight) is equal to the degree of the corresponding vertex. By constructing graphs with different vertex degrees, we can vary the rate of the corresponding LDPC code matrices. Cage graphs are used as examples of distance graphs to design codes with different girths and rates. The performance of the obtained codes depends on the girth and structure of the corresponding distance graphs.
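The graph-to-matrix conversion described above (vertices as rows, edges as columns; in standard graph-theory terms, the vertex-edge incidence matrix) is straightforward to sketch:

```python
def incidence_matrix(num_vertices, edges):
    """Rows index vertices, columns index edges; each column has weight
    exactly 2 (the edge's two endpoints), and each row weight equals the
    degree of its vertex, as described in the abstract above."""
    H = [[0] * len(edges) for _ in range(num_vertices)]
    for j, (u, v) in enumerate(edges):
        H[u][j] = 1
        H[v][j] = 1
    return H

# a 4-cycle: graph girth 4, so the resulting column-weight-2 code has
# Tanner-graph girth 8 (double that of the graph)
H = incidence_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
```

Varying the vertex degrees of the input graph changes the row weights, and hence the rate of the resulting code, exactly as the abstract states.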
Optimal skeleton Huffman trees revisited
A skeleton Huffman tree is a Huffman tree in which all disjoint maximal perfect subtrees are shrunk into leaves. Skeleton Huffman trees, besides saving storage space, are also used for faster decoding and for speeding up Huffman-shaped wavelet trees. In 2017 Klein et al. introduced an optimal skeleton tree: for given symbol frequencies, it has the least number of nodes among all optimal prefix-free code trees (not necessarily Huffman's) with shrunk perfect subtrees. Klein et al. described a simple algorithm that, for fixed codeword lengths, finds a skeleton tree with the least number of nodes; with this algorithm one can process each set of optimal codeword lengths to find an optimal skeleton tree. However, there are exponentially many such sets in the worst case. We describe an (formula presented)-time algorithm that, given n symbol frequencies, constructs an optimal skeleton tree and its corresponding optimal code.
© Springer Nature Switzerland AG 2020. Supported by the Russian Science Foundation (RSF), project 18-71-00002.
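The shrinking step can be illustrated on a plain Huffman tree. This sketch shows only the perfect-subtree contraction applied to one Huffman tree, not the paper's optimal-skeleton algorithm; the helper names are our own:

```python
import heapq

class Node:
    def __init__(self, freq, left=None, right=None):
        self.freq, self.left, self.right = freq, left, right
    @property
    def leaf(self):
        return self.left is None

def huffman(freqs):
    """Standard Huffman construction; the counter breaks frequency ties."""
    heap = [(f, i, Node(f)) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    i = len(freqs)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (fa + fb, i, Node(fa + fb, a, b)))
        i += 1
    return heap[0][2]

def perfect_height(t):
    """Height of t if it is a perfect subtree, else None."""
    if t.leaf:
        return 0
    hl, hr = perfect_height(t.left), perfect_height(t.right)
    if hl is not None and hl == hr:
        return hl + 1
    return None

def skeletonize(t):
    """Shrink every maximal perfect subtree of t into a single leaf."""
    if perfect_height(t) is not None:
        return Node(t.freq)
    return Node(t.freq, skeletonize(t.left), skeletonize(t.right))

def count(t):
    return 1 if t.leaf else 1 + count(t.left) + count(t.right)
```

For four equal frequencies the Huffman tree is itself perfect (7 nodes) and shrinks to a single leaf, which is the space saving the skeleton representation exploits.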