Hiding Symbols and Functions: New Metrics and Constructions for Information-Theoretic Security
We present information-theoretic definitions and results for analyzing
symmetric-key encryption schemes beyond the perfect secrecy regime, i.e. when
perfect secrecy is not attained. We adopt two lines of analysis, one based on
lossless source coding, and another akin to rate-distortion theory. We start by
presenting a new information-theoretic metric for security, called symbol
secrecy, and derive associated fundamental bounds. We then introduce
list-source codes (LSCs), which are a general framework for mapping a key
length (entropy) to a list size that an eavesdropper has to resolve in order to
recover a secret message. We provide explicit constructions of LSCs, and
demonstrate that, when the source is uniformly distributed, the highest level
of symbol secrecy for a fixed key length can be achieved through a construction
based on maximum distance separable (MDS) codes. Using an analysis related to
rate-distortion theory, we then show how symbol secrecy can be used to
determine the probability that an eavesdropper correctly reconstructs functions
of the original plaintext. We illustrate how these bounds can be applied to
characterize security properties of symmetric-key encryption schemes, and, in
particular, extend security claims based on symbol secrecy to a functional
setting.

Comment: Submitted to IEEE Transactions on Information Theory
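As a toy illustration of the key-length-to-list-size idea (a partial one-time pad, not the paper's MDS-based LSC construction; all names here are hypothetical), an eavesdropper who sees a ciphertext in which only the first t symbols were padded must resolve a list of 2^t candidate plaintexts:

```python
import itertools

def candidate_list(ciphertext, key_len):
    """Enumerate all plaintexts consistent with a ciphertext whose first
    `key_len` bits were one-time-padded (toy scheme for illustration only,
    not the paper's MDS-based list-source code)."""
    candidates = []
    for key in itertools.product([0, 1], repeat=key_len):
        # Undo a hypothetical key on the padded prefix; the tail is in the clear.
        plain = tuple(c ^ k for c, k in zip(ciphertext, key)) + tuple(ciphertext[key_len:])
        candidates.append(plain)
    return candidates

msg = (1, 0, 1, 1)
key = (0, 1)                                  # 2-bit key pads the first 2 symbols
ct = tuple(m ^ k for m, k in zip(msg, key)) + msg[2:]
cands = candidate_list(ct, len(key))
print(len(cands))                             # 4 = 2**2: list size grows with key bits
print(msg in cands)                           # True: the true message is always listed
```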
Sparse Regression Codes for Multi-terminal Source and Channel Coding
We study a new class of codes for Gaussian multi-terminal source and channel
coding. These codes are designed using the statistical framework of
high-dimensional linear regression and are called Sparse Superposition or
Sparse Regression codes. Codewords are linear combinations of subsets of
columns of a design matrix. These codes were recently introduced by Barron and
Joseph and shown to achieve the channel capacity of AWGN channels with
computationally feasible decoding. They have also recently been shown to
achieve the optimal rate-distortion function for Gaussian sources. In this
paper, we demonstrate how to implement random binning and superposition coding
using sparse regression codes. In particular, with minimum-distance
encoding/decoding it is shown that sparse regression codes attain the optimal
information-theoretic limits for a variety of multi-terminal source and channel
coding problems.

Comment: 9 pages, appeared in the Proceedings of the 50th Annual Allerton
Conference on Communication, Control, and Computing, 2012
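A minimal sketch of how a sparse regression codeword is formed, assuming unit coefficients and toy dimensions (the Barron-Joseph construction sets section sizes and coefficient values from the rate and SNR):

```python
import numpy as np

rng = np.random.default_rng(0)

# A sparse regression codeword is A @ beta, where beta has exactly one
# nonzero entry in each of L sections of M columns of the design matrix.
n, L, M = 16, 4, 8
A = rng.standard_normal((n, L * M))      # i.i.d. Gaussian design matrix

beta = np.zeros(L * M)
picks = rng.integers(0, M, size=L)       # one column index chosen per section
for l, j in enumerate(picks):
    beta[l * M + j] = 1.0                # unit coefficients for simplicity

codeword = A @ beta                      # linear combination of L chosen columns
print(codeword.shape)                    # (16,)
print(np.count_nonzero(beta))            # 4 nonzeros, one per section
```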
Empirical Coordination with Channel Feedback and Strictly Causal or Causal Encoding
In multi-terminal networks, feedback increases the capacity region and helps
communication devices to coordinate. In this article, we deepen the
relationship between coordination and feedback by considering a point-to-point
scenario with an information source and a noisy channel. Empirical coordination
is achievable if the encoder and the decoder can implement sequences of symbols
that are jointly typical for a target probability distribution. We investigate
the impact of feedback when the encoder has strictly causal or causal
observation of the source symbols. For both cases, we characterize the optimal
information constraints and we show that feedback improves coordination
possibilities. Surprisingly, feedback also reduces the number of auxiliary
random variables and simplifies the information constraints. For empirical
coordination with strictly causal encoding and feedback, the information
constraint does not involve auxiliary random variable anymore.Comment: 5 pages, 6 figures, presented at IEEE International Symposium on
Information Theory (ISIT) 201
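Empirical coordination asks that the joint type of the (source, action) sequence converge to a target distribution. A small sketch of the quantity that must vanish, here measured in total variation (an illustrative check, not the paper's achievability scheme):

```python
from collections import Counter

def empirical_tv_distance(pairs, target):
    """Total-variation distance between the empirical joint type of
    (source, action) pairs and a target joint distribution -- the
    quantity that must vanish for empirical coordination."""
    n = len(pairs)
    emp = Counter(pairs)
    support = set(emp) | set(target)
    return 0.5 * sum(abs(emp.get(s, 0) / n - target.get(s, 0.0)) for s in support)

target = {(0, 0): 0.5, (1, 1): 0.5}            # decoder action should copy the source
good = [(0, 0)] * 5 + [(1, 1)] * 5             # perfectly coordinated sequence
bad = [(0, 1)] * 10                            # never coordinated
print(empirical_tv_distance(good, target))     # 0.0
print(empirical_tv_distance(bad, target))      # 1.0
```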
Lossy source coding using belief propagation and soft-decimation over LDGM codes
This paper focuses on the lossy compression of a binary symmetric source. We propose a new algorithm for binary quantization over low-density generator matrix (LDGM) codes. The proposed algorithm is a modified version of the belief propagation (BP) algorithm used in the channel coding framework and has linear complexity in the code block length. We also provide a common framework under which the proposed algorithm and some previously proposed algorithms fit. Simulation results show that our scheme achieves close to state-of-the-art performance with reduced complexity.
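The benchmark such quantizers approach is the Shannon rate-distortion function of the binary symmetric source, R(D) = 1 - h(D) under Hamming distortion. A short sketch of that bound:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bss_rate_distortion(D):
    """Shannon rate-distortion function R(D) = 1 - h(D) for a Bernoulli(1/2)
    source under Hamming distortion, the limit LDGM quantizers aim for."""
    return max(0.0, 1.0 - h2(D))

print(bss_rate_distortion(0.11))   # roughly 0.5 bits/symbol at distortion 0.11
print(bss_rate_distortion(0.5))    # 0.0: random guessing already achieves D = 1/2
```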
Entropy-constrained scalar quantization with a lossy-compressed bit
We consider the compression of a continuous real-valued source X using scalar quantizers and average squared error distortion D. Using lossless compression of the quantizer's output, Gish and Pierce showed that uniform quantizing yields the smallest output entropy in the limit D -> 0, resulting in a rate penalty of 0.255 bits/sample above the Shannon Lower Bound (SLB). We present a scalar quantization scheme named lossy-bit entropy-constrained scalar quantization (Lb-ECSQ) that is able to reduce the D -> 0 gap to the SLB to 0.251 bits/sample by combining both lossless and binary lossy compression of the quantizer's output. We also study the low-resolution regime and show that Lb-ECSQ significantly outperforms ECSQ in the case of 1-bit quantization.

The authors wish to thank Tobias Koch and Gonzalo Vázquez Vilar for fruitful discussions and helpful comments on the manuscript. This work has been supported in part by the European Union 7th Framework Programme through the Marie Curie Initial Training Network "Machine Learning for Personalized Medicine" MLPM2012, Grant No. 316861, by the Spanish Ministry of Economy and Competitiveness and Ministry of Education under grants TEC2016-78434-C3-3-R (MINECO/FEDER, EU) and IJCI-2014-19150, and by Comunidad de Madrid (project 'CASI-CAM-CM', id. S2013/ICE-2845).
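The 0.255 bits/sample figure is the classical high-resolution gap (1/2) log2(2*pi*e/12), which a quick computation confirms:

```python
import math

# Gish-Pierce high-resolution gap of uniform entropy-coded scalar
# quantization above the Shannon Lower Bound: (1/2) * log2(2*pi*e / 12)
# bits per sample -- the 0.255 figure that Lb-ECSQ narrows to about 0.251.
gap = 0.5 * math.log2(2 * math.pi * math.e / 12)
print(round(gap, 3))   # 0.255
```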