Re-proving Channel Polarization Theorems: An Extremality and Robustness Analysis
The general subject considered in this thesis is a recently discovered coding
technique, polar coding, which is used to construct a class of error correction
codes with unique properties. In his ground-breaking work, Arıkan proved
that this class of codes, called polar codes, achieves the symmetric capacity
--- the mutual information evaluated at the uniform input distribution --- of
any stationary binary discrete memoryless channel with low-complexity encoders
and decoders requiring on the order of O(N log N) operations in the
block-length N. This discovery settled the long-standing open problem left by
Shannon of finding low-complexity codes achieving the channel capacity.
Polar coding settled an open problem in information theory, yet opened plenty
of challenging problems that need to be addressed. A significant part of this
thesis is dedicated to advancing the knowledge about this technique in two
directions. The first one provides a better understanding of polar coding by
generalizing some of the existing results and discussing their implications,
and the second one studies the robustness of the theory over communication
models introducing various forms of uncertainty or variations into the
probabilistic model of the channel.
Comment: Preview of my PhD Thesis, EPFL, Lausanne, 2014. For the full version,
see http://people.epfl.ch/mine.alsan/publication
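The polarization phenomenon described above can be sketched concretely for the binary erasure channel (BEC), the one case where the single-step evolution of the Bhattacharyya parameter Z is exact. This toy code is illustrative and not taken from the thesis; the function names and parameter choices are ours.

```python
# Illustrative sketch: channel polarization for a binary erasure channel,
# tracked via Bhattacharyya parameters Z. For the BEC the single-step
# transform is exact:
#   worse ("minus") channel: Z -> 2Z - Z^2
#   better ("plus") channel: Z -> Z^2

def polarize(eps, n_levels):
    """Bhattacharyya parameters of the 2**n_levels synthetic channels
    obtained from a BEC(eps) by recursive application of the transform."""
    zs = [eps]
    for _ in range(n_levels):
        zs = [f(z) for z in zs for f in ((lambda z: 2 * z - z * z),
                                         (lambda z: z * z))]
    return zs

zs = polarize(0.5, 10)                 # 1024 synthetic channels
good = sum(1 for z in zs if z < 1e-3)  # nearly noiseless channels
# The mean of the Z's is preserved at every step, while individual values
# drift toward 0 or 1; asymptotically the fraction of good channels
# approaches the symmetric capacity 1 - eps.
```

Running this with eps = 0.5 shows the values clustering near 0 and 1, which is exactly the extremal behavior the thesis analyzes.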
DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
We propose a new architecture for distributed image compression from a group
of distributed data sources. The work is motivated by practical needs of
data-driven codec design, low power consumption, robustness, and data privacy.
The proposed architecture, which we refer to as Distributed Recurrent
Autoencoder for Scalable Image Compression (DRASIC), is able to train
distributed encoders and one joint decoder on correlated data sources. Its
compression capability is much better than that of codecs trained separately.
Meanwhile, the performance of our distributed system with 10 distributed
sources is within only 2 dB peak signal-to-noise ratio (PSNR) of the
performance of a single codec trained with all data sources. We experiment
with distributed sources of varying correlation and show how well our
data-driven methodology matches the Slepian-Wolf theorem in Distributed Source
Coding (DSC). To the best of our knowledge, this is the first data-driven DSC
framework for general distributed code design with deep learning.
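The key structural idea, several per-source encoders feeding one shared decoder, can be sketched without any deep-learning machinery. The sketch below substitutes random linear projections with binarization for the recurrent networks of the paper and omits training entirely; all class names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class Encoder:
    """One encoder per data source; its weights stay local to that source.
    A random linear projection plus binarization stands in for the paper's
    recurrent encoder (illustrative simplification, no training)."""
    def __init__(self, dim, code_bits):
        self.W = rng.normal(scale=dim ** -0.5, size=(code_bits, dim))

    def __call__(self, x):
        return np.sign(self.W @ x)      # compressed binary code

class JointDecoder:
    """A single decoder shared by all sources, as in the DRASIC setup."""
    def __init__(self, dim, code_bits):
        self.W = rng.normal(scale=code_bits ** -0.5, size=(dim, code_bits))

    def __call__(self, code):
        return self.W @ code            # reconstruction

dim, code_bits, n_sources = 64, 16, 10
encoders = [Encoder(dim, code_bits) for _ in range(n_sources)]
decoder = JointDecoder(dim, code_bits)

x = rng.normal(size=dim)                # a sample from source 3
x_hat = decoder(encoders[3](x))         # encode locally, decode jointly
```

The design point the sketch captures is that raw data never leaves its source; only short binary codes reach the shared decoder, which is what enables the privacy and low-power claims above.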
Zero-Delay Joint Source-Channel Coding in the Presence of Interference Known at the Encoder
Zero-delay transmission of a Gaussian source over an additive white Gaussian noise (AWGN) channel is considered in the presence of an additive Gaussian interference signal. The mean squared error (MSE) distortion is minimized under an average power constraint, assuming that the interference signal is known at the transmitter. Optimality of simple linear transmission does not hold in this setting due to the presence of the known interference signal. While the optimal encoder-decoder pair remains an open problem, various non-linear transmission schemes are proposed in this paper. In particular, interference concentration (ICO) and one-dimensional lattice (1DL) strategies, using both uniform and non-uniform quantization of the interference signal, are studied. It is shown that, in contrast to typical scalar quantization of Gaussian sources, a non-uniform quantizer whose quantization intervals become smaller as we move further from zero improves the performance. Given that the optimal decoder is the minimum MSE (MMSE) estimator, a necessary condition for the optimality of the encoder is derived, and a numerically optimized encoder (NOE) satisfying this condition is obtained. Numerical results show that 1DL with non-uniform quantization performs closest among the proposed schemes to the numerically optimized encoder while requiring significantly lower complexity.
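The unusual quantizer shape the abstract highlights, cells that shrink as we move away from zero, can be sketched with a simple power-law construction. The construction, the exponent gamma, and all function names are illustrative assumptions, not the paper's actual quantizer design.

```python
import numpy as np

def shrinking_boundaries(vmax, k, gamma=1.5):
    """Boundaries of k cells on [0, vmax] whose widths DECREASE away from
    zero (any gamma > 1 works), mirroring the non-uniform quantizer shape
    described in the abstract; the power-law form is illustrative."""
    u = np.linspace(0.0, 1.0, k + 1)
    return vmax * (1.0 - (1.0 - u) ** gamma)

def quantize(x, boundaries):
    """Map each |x| to the midpoint of its cell, preserving the sign;
    values beyond the last boundary saturate to the outermost midpoint."""
    mids = 0.5 * (boundaries[:-1] + boundaries[1:])
    idx = np.clip(np.searchsorted(boundaries, np.abs(x)) - 1,
                  0, len(mids) - 1)
    return np.sign(x) * mids[idx]

b = shrinking_boundaries(4.0, 8)
widths = np.diff(b)   # strictly decreasing: cells are finest far from zero
```

Note the contrast with Lloyd-Max quantizers for a Gaussian density, which place their finest cells near zero where the probability mass is; here the roles are reversed because large interference values are the costly ones to leave unconcentrated.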
When Network Coding and Dirty Paper Coding meet in a Cooperative Ad Hoc Network
We develop and analyze new cooperative strategies for ad hoc networks that
are more spectrally efficient than classical DF cooperative protocols. Using
analog network coding, our strategies preserve the practical half-duplex
assumption but relax the orthogonality constraint. The introduction of
interference due to non-orthogonality is mitigated thanks to precoding, in
particular Dirty Paper coding. Combined with smart power allocation, our
cooperation strategies allow us to save time and lead to more efficient use of
bandwidth and to improved network throughput with respect to classical RDF/PDF.
Comment: 7 pages, 7 figures
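The core trick behind dirty paper coding, pre-cancelling interference the transmitter already knows without spending power proportional to it, has a classical scalar cousin: Tomlinson-Harashima-style modulo precoding. The sketch below is that textbook stand-in, not the precoding scheme of this paper; the symbol names and the choice delta = 2 are illustrative.

```python
def fold(v, delta):
    """Fold v into the interval [-delta/2, delta/2)."""
    return (v + delta / 2) % delta - delta / 2

def thp_encode(s, i, delta):
    """Subtract the known interference i, then fold, so transmit power
    stays bounded no matter how strong i is (THP-style precoding, a
    classical stand-in for dirty paper coding)."""
    return fold(s - i, delta)

def thp_decode(y, delta):
    """The receiver applies the same fold; for noise small relative to
    delta this recovers s exactly, the interference 'written around'."""
    return fold(y, delta)

delta = 2.0
s, i = 0.3, 7.9               # source symbol, strong known interference
x = thp_encode(s, i, delta)   # |x| <= delta/2 = 1 despite i = 7.9
y = x + i                     # noiseless channel output
s_hat = thp_decode(y, delta)  # recovers s = 0.3
```

The folding is what makes the half-duplex relaying above affordable: the relay's known signal is removed at essentially zero power cost instead of being subtracted outright.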
A Rate-Compatible Sphere-Packing Analysis of Feedback Coding with Limited Retransmissions
Recent work by Polyanskiy et al. and Chen et al. has excited new interest in
using feedback to approach capacity with low latency. Polyanskiy showed that
feedback identifying the first symbol at which decoding is successful allows
capacity to be approached with surprisingly low latency. This paper uses Chen's
rate-compatible sphere-packing (RCSP) analysis to study what happens when
symbols must be transmitted in packets, as with a traditional hybrid ARQ
system, and limited to relatively few (six or fewer) incremental transmissions.
Numerical optimizations find the series of progressively growing cumulative
block lengths that enable RCSP to approach capacity with the minimum possible
latency. RCSP analysis shows that five incremental transmissions are sufficient
to achieve 92% of capacity with an average block length of fewer than 101
symbols on the AWGN channel with SNR of 2.0 dB.
The RCSP analysis provides a decoding error trajectory that specifies the
decoding error rate for each cumulative block length. Though RCSP is an
idealization, an example tail-biting convolutional code matches the RCSP
decoding error trajectory and achieves 91% of capacity with an average block
length of 102 symbols on the AWGN channel with SNR of 2.0 dB. We also show how
RCSP analysis can be used in cases where packets have deadlines associated with
them (leading to an outage probability).
Comment: To be published at the 2012 IEEE International Symposium on
Information Theory, Cambridge, MA, USA. Updated to incorporate reviewers'
comments and add a new figure.
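The bookkeeping behind an "average block length" for incremental transmissions is simple expectation arithmetic, and a small sketch makes it concrete. The block lengths and probabilities below are illustrative numbers, not the paper's optimized values.

```python
def expected_blocklength(cum_lengths, p_first_success):
    """Expected number of transmitted symbols for a scheme with cumulative
    block lengths N_1 < ... < N_m, where p_first_success[j] is the
    probability that decoding first succeeds after increment j. Residual
    probability mass is an outage, charged the full final length N_m."""
    p_out = 1.0 - sum(p_first_success)
    return (sum(p * n for p, n in zip(p_first_success, cum_lengths))
            + p_out * cum_lengths[-1])

def throughput(k, cum_lengths, p_first_success):
    """Rate in message bits per channel symbol: k bits over the expected
    blocklength."""
    return k / expected_blocklength(cum_lengths, p_first_success)

# Illustrative five-increment example (numbers are NOT from the paper).
Ns = [64, 80, 96, 112, 128]
ps = [0.70, 0.15, 0.08, 0.04, 0.02]   # 1% outage mass remains
avg_n = expected_blocklength(Ns, ps)  # about 72.8 symbols
```

Early decoding successes dominate the expectation, which is why feedback that stops transmission as soon as decoding succeeds shortens the average block length so dramatically relative to the worst-case length.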
A Shannon Approach to Secure Multi-party Computations
In secure multi-party computations (SMC), parties wish to compute a function
on their private data without revealing more information about their data than
what the function reveals. In this paper, we investigate two Shannon-type
questions on this problem. We first consider the traditional one-shot model for
SMC which does not assume a probabilistic prior on the data. In this model,
private communication and randomness are the key enablers to secure computing,
and we investigate a notion of randomness cost and capacity. We then move to a
probabilistic model for the data, and propose a Shannon model for discrete
memoryless SMC. In this model, correlations among data are the key enablers for
secure computing, and we investigate a notion of dependency which permits the
secure computation of a function. While the models and questions are general,
this paper focuses on summation functions, and relies on polar code
constructions.
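For the summation functions the abstract focuses on, the textbook way to see how randomness enables secure computation is additive secret sharing over a finite field. The sketch below is that classical construction, not the polar-code-based one the paper develops; the modulus and function names are illustrative.

```python
import random

P = 2 ** 61 - 1   # a large prime modulus (illustrative choice)

def share(x, n_parties):
    """Split x into n additive shares; any n-1 of them are jointly
    uniform, so they reveal nothing about x on their own."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def secure_sum(inputs):
    """Each party shares its input with every party; each party locally
    sums the shares it receives; those per-party sums add up to the
    total, and no party ever sees another party's raw input."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]   # one row per party
    col_sums = [sum(row[j] for row in all_shares) % P for j in range(n)]
    return sum(col_sums) % P

total = secure_sum([5, 11, 42])   # == 58, yet no input was revealed
```

Here private channels carry the shares and fresh randomness hides each input, which is exactly the "randomness cost" the one-shot model above seeks to quantify.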