Classical capacity of the lossy bosonic channel: the exact solution
The classical capacity of the lossy bosonic channel is calculated exactly. It
is shown that its Holevo information is not superadditive, and that a
coherent-state encoding achieves capacity. The capacity of far-field,
free-space optical communications is given as an example.
Comment: 4 pages, 2 figures (revised version)
The Reliability Function of Lossy Source-Channel Coding of Variable-Length Codes with Feedback
We consider transmission of discrete memoryless sources (DMSes) across
discrete memoryless channels (DMCs) using variable-length lossy source-channel
codes with feedback. The reliability function (optimum error exponent) is shown
to be equal to E(R, D) = C_1 (1 - R R(D)/C), where R(D) is the rate-distortion
function of the source, C_1 is the maximum relative entropy between output
distributions of the DMC, and C is the Shannon capacity of the channel. We
show that, in this setting and in this asymptotic regime, separate
source-channel coding is, in fact, optimal.
Comment: Accepted to IEEE Transactions on Information Theory in Apr. 201
A Progressive Universal Noiseless Coder
The authors combine pruned tree-structured vector quantization (pruned TSVQ) with Itoh's (1987) universal noiseless coder. By combining pruned TSVQ with universal noiseless coding, they benefit from the "successive approximation" capabilities of TSVQ, thereby allowing progressive transmission of images, while retaining the ability to noiselessly encode images of unknown statistics in a provably asymptotically optimal fashion. Noiseless compression results are comparable to Ziv-Lempel and arithmetic coding for both images and finely quantized Gaussian sources.
Lecture Notes on Network Information Theory
These lecture notes have been converted to a book titled Network Information
Theory published recently by Cambridge University Press. This book provides a
significantly expanded exposition of the material in the lecture notes as well
as problems and bibliographic notes at the end of each chapter. The authors are
currently preparing a set of slides based on the book that will be posted in
the second half of 2012. More information about the book can be found at
http://www.cambridge.org/9781107008731/. The previous (and obsolete) version of
the lecture notes can be found at http://arxiv.org/abs/1001.3404v4/
Rate-distortion Balanced Data Compression for Wireless Sensor Networks
This paper presents a data compression algorithm with error bound guarantee
for wireless sensor networks (WSNs) using compressing neural networks. The
proposed algorithm minimizes data congestion and reduces energy consumption by
exploring spatio-temporal correlations among data samples. The adaptive
rate-distortion feature balances the compressed data size (data rate) with the
required error bound guarantee (distortion level). This compression relieves
the strain on energy and bandwidth resources while collecting WSN data within
tolerable error margins, thereby increasing the scale of WSNs. The algorithm is
evaluated using real-world datasets and compared with conventional methods for
temporal and spatial data compression. The experimental validation reveals that
the proposed algorithm outperforms several existing WSN data compression
methods in terms of compression efficiency and signal reconstruction. Moreover,
an energy analysis shows that compressing the data can reduce the energy
expenditure and hence extend the service lifespan severalfold.
Comment: arXiv admin note: text overlap with arXiv:1408.294