Distributed Successive Approximation Coding using Broadcast Advantage: The Two-Encoder Case
Traditional distributed source coding rarely considers the possible link
between separate encoders. However, the broadcast nature of wireless
communication in sensor networks provides a free gossip mechanism which can be
used to simplify encoding/decoding and reduce transmission power. Using this
broadcast advantage, we present a new two-encoder scheme which imitates the
ping-pong game and has a successive approximation structure. For the quadratic
Gaussian case, we prove that this scheme is successively refinable on the
{sum-rate, distortion pair} surface, which is characterized by the
rate-distortion region of the distributed two-encoder source coding. A
potential energy saving over conventional distributed coding is also
illustrated. This ping-pong distributed coding idea can be extended to the
multiple-encoder case and provides the theoretical foundation for a new class
of distributed image coding methods in wireless scenarios.
Comment: In Proceedings of the 48th Annual Allerton Conference on
Communication, Control and Computing, University of Illinois, Monticello, IL,
September 29 - October 1, 201
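The successive-approximation idea behind the ping-pong scheme can be sketched with a toy scalar exchange (the variable names, uniform quantizer, and step-halving schedule below are illustrative assumptions, not the paper's actual coding scheme): two encoders with correlated observations alternate broadcasts, each sending a quantized residual against a shared running estimate that both sides maintain by overhearing.

```python
# Toy "ping-pong" successive-approximation exchange between two encoders
# that overhear each other's broadcasts. Each round, one encoder quantizes
# the residual between its observation and the shared running estimate;
# both sides apply the update, so later rounds need finer (cheaper) steps.
import random

def quantize(x: float, step: float) -> float:
    """Uniform scalar quantizer with the given step size."""
    return step * round(x / step)

random.seed(0)
common = random.gauss(0, 1)             # correlated component seen by both
x1 = common + 0.1 * random.gauss(0, 1)  # encoder 1's observation
x2 = common + 0.1 * random.gauss(0, 1)  # encoder 2's observation

estimate, step = 0.0, 1.0
for rnd in range(8):                    # encoders alternate: 1, 2, 1, 2, ...
    source = x1 if rnd % 2 == 0 else x2
    estimate += quantize(source - estimate, step)  # broadcast the residual
    step /= 2                           # successive approximation: halve step

# After the last round the shared estimate is within half a quantizer step
# of the last speaker's observation (here encoder 2).
assert abs(estimate - x2) <= 2 ** -8
```

Because each update lands within half a step of the current speaker's observation, and the two observations are strongly correlated, the residuals shrink round after round and the final precision is set by the last (smallest) step.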
Multiuser Successive Refinement and Multiple Description Coding
We consider the multiuser successive refinement (MSR) problem, where the
users are connected to a central server via links with different noiseless
capacities, and each user wishes to reconstruct in a successive-refinement
fashion. An achievable region is given for the two-user two-layer case and it
provides the complete rate-distortion region for the Gaussian source under the
MSE distortion measure. The key observation is that this problem includes the
multiple description (MD) problem (with two descriptions) as a subsystem, and
the techniques useful in the MD problem can be extended to this case. We show
that the coding scheme based on the universality of random binning is
sub-optimal, because multiple Gaussian side informations only at the decoders
do incur performance loss, in contrast to the case of single side information
at the decoder. We further show that unlike the single user case, when there
are multiple users, the loss of performance by a multistage coding approach can
be unbounded for the Gaussian source. The result suggests that in such a
setting, the benefit of using successive refinement is not likely to justify
the accompanying performance loss. The MSR problem is also related to the
source coding problem where each decoder has its individual side information,
while the encoder has the complete set of the side informations. The MSR
problem further includes several variations of the MD problem, for which the
specialization of the general result is investigated and the implication is
discussed.
Comment: 10 pages, 5 figures. To appear in IEEE Transactions on Information
Theory. References updated and typos corrected.
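The contrast with the single-user case can be made concrete using the classical fact (standard rate-distortion theory, not a result of this paper) that a single Gaussian source under MSE is successively refinable: the rate-distortion function R(D) = (1/2) log2(sigma^2/D) splits additively across layers, so multistage coding incurs no rate loss for one user.

```python
# Sanity check of single-user Gaussian successive refinability:
# a base layer at distortion D1 plus a refinement layer of rate
# 0.5*log2(D1/D2) reaches distortion D2 at exactly R(D2) total rate.
import math

def R(D: float, var: float = 1.0) -> float:
    """Gaussian rate-distortion function (bits/sample), 0 < D <= var."""
    return 0.5 * math.log2(var / D)

D1, D2 = 0.25, 0.01                       # layer-1 and layer-2 targets
rate_layer1 = R(D1)                       # base-layer rate
rate_layer2 = 0.5 * math.log2(D1 / D2)    # refinement-layer rate

# No multistage penalty: the two layers sum to the one-shot rate.
assert math.isclose(rate_layer1 + rate_layer2, R(D2))
```

The abstract's point is that this additivity breaks down with multiple users: the multistage loss can then be unbounded even for the Gaussian source.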
Multiresolution vector quantization
Multiresolution source codes are data compression algorithms yielding embedded source descriptions. The decoder of a multiresolution code can build a source reproduction by decoding the embedded bit stream in part or in whole. All decoding procedures start at the beginning of the binary source description and decode some fraction of that string. Decoding a small portion of the binary string gives a low-resolution reproduction; decoding more yields a higher resolution reproduction; and so on. Multiresolution vector quantizers are block multiresolution source codes. This paper introduces algorithms for designing fixed- and variable-rate multiresolution vector quantizers. Experiments on synthetic data demonstrate performance close to the theoretical performance limit. Experiments on natural images demonstrate performance improvements of up to 8 dB over tree-structured vector quantizers. Some of the lessons learned through multiresolution vector quantizer design lend insight into the design of more sophisticated multiresolution codes.
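The embedded-bit-stream property described above can be illustrated with a toy scalar example (a didactic stand-in, not the paper's vector quantizer design): the binary expansion of a number in [0, 1) is an embedded description, and decoding any prefix yields a valid reproduction whose distortion is bounded by half the width of the remaining uncertainty cell.

```python
# Toy embedded (multiresolution) source description: a scalar in [0, 1)
# is encoded as its binary expansion, and decoding any prefix of the bit
# stream yields a progressively finer reproduction.

def encode(x: float, n_bits: int) -> list[int]:
    """Binary expansion of x in [0, 1): an embedded bit stream."""
    bits = []
    for _ in range(n_bits):
        x *= 2
        b = int(x)
        bits.append(b)
        x -= b
    return bits

def decode(bits: list[int]) -> float:
    """Reconstruct from any prefix: midpoint of the remaining cell."""
    lo, hi = 0.0, 1.0
    for b in bits:
        mid = (lo + hi) / 2
        if b:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2  # coarse if few bits decoded, finer with more

x = 0.6180339887
stream = encode(x, 16)
for k in (2, 4, 8, 16):
    err = abs(x - decode(stream[:k]))
    assert err <= 2 ** -(k + 1)  # error bounded by half the cell width
```

Decoding k bits narrows the cell to width 2^-k, so the midpoint reconstruction is within 2^-(k+1) of the source; the same prefix-decodability is what a multiresolution vector quantizer provides for blocks of samples.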
Incremental Refinements and Multiple Descriptions with Feedback
It is well known that independent (separate) encoding of K correlated sources
may incur some rate loss compared to joint encoding, even if the decoding is
done jointly. This loss is particularly evident in the multiple descriptions
problem, where the sources are repetitions of the same source, but each
description must be individually good. We observe that under mild conditions
about the source and distortion measure, the rate ratio Rindependent(K)/Rjoint
goes to one in the limit of small rate/high distortion. Moreover, we consider
the excess rate with respect to the rate-distortion function, Rindependent(K,
M) - R(D), in M rounds of K independent encodings with a final distortion level
D. We provide two examples - a Gaussian source with mean-squared error and an
exponential source with one-sided error - for which the excess rate vanishes in
the limit as the number of rounds M goes to infinity, for any fixed D and K.
This result has an interesting interpretation for a multi-round variant of the
multiple descriptions problem, where after each round the encoder gets a
(block) feedback regarding which of the descriptions arrived: In the limit as
the number of rounds M goes to infinity (i.e., many incremental rounds), the
total rate of received descriptions approaches the rate-distortion function. We
provide theoretical and experimental evidence showing that this phenomenon is
in fact more general than in the two examples above.
Comment: 62 pages. Accepted in the IEEE Transactions on Information Theory.
Bandwidth-Agile Image Transmission with Deep Joint Source-Channel Coding
We propose deep learning based communication methods for adaptive-bandwidth transmission of images over wireless channels. We consider the scenario in which images are transmitted progressively in layers over time or frequency, and such layers can be aggregated by receivers in order to increase the quality of their reconstructions. We investigate two scenarios, one in which the layers are sent sequentially and incrementally contribute to the refinement of a reconstruction, and another in which the layers are independent and can be retrieved in any order. These scenarios correspond to the well-known problems of successive refinement and multiple descriptions, respectively, in the context of joint source-channel coding (JSCC). We propose DeepJSCC-l, an innovative solution that uses convolutional autoencoders, and present three architectures with different complexity trade-offs. To the best of our knowledge, this is the first practical multiple-description JSCC scheme developed and tested for practical information sources and channels. Numerical results show that DeepJSCC-l can learn to transmit the source progressively with negligible losses in end-to-end performance compared with a single transmission. Moreover, DeepJSCC-l has performance comparable to state-of-the-art digital progressive transmission schemes in the challenging low signal-to-noise ratio (SNR) and small bandwidth regimes, with the additional advantage of graceful degradation with channel SNR.
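Stripped of the learned networks, the bandwidth-agile layered idea can be sketched with a purely linear analog scheme (an illustrative stand-in for DeepJSCC-l, not its architecture): the source is projected onto successive blocks of an orthonormal basis, each block is one "layer" sent over an AWGN channel, and the receiver reconstructs from however many layers it has aggregated.

```python
# Linear sketch of bandwidth-agile layered transmission over AWGN:
# reconstruction quality improves monotonically as layers are aggregated.
import numpy as np

rng = np.random.default_rng(0)
n, layers = 64, 4
x = rng.standard_normal(n)                         # "image" to transmit
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal basis
coeffs = Q.T @ x                                   # analog "encoding"

snr = 100.0                                        # channel SNR (linear scale)
noisy = coeffs + rng.standard_normal(n) / np.sqrt(snr)  # AWGN channel

mses = []
for k in range(1, layers + 1):
    m = k * n // layers                            # aggregate first k layers
    xhat = Q[:, :m] @ noisy[:m]                    # partial reconstruction
    mses.append(np.mean((x - xhat) ** 2))

# Each extra layer recovers more signal energy than the noise it admits.
assert mses == sorted(mses, reverse=True)
```

The learned scheme replaces the fixed linear projection with trained convolutional encoders/decoders, but the progressive-aggregation behavior being tested is the same.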