DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
We propose a new architecture for distributed image compression from a group
of distributed data sources. The work is motivated by practical needs of
data-driven codec design, low power consumption, robustness, and data privacy.
The proposed architecture, which we refer to as Distributed Recurrent
Autoencoder for Scalable Image Compression (DRASIC), is able to train
distributed encoders and one joint decoder on correlated data sources. Its
compression performance is much better than that of training each codec
separately. Meanwhile, the performance of our distributed system with 10
distributed sources is within only 2 dB peak signal-to-noise ratio (PSNR) of
that of a single codec trained with all data sources. We experiment with
distributed sources of different correlations and show how well our data-driven
methodology matches the Slepian-Wolf theorem in Distributed Source Coding
(DSC). To the best of our knowledge, this is the first data-driven DSC
framework for general distributed code design with deep learning.
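The 2 dB gap quoted above is measured in PSNR. As a reference point, PSNR between a reference image and its reconstruction can be computed as follows (a minimal NumPy sketch; the image shapes and pixel values are illustrative):

```python
import numpy as np

def psnr(reference, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) -
                   reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: a reconstruction off by exactly 1 gray level everywhere (MSE = 1).
ref = np.full((8, 8), 100, dtype=np.uint8)
rec = np.full((8, 8), 101, dtype=np.uint8)
print(round(psnr(ref, rec), 2))  # 10*log10(255^2) ≈ 48.13 dB
```

A 2 dB PSNR gap thus corresponds to the distributed system's mean squared error being about 10^(2/10) ≈ 1.58 times that of the jointly trained codec.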
On the rate-distortion performance and computational efficiency of the Karhunen-Loeve transform for lossy data compression
We examine the rate-distortion performance and computational complexity of linear transforms for lossy data compression. The goal is to better understand the performance/complexity tradeoffs associated with using the Karhunen-Loeve transform (KLT) and its fast approximations. Since the optimal transform for transform coding is unknown in general, we investigate the performance penalties associated with using the KLT by examining cases where the KLT fails, developing a new transform that corrects the KLT's failures in those examples, and then empirically testing the performance difference between this new transform and the KLT. Experiments demonstrate that while the worst KLT can yield transform coding performance at least 3 dB worse than that of alternative block transforms, the performance penalty associated with using the KLT on real data sets seems to be significantly smaller, giving at most 0.5 dB difference in our experiments. The KLT and its fast variations studied here range in complexity requirements from O(n^2) to O(n log n) in coding vectors of dimension n. We empirically investigate the rate-distortion performance tradeoffs associated with traversing this range of options. For example, an algorithm with complexity O(n^3/2) and memory O(n) gives 0.4 dB performance loss relative to the full KLT in our image compression experiment.
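The KLT discussed here is the eigenbasis of the data covariance matrix. A minimal NumPy sketch of estimating it from training vectors and observing its energy compaction, which is what makes coarse quantization of most coefficients cheap (the synthetic correlated data is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated training vectors (rows), e.g. flattened image blocks.
n, num_samples = 8, 1000
base = rng.normal(size=(num_samples, 1))
data = base + 0.1 * rng.normal(size=(num_samples, n))  # strongly correlated

# KLT basis = eigenvectors of the empirical covariance matrix.
cov = np.cov(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = eigvecs[:, ::-1]            # columns ordered by decreasing variance

coeffs = data @ klt               # forward transform: O(n^2) per vector
energy = np.var(coeffs, axis=0)

# Most of the signal energy compacts into the first coefficient.
print(energy[0] / energy.sum())
```

The fast O(n log n) approximations mentioned in the abstract replace the dense matrix product above with structured (e.g. FFT-like) factorizations of an approximate eigenbasis.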
Multi-View Video Packet Scheduling
In multiview applications, multiple cameras acquire the same scene from
different viewpoints and generally produce correlated video streams. This
results in large amounts of highly redundant data. In order to save resources,
it is critical to properly handle this correlation during encoding and
transmission of the multiview data. In this work, we propose a
correlation-aware packet scheduling algorithm for multi-camera networks, where
information from all cameras is transmitted over a bottleneck channel to
clients that reconstruct the multiview images. The scheduling algorithm relies
on a new rate-distortion model that captures the importance of each view in the
scene reconstruction. We propose a problem formulation for the optimization of
the packet scheduling policies, which adapt to variations in the scene content.
Then, we design a low complexity scheduling algorithm based on a trellis search
that selects the subset of candidate packets to be transmitted towards
effective multiview reconstruction at clients. Extensive simulation results
confirm the gain of our scheduling algorithm when inter-source correlation
information is used in the scheduler, compared to scheduling policies with no
information about the correlation or non-adaptive scheduling policies. We
finally show that increasing the optimization horizon in the packet scheduling
algorithm improves the transmission performance, especially in scenarios where
the level of correlation varies rapidly with time.
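As a simplified stand-in for the trellis search described above, selecting a subset of candidate packets under a channel rate budget so as to maximize total distortion reduction can be sketched as a small dynamic program over cumulative-rate states. The packet sizes and utilities below are hypothetical, and the paper's actual algorithm additionally adapts these utilities to inter-view correlation:

```python
def schedule(packets, budget):
    """packets: list of (rate_cost, distortion_reduction) pairs.
    Returns (best total utility, indices of chosen packets) within budget."""
    # best[r] = (utility, chosen indices) achievable using at most r rate units
    best = [(0.0, [])] * (budget + 1)
    for i, (size, gain) in enumerate(packets):
        new = best[:]                      # 0/1 semantics: read old, write new
        for r in range(size, budget + 1):
            util, chosen = best[r - size]
            if util + gain > new[r][0]:
                new[r] = (util + gain, chosen + [i])
        best = new
    return max(best, key=lambda t: t[0])

packets = [(3, 5.0), (2, 3.5), (2, 3.0), (4, 6.0)]  # (rate, utility), hypothetical
print(schedule(packets, budget=6))                  # → (9.5, [1, 3])
```

Increasing the optimization horizon corresponds to enlarging the candidate set and budget over several transmission slots, which is why it helps most when correlation varies quickly.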
Distributed Successive Approximation Coding using Broadcast Advantage: The Two-Encoder Case
Traditional distributed source coding rarely considers the possible link
between separate encoders. However, the broadcast nature of wireless
communication in sensor networks provides a free gossip mechanism which can be
used to simplify encoding/decoding and reduce transmission power. Using this
broadcast advantage, we present a new two-encoder scheme which imitates the
ping-pong game and has a successive approximation structure. For the quadratic
Gaussian case, we prove that this scheme is successively refinable on the
(sum-rate, distortion) surface, which is characterized by the
rate-distortion region of the distributed two-encoder source coding. A
potential energy saving over conventional distributed coding is also
illustrated. This ping-pong distributed coding idea can be extended to the
multiple encoder case and provides the theoretical foundation for a new class
of distributed image coding methods in wireless scenarios.

Comment: In Proceedings of the 48th Annual Allerton Conference on
Communication, Control and Computing, University of Illinois, Monticello, IL,
September 29 - October 1, 201
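The successive-approximation structure can be illustrated with a toy bit-plane refinement in which two encoders alternate ("ping-pong"), each broadcasting one bit that halves the decoder's uncertainty interval for its own source. This sketch shows only the successive refinability; it does not reproduce the paper's correlation-exploiting, rate-distortion-optimal scheme:

```python
def refine(interval, bit):
    """Halve the decoder's uncertainty interval according to one bit."""
    lo, hi = interval
    mid = (lo + hi) / 2.0
    return (lo, mid) if bit == 0 else (mid, hi)

def encode_bit(x, interval):
    """Encoder sends which half of the current interval contains x."""
    lo, hi = interval
    return 0 if x < (lo + hi) / 2.0 else 1

x1, x2 = 0.3, 0.35                    # two correlated sources (hypothetical)
iv1, iv2 = (0.0, 1.0), (0.0, 1.0)     # decoder's knowledge of each source
for round_idx in range(8):            # encoders alternate: the "ping-pong"
    if round_idx % 2 == 0:
        b = encode_bit(x1, iv1); iv1 = refine(iv1, b)
    else:
        b = encode_bit(x2, iv2); iv2 = refine(iv2, b)

est1, est2 = sum(iv1) / 2.0, sum(iv2) / 2.0
print(abs(x1 - est1), abs(x2 - est2))  # distortion shrinks with every bit
```

In the broadcast setting of the abstract, each encoder also overhears the other's bits for free, which is what allows the scheme to simplify coding and save transmission power relative to conventional distributed coding.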
Joint Reconstruction of Multi-view Compressed Images
The distributed representation of correlated multi-view images is an
important problem that arises in vision sensor networks. This paper concentrates
on the joint reconstruction problem where the distributively compressed
correlated images are jointly decoded in order to improve the reconstruction
quality of all the compressed images. We consider a scenario where the images
captured at different viewpoints are encoded independently using common coding
solutions (e.g., JPEG, H.264 intra) with a balanced rate distribution among
different cameras. A central decoder first estimates the underlying correlation
model from the independently compressed images which will be used for the joint
signal recovery. The joint reconstruction is then cast as a constrained convex
optimization problem that reconstructs total-variation (TV) smooth images that
comply with the estimated correlation model. At the same time, we add
constraints that force the reconstructed images to be consistent with their
compressed versions. We show by experiments that the proposed joint
reconstruction scheme outperforms independent reconstruction in terms of image
quality, for a given target bit rate. In addition, the decoding performance of
our proposed algorithm compares advantageously to state-of-the-art distributed
coding schemes based on disparity learning and on the DISCOVER scheme.
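The total-variation objective used in the joint reconstruction above can be illustrated with the anisotropic discrete TV of an image, which is small for smooth images and large for images with compression-artifact-like noise (the synthetic images below are illustrative, not the paper's data):

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute horizontal and vertical differences."""
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return float(dh + dv)

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # smooth horizontal ramp
noisy = smooth + 0.1 * rng.normal(size=smooth.shape)  # artifact-like noise

# Minimizing TV subject to consistency with the compressed versions
# therefore pushes the reconstruction toward the smooth image.
print(total_variation(smooth) < total_variation(noisy))
```

In the constrained convex formulation of the abstract, this TV term is the objective, while the estimated correlation model and the compressed bitstreams supply the constraints.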
Approximate Decoding Approaches for Network Coded Correlated Data
This paper considers a framework where data from correlated sources are
transmitted with the help of network coding in ad-hoc network topologies. The
correlated data are encoded independently at sensors and network coding is
employed in the intermediate nodes in order to improve the data delivery
performance. In such settings, we focus on the problem of reconstructing the
sources at the decoder when perfect decoding is not possible due to losses or
bandwidth bottlenecks. We first show that the source data similarity can be
used at the decoder to permit decoding based on a novel and simple approximate
decoding scheme. We analyze the influence of the network coding parameters and
in particular the size of finite coding fields on the decoding performance. We
further determine the optimal field size that maximizes the expected decoding
performance as a trade-off between information loss incurred by limiting the
resolution of the source data and the error probability in the reconstructed
data. Moreover, we show that the performance of the approximate decoding
improves when the accuracy of the source model increases even with simple
approximate decoding techniques. We provide illustrative examples of possible
applications of our algorithms in sensor networks and
distributed imaging applications. In both cases, the experimental results
confirm the validity of our analysis and demonstrate the benefits of our low
complexity solution for the delivery of correlated data sources.
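The role of the finite coding field can be sketched with a toy random linear network code over GF(p): a small p forces coarse quantization of the source samples (the information-loss side of the trade-off), while exact decoding amounts to inverting the coefficient matrix modulo p. All values below are hypothetical:

```python
# Two source symbols quantized to p levels, mixed by network coding at
# intermediate nodes, and recovered by solving a 2x2 system over GF(p).
P = 17  # small prime field: only 17 quantization levels per sample

def decode(A, y, p=P):
    """Solve A x = y (mod p) for a 2x2 system via the adjugate formula."""
    (a, b), (c, d) = A
    det = (a * d - b * c) % p
    det_inv = pow(det, -1, p)  # modular inverse; raises if A is singular mod p
    x0 = (det_inv * (d * y[0] - b * y[1])) % p
    x1 = (det_inv * (a * y[1] - c * y[0])) % p
    return [x0, x1]

s = [5, 7]                    # two correlated quantized source symbols
A = [[3, 1], [2, 5]]          # coding coefficients chosen at network nodes
y = [(A[i][0] * s[0] + A[i][1] * s[1]) % P for i in range(2)]
print(decode(A, y))           # recovers [5, 7] since det(A) != 0 mod P
```

When losses leave the system underdetermined, exact inversion fails; the approximate decoding of the abstract instead uses the similarity between s[0] and s[1] as an extra (soft) equation, which is why its performance depends on both the field size and the accuracy of the source model.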