DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
We propose a new architecture for image compression from a group of
distributed data sources. The work is motivated by practical needs in
data-driven codec design: low power consumption, robustness, and data privacy.
The proposed architecture, which we refer to as Distributed Recurrent
Autoencoder for Scalable Image Compression (DRASIC), trains
distributed encoders and one joint decoder on correlated data sources. It
compresses substantially better than codecs trained separately on each
source. Meanwhile, our distributed system with 10 sources performs within
2 dB peak signal-to-noise ratio (PSNR) of a single codec trained on all data
sources combined. We experiment with distributed sources of varying
correlation and show how well our data-driven methodology matches the
Slepian-Wolf Theorem in Distributed Source Coding (DSC). To the best of our
knowledge, this is the first data-driven DSC framework for general
distributed code design with deep learning.
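For context, the Slepian-Wolf Theorem states that two correlated sources X and Y can be encoded separately and still be recovered by a joint decoder at any rates satisfying R_X >= H(X|Y), R_Y >= H(Y|X), and R_X + R_Y >= H(X,Y); separate encoding loses nothing relative to joint encoding as long as decoding is joint. The abstract includes no code, so the following is a minimal, hypothetical sketch of the setup it describes: one encoder per distributed source feeding a single shared decoder, trained jointly. All module names, layer sizes, and the MSE objective are illustrative assumptions; DRASIC's recurrent convolutional units and progressive (scalable) binary codes are omitted.

```python
# Minimal sketch (not the authors' code) of distributed encoders with one
# joint decoder. Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, code_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, code_channels, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):  # x: (B, 3, H, W) -> code: (B, C, H/4, W/4)
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, code_channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(code_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

num_sources = 10
encoders = nn.ModuleList(Encoder() for _ in range(num_sources))  # one per source
decoder = Decoder()                                              # shared joint decoder

opt = torch.optim.Adam(
    list(encoders.parameters()) + list(decoder.parameters()), lr=1e-3)

def train_step(batches):
    """batches[i] holds a batch of images from the i-th distributed source."""
    opt.zero_grad()
    loss = sum(F.mse_loss(decoder(encoders[i](x)), x)
               for i, x in enumerate(batches))
    loss.backward()
    opt.step()
    return float(loss)

# Toy usage: random tensors stand in for 10 correlated image sources.
batches = [torch.rand(4, 3, 32, 32) for _ in range(num_sources)]
print(train_step(batches))
```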
Restricted Recurrent Neural Networks
Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term
Memory (LSTM) and Gated Recurrent Unit (GRU), have become standard building
blocks for online learning from sequential data in many research areas,
including natural language processing and speech analysis. In this paper,
we present a new methodology that significantly reduces the number of
parameters in RNNs while maintaining performance comparable to, or even
better than, that of classical RNNs. The new proposal, referred to as
Restricted Recurrent Neural Network (RRNN), restricts the weight matrices
corresponding to the input data and hidden states at each time step to share
a large proportion of parameters. The new architecture can be regarded as a
compression of its classical counterpart, but it requires neither
pre-training nor sophisticated parameter fine-tuning, both of which are major
issues in most existing compression techniques. Experiments on natural
language modeling show that, compared with its classical counterpart, the
restricted recurrent architecture generally produces comparable results at
about a 50% compression rate. In particular, the Restricted LSTM can
outperform the classical RNN with even fewer parameters.
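The abstract does not give the exact parameterization, so the following is a minimal, hypothetical sketch of the restriction idea: a tanh RNN cell whose input-to-hidden and hidden-to-hidden matrices share a single weight matrix, with only a small low-rank correction left free for the hidden path. The equal input and hidden sizes, the rank, and the initialization are illustrative assumptions. For intuition on the compression: a classical cell of size n carries about 2n^2 weights, while sharing one n-by-n matrix plus a rank-r correction costs n^2 + 2rn, which approaches the roughly 50% compression reported above when r << n.

```python
# Minimal sketch (an illustrative assumption, not the paper's exact scheme)
# of a restricted RNN cell: W_x and W_h share one matrix, and only a
# low-rank correction is unique to the hidden-to-hidden path.
import torch
import torch.nn as nn

class RestrictedRNNCell(nn.Module):
    def __init__(self, size, rank=4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(size, size) * 0.01)  # shared weights
        self.U = nn.Parameter(torch.randn(size, rank) * 0.01)  # low-rank part...
        self.V = nn.Parameter(torch.randn(rank, size) * 0.01)  # ...free for hidden path
        self.b = nn.Parameter(torch.zeros(size))

    def forward(self, x, h):
        W_h = self.W + self.U @ self.V  # restricted hidden-to-hidden matrix
        return torch.tanh(x @ self.W.T + h @ W_h.T + self.b)

# Toy usage: unroll the cell over a short random sequence.
size = 128
cell = RestrictedRNNCell(size)
h = torch.zeros(1, size)
for x_t in torch.randn(5, 1, size):  # 5 time steps, batch of 1
    h = cell(x_t, h)
print(h.shape)  # torch.Size([1, 128])
```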