Scalable Recollections for Continual Lifelong Learning
Given the recent success of Deep Learning applied to a variety of single
tasks, it is natural to consider more human-realistic settings. Perhaps the
most difficult of these settings is that of continual lifelong learning, where
the model must learn online over a continuous stream of non-stationary data. A
successful continual lifelong learning system must have three key capabilities:
it must learn and adapt over time, it must not forget what it has learned, and
it must be efficient in both training time and memory. Recent techniques have
focused their efforts primarily on the first two capabilities while questions
of efficiency remain largely unexplored. In this paper, we consider the problem
of efficient and effective storage of experiences over very large time-frames.
In particular, we consider the case where typical experiences are O(n) bits and
memories are limited to O(k) bits for k << n. We present a novel scalable
architecture and training algorithm in this challenging domain and provide an
extensive evaluation of its performance. Our results show that we can achieve
considerable gains on top of state-of-the-art methods such as GEM.
Comment: AAAI 201
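The O(k)-bit memory constraint described above can be illustrated with a minimal sketch. This is not the paper's architecture: a random-projection sign hash stands in for the learned encoder, and all names and dimensions here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 1024, 64  # experience size O(n) vs. memory budget O(k) bits, k << n
W = rng.standard_normal((k, n)) / np.sqrt(n)  # stand-in for a learned encoder

def encode(x):
    """Compress an n-dim experience to a k-bit code (signs of a projection)."""
    return (W @ x > 0).astype(np.uint8)

def decode(bits):
    """Approximate reconstruction from the k-bit code."""
    return W.T @ (2.0 * bits - 1.0)

x = rng.standard_normal(n)  # one "experience"
code = encode(x)            # 64 bits instead of 1024 floats
x_hat = decode(code)

# Only coarse directional information survives the compression:
cos = float(x @ x_hat / (np.linalg.norm(x) * np.linalg.norm(x_hat)))
print(code.size, cos > 0)
```

The trade-off the abstract poses is visible here: the stored code is tiny, and reconstruction quality is what a learned autoencoder would have to improve on.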
DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression
We propose a new architecture for distributed image compression from a group
of distributed data sources. The work is motivated by practical needs of
data-driven codec design, low power consumption, robustness, and data privacy.
The proposed architecture, which we refer to as Distributed Recurrent
Autoencoder for Scalable Image Compression (DRASIC), is able to train
distributed encoders and one joint decoder on correlated data sources. Its
compression performance is substantially better than that of codecs trained
separately. Meanwhile, the performance of our distributed system with 10
distributed sources is only within 2 dB peak signal-to-noise ratio (PSNR) of
the performance of a single codec trained with all data sources. We experiment
with distributed sources of varying correlation and show that our data-driven
methodology matches the Slepian-Wolf theorem in Distributed Source Coding
(DSC) well. To the best of our knowledge, this is the first data-driven DSC
framework for general distributed code design with deep learning.
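The core structural idea, separate per-source encoders feeding one joint decoder, can be sketched in a toy linear setting. Linear maps and a least-squares fit stand in for the recurrent networks and joint training loop of the paper; all dimensions and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, d, code_d, n_samp = 3, 8, 4, 200

# Correlated sources: each sees a noisy view of shared underlying data.
base = rng.standard_normal((n_samp, d))
sources = [base + 0.1 * rng.standard_normal((n_samp, d)) for _ in range(n_src)]

# Source-specific encoders (random projections stand in for learned encoders).
encoders = [rng.standard_normal((d, code_d)) for _ in range(n_src)]

# One joint decoder fit on the pooled codes from every source; least squares
# stands in for the joint training described in the abstract.
Z = np.vstack([X @ E for X, E in zip(sources, encoders)])
X_all = np.vstack(sources)
decoder, *_ = np.linalg.lstsq(Z, X_all, rcond=None)

mse = float(np.mean((Z @ decoder - X_all) ** 2))
print(round(mse, 3))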
A Novel Light Field Coding Scheme Based on Deep Belief Network and Weighted Binary Images for Additive Layered Displays
Light field displays provide an immersive viewing experience through
binocular depth cues and motion parallax. Glasses-free tensor light field
display is becoming a prominent area of research in auto-stereoscopic display
technology. Stacking light attenuating layers is one of the approaches to
implement a light field display with a good depth of field, wide viewing angles
and high resolution. This paper presents a compact and efficient representation
of light field data based on scalable compression of the binary represented
image layers suitable for additive layered display using a Deep Belief Network
(DBN). The proposed scheme learns and optimizes the additive layer patterns
using a convolutional neural network (CNN). Weighted binary images represent
the optimized patterns, reducing the file size and introducing scalable
encoding. The DBN further compresses the weighted binary patterns into a latent
space representation, followed by encoding the latent data with an H.264 codec.
The proposed scheme is compared with benchmark codecs such as H.264 and H.265
and achieves competitive performance on light field data.
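The weighted binary image representation that enables the scalable encoding can be sketched as a bit-plane decomposition. The layer pattern below is random stand-in data, not an optimized display layer, and the power-of-two weights are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# An 8-bit "attenuation layer" (random stand-in for an optimized pattern).
layer = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Decompose into weighted binary images: layer = sum_b 2^b * plane_b.
planes = [((layer >> b) & 1).astype(np.uint8) for b in range(8)]

def reconstruct(planes, top_k):
    """Scalable decoding: sum only the top_k most significant planes."""
    out = np.zeros(layer.shape, dtype=np.int32)
    for b in range(7, 7 - top_k, -1):
        out += (1 << b) * planes[b]
    return out

full = reconstruct(planes, 8)    # all planes -> exact layer
coarse = reconstruct(planes, 4)  # top 4 planes -> coarse approximation
print(np.array_equal(full, layer), int(np.max(np.abs(coarse - layer))))
```

Each additional binary plane refines the reconstruction, which is what makes the bitstream scalable: a decoder can stop after any plane and still produce a usable layer.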