Learning to Select Pre-Trained Deep Representations with Bayesian Evidence Framework
We propose a Bayesian evidence framework to facilitate transfer learning from
pre-trained deep convolutional neural networks (CNNs). Our framework is
formulated on top of a least squares SVM (LS-SVM) classifier, which is simple
and fast in both training and testing, and achieves competitive performance in
practice. The regularization parameters of the LS-SVM are estimated
automatically, without grid search or cross-validation, by maximizing the
evidence, which is a useful measure for selecting the best-performing CNN out
of multiple candidates for transfer learning; the evidence is optimized
efficiently via Aitken's delta-squared process, which accelerates the
convergence of the fixed-point update. The
proposed Bayesian evidence framework also provides a good solution to identify
the best ensemble of heterogeneous CNNs through a greedy algorithm. Our
Bayesian evidence framework for transfer learning is tested on 12 visual
recognition datasets, where it consistently achieves state-of-the-art
performance in terms of prediction accuracy and modeling efficiency.
Comment: Appearing in CVPR-2016 (oral presentation).
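As an illustrative aside, below is a minimal Python sketch of Aitken's delta-squared process applied to a generic fixed-point update. The update function and the toy problem are placeholders, not the paper's actual evidence-maximization equations.

```python
import numpy as np

def aitken_accelerate(g, x0, tol=1e-10, max_iter=100):
    """Accelerate the fixed-point iteration x <- g(x) with Aitken's
    delta-squared process: x_acc = x - (x1 - x)^2 / (x2 - 2*x1 + x)."""
    x = x0
    for _ in range(max_iter):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < tol:          # iteration has (numerically) converged
            return x2
        x_acc = x - (x1 - x) ** 2 / denom
        if abs(x_acc - x) < tol:
            return x_acc
        x = x_acc
    return x

# Toy fixed-point problem standing in for the evidence update of a
# regularization parameter: x = cos(x) has a unique fixed point near 0.739.
print(aitken_accelerate(np.cos, 1.0))
```

Each accelerated step reuses two plain fixed-point evaluations, so the speedup comes essentially for free when the underlying update converges linearly.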
Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach
Modern neural networks are highly overparameterized, with capacity to
substantially overfit to training data. Nevertheless, these networks often
generalize well in practice. It has also been observed that trained networks
can often be "compressed" to much smaller representations. The purpose of this
paper is to connect these two empirical observations. Our main technical result
is a generalization bound for compressed networks based on the compressed size.
Combined with off-the-shelf compression algorithms, the bound leads to state of
the art generalization guarantees; in particular, we provide the first
non-vacuous generalization guarantees for realistic architectures applied to
the ImageNet classification problem. As additional evidence connecting
compression and generalization, we show that compressibility of models that
tend to overfit is limited: We establish an absolute limit on expected
compressibility as a function of expected generalization error, where the
expectations are over the random choice of training examples. The bounds are
complemented by empirical results showing that an increase in overfitting
implies an increase in the number of bits required to describe a trained
network.
Comment: 16 pages, 1 figure. Accepted at ICLR 2019.
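To make the flavor of such a bound concrete, the sketch below evaluates the simpler finite-hypothesis-class (Occam) variant, in which a model describable in k bits pays a k·ln2 complexity term. The paper's actual bound is PAC-Bayesian and tighter; the function name and the example numbers here are illustrative only.

```python
import math

def occam_compression_bound(train_error, compressed_bits, n_train, delta=0.05):
    """Occam-style bound: with probability >= 1 - delta over the training
    sample, test error <= train error + sqrt((k*ln2 + ln(1/delta)) / (2n))
    for any model describable in k bits (union bound over 2^k hypotheses)."""
    slack = math.sqrt((compressed_bits * math.log(2) + math.log(1.0 / delta))
                      / (2.0 * n_train))
    return train_error + slack

# E.g. a hypothetical network compressed to 500 kbits, trained on 1.2M examples:
print(occam_compression_bound(0.10, 500_000, 1_200_000))  # ~0.48, non-vacuous
```

The key qualitative point carries over: the fewer bits needed to describe the trained network, the smaller the complexity term, so compressibility translates directly into a generalization guarantee.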
BBQ-Networks: Efficient Exploration in Deep Reinforcement Learning for Task-Oriented Dialogue Systems
We present a new algorithm that significantly improves the efficiency of
exploration for deep Q-learning agents in dialogue systems. Our agents explore
via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop
neural network. Our algorithm learns much faster than common exploration
strategies such as ε-greedy, Boltzmann, bootstrapping, and
intrinsic-reward-based ones. Additionally, we show that spiking the replay
buffer with experiences from just a few successful episodes can make Q-learning
feasible when it might otherwise fail.
Comment: 13 pages, 9 figures.
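A minimal sketch of the exploration mechanism described above: sample one set of weights from a Gaussian (Bayes-by-Backprop-style) posterior per decision, then act greedily with respect to the sampled Q-values. The single linear Q-head and all names here are illustrative stand-ins for the paper's full network, and the variational training step is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesLinearQ:
    """Toy Bayes-by-Backprop Q-head: a factorized Gaussian posterior over the
    weights of a single linear layer mapping state features to Q-values."""
    def __init__(self, n_features, n_actions):
        self.mu = np.zeros((n_features, n_actions))        # posterior means
        self.rho = np.full((n_features, n_actions), -3.0)  # softplus pre-stddev

    def sample_weights(self):
        sigma = np.log1p(np.exp(self.rho))                 # softplus(rho) > 0
        return self.mu + sigma * rng.standard_normal(self.mu.shape)

    def thompson_action(self, state_features):
        # One Monte Carlo weight sample per decision, then act greedily:
        # the posterior's own uncertainty drives the exploration.
        w = self.sample_weights()
        return int(np.argmax(state_features @ w))

q = BayesLinearQ(n_features=8, n_actions=4)
print(q.thompson_action(rng.standard_normal(8)))
```

As the posterior concentrates during training, the sampled Q-values agree more often and exploration tapers off automatically, with no ε schedule to tune.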
Recurrent Latent Variable Networks for Session-Based Recommendation
In this work, we attempt to ameliorate the impact of data sparsity in the
context of session-based recommendation. Specifically, we seek to devise a
machine learning mechanism capable of extracting subtle and complex underlying
temporal dynamics in the observed session data, so as to inform the
recommendation algorithm. To this end, we improve upon systems that utilize
deep learning techniques with recurrently connected units; we do so by adopting
concepts from the field of Bayesian statistics, namely variational inference.
Our proposed approach consists in treating the network recurrent units as
stochastic latent variables with a prior distribution imposed over them. On
this basis, we proceed to infer corresponding posteriors; these can be used for
prediction and recommendation generation, in a way that accounts for the
uncertainty in the available sparse training data. To allow for our approach to
easily scale to large real-world datasets, we perform inference under an
approximate amortized variational inference (AVI) setup, whereby the learned
posteriors are parameterized via (conventional) neural networks. We perform an
extensive experimental evaluation of our approach using challenging benchmark
datasets, and illustrate its superiority over existing state-of-the-art
techniques.
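A minimal sketch of one such stochastic recurrent step under amortized variational inference, assuming a Gaussian posterior with a standard-normal prior and the reparameterization trick; the linear inference network and the dimensions are illustrative, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def amortized_recurrent_step(x_t, h_prev, W_mu, W_lv):
    """One stochastic recurrent step: an (amortized) inference network maps
    the current item x_t and previous latent h_prev to the parameters of a
    Gaussian posterior over the new latent state, sampled by reparameterization."""
    inp = np.concatenate([x_t, h_prev])
    mu = np.tanh(W_mu @ inp)               # posterior mean
    logvar = W_lv @ inp                    # posterior log-variance
    eps = rng.standard_normal(mu.shape)
    h_t = mu + np.exp(0.5 * logvar) * eps  # reparameterized sample
    # KL(q || N(0, I)) term contributed to the training objective (ELBO):
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return h_t, kl

d_x, d_h = 16, 32
W_mu = rng.standard_normal((d_h, d_x + d_h)) * 0.1
W_lv = rng.standard_normal((d_h, d_x + d_h)) * 0.1
h, kl = amortized_recurrent_step(rng.standard_normal(d_x), np.zeros(d_h), W_mu, W_lv)
print(h.shape, float(kl))
```

Because the posterior parameters are produced by a shared network rather than optimized per session, inference cost stays constant as the number of sessions grows, which is what makes the approach scale to large datasets.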
Task adapted reconstruction for inverse problems
The paper considers the problem of performing a task defined on a model
parameter that is only observed indirectly through noisy data in an ill-posed
inverse problem. A key aspect is to formalize the steps of reconstruction and
task as appropriate estimators (non-randomized decision rules) in statistical
estimation problems. The implementation makes use of (deep) neural networks to
provide a differentiable parametrization of the family of estimators for both
steps. These networks are combined and jointly trained against suitable
supervised training data in order to minimize a joint differentiable loss
function, resulting in an end-to-end task adapted reconstruction method. The
suggested framework is generic, yet adaptable, with a plug-and-play structure
for adjusting both the inverse problem and the task at hand. More precisely,
the data model (forward operator and statistical model of the noise) associated
with the inverse problem is exchangeable, e.g., by using neural network
architecture given by a learned iterative method. Furthermore, any task that is
encodable as a trainable neural network can be used. The approach is
demonstrated on joint tomographic image reconstruction and classification, and
on joint tomographic image reconstruction and segmentation.
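A minimal PyTorch-flavored sketch of this end-to-end composition, assuming tiny placeholder MLPs for the reconstruction and task networks and a scalar weight c trading off the two loss terms; none of these architectural choices come from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Plug-and-play composition: any differentiable reconstruction operator and
# any task encodable as a trainable module can be swapped in. The tiny MLPs,
# dimensions, and weighting `c` below are illustrative placeholders.
recon_net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))
task_net = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(list(recon_net.parameters()) + list(task_net.parameters()))

def joint_loss(y_data, x_true, labels, c=0.5):
    """End-to-end loss: c weights the task term against the reconstruction term."""
    x_hat = recon_net(y_data)        # step 1: reconstruct from indirect, noisy data
    task_out = task_net(x_hat)       # step 2: perform the task on the reconstruction
    return (1 - c) * F.mse_loss(x_hat, x_true) + c * F.cross_entropy(task_out, labels)

# One joint gradient step on a dummy batch of indirect observations:
y = torch.randn(8, 64); x = torch.randn(8, 64); lab = torch.randint(0, 10, (8,))
opt.zero_grad(); loss = joint_loss(y, x, lab); loss.backward(); opt.step()
print(float(loss))
```

Setting c = 0 recovers pure reconstruction and c = 1 a fully task-driven pipeline, so a single scalar interpolates between the two extremes the paper's framework spans.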