Lessons from Building Acoustic Models with a Million Hours of Speech
This is a report of our lessons learned building acoustic models from 1
million hours of unlabeled speech, while labeled speech is restricted to 7,000
hours. We employ student/teacher training on unlabeled data, which scales out
target generation compared with confidence-model-based methods that require a
decoder and a confidence model. To optimize storage and to parallelize target
generation, we store only high-valued logits from the teacher model.
Introducing the notion of scheduled learning, we interleave learning on
unlabeled and labeled data. To scale distributed training across a large number
of GPUs, we use BMUF with 64 GPUs, while performing sequence training only on
labeled data with gradient threshold compression SGD using 16 GPUs. Our
experiments show that extremely large amounts of data are indeed useful; with
little hyper-parameter tuning, we obtain relative WER improvements in the 10 to
20% range, with higher gains in noisier conditions.
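
Below is a minimal sketch of the student/teacher target-generation idea described above, assuming a PyTorch-style frame-level acoustic model. The function names, the top-k value, and the temperature are illustrative assumptions, not details from the paper.

import torch
import torch.nn.functional as F

def generate_targets(teacher, unlabeled_batch, k=20):
    # Run the teacher on unlabeled audio and keep only the top-k logits per
    # frame, mirroring the storage-saving idea in the abstract. The teacher
    # model and the (frames, num_senones) output layout are assumptions.
    with torch.no_grad():
        logits = teacher(unlabeled_batch)            # (frames, num_senones)
        values, indices = torch.topk(logits, k, dim=-1)
    return values, indices                            # store these, not the full logits

def student_loss(student_logits, topk_values, topk_indices, temperature=1.0):
    # Cross-entropy of the student against the teacher's sparse soft targets.
    teacher_probs = F.softmax(topk_values / temperature, dim=-1)
    student_logprobs = F.log_softmax(student_logits / temperature, dim=-1)
    picked = torch.gather(student_logprobs, -1, topk_indices)
    return -(teacher_probs * picked).sum(dim=-1).mean()

In a scheduled-learning setup, batches built from these stored targets would be interleaved with batches of labeled data during training.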
t-Exponential Memory Networks for Question-Answering Machines
Recent advances in deep learning have brought to the fore models that can
make multiple computational steps in the service of completing a task; these
are capable of describing long-term dependencies in sequential data. Novel
recurrent attention models over possibly large external memory modules
constitute the core mechanisms that enable these capabilities. Our work
addresses learning subtler and more complex underlying temporal dynamics in
language modeling tasks that deal with sparse sequential data. To this end, we
improve upon these recent advances, by adopting concepts from the field of
Bayesian statistics, namely variational inference. Our proposed approach
consists in treating the network parameters as latent variables with a prior
distribution imposed over them. Our statistical assumptions go beyond the
standard practice of postulating Gaussian priors. Indeed, to allow for handling
outliers, which are prevalent in long observed sequences of multivariate data,
multivariate t-exponential distributions are imposed. On this basis, we proceed
to infer corresponding posteriors; these can be used for inference and
prediction at test time, in a way that accounts for the uncertainty in the
available sparse training data. Specifically, to allow for our approach to best
exploit the merits of the t-exponential family, our method considers a new
t-divergence measure, which generalizes the concept of the Kullback-Leibler
divergence. We perform an extensive experimental evaluation of our approach,
using challenging language modeling benchmarks, and illustrate its superiority
over existing state-of-the-art techniques.
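
As a rough illustration of treating network parameters as latent variables with a prior, here is a sketch of a variational layer with a heavy-tailed Student-t prior and a Monte Carlo divergence term. The paper's multivariate t-exponential distributions and t-divergence are not reproduced here; this is only a simplified stand-in.

import torch
import torch.distributions as dist

class BayesianLinear(torch.nn.Module):
    # Variational posterior over a weight matrix: diagonal Gaussian (illustrative;
    # the paper instead uses t-exponential family posteriors and a t-divergence).
    def __init__(self, d_in, d_out, prior_df=4.0):
        super().__init__()
        self.mu = torch.nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = torch.nn.Parameter(torch.full((d_out, d_in), -3.0))
        # Heavy-tailed prior to tolerate outliers, as motivated in the abstract.
        self.prior = dist.StudentT(df=prior_df, loc=0.0, scale=1.0)

    def forward(self, x):
        sigma = self.log_sigma.exp()
        w = self.mu + sigma * torch.randn_like(self.mu)   # reparameterized sample
        q = dist.Normal(self.mu, sigma)
        # Monte Carlo estimate of the divergence penalty (stand-in for the t-divergence).
        kl = (q.log_prob(w) - self.prior.log_prob(w)).sum()
        return x @ w.t(), kl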
Unbounded Human Learning: Optimal Scheduling for Spaced Repetition
In the study of human learning, there is broad evidence that our ability to
retain information improves with repeated exposure and decays with delay since
last exposure. This plays a crucial role in the design of educational software,
leading to a trade-off between teaching new material and reviewing what has
already been taught. A common way to balance this trade-off is spaced
repetition, which uses periodic review of content to improve long-term
retention. Though spaced repetition is widely used in practice, e.g., in
electronic flashcard software, there is little formal understanding of the
design of these systems. Our paper addresses this gap in three ways. First, we
mine log data from spaced repetition software to establish the functional
dependence of retention on reinforcement and delay. Second, we use this memory
model to develop a stochastic model for spaced repetition systems. We propose a
queueing network model of the Leitner system for reviewing flashcards, along
with a heuristic approximation that admits a tractable optimization problem for
review scheduling. Finally, we empirically evaluate our queueing model through
a Mechanical Turk experiment, verifying a key qualitative prediction of our
model: the existence of a sharp phase transition in learning outcomes upon
increasing the rate of new item introductions.
Comment: Accepted to the ACM SIGKDD Conference on Knowledge Discovery and Data Mining 201
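
The trade-off between introducing new items and reviewing old ones can be illustrated with a toy Leitner-system simulation. The exponential-forgetting memory model, deck strengths, and rates below are illustrative assumptions, not the paper's calibrated queueing model.

import math
import random

def simulate_leitner(n_decks=5, review_rate=1.0, arrival_rate=0.2,
                     horizon=10_000.0, seed=0):
    # Items recalled correctly move up a deck, forgotten items move down.
    # Recall probability decays with delay since last review: exp(-delay / strength),
    # where strength grows with the deck index (an assumed memory model).
    rng = random.Random(seed)
    decks = [[] for _ in range(n_decks)]   # each entry: (item_id, last_review_time)
    mastered, t, next_item = 0, 0.0, 0
    while t < horizon:
        t += rng.expovariate(review_rate + arrival_rate)
        if rng.random() < arrival_rate / (review_rate + arrival_rate):
            decks[0].append((next_item, t))            # new item enters the first deck
            next_item += 1
            continue
        nonempty = [i for i, d in enumerate(decks) if d]
        if not nonempty:
            continue
        i = rng.choice(nonempty)                       # pick a deck to review (a simple heuristic)
        item, last = decks[i].pop(rng.randrange(len(decks[i])))
        p_recall = math.exp(-(t - last) / (2.0 ** i))  # higher decks forget more slowly
        if rng.random() < p_recall:
            if i + 1 == n_decks:
                mastered += 1                          # promoted out of the last deck
            else:
                decks[i + 1].append((item, t))
        else:
            decks[max(i - 1, 0)].append((item, t))
    return mastered

print(simulate_leitner())

Sweeping arrival_rate upward in such a simulation is one way to see the qualitative effect the paper tests: past a critical introduction rate, items pile up in the lower decks and learning outcomes collapse.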
Distributed Bayesian Learning with Stochastic Natural-gradient Expectation Propagation and the Posterior Server
This paper makes two contributions to Bayesian machine learning algorithms.
Firstly, we propose stochastic natural gradient expectation propagation (SNEP),
a novel alternative to expectation propagation (EP), a popular variational
inference algorithm. SNEP is a black box variational algorithm, in that it does
not require any simplifying assumptions on the distribution of interest, beyond
the existence of some Monte Carlo sampler for estimating the moments of the EP
tilted distributions. Further, as opposed to EP which has no guarantee of
convergence, SNEP can be shown to be convergent, even when using Monte Carlo
moment estimates. Secondly, we propose a novel architecture for distributed
Bayesian learning which we call the posterior server. The posterior server
allows scalable and robust Bayesian learning in cases where a data set is
stored in a distributed manner across a cluster, with each compute node
containing a disjoint subset of data. An independent Monte Carlo sampler is run
on each compute node, with direct access only to the local data subset, but
which targets an approximation to the global posterior distribution given all
data across the whole cluster. This is achieved by using a distributed
asynchronous implementation of SNEP to pass messages across the cluster. We
demonstrate SNEP and the posterior server on distributed Bayesian learning of
logistic regression and neural networks.
Keywords: Distributed Learning, Large Scale Learning, Deep Learning, Bayesian
Learning, Variational Inference, Expectation Propagation, Stochastic
Approximation, Natural Gradient, Markov chain Monte Carlo, Parameter Server,
Posterior Server.
Comment: 37 pages, 7 figures
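
To make the posterior-server idea concrete, here is a drastically simplified, synchronous sketch for a one-dimensional Gaussian model: each worker keeps a Gaussian site approximation of its shard's likelihood, estimates tilted-distribution moments by Monte Carlo, and sends damped natural-parameter updates to the server. This is a stand-in under those assumptions, not the paper's asynchronous SNEP algorithm.

import numpy as np

def run_posterior_server(shards, prior_mean=0.0, prior_prec=1.0, noise_var=1.0,
                         n_rounds=5, n_samples=4000, damping=0.5, seed=0):
    # Toy posterior server for the mean of a 1-D Gaussian with known noise variance.
    rng = np.random.default_rng(seed)
    k = len(shards)
    site_prec = np.zeros(k)    # natural parameters of each worker's site:
    site_shift = np.zeros(k)   # (precision, precision * mean)
    for _ in range(n_rounds):
        for i, x in enumerate(shards):
            # Server-side global approximation: prior times all sites.
            glob_prec = prior_prec + site_prec.sum()
            glob_shift = prior_prec * prior_mean + site_shift.sum()
            # Cavity: remove this worker's own contribution.
            cav_prec = glob_prec - site_prec[i]
            cav_shift = glob_shift - site_shift[i]
            cav_mean, cav_var = cav_shift / cav_prec, 1.0 / cav_prec
            # Monte Carlo moments of the tilted distribution (cavity x local likelihood)
            # via self-normalized importance sampling with the cavity as proposal.
            theta = rng.normal(cav_mean, np.sqrt(cav_var), size=n_samples)
            logw = np.array([-0.5 * np.sum((x - t) ** 2) / noise_var for t in theta])
            w = np.exp(logw - logw.max())
            w /= w.sum()
            t_mean = np.sum(w * theta)
            t_var = np.sum(w * (theta - t_mean) ** 2)
            # New site = tilted / cavity in natural parameters, applied with damping.
            site_prec[i] += damping * (1.0 / t_var - cav_prec - site_prec[i])
            site_shift[i] += damping * (t_mean / t_var - cav_shift - site_shift[i])
    glob_prec = prior_prec + site_prec.sum()
    return (prior_prec * prior_mean + site_shift.sum()) / glob_prec, 1.0 / glob_prec

# Example: two workers, each holding a disjoint shard of observations.
shards = [np.array([1.1, 0.9, 1.3]), np.array([0.8, 1.2, 1.0])]
print(run_posterior_server(shards))   # approximate posterior mean and variance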