Le Cam meets LeCun: Deficiency and Generic Feature Learning
"Deep Learning" methods attempt to learn generic features in an unsupervised
fashion from a large unlabelled data set. These generic features should perform
as well as the best hand crafted features for any learning problem that makes
use of this data. We provide a definition of generic features, characterize
when it is possible to learn them and provide methods closely related to the
autoencoder and deep belief network of deep learning. In order to do so we use
the notion of deficiency and illustrate its value in studying certain general
learning problems.Comment: 25 pages, 2 figure
From Stochastic Mixability to Fast Rates
Empirical risk minimization (ERM) is a fundamental learning rule for
statistical learning problems where the data is generated according to some
unknown distribution $P$ and returns a hypothesis $f$ chosen from a
fixed class $\mathcal{F}$ with small loss $\ell$. In the parametric setting,
depending upon $(\ell, \mathcal{F}, P)$ ERM can have slow $O(1/\sqrt{n})$
or fast $O(1/n)$ rates of convergence of the excess risk as a
function of the sample size $n$. There exist several results that give
sufficient conditions for fast rates in terms of joint properties of $\ell$,
$\mathcal{F}$, and $P$, such as the margin condition and the Bernstein
condition. In the non-statistical prediction with expert advice setting, there
is an analogous slow and fast rate phenomenon, and it is entirely characterized
in terms of the mixability of the loss $\ell$ (there being no role there for
$\mathcal{F}$ or $P$). The notion of stochastic mixability builds a
bridge between these two models of learning, reducing to classical mixability
in a special case. The present paper presents a direct proof of fast rates for
ERM in terms of stochastic mixability of $(\ell, \mathcal{F}, P)$, and
in so doing provides new insight into the fast-rates phenomenon. The proof
exploits an old result of Kemperman on the solution to the general moment
problem. We also show a partial converse that suggests a characterization of
fast rates for ERM in terms of stochastic mixability is possible.

Comment: 21 pages, accepted to NIPS 2014
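For orientation, the following LaTeX snippet writes out the excess-risk quantity and the stochastic mixability condition as they commonly appear in the fast-rates literature; the paper's exact notation and normalisation may differ, so treat this as a sketch rather than the paper's definition.

```latex
% Sketch of the standard objects, assuming the usual fast-rates conventions;
% the paper's exact notation may differ.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Given a loss $\ell$, a hypothesis class $\mathcal{F}$, and an i.i.d.\ sample
$Z_1,\dots,Z_n \sim P$, ERM returns
$\hat{f} \in \operatorname*{arg\,min}_{f \in \mathcal{F}}
  \tfrac{1}{n}\sum_{i=1}^{n} \ell(f, Z_i)$,
and its excess risk is
\[
  \mathbb{E}\bigl[\ell(\hat{f}, Z)\bigr]
    - \inf_{f \in \mathcal{F}} \mathbb{E}\bigl[\ell(f, Z)\bigr].
\]
The triple $(\ell, \mathcal{F}, P)$ is $\eta$-stochastically mixable for some
$\eta > 0$ if, writing $f^{*}$ for the risk minimizer over $\mathcal{F}$,
\[
  \mathbb{E}_{Z \sim P}\Bigl[\exp\bigl(-\eta\,(\ell(f, Z)
    - \ell(f^{*}, Z))\bigr)\Bigr] \le 1
  \qquad \text{for all } f \in \mathcal{F},
\]
in which case the excess risk of ERM decays at the fast $O(1/n)$ rate rather
than the slow $O(1/\sqrt{n})$ rate, up to complexity terms.
\end{document}
```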
Particle Filter Design Using Importance Sampling for Acoustic Source Localisation and Tracking in Reverberant Environments
Sequential Monte Carlo methods have recently been proposed to deal with the problem of acoustic source localisation and tracking using an array of microphones. Previous implementations make use of the basic bootstrap particle filter, whereas a more general approach involves the concept of importance sampling. In this paper, we develop a new particle filter for acoustic source localisation using importance sampling, and compare its tracking ability with that of a bootstrap algorithm proposed previously in the literature. Experimental results obtained with simulated reverberant samples and real audio recordings demonstrate that the new algorithm is more suitable for practical applications due to its reinitialisation capabilities, despite showing a slightly lower average tracking accuracy. A real-time implementation of the algorithm also shows that the proposed particle filter can reliably track a person talking in real reverberant rooms.

This work was performed while Eric A. Lehmann was working with National ICT Australia. National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology and the Arts and the Australian Research Council through Backing Australia's Ability and the ICT Centre of Excellence programs.
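To make the sequential Monte Carlo machinery concrete, here is a minimal Python sketch of the bootstrap (sequential importance resampling) scheme that serves as the paper's baseline; the proposed method would replace the prior-as-proposal step with a more general importance density. The Gaussian position measurement below is a placeholder assumption: a real acoustic tracker would derive its likelihood from microphone-array data such as time delays of arrival, and all model parameters here are illustrative.

```python
# Minimal bootstrap particle filter (sequential importance resampling).
# The observation model is a placeholder: a noisy 2-D position measurement
# stands in for an acoustic microphone-array likelihood.
import numpy as np

rng = np.random.default_rng(1)
N = 1000                      # number of particles

def predict(particles, motion_std=0.05):
    # Propagate particles through a random-walk source-motion model.
    return particles + rng.normal(0, motion_std, size=particles.shape)

def likelihood(particles, z, meas_std=0.2):
    # Placeholder observation model: Gaussian noise on the 2-D position.
    d = np.linalg.norm(particles - z, axis=1)
    return np.exp(-0.5 * (d / meas_std) ** 2)

def resample(particles, weights):
    # Systematic resampling: low variance, O(N).
    positions = (rng.random() + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, N - 1)   # guard against floating-point round-off
    return particles[idx]

# Track a source moving slowly across a 5 m x 5 m room.
particles = rng.uniform(0, 5, size=(N, 2))
true_pos = np.array([1.0, 1.0])
for t in range(50):
    true_pos = true_pos + np.array([0.05, 0.03])       # true source motion
    z = true_pos + rng.normal(0, 0.2, size=2)          # noisy observation
    particles = predict(particles)                     # proposal = prior
    w = likelihood(particles, z)                       # importance weights
    w /= w.sum()
    estimate = (w[:, None] * particles).sum(axis=0)    # weighted-mean estimate
    particles = resample(particles, w)
print("final estimate:", estimate, "true:", true_pos)
```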