Rhythmic Representations: Learning Periodic Patterns for Scalable Place Recognition at a Sub-Linear Storage Cost
Robotic and animal mapping systems share many challenges and characteristics:
they must function in a wide variety of environmental conditions, enable the
robot or animal to navigate effectively to find food or shelter, and be
computationally tractable from both a speed and storage perspective. With
regards to map storage, the mammalian brain appears to take a diametrically
opposed approach to all current robotic mapping systems. Where robotic mapping
systems attempt to solve the data association problem to minimise
representational aliasing, neurons in the brain intentionally break data
association by encoding large (potentially unlimited) numbers of places with a
single neuron. In this paper, we propose a novel method based on supervised
learning techniques that seeks out regularly repeating visual patterns in the
environment with mutually complementary co-prime frequencies, and an encoding
scheme that enables storage requirements to grow sub-linearly with the size of
the environment being mapped. To improve robustness in challenging real-world
environments while maintaining storage growth sub-linearity, we incorporate
both multi-exemplar learning and data augmentation techniques. Using large
benchmark robotic mapping datasets, we demonstrate the combined system
achieving high-performance place recognition with sub-linear storage
requirements, and characterize the performance-storage growth trade-off curve.
The work serves as the first robotic mapping system with sub-linear storage
scaling properties, as well as the first large-scale demonstration in
real-world environments of one of the proposed memory benefits of these
neurons.
Comment: Pre-print of an article to appear in IEEE Robotics and Automation Letters.
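The sub-linear storage claim rests on encoding places with mutually co-prime pattern frequencies. The arithmetic behind that idea can be sketched with the Chinese Remainder Theorem (this is an illustrative residue-code toy, not the paper's learned visual-pattern system; all period values are assumptions): K detectors with pairwise co-prime periods p_1..p_K distinguish prod(p_i) places while storing only sum(p_i) pattern templates.

```python
# Illustrative sketch only: residue coding with co-prime periods,
# in the spirit of the abstract's co-prime frequency encoding.
from math import prod

PERIODS = [3, 5, 7, 11]  # pairwise co-prime "pattern frequencies"

def encode(place_index):
    """Residue code for a place: one activation phase per periodic detector."""
    return [place_index % p for p in PERIODS]

def decode(residues):
    """Recover the place index from its residues via the CRT."""
    M = prod(PERIODS)
    x = 0
    for r, p in zip(residues, PERIODS):
        m = M // p
        x += r * m * pow(m, -1, p)  # modular inverse (Python 3.8+)
    return x % M

capacity = prod(PERIODS)  # 1155 distinguishable places...
storage = sum(PERIODS)    # ...from only 26 stored templates
assert decode(encode(942)) == 942
```

Storage here grows with the sum of the periods while capacity grows with their product, which is the sub-linear trade-off the abstract characterizes empirically.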
Deep Learning for Audio Signal Processing
Given the recent surge in developments of deep learning, this article
provides a review of the state-of-the-art deep learning techniques for audio
signal processing. Speech, music, and environmental sound processing are
considered side-by-side, in order to point out similarities and differences
between the domains, highlighting general methods, problems, key references,
and potential for cross-fertilization between areas. The dominant feature
representations (in particular, log-mel spectra and raw waveform) and deep
learning models are reviewed, including convolutional neural networks, variants
of the long short-term memory architecture, as well as more audio-specific
neural network models. Subsequently, prominent deep learning application areas
are covered, i.e. audio recognition (automatic speech recognition, music
information retrieval, environmental sound detection, localization and
tracking) and synthesis and transformation (source separation, audio
enhancement, generative models for speech, sound, and music synthesis).
Finally, key issues and future questions regarding deep learning applied to
audio signal processing are identified.
Comment: 15 pages, 2 PDF figures.
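The log-mel spectrogram the review names as a dominant feature representation can be sketched in plain NumPy (frame sizes and the mel-scale formula below follow common convention and are assumptions of this sketch; practical pipelines typically use a library such as librosa):

```python
# Illustrative log-mel spectrogram: STFT magnitude passed through a
# triangular mel filterbank, then compressed with a log.
import numpy as np

def log_mel(wave, sr=16000, n_fft=512, hop=160, n_mels=40):
    # Short-time Fourier transform magnitude (Hann-windowed frames)
    frames = [wave[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(wave) - n_fft + 1, hop)]
    mag = np.abs(np.fft.rfft(frames, axis=1))          # (T, n_fft//2+1)

    # Triangular filters spaced linearly on the mel scale
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    inv = lambda m: 700 * (10 ** (m / 2595) - 1)
    pts = inv(np.linspace(0, mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for j in range(n_mels):
        l, c, r = bins[j], bins[j + 1], bins[j + 2]
        if c > l:
            fb[j, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[j, c:r] = (r - np.arange(c, r)) / (r - c)

    return np.log(mag @ fb.T + 1e-8)                   # (T, n_mels)

feats = log_mel(np.random.randn(16000))  # 1 s of noise -> (97, 40) features
```

The resulting (time, mel-band) matrix is the typical input to the convolutional and recurrent models the review surveys.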
Sketching for Large-Scale Learning of Mixture Models
Learning parameters from voluminous data can be prohibitive in terms of
memory and computational requirements. We propose a "compressive learning"
framework where we estimate model parameters from a sketch of the training
data. This sketch is a collection of generalized moments of the underlying
probability distribution of the data. It can be computed in a single pass on
the training set, and is easily computable on streams or distributed datasets.
The proposed framework shares similarities with compressive sensing, which aims
at drastically reducing the dimension of high-dimensional signals while
preserving the ability to reconstruct them. To perform the estimation task, we
derive an iterative algorithm analogous to sparse reconstruction algorithms in
the context of linear inverse problems. We exemplify our framework with the
compressive estimation of a Gaussian Mixture Model (GMM), providing heuristics
on the choice of the sketching procedure and theoretical guarantees of
reconstruction. We experimentally show on synthetic data that the proposed
algorithm yields results comparable to the classical Expectation-Maximization
(EM) technique while requiring significantly less memory and fewer computations
when the number of database elements is large. We further demonstrate the
potential of the approach on real large-scale data (over 10^8 training samples)
for the task of model-based speaker verification. Finally, we draw some
connections between the proposed framework and approximate Hilbert space
embedding of probability distributions using random features. We show that the
proposed sketching operator can be seen as an innovative method to design
translation-invariant kernels adapted to the analysis of GMMs. We also use this
theoretical framework to derive information preservation guarantees, in the
spirit of infinite-dimensional compressive sensing.
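The single-pass sketch of generalized moments can be illustrated with averaged random Fourier features, which is also how the abstract's connection to kernel embeddings arises. The Gaussian frequency draw and all sizes below are illustrative assumptions, not the paper's tuned sketching procedure:

```python
# Illustrative compressive-learning sketch: empirical average of
# complex exponentials e^{i w^T x}, computed in one pass over the data.
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 64                                 # data dim, sketch size
Omega = rng.normal(size=(m, d))              # random frequencies (assumed Gaussian)

def sketch(X):
    """Generalized moments of the data: O(m) memory, one pass over X."""
    return np.exp(1j * X @ Omega.T).mean(axis=0)

# Sketches of streams or distributed partitions merge by weighted
# averaging, so the computation parallelizes trivially.
X1, X2 = rng.normal(size=(500, d)), rng.normal(size=(300, d))
z_merged = (500 * sketch(X1) + 300 * sketch(X2)) / 800
z_full = sketch(np.vstack([X1, X2]))
assert np.allclose(z_merged, z_full)
```

Parameter estimation then amounts to finding a mixture whose expected sketch matches z, analogous to sparse recovery from compressive measurements.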