Analysis, Visualization, and Transformation of Audio Signals Using Dictionary-based Methods
date-added: 2014-01-07 09:15:58 +0000 date-modified: 2014-01-07 09:15:58 +0000
Learning Active Basis Models by EM-Type Algorithms
The EM algorithm is a convenient tool for maximum likelihood model fitting when
the data are incomplete or when there are latent variables or hidden states. In
this review article we explain that the EM algorithm is a natural computational
scheme for learning image templates of object categories where the learning is
not fully supervised. We represent an image template by an active basis model,
which is a linear composition of a selected set of localized, elongated and
oriented wavelet elements that are allowed to slightly perturb their locations
and orientations to account for the deformations of object shapes. The model
can be easily learned when the objects in the training images are in the same
pose and appear at the same location and scale. This is often called
supervised learning. In the situation where the objects may appear at different
unknown locations, orientations and scales in the training images, we have to
incorporate the unknown locations, orientations and scales as latent variables
into the image generation process, and learn the template by EM-type
algorithms. The E-step imputes the unknown locations, orientations and scales
based on the currently learned template. This step can be considered
self-supervision, which involves using the current template to recognize the
objects in the training images. The M-step then relearns the template based on
the imputed locations, orientations and scales, and this is essentially the
same as supervised learning. So the EM learning process iterates between
recognition and supervised learning. We illustrate this scheme by several
experiments.

Comment: Published at http://dx.doi.org/10.1214/09-STS281 in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
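The recognition/learning alternation described in the abstract can be sketched on a toy 1-D problem. This is a hypothetical illustration, not the authors' active basis implementation: the "template" here is a short 1-D bump rather than a composition of wavelet elements, the only latent variable is an unknown shift, and hard-EM imputation (argmax) stands in for the full E-step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a fixed 1-D "template" planted at an unknown shift in each signal.
true_template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
T, L = len(true_template), 20
signals = []
for s in rng.integers(0, L - T, size=30):
    x = rng.normal(0.0, 0.1, L)
    x[s:s + T] += true_template
    signals.append(x)

# EM-type loop: the E-step imputes each signal's unknown shift by matching
# the current template (recognition); the M-step re-estimates the template
# from the aligned windows (supervised learning).
template = rng.normal(0.0, 1.0, T)  # random initialization
for _ in range(10):
    # E-step: impute the shift as the argmax of correlation with the template
    imputed = [max(range(L - T + 1),
                   key=lambda s: float(np.dot(x[s:s + T], template)))
               for x in signals]
    # M-step: average the windows at the imputed shifts
    template = np.mean([x[s:s + T] for x, s in zip(signals, imputed)], axis=0)

print(np.round(template, 2))
```

After a few iterations the learned template recovers the planted bump (possibly up to a small shift), mirroring how the EM process iterates between recognition and supervised learning.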
An optimally concentrated Gabor transform for localized time-frequency components
Gabor analysis is one of the most common instances of time-frequency signal
analysis. Choosing a suitable window for the Gabor transform of a signal is
often a challenge for practical applications, in particular in audio signal
processing. Many time-frequency (TF) patterns of different shapes may be
present in a signal, and they cannot all be sparsely represented in the same
spectrogram. We propose several algorithms, which provide optimal windows for a
user-selected TF pattern with respect to different concentration criteria. We
base our optimization algorithm on ℓ^p-norms as a measure of TF spreading. For
a given number of sampling points in the TF plane we also propose optimal
lattices to be used with the obtained windows. We illustrate the potential
of the method on selected numerical examples.
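The effect the abstract describes, that the choice of Gabor window controls how sparsely a TF pattern is represented, can be illustrated with a toy concentration measure. This is a hypothetical sketch: the ℓ^4-of-ℓ^2-normalized sparsity proxy below is a stand-in for the paper's actual ℓ^p criteria, and no window optimization is performed, only a comparison of two window lengths.

```python
import numpy as np
from scipy.signal import stft

# A pure tone: a long window should represent it more sparsely
# (concentrated in frequency) than a short one.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)

def concentration(x, nperseg):
    """Sparsity proxy for a spectrogram: the ℓ^4 "energy" of the
    ℓ^2-normalized magnitudes. Larger values = more concentrated.
    (A stand-in for the paper's ℓ^p spreading measures.)"""
    _, _, Z = stft(x, fs=fs, window="hann", nperseg=nperseg)
    m = np.abs(Z).ravel()
    m = m / np.linalg.norm(m)   # ℓ^2-normalize the spectrogram
    return float(np.sum(m ** 4))

short = concentration(x, 64)    # short window: smeared in frequency
long_ = concentration(x, 1024)  # long window: narrow spectral line

print(f"short window: {short:.4f}, long window: {long_:.4f}")
```

For this stationary tone the long window scores higher; a transient-dominated signal would favor a short window, which is why a single fixed window cannot sparsify all TF patterns at once.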