Stochastic expansions using continuous dictionaries: Lévy adaptive regression kernels
This article describes a new class of prior distributions for nonparametric
function estimation. The unknown function is modeled as a limit of weighted
sums of kernels or generator functions indexed by continuous parameters that
control local and global features such as their translation, dilation,
modulation and shape. Lévy random fields and their stochastic integrals are
employed to induce prior distributions for the unknown functions or,
equivalently, for the number of kernels and for the parameters governing their
features. Scaling, shape, and other features of the generating functions are
location-specific to allow quite different function properties in different
parts of the space, as with wavelet bases and other methods employing
overcomplete dictionaries. We provide conditions under which the stochastic
expansions converge in specified Besov or Sobolev norms. Under a Gaussian error
model, this may be viewed as a sparse regression problem, with regularization
induced via the Lévy random field prior distribution. Posterior inference
for the unknown functions is based on a reversible jump Markov chain Monte
Carlo algorithm. We compare the Lévy Adaptive Regression Kernel (LARK)
method to wavelet-based methods using some of the standard test functions, and
illustrate its flexibility and adaptability in nonstationary applications.
Comment: Published at http://dx.doi.org/10.1214/11-AOS889 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
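The expansion described in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it uses a Gaussian generator function (one of many possible choices) and draws the number of kernels and their parameters from simple hypothetical distributions standing in for the Lévy random field prior.

```python
import numpy as np

def gaussian_generator(x, chi, lam):
    """A translated and dilated Gaussian generator function (illustrative choice)."""
    return np.exp(-0.5 * ((x - chi) / lam) ** 2)

def stochastic_expansion(x, betas, chis, lams):
    """Evaluate f(x) = sum_j beta_j * g((x - chi_j) / lam_j)."""
    return sum(b * gaussian_generator(x, c, l)
               for b, c, l in zip(betas, chis, lams))

rng = np.random.default_rng(0)
J = 5                                # number of kernels (random under the prior)
betas = rng.normal(0.0, 1.0, J)      # weights: jumps of the random field
chis = rng.uniform(0.0, 1.0, J)      # kernel locations
lams = rng.uniform(0.05, 0.3, J)     # location-specific scales (local adaptivity)

x = np.linspace(0.0, 1.0, 200)
f = stochastic_expansion(x, betas, chis, lams)
print(f.shape)  # (200,)
```

Because each kernel carries its own scale parameter, the resulting function can be smooth in one region and sharply varying in another, which is the local-adaptivity property the abstract emphasizes.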
Idealized computational models for auditory receptive fields
This paper presents a theory by which idealized models of auditory receptive
fields can be derived in a principled axiomatic manner, from a set of
structural properties to enable invariance of receptive field responses under
natural sound transformations and ensure internal consistency between
spectro-temporal receptive fields at different temporal and spectral scales.
For defining a time-frequency transformation of a purely temporal sound
signal, it is shown that the framework allows for a new way of deriving the
Gabor and Gammatone filters as well as a novel family of generalized Gammatone
filters, with additional degrees of freedom to obtain different trade-offs
between the spectral selectivity and the temporal delay of time-causal temporal
window functions.
When applied to the definition of a second-layer of receptive fields from a
spectrogram, it is shown that the framework leads to two canonical families of
spectro-temporal receptive fields, in terms of spectro-temporal derivatives of
either spectro-temporal Gaussian kernels for non-causal time or the combination
of a time-causal generalized Gammatone filter over the temporal domain and a
Gaussian filter over the logspectral domain. For each filter family, the
spectro-temporal receptive fields can be either separable over the
time-frequency domain or be adapted to local glissando transformations that
represent variations in logarithmic frequencies over time. Within each domain
of either non-causal or time-causal time, these receptive field families are
derived by uniqueness from the assumptions.
It is demonstrated how the presented framework allows for computation of
basic auditory features for audio processing and that it leads to predictions
about auditory receptive fields with good qualitative similarity to biological
receptive fields measured in the inferior colliculus (ICC) and primary auditory
cortex (A1) of mammals.
Comment: 55 pages, 22 figures, 3 tables
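The Gammatone filters mentioned in this abstract have a standard closed form, a power-law envelope times a carrier. As a rough sketch (the parameter values are illustrative, and the paper's generalized family adds further degrees of freedom not shown here):

```python
import numpy as np

def gammatone(t, f=1000.0, b=125.0, n=4):
    """Time-causal Gammatone impulse response:
    g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f*t) for t >= 0, else 0."""
    t = np.asarray(t, dtype=float)
    envelope = t ** (n - 1) * np.exp(-2.0 * np.pi * b * t)
    return np.where(t >= 0.0, envelope * np.cos(2.0 * np.pi * f * t), 0.0)

fs = 16000.0                          # sampling rate in Hz (illustrative)
t = np.arange(0.0, 0.05, 1.0 / fs)    # 50 ms of support
ir = gammatone(t)                      # impulse response of one filter
```

The order `n` and bandwidth `b` control the trade-off the abstract refers to: higher order sharpens spectral selectivity but lengthens the temporal delay of the causal window.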
Characteristic Kernels and Infinitely Divisible Distributions
We connect shift-invariant characteristic kernels to infinitely divisible
distributions on $\mathbb{R}^d$. Characteristic kernels play an important
role in machine learning applications, since their kernel means distinguish
any two probability measures. The contribution of this paper is two-fold.
First, we show, using the Lévy-Khintchine formula, that any shift-invariant
kernel given by a bounded, continuous and symmetric probability density
function (pdf) of an infinitely divisible distribution on $\mathbb{R}^d$ is
characteristic. We also present some closure properties of such characteristic
kernels under addition, pointwise product, and convolution. Second, in
developing various kernel mean algorithms, it is fundamental to compute the
following values: (i) kernel mean values $m_P(x)$, $x \in \mathcal{X}$, and
(ii) kernel mean RKHS inner products $\langle m_P, m_Q \rangle$, for
probability measures $P, Q$. If $P$, $Q$ and the kernel $k$ are Gaussian,
then computations (i) and (ii) result in Gaussian pdfs, which are tractable.
We generalize this Gaussian combination to more general
cases in the class of infinitely divisible distributions. We then introduce a
{\it conjugate} kernel and {\it convolution trick}, so that the above (i) and
(ii) have the same pdf form, expecting tractable computation at least in some
cases. As specific instances, we explore $\alpha$-stable distributions and a
rich class of generalized hyperbolic distributions, where the Laplace, Cauchy
and Student-t distributions are included.
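The two quantities (i) and (ii) have simple empirical estimators: the kernel mean is approximated by an average of kernel evaluations over samples, and the RKHS inner product by a double average of the Gram matrix. A minimal sketch with a Gaussian kernel (sample sizes and bandwidth are arbitrary illustrative choices):

```python
import numpy as np

def gauss_kernel(x, y, sigma=1.0):
    """Shift-invariant Gaussian kernel matrix k(x_i, y_j) = exp(-(x_i - y_j)^2 / (2 sigma^2))."""
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / sigma) ** 2)

rng = np.random.default_rng(1)
xs = rng.normal(0.0, 1.0, 100)    # samples from P
ys = rng.normal(0.5, 1.0, 120)    # samples from Q

# (i) empirical kernel mean m_P(x0) = (1/n) sum_i k(x0, x_i)
x0 = np.array([0.0])
m_P_x0 = gauss_kernel(xs, x0).mean()

# (ii) empirical RKHS inner product <m_P, m_Q> = (1/nm) sum_ij k(x_i, y_j)
inner = gauss_kernel(xs, ys).mean()
```

The paper's point is that for distributions in the infinitely divisible class these averages can instead be evaluated in closed form, because the kernel mean of such a pdf-kernel is again a pdf of known form.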
Invariance of visual operations at the level of receptive fields
Receptive field profiles registered by cell recordings have shown that
mammalian vision has developed receptive fields tuned to different sizes and
orientations in the image domain as well as to different image velocities in
space-time. This article presents a theoretical model by which families of
idealized receptive field profiles can be derived mathematically from a small
set of basic assumptions that correspond to structural properties of the
environment. The article also presents a theory for how basic invariance
properties to variations in scale, viewing direction and relative motion can be
obtained from the output of such receptive fields, using complementary
selection mechanisms that operate over the output of families of receptive
fields tuned to different parameters. Thereby, the theory shows how basic
invariance properties of a visual system can be obtained already at the level
of receptive fields, and we can explain the different shapes of receptive field
profiles found in biological vision from a requirement that the visual system
should be invariant to the natural types of image transformations that occur in
its environment.Comment: 40 pages, 17 figure
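The idealized receptive field profiles this kind of theory derives are, in the spatial domain, Gaussian derivatives at multiple scales. As a hedged one-dimensional sketch (orders above two are omitted, and the scale values are illustrative):

```python
import numpy as np

def gaussian_derivative(x, sigma, order):
    """1-D Gaussian derivative kernel: an idealized receptive field profile."""
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    if order == 0:
        return g                                    # smoothing kernel
    if order == 1:
        return -x / sigma ** 2 * g                  # edge-like, odd profile
    if order == 2:
        return (x ** 2 - sigma ** 2) / sigma ** 4 * g   # bar-like, even profile
    raise ValueError("only orders 0-2 in this sketch")

x = np.linspace(-6.0, 6.0, 241)
# a small family tuned to different scales and derivative orders
profiles = {(s, o): gaussian_derivative(x, s, o)
            for s in (1.0, 2.0) for o in (0, 1, 2)}
```

Invariance, in the sense of the abstract, then comes from selection over such a family: e.g. picking the scale at which a normalized response is maximal rather than committing to a single filter.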
Regularity for Subelliptic PDE Through Uniform Estimates in Multi-Scale Geometries
We aim at reviewing and extending a number of recent results addressing
stability of certain geometric and analytic estimates in the Riemannian
approximation of subRiemannian structures. In particular we extend the recent
work of the authors with Rea [19] and Manfredini [17] concerning stability
of doubling properties, Poincaré inequalities, Gaussian estimates on heat
kernels and Schauder estimates from the Carnot group setting to the general
case of Hörmander vector fields.
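For a concrete picture of the setting, the prototypical Carnot group example is the Heisenberg group, where the horizontal vector fields satisfy Hörmander's condition through one bracket, and the Riemannian approximation adds a small multiple of the missing direction:

```latex
X_1 = \partial_x - \tfrac{y}{2}\,\partial_t, \qquad
X_2 = \partial_y + \tfrac{x}{2}\,\partial_t, \qquad
[X_1, X_2] = \partial_t ,
```

so that the sub-Laplacian $\Delta_0 = X_1^2 + X_2^2$ is approximated by the elliptic operators $\Delta_\epsilon = X_1^2 + X_2^2 + \epsilon^2 \partial_t^2$, and the stability results concern estimates that hold uniformly as $\epsilon \to 0$.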
Quasar microlensing light curve analysis using deep machine learning
We introduce a deep machine learning approach to studying quasar microlensing
light curves for the first time by analyzing hundreds of thousands of simulated
light curves with respect to the accretion disc size and temperature profile.
Our results indicate that it is possible to successfully classify very large
numbers of diverse light curve data and measure the accretion disc structure.
The detailed shape of the accretion disc brightness profile is found to play a
negligible role, in agreement with Mortonson et al. (2005). The speed and
efficiency of our deep machine learning approach is ideal for quantifying
physical properties in a `big-data' problem setup. This proposed approach looks
promising for analyzing decade-long light curves for thousands of microlensed
quasars, expected to be provided by the Large Synoptic Survey Telescope.
Comment: 11 pages, 7 figures, accepted for publication in MNRAS
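The core operation such a network applies to a light curve is one-dimensional convolution. The toy sketch below (random filters and a synthetic magnification bump, purely illustrative and unrelated to the paper's trained model) shows that building block on a simulated curve:

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "light curve": flat baseline plus a magnification bump, with noise
t = np.linspace(0.0, 1.0, 256)
curve = (1.0
         + 0.8 * np.exp(-0.5 * ((t - 0.6) / 0.05) ** 2)
         + 0.02 * rng.normal(size=t.size))

def conv1d_relu(signal, kernels):
    """Valid-mode 1-D convolutional layer with ReLU activation."""
    width = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(signal, width)
    return np.maximum(windows @ kernels.T, 0.0)   # (len - width + 1, n_filters)

filters = rng.normal(0.0, 0.3, size=(8, 16))      # 8 random filters of width 16
features = conv1d_relu(curve, filters)
print(features.shape)  # (241, 8)
```

In a trained classifier the filters would be learned from the simulated light curves, and stacked layers of this kind feed a head that predicts disc size and temperature-profile class.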