Nonconventional averages along arithmetic progressions and lattice spin systems
We study the so-called nonconventional averages in the context of lattice
spin systems, or equivalently random colourings of the integers. For i.i.d.
colourings, we prove a large deviation principle for the number of
monochromatic arithmetic progressions of size two in a finite box, as the
box size tends to infinity, with an explicit rate function related to the
one-dimensional Ising model. For more general colourings, we prove bounds
for the number of monochromatic arithmetic progressions of arbitrary size,
as well as for the maximal progression inside the box. Finally, we relate
nonconventional sums along arithmetic progressions of size greater than two
to statistical mechanics models in dimension larger than one.
Comment: 18 pages, 3 figures. A new section was added on arithmetic
progressions of length larger than 2 and statistical mechanics models on Z^d,
d>1. To appear in Indagationes Mathematicae (2012)
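As a toy illustration of the counting problem in this abstract (a minimal sketch, not the paper's method): any pair of sites is an arithmetic progression of size two, so the count reduces to counting same-coloured pairs in the box. The box size N, the seed, and the uniform two-colouring are hypothetical illustration parameters.

```python
import random

def count_monochromatic_ap2(colors):
    """Count arithmetic progressions (i, i + d), d >= 1, whose two sites
    carry the same colour, i.e. monochromatic progressions of size two."""
    n = len(colors)
    count = 0
    for i in range(n):
        for d in range(1, n - i):
            if colors[i] == colors[i + d]:
                count += 1
    return count

# i.i.d. uniform two-colouring of a box of size N (hypothetical parameters)
random.seed(0)
N = 200
colors = [random.randint(0, 1) for _ in range(N)]
# roughly half of the N*(N-1)/2 pairs are monochromatic for a fair colouring
print(count_monochromatic_ap2(colors))
```

For i.i.d. fair colourings the mean is N(N-1)/4; the paper's large deviation principle concerns fluctuations of this count around that scale.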
Emergence of Compositional Representations in Restricted Boltzmann Machines
Automatically extracting the complex set of features composing real
high-dimensional data is crucial for achieving high performance in
machine-learning tasks. Restricted Boltzmann Machines (RBM) are empirically
known to be efficient for this purpose, and to be able to generate distributed
and graded representations of the data. We characterize the structural
conditions (sparsity of the weights, low effective temperature, nonlinearities
in the activation functions of hidden units, and adaptation of fields
maintaining the activity in the visible layer) allowing RBMs to operate in such
a compositional phase. Evidence is provided by the replica analysis of an
adequate statistical ensemble of random RBMs and by RBMs trained on the
handwritten digits dataset MNIST.
Comment: Supplementary material available at the authors' webpage
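A minimal sketch of the distributed, graded representations the abstract refers to: a binary RBM maps a visible configuration to hidden-unit mean activations through a sigmoid. The weights below are sparse random values chosen for illustration, not a machine trained on MNIST, and all sizes are hypothetical.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hidden_activations(v, W, b):
    """Mean activation of each hidden unit of a binary RBM given a visible
    configuration v: p(h_j = 1 | v) = sigmoid(b_j + sum_i W[i][j] v_i)."""
    n_hid = len(b)
    return [sigmoid(b[j] + sum(W[i][j] * v[i] for i in range(len(v))))
            for j in range(n_hid)]

# toy RBM with sparse weights (hypothetical values, not trained)
random.seed(1)
n_vis, n_hid = 6, 3
W = [[random.choice([0.0, 0.0, 2.0, -2.0]) for _ in range(n_hid)]
     for _ in range(n_vis)]
b = [0.0] * n_hid
v = [1, 0, 1, 1, 0, 0]
print([round(p, 3) for p in hidden_activations(v, W, b)])
```

Sparse weights mean each hidden unit responds to a small subset of visible sites, the ingredient the abstract associates with the compositional phase.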
A Deterministic and Generalized Framework for Unsupervised Learning with Restricted Boltzmann Machines
Restricted Boltzmann machines (RBMs) are energy-based neural networks which
are commonly used as the building blocks of deep neural architectures. In this
work, we derive a deterministic framework for the training, evaluation, and
use of RBMs based upon the Thouless-Anderson-Palmer (TAP) mean-field
approximation of widely-connected systems with weak interactions, coming from
spin-glass theory. While the TAP approach has been extensively studied for
fully-visible binary spin systems, our construction is generalized to
latent-variable models, as well as to arbitrarily distributed real-valued spin
systems with bounded support. In our numerical experiments, we demonstrate the
effective deterministic training of our proposed models and are able to show
interesting features of unsupervised learning which could not be directly
observed with sampling. Additionally, we demonstrate how to utilize our
TAP-based framework for leveraging trained RBMs as joint priors in denoising
problems.
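The deterministic machinery the abstract describes can be illustrated by a damped fixed-point iteration of TAP-style self-consistency equations for the layer magnetizations of a binary 0/1 RBM. This is a hedged sketch: the exact second-order correction term, the damping scheme, and all weights below are assumptions for illustration, and the paper's full framework (real-valued units, training via the TAP free energy) is not reproduced.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tap_fixed_point(W, a, b, iters=200, damp=0.5):
    """Damped fixed-point iteration of (assumed) second-order TAP
    self-consistency equations for a binary 0/1 RBM:
      m_i^v = sigmoid(a_i + sum_j [W_ij m_j^h
                      - W_ij^2 (m_i^v - 1/2)(m_j^h - (m_j^h)^2)])
    and symmetrically for the hidden layer."""
    n_vis, n_hid = len(a), len(b)
    mv, mh = [0.5] * n_vis, [0.5] * n_hid
    for _ in range(iters):
        mv_new = [sigmoid(a[i] + sum(
            W[i][j] * mh[j]
            - W[i][j] ** 2 * (mv[i] - 0.5) * (mh[j] - mh[j] ** 2)
            for j in range(n_hid))) for i in range(n_vis)]
        mv = [damp * o + (1 - damp) * n for o, n in zip(mv, mv_new)]
        mh_new = [sigmoid(b[j] + sum(
            W[i][j] * mv[i]
            - W[i][j] ** 2 * (mh[j] - 0.5) * (mv[i] - mv[i] ** 2)
            for i in range(n_vis))) for j in range(n_hid)]
        mh = [damp * o + (1 - damp) * n for o, n in zip(mh, mh_new)]
    return mv, mh

# two visible units ferromagnetically coupled to one hidden unit
# (hypothetical weights and biases)
mv, mh = tap_fixed_point([[1.5], [1.5]], [0.0, 0.0], [0.0])
print([round(m, 3) for m in mv], [round(m, 3) for m in mh])
```

Once such magnetizations converge, they yield deterministic moment estimates in place of the Gibbs-sampling averages used in standard RBM training.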
Bayesian off-line detection of multiple change-points corrupted by multiplicative noise: application to SAR image edge detection
This paper addresses the problem of Bayesian off-line change-point detection in synthetic aperture radar images. The minimum mean square error and maximum a posteriori estimators of the change-point positions are studied. Neither estimator can be implemented in closed form because of optimization or integration problems, so a practical implementation using Markov chain Monte Carlo methods is proposed. This implementation requires a priori knowledge of the so-called hyperparameters. A hyperparameter estimation procedure is therefore proposed that alleviates the requirement of knowing their values. Simulation results on synthetic signals and synthetic aperture radar images are presented.
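A much-simplified illustration of the detection problem (not the paper's MCMC scheme): under positive multiplicative noise, taking logarithms turns the noise into an additive term, after which a single change-point MAP estimate under flat priors reduces to a brute-force search over split positions. The single-change-point restriction, the log-normal noise model, and all parameters below are assumptions for illustration.

```python
import math
import random

def map_single_changepoint(y):
    """Brute-force MAP estimate of one change-point position in a
    positive signal hit by multiplicative noise. Works on log-intensities
    (making the noise additive) and assumes flat priors, so the MAP split
    minimizes the pooled residual sum of squares."""
    x = [math.log(v) for v in y]
    n = len(x)
    best_k, best_score = None, -float("inf")
    for k in range(1, n):  # change-point between samples k-1 and k
        left, right = x[:k], x[k:]
        m1 = sum(left) / len(left)
        m2 = sum(right) / len(right)
        # profile log-likelihood up to constants: minus the residual sum
        # of squares around each segment mean
        score = -(sum((v - m1) ** 2 for v in left)
                  + sum((v - m2) ** 2 for v in right))
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# piecewise-constant intensity (5.0 then 1.0) times log-normal noise
random.seed(2)
y = [5.0 * random.lognormvariate(0, 0.1) for _ in range(40)] + \
    [1.0 * random.lognormvariate(0, 0.1) for _ in range(40)]
print(map_single_changepoint(y))  # close to the true position, 40
```

The paper's setting is harder: multiple change-points and unknown hyperparameters, which is why it resorts to Markov chain Monte Carlo rather than exhaustive search.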
Parameter estimation for spatio-temporal maximum entropy distributions: application to neural spike trains
We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions
with spatio-temporal constraints from experimental spike trains. This is an
extension of two papers, [10] and [4], which proposed the estimation of
parameters where only spatial constraints were taken into account. The
extension we propose makes it possible to properly handle memory effects in
spike statistics, for large-sized neural networks.
Comment: 34 pages, 33 figures
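The core of MaxEnt parameter fitting can be sketched in its simplest form: gradient ascent on the log-likelihood, whose gradient is the gap between empirical and model averages of the constrained observables. The sketch below fits only the fields of an independent-neuron model (no spatial or temporal couplings, so the model averages are in closed form); the paper's spatio-temporal constraints and the example rates are not from the source.

```python
import math

def fit_maxent_rates(emp_means, lr=1.0, iters=500):
    """Fit the fields h_i of the simplest MaxEnt model (independent binary
    neurons, p(x_i = 1) = sigmoid(h_i)) by gradient ascent on the
    log-likelihood: dL/dh_i = <x_i>_data - <x_i>_model."""
    h = [0.0] * len(emp_means)
    for _ in range(iters):
        for i, m in enumerate(emp_means):
            model_mean = 1.0 / (1.0 + math.exp(-h[i]))
            h[i] += lr * (m - model_mean)  # moment-matching gradient step
    return h

# hypothetical empirical firing rates of three neurons
h = fit_maxent_rates([0.1, 0.5, 0.8])
print([round(1.0 / (1.0 + math.exp(-hi)), 3) for hi in h])  # ≈ [0.1, 0.5, 0.8]
```

With pairwise or temporal constraints the model averages are no longer in closed form, which is what makes the estimation problem of the paper numerically demanding.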