Invariant set of weight of perceptron trained by perceptron training algorithm
In this paper, an invariant set of the weight of a perceptron trained by the perceptron training algorithm is defined and characterized. The dynamic range of the steady-state values of the weight can be evaluated by finding the dynamic range of the weight inside the largest invariant set. In addition, a necessary and sufficient condition for the forward dynamics of the weight to be injective is derived, together with a condition for the invariant set of the weight to be attractive.
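For reference, the weight dynamics studied in the abstract are those generated by the classical perceptron training rule. A minimal sketch of that rule follows; the toy data, bias handling, and stopping criterion are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Classical perceptron training: update the weight vector on each
    misclassified example.  The sequence of weight vectors produced by these
    updates is the forward dynamics whose invariant set the paper analyzes."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or on the boundary)
                w = w + yi * xi           # perceptron update rule
                updated = True
        if not updated:                   # no mistakes in a full pass: converged
            break
    return w

# Illustrative use on a hypothetical linearly separable toy problem.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = np.sign(X @ np.array([1.0, -2.0]))
w = perceptron_train(np.hstack([X, np.ones((50, 1))]), y)  # last column is a bias term
```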
Herding as a Learning System with Edge-of-Chaos Dynamics
Herding defines a deterministic dynamical system at the edge of chaos. It
generates a sequence of model states and parameters by alternating parameter
perturbations with state maximizations, where the sequence of states can be
interpreted as "samples" from an associated MRF model. Herding differs from
maximum likelihood estimation in that the sequence of parameters does not
converge to a fixed point and differs from an MCMC posterior sampling approach
in that the sequence of states is generated deterministically. Herding may be
interpreted as a"perturb and map" method where the parameter perturbations are
generated using a deterministic nonlinear dynamical system rather than randomly
from a Gumbel distribution. This chapter studies the distinct statistical
characteristics of the herding algorithm and shows that the fast convergence
rate of the controlled moments may be attributed to edge of chaos dynamics. The
herding algorithm can also be generalized to models with latent variables and
to a discriminative learning setting. The perceptron cycling theorem ensures
that the fast moment matching property is preserved in the more general
framework.
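To make the alternation of parameter perturbations and state maximizations concrete, here is a minimal sketch of herding for a small, fully observed binary model. The feature map, target moments, and exhaustive maximization are illustrative assumptions for a tiny toy model, not the chapter's experiments.

```python
import itertools
import numpy as np

def herding(target_moments, feature_fn, states, n_steps=1000):
    """Herding: alternate a state maximization with a parameter perturbation.
    Each step picks the state maximizing <w, phi(s)>, then nudges w by the
    difference between the target moments and that state's features, so the
    running average of phi over the generated 'samples' tracks the targets
    while w itself never converges to a fixed point."""
    w = np.array(target_moments, dtype=float)   # a common initialization choice
    samples = []
    for _ in range(n_steps):
        # state maximization (exhaustive search; tractable only for tiny models)
        s_star = max(states, key=lambda s: np.dot(w, feature_fn(s)))
        samples.append(s_star)
        # deterministic parameter perturbation
        w += target_moments - feature_fn(s_star)
    return samples

# Illustrative example: three binary spins with single-site and pairwise features.
def phi(s):
    s = np.asarray(s, dtype=float)
    return np.concatenate([s, [s[0] * s[1], s[1] * s[2]]])

states = [np.array(s) for s in itertools.product([0, 1], repeat=3)]
targets = np.array([0.7, 0.5, 0.3, 0.4, 0.2])   # hypothetical target moments
samples = herding(targets, phi, states, n_steps=2000)
empirical = np.mean([phi(s) for s in samples], axis=0)  # tracks targets at roughly 1/T rate
```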
Finite size scaling of the Bayesian perceptron
We study numerically the properties of the Bayesian perceptron through a
gradient descent on the optimal cost function. The theoretical distribution of
stabilities is deduced. It predicts that the optimal generalizer lies close to
the boundary of the space of (error-free) solutions. The numerical simulations
are in good agreement with the theoretical distribution. The extrapolation of
the generalization error to infinite input space size agrees with the
theoretical results. Finite size corrections are negative and exhibit two
different scaling regimes, depending on the training set size. The variance of
the generalization error vanishes in the limit of infinite input space size, confirming the
property of self-averaging.
Comment: RevTeX, 7 pages, 7 figures, submitted to Phys. Rev.
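The extrapolation of the generalization error to infinite input space size mentioned in the abstract is a standard finite-size scaling fit. A minimal sketch of that step follows; the measured values and the simple 1/N correction are invented for illustration and do not reproduce the paper's data or its two scaling regimes.

```python
import numpy as np

# Hypothetical generalization errors measured at several input space sizes N
# (values invented for illustration only).
N_values = np.array([50, 100, 200, 400, 800])
eps_g    = np.array([0.175, 0.186, 0.192, 0.195, 0.197])

# Fit eps_g(N) ~ eps_inf + a / N and read off the N -> infinity intercept.
slope, eps_inf = np.polyfit(1.0 / N_values, eps_g, deg=1)
# slope < 0 here corresponds to negative finite-size corrections;
# eps_inf is the extrapolated generalization error at infinite input space size.
```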