Phase Transitions of Neural Networks
The cooperative behaviour of interacting neurons and synapses is studied
using models and methods from statistical physics. The competition between
training error and entropy may lead to discontinuous properties of the neural
network. This is demonstrated for a few examples: Perceptron, associative
memory, learning from examples, generalization, multilayer networks, structure
recognition, Bayesian estimate, on-line training, noise estimation and time
series generation.
Comment: Plenary talk for MINERVA workshop on mesoscopics, fractals and neural networks, Eilat, March 1997. Postscript file
Retarded Learning: Rigorous Results from Statistical Mechanics
We study learning of probability distributions characterized by an unknown
symmetry direction. Based on an entropic performance measure and the
variational method of statistical mechanics we develop exact upper and lower
bounds on the scaled critical number of examples below which learning of the
direction is impossible. The asymptotic tightness of the bounds suggests an
asymptotically optimal method for learning nonsmooth distributions.
Comment: 8 pages, 1 figure
Efficient statistical inference for stochastic reaction processes
We address the problem of estimating unknown model parameters and state
variables in stochastic reaction processes when only sparse and noisy
measurements are available. Using an asymptotic system size expansion for the
backward equation we derive an efficient approximation for this problem. We
demonstrate the validity of our approach on model systems and generalize our
method to the case when some state variables are not observed.
Comment: 4 pages, 2 figures, 2 tables; typos corrected, remark about Kalman smoother added
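As an illustrative sketch only (not the authors' system-size-expansion estimator), the kind of data this inference problem assumes can be generated by exact stochastic simulation of a simple reaction system. The birth–death rates, observation interval, and noise level below are all hypothetical choices:

```python
import math
import random

def gillespie_birth_death(k_birth, k_death, x0, t_max, seed=0):
    """Exact stochastic simulation (Gillespie) of the reactions
    0 -> X at rate k_birth and X -> 0 at rate k_death * X."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_max:
        a1 = k_birth          # propensity of birth
        a2 = k_death * x      # propensity of death
        a0 = a1 + a2
        t += rng.expovariate(a0)   # exponential waiting time to next event
        x += 1 if rng.random() < a1 / a0 else -1
        times.append(t)
        states.append(x)
    return times, states

def observe(times, states, dt, sigma, seed=1):
    """Sparse, noisy measurements: sample the trajectory every dt
    and corrupt each value with Gaussian noise of std sigma."""
    rng = random.Random(seed)
    obs, t_obs, i = [], 0.0, 0
    while t_obs <= times[-1]:
        while i + 1 < len(times) and times[i + 1] <= t_obs:
            i += 1
        obs.append((t_obs, states[i] + rng.gauss(0.0, sigma)))
        t_obs += dt
    return obs

times, states = gillespie_birth_death(k_birth=10.0, k_death=0.5, x0=0, t_max=50.0)
data = observe(times, states, dt=5.0, sigma=1.0)
```

The stationary mean of this process is k_birth / k_death = 20, which makes the simulation easy to sanity-check; parameter estimation from `data` is the hard inverse problem the abstract addresses.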
Statistical Mechanics of Learning in the Presence of Outliers
Using methods of statistical mechanics, we analyse the effect of outliers on
the supervised learning of a classification problem. The learning strategy aims
at selecting informative examples and discarding outliers. We compare two
algorithms which perform the selection either in a soft or a hard way. When the
fraction of outliers grows large, the estimation errors undergo a first order
phase transition.
Comment: 24 pages, 7 figures (minor extensions added)
Field Theoretical Analysis of On-line Learning of Probability Distributions
On-line learning of probability distributions is analyzed from the field
theoretical point of view. We can obtain an optimal on-line learning algorithm,
since renormalization group enables us to control the number of degrees of
freedom of a system according to the number of examples. We do not learn
parameters of a model, but probability distributions themselves. Therefore, the
algorithm requires no a priori knowledge of a model.
Comment: 4 pages, 1 figure, RevTeX
Statistical mechanics of random two-player games
Using methods from the statistical mechanics of disordered systems we analyze
the properties of bimatrix games with random payoffs in the limit where the
number of pure strategies of each player tends to infinity. We analytically
calculate quantities such as the number of equilibrium points, the expected
payoff, and the fraction of strategies played with non-zero probability as a
function of the correlation between the payoff matrices of both players and
compare the results with numerical simulations.
Comment: 16 pages, 6 figures; for further information see http://itp.nat.uni-magdeburg.de/~jberg/games.htm
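A minimal numerical sketch of the uncorrelated case (not the paper's replica calculation): for two independent random payoff matrices, the expected number of pure-strategy equilibrium points is exactly 1 for every matrix size, since each cell is a pure Nash equilibrium with probability 1/n², independently for the two players. The matrix size and sample count below are arbitrary illustration choices:

```python
import random

def random_bimatrix(n, seed=0):
    """Independent payoff matrices A (row player) and B (column player)
    with i.i.d. uniform entries -- the zero-correlation case."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n)] for _ in range(n)]
    B = [[rng.random() for _ in range(n)] for _ in range(n)]
    return A, B

def pure_nash_equilibria(A, B):
    """Cells (i, j) where A[i][j] is maximal in its column and B[i][j] is
    maximal in its row, so neither player gains by deviating unilaterally."""
    n = len(A)
    return [(i, j)
            for i in range(n) for j in range(n)
            if A[i][j] == max(A[k][j] for k in range(n))
            and B[i][j] == max(B[i][k] for k in range(n))]

# Average the number of pure equilibria over many disorder realizations;
# for independent payoffs the expectation is 1, independent of n.
counts = []
for s in range(2000):
    A, B = random_bimatrix(10, seed=s)
    counts.append(len(pure_nash_equilibria(A, B)))
mean = sum(counts) / len(counts)
```

Mixed-strategy equilibria, the expected payoff, and the dependence on payoff correlation are the genuinely hard quantities treated analytically in the paper; this check covers only the pure-strategy count.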
On-Line AdaTron Learning of Unlearnable Rules
We study the on-line AdaTron learning of linearly non-separable rules by a
simple perceptron. Training examples are provided by a perceptron with a
non-monotonic transfer function which reduces to the usual monotonic relation
in a certain limit. We find that, although the on-line AdaTron learning is a
powerful algorithm for the learnable rule, it does not give the best possible
generalization error for unlearnable problems. Optimization of the learning
rate is shown to greatly improve the performance of the AdaTron algorithm,
leading to the best possible generalization error for a wide range of the
parameter which controls the shape of the transfer function.
Comment: RevTeX, 17 pages, 8 figures, to appear in Phys.Rev.
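For the learnable case mentioned above (monotonic teacher), a simplified sketch of on-line AdaTron-style learning looks as follows. This is an illustration under assumptions of my own (Gaussian inputs, noiseless sign teacher, the dimensions and example count are arbitrary), not the paper's non-monotonic setting: on a misclassified example the correction is proportional to the student's field, so with learning rate 1 the update exactly zeroes the student output on that example.

```python
import math
import random

def adatron_online(n=100, n_examples=10000, eta=1.0, seed=0):
    """On-line AdaTron-style learning of a learnable rule: teacher and
    student are simple perceptrons sign(w . x).  On an error the student
    weight is corrected proportionally to the (wrong-signed) field."""
    rng = random.Random(seed)
    teacher = [rng.gauss(0, 1) for _ in range(n)]
    student = [rng.gauss(0, 1) for _ in range(n)]
    for _ in range(n_examples):
        x = [rng.gauss(0, 1) for _ in range(n)]
        label = 1 if sum(t * xi for t, xi in zip(teacher, x)) >= 0 else -1
        field = sum(s * xi for s, xi in zip(student, x))
        if label * field < 0:              # misclassified example
            norm2 = sum(xi * xi for xi in x)
            c = eta * (-field) / norm2     # eta=1 restores field to zero
            student = [s + c * xi for s, xi in zip(student, x)]
    # generalization error for perceptrons = (angle between weights) / pi
    dot = sum(t * s for t, s in zip(teacher, student))
    nt = math.sqrt(sum(t * t for t in teacher))
    ns = math.sqrt(sum(s * s for s in student))
    return math.acos(max(-1.0, min(1.0, dot / (nt * ns)))) / math.pi

eps = adatron_online()
```

For a learnable rule this error decays with the number of examples per weight; the abstract's point is that for non-monotonic (unlearnable) teachers the plain rule is no longer optimal and the learning rate must be tuned.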
On-line learning of non-monotonic rules by simple perceptron
We study the generalization ability of a simple perceptron which learns
unlearnable rules. The rules are presented by a teacher perceptron with a
non-monotonic transfer function. The student is trained in the on-line mode.
The asymptotic behaviour of the generalization error is estimated under various
conditions. Several learning strategies are proposed and improved to obtain the
theoretical lower bound of the generalization error.
Comment: LaTeX 20 pages using IOP LaTeX preprint style file, 14 figures
Interprofessional Health Team Communication About Hospital Discharge: An Implementation Science Evaluation Study
The Consolidated Framework for Implementation Research guided formative evaluation of the implementation of a redesigned interprofessional team rounding process. The purpose of the redesigned process was to improve health team communication about hospital discharge. Themes emerging from interviews of patients, nurses, and providers revealed the inherent value and positive characteristics of the new process, but also workflow, team hierarchy, and process challenges to successful implementation. The evaluation identified actionable recommendations for modifying the implementation process.
Thermal Equilibrium with the Wiener Potential: Testing the Replica Variational Approximation
We consider the statistical mechanics of a classical particle in a
one-dimensional box subjected to a random potential which constitutes a Wiener
process on the coordinate axis. The distribution of the free energy and all
correlation functions of the Gibbs states may be calculated exactly as a
function of the box length and temperature. This allows for a detailed test of
results obtained by the replica variational approximation scheme. We show that
this scheme provides a reasonable estimate of the averaged free energy.
Furthermore our results shed more light on the validity of the concept of
approximate ultrametricity which is a central assumption of the replica
variational method.
Comment: 6 pages, 1 file LaTeX2e generating 2 eps-files for 2 figures automatically
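The model itself is easy to sample numerically, which is a rough sketch (not the paper's exact calculation): discretize the Wiener-process potential on a grid, approximate the partition function by a Riemann sum, and collect the free energy over disorder realizations. Box length, temperature, and grid size below are hypothetical choices:

```python
import math
import random

def free_energy_sample(L=1.0, n=1000, beta=1.0, seed=0):
    """Free energy of a classical particle in the box [0, L] with a random
    potential V(x) that is a Wiener process in x (V(0) = 0, independent
    Gaussian increments of variance dx).  Z is a Riemann sum of exp(-beta V)."""
    rng = random.Random(seed)
    dx = L / n
    v, z = 0.0, 0.0
    for _ in range(n):
        v += rng.gauss(0.0, math.sqrt(dx))  # Wiener increment over dx
        z += math.exp(-beta * v) * dx       # contribution to partition sum
    return -math.log(z) / beta              # F = -T ln Z (k_B = 1)

# Distribution of the free energy over independent disorder realizations
samples = [free_energy_sample(seed=s) for s in range(500)]
mean_F = sum(samples) / len(samples)
```

The exact distribution of F as a function of box length and temperature is what the paper computes analytically; a sampler like this only provides the Monte Carlo baseline against which the replica variational estimate of the averaged free energy could be compared.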