532 research outputs found

    Signal and Noise in Correlation Matrix

    Using a random matrix technique, we determine an exact relation between the eigenvalue spectrum of the covariance matrix and that of its estimator. This relation can be used in practice to compute eigenvalue invariants of the covariance (correlation) matrix. The results can be applied to various problems where correlations in a system with many degrees of freedom are estimated experimentally, as in statistical physics, lattice measurements of field theory, genetics, quantitative finance and other applications of multivariate statistics. Comment: 17 pages, 3 figures, corrected typos, revtex style changed to elsart
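A quick numerical illustration of the setting (our own sketch, not the paper's exact relation): for pure i.i.d. noise the true correlation matrix is the identity, yet the eigenvalues of its estimator spread over the whole Marchenko-Pastur bulk determined by the ratio q = N/T.

```python
import numpy as np

# Pure-noise data: the true covariance is the identity, so every true
# eigenvalue equals 1.  The estimator's eigenvalues instead fill the
# Marchenko-Pastur interval [(1-sqrt(q))^2, (1+sqrt(q))^2], q = N/T.
rng = np.random.default_rng(0)
N, T = 100, 400                         # N variables, T observations
q = N / T

X = rng.standard_normal((T, N))         # i.i.d. noise
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / T                       # covariance estimator
eigvals = np.linalg.eigvalsh(C)

lam_min = (1 - np.sqrt(q)) ** 2
lam_max = (1 + np.sqrt(q)) ** 2
# Fraction of estimator eigenvalues inside the MP bulk (small tolerance
# for finite-size edge fluctuations).
inside = np.mean((eigvals > lam_min - 0.1) & (eigvals < lam_max + 0.1))
print(f"MP bulk [{lam_min:.2f}, {lam_max:.2f}], fraction inside: {inside:.2f}")
```

Any eigenvalue well outside the bulk would signal genuine correlation rather than estimation noise, which is how such results are used in practice.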

    Generalization properties of finite size polynomial Support Vector Machines

    The learning properties of finite-size polynomial Support Vector Machines are analyzed for realizable classification tasks. The normalization of the high-order features acts as a squeezing factor, introducing a strong anisotropy in the pattern distribution in feature space. As a function of the training set size, the corresponding generalization error presents a crossover, more or less abrupt depending on the distribution's anisotropy and on the task to be learned, between a fast-decreasing and a slowly-decreasing regime. This behaviour corresponds to the stepwise decrease found by Dietrich et al. [Phys. Rev. Lett. 82 (1999) 2975-2978] in the thermodynamic limit. The theoretical results are in excellent agreement with the numerical simulations. Comment: 12 pages, 7 figures
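The "squeezing" can be illustrated with a toy computation (our assumptions: patterns normalized to the unit sphere and an explicit degree-2 feature map): after normalization the quadratic feature directions carry far less variance than the linear ones, so the pattern cloud is strongly anisotropic in feature space.

```python
import numpy as np

# Patterns on the unit sphere in R^n, mapped to a quadratic feature
# space containing linear components x_i and quadratic ones x_i * x_j.
rng = np.random.default_rng(1)
n, P = 20, 5000
X = rng.standard_normal((P, n))
X /= np.linalg.norm(X, axis=1, keepdims=True)

lin = X                                              # linear features
quad = np.einsum('pi,pj->pij', X, X).reshape(P, -1)  # quadratic features

var_lin = lin.var(axis=0).mean()
var_quad = quad.var(axis=0).mean()
print(f"mean variance, linear features   : {var_lin:.4f}")
print(f"mean variance, quadratic features: {var_quad:.4f}")
# The quadratic directions carry much smaller variance: the pattern
# distribution is "squeezed" along the high-order features.
```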

    Field Theoretical Analysis of On-line Learning of Probability Distributions

    On-line learning of probability distributions is analyzed from a field-theoretical point of view. We obtain an optimal on-line learning algorithm, since the renormalization group enables us to control the number of degrees of freedom of a system according to the number of examples. We do not learn the parameters of a model, but the probability distributions themselves; therefore, the algorithm requires no a priori knowledge of a model. Comment: 4 pages, 1 figure, RevTeX
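A minimal sketch of the on-line, model-free setting (the simple 1/t update rule is our illustrative choice; the paper's renormalization-group algorithm is far more refined): the estimated distribution itself, not a model parameter, is updated after each example.

```python
import numpy as np

# On-line estimation of a discrete distribution: after each example the
# histogram estimate moves toward the observed outcome with a decaying
# learning rate 1/t (which reproduces the empirical frequencies).
rng = np.random.default_rng(2)
true_p = np.array([0.5, 0.3, 0.2])
p_hat = np.ones(3) / 3                 # uninformative initial estimate

for t in range(1, 20001):
    k = rng.choice(3, p=true_p)        # one example per time step
    onehot = np.zeros(3)
    onehot[k] = 1.0
    p_hat += (onehot - p_hat) / t      # on-line update, rate 1/t

err = np.abs(p_hat - true_p).sum()
print(p_hat.round(3), f"L1 error = {err:.3f}")
```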

    Statistical Mechanics of Learning in the Presence of Outliers

    Using methods of statistical mechanics, we analyse the effect of outliers on the supervised learning of a classification problem. The learning strategy aims at selecting informative examples and discarding outliers. We compare two algorithms which perform the selection either in a soft or in a hard way. When the fraction of outliers grows large, the estimation errors undergo a first-order phase transition. Comment: 24 pages, 7 figures (minor extensions added)
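The soft/hard distinction can be mimicked in a toy estimation problem (entirely our own construction, not the paper's model): estimate the centre of a Gaussian sample contaminated by outliers, either discarding points beyond a cutoff (hard selection) or smoothly down-weighting distant points (soft selection).

```python
import numpy as np

# 90% inliers around 0, 10% outliers around 8.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0.0, 1.0, 900),
                       rng.normal(8.0, 1.0, 100)])

naive = data.mean()                    # no selection: badly biased

med = np.median(data)                  # robust reference point
# Hard selection: discard everything further than 3 from the median.
hard = data[np.abs(data - med) < 3.0].mean()
# Soft selection: Gaussian weights centred on the median.
w = np.exp(-0.5 * (data - med) ** 2 / 3.0 ** 2)
soft = np.sum(w * data) / np.sum(w)

print(f"naive {naive:.2f}  hard {hard:.2f}  soft {soft:.2f}")
```

Both selection rules suppress the outlier bias; the paper's point is that their estimation errors behave differently as the outlier fraction grows.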

    Retarded Learning: Rigorous Results from Statistical Mechanics

    We study the learning of probability distributions characterized by an unknown symmetry direction. Based on an entropic performance measure and the variational method of statistical mechanics, we derive exact upper and lower bounds on the scaled critical number of examples below which learning of the direction is impossible. The asymptotic tightness of the bounds suggests an asymptotically optimal method for learning nonsmooth distributions. Comment: 8 pages, 1 figure
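A numerical sketch of the retarded-learning phenomenon (a spiked-covariance construction we assume for illustration, not the paper's distribution): examples are Gaussian with extra variance along an unknown direction B, and the top eigenvector of the sample covariance carries essentially no information about B until the number of examples exceeds a critical value proportional to the dimension.

```python
import numpy as np

rng = np.random.default_rng(4)
N, rho = 200, 1.0
B = np.zeros(N)
B[0] = 1.0                                 # unknown direction (wlog e_1)

def overlap(p):
    """Overlap of the top sample-covariance eigenvector with B
    from p examples."""
    X = rng.standard_normal((p, N))
    X[:, 0] *= np.sqrt(1.0 + rho)          # extra variance along B
    C = X.T @ X / p
    v = np.linalg.eigh(C)[1][:, -1]        # top eigenvector
    return abs(v @ B)

o_low = overlap(N // 4)                    # few examples: direction hidden
o_high = overlap(10 * N)                   # many examples: direction found
print(f"p = N/4 : overlap {o_low:.2f}")
print(f"p = 10N : overlap {o_high:.2f}")
```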

    Statistical mechanics of random two-player games

    Using methods from the statistical mechanics of disordered systems, we analyze the properties of bimatrix games with random payoffs in the limit where the number of pure strategies of each player tends to infinity. We analytically calculate quantities such as the number of equilibrium points, the expected payoff, and the fraction of strategies played with non-zero probability as a function of the correlation between the payoff matrices of both players, and compare the results with numerical simulations. Comment: 16 pages, 6 figures; for further information see http://itp.nat.uni-magdeburg.de/~jberg/games.htm
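For pure strategies the equilibrium count can be checked directly (mixed equilibria, which the analysis covers, are much harder to enumerate): a pure profile (i, j) is a Nash equilibrium iff A[i, j] is maximal in column j and B[i, j] is maximal in row i, and for i.i.d. payoffs the expected number of pure equilibria equals 1 for any game size.

```python
import numpy as np

rng = np.random.default_rng(5)

def n_pure_equilibria(n):
    A = rng.standard_normal((n, n))   # row player's payoffs
    B = rng.standard_normal((n, n))   # column player's payoffs
    # Row player deviates over i (axis 0), column player over j (axis 1).
    best_A = A == A.max(axis=0, keepdims=True)
    best_B = B == B.max(axis=1, keepdims=True)
    return int(np.sum(best_A & best_B))

counts = [n_pure_equilibria(50) for _ in range(2000)]
mean_count = np.mean(counts)
print(f"mean number of pure equilibria: {mean_count:.2f}")
```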

    Efficient statistical inference for stochastic reaction processes

    We address the problem of estimating unknown model parameters and state variables in stochastic reaction processes when only sparse and noisy measurements are available. Using an asymptotic system-size expansion for the backward equation, we derive an efficient approximation for this problem. We demonstrate the validity of our approach on model systems and generalize the method to the case where some state variables are not observed. Comment: 4 pages, 2 figures, 2 tables; typos corrected, remark about Kalman smoother added
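A hedged sketch of the general setting (a toy birth-death process with a simple moment estimator, not the paper's system-size expansion): simulate the reactions 0 -> X (rate k) and X -> 0 (rate g*x) exactly with the Gillespie algorithm, then recover k from sparse samples of the stationary state, where the mean is k/g.

```python
import numpy as np

rng = np.random.default_rng(6)
k_true, g = 50.0, 1.0

def gillespie(t_end):
    """Exact stochastic simulation, sampled on a sparse unit-time grid."""
    t, x, samples = 0.0, 0, []
    next_sample = 1.0
    while t < t_end:
        rates = np.array([k_true, g * x])       # birth, death propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # waiting time to next event
        while next_sample < t and next_sample < t_end:
            samples.append(x)                   # record pre-jump state
            next_sample += 1.0
        x += 1 if rng.random() < rates[0] / total else -1
    return np.array(samples)

xs = gillespie(t_end=500.0)[50:]                # drop the transient
k_hat = g * xs.mean()                           # moment estimate: <x> = k/g
print(f"true k = {k_true}, estimated k = {k_hat:.1f}")
```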

    Analysis of ensemble learning using simple perceptrons based on online learning theory

    Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to theoretically obtain the generalization error. This paper shows that the ensemble generalization error can be calculated by using two order parameters: the similarity between a teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these order parameters are derived for general learning rules. The concrete forms of these differential equations are derived analytically for three well-known rules: Hebbian learning, perceptron learning and AdaTron learning. The ensemble generalization errors of these three rules are calculated from the solutions of their differential equations. As a result, the three rules show different characteristics in their affinity for ensemble learning, that is, in "maintaining variety among students". The results show that AdaTron learning is superior to the other two rules with respect to this affinity. Comment: 30 pages, 17 figures
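The order-parameter picture can be reproduced for a single student (the paper analyses ensembles of K students; Hebbian learning is one of its three rules): the generalization error depends only on the normalized teacher-student overlap rho, through eps = arccos(rho)/pi.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500
B = rng.standard_normal(N)
B /= np.linalg.norm(B)                     # teacher vector, |B| = 1
J = np.zeros(N)                            # student vector

def gen_error(J):
    """eps = arccos(rho)/pi with rho the teacher-student overlap."""
    rho = J @ B / (np.linalg.norm(J) + 1e-12)
    return np.arccos(np.clip(rho, -1.0, 1.0)) / np.pi

errs = []
for step in range(20 * N):                 # alpha = examples/N up to 20
    x = rng.standard_normal(N)
    sigma = np.sign(B @ x)                 # teacher label
    J += sigma * x / N                     # Hebbian on-line update
    if (step + 1) % (5 * N) == 0:
        errs.append(gen_error(J))          # error at alpha = 5, 10, 15, 20

print(["%.3f" % e for e in errs])
```

Tracking a second student and the student-student overlap, as in the paper, is a direct extension of this loop.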

    On-line learning of non-monotonic rules by simple perceptron

    We study the generalization ability of a simple perceptron that learns unlearnable rules. The rules are presented by a teacher perceptron with a non-monotonic transfer function. The student is trained in the on-line mode. The asymptotic behaviour of the generalization error is estimated under various conditions. Several learning strategies are proposed and improved to attain the theoretical lower bound of the generalization error. Comment: LaTeX, 20 pages using the IOP LaTeX preprint style file, 14 figures
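A sketch with a reversed-wedge teacher (a common choice of non-monotonic transfer function in this literature; the wedge width a and the Hebbian student rule are our illustrative assumptions): because the teacher output is sign((u+a)u(u-a)), a monotonic student perceptron cannot reproduce the rule, and its generalization error saturates at a nonzero residual value.

```python
import numpy as np

rng = np.random.default_rng(8)
N, a = 500, 0.3
B = rng.standard_normal(N)
B /= np.linalg.norm(B)                         # teacher direction
J = np.zeros(N)                                # student vector

def teacher(u):
    # Reversed-wedge rule: agrees with sign(u) for |u| > a,
    # disagrees for |u| < a (non-monotonic in u).
    return np.sign((u + a) * u * (u - a))

for _ in range(50 * N):                        # on-line Hebbian training
    x = rng.standard_normal(N)
    J += teacher(B @ x) * x / N

X_test = rng.standard_normal((5000, N))        # fresh test examples
err = np.mean(np.sign(X_test @ J) != teacher(X_test @ B))
print(f"residual generalization error: {err:.3f}")
```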

    Thermal Equilibrium with the Wiener Potential: Testing the Replica Variational Approximation

    We consider the statistical mechanics of a classical particle in a one-dimensional box subjected to a random potential that constitutes a Wiener process on the coordinate axis. The distribution of the free energy and all correlation functions of the Gibbs states can be calculated exactly as functions of the box length and temperature. This allows for a detailed test of results obtained by the replica variational approximation scheme. We show that this scheme provides a reasonable estimate of the averaged free energy. Furthermore, our results shed more light on the validity of the concept of approximate ultrametricity, which is a central assumption of the replica variational method. Comment: 6 pages, 1 LaTeX2e file automatically generating 2 eps-files for 2 figures
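A discretized numerical sketch of the model (the paper computes these quantities exactly; grid size and parameters here are our choices): the potential is a Brownian path on [0, L], and each disorder realization yields a free energy F = -T log Z, whose disorder average is what the replica variational scheme approximates.

```python
import numpy as np

rng = np.random.default_rng(9)
L, M, T = 10.0, 2000, 1.0              # box length, grid points, temperature
dx = L / M

def free_energy():
    """Free energy of one disorder realization of the Wiener potential."""
    dV = rng.standard_normal(M) * np.sqrt(dx)   # Wiener increments
    V = np.cumsum(dV)                           # Brownian potential V(x)
    Z = np.sum(np.exp(-V / T)) * dx             # discretized partition sum
    return -T * np.log(Z)

F = np.array([free_energy() for _ in range(500)])
print(f"<F> = {F.mean():.2f}  (sample-to-sample std {F.std():.2f})")
```

The sample-to-sample fluctuations of F are exactly the distribution the paper characterizes analytically.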