
    Thermodynamic properties of extremely diluted symmetric Q-Ising neural networks

    Using the replica-symmetric mean-field theory approach, the thermodynamic and retrieval properties of extremely diluted symmetric Q-Ising neural networks are studied. In particular, capacity-gain-parameter and capacity-temperature phase diagrams are derived for Q = 3, 4 and Q = \infty. The zero-temperature results are compared with those obtained from a study of the dynamics of the model. Furthermore, the de Almeida-Thouless line is determined. Where appropriate, the difference with other Q-Ising architectures is outlined. Comment: 16 pages, LaTeX, including 6 eps-figures. Corrections, also in most of the figures, have been made.
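    As a reference point for the notation, a common convention for Q-Ising networks (an assumption here; the paper may normalize differently) takes the spins on Q equidistant levels in [-1, 1], updated at zero temperature by minimizing a local energy with gain parameter b:

    \[
      s_i \in \mathcal{S}_Q = \Bigl\{ -1 + \tfrac{2(k-1)}{Q-1} : k = 1, \dots, Q \Bigr\},
      \qquad
      s_i(t+1) = \arg\min_{s \in \mathcal{S}_Q} \bigl[ b\,s^2 - h_i(t)\,s \bigr],
      \qquad
      h_i = \sum_j J_{ij} s_j.
    \]

    In a parametrization of this kind, the gain parameter b is the natural second axis of a capacity-gain-parameter phase diagram.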

    The Little-Hopfield model on a Random Graph

    We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and where the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision, and for sufficiently small values of the connectivity parameter, we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement. Comment: 14 pages, 4 figures.
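    A minimal numerical sketch of the synchronous (Little) dynamics on a finitely connected random graph; the sizes N and P, the mean connectivity c, and the Hebbian scaling below are illustrative assumptions, not the paper's finite-connectivity replica setup:

    import numpy as np

    rng = np.random.default_rng(0)
    N, P, c = 200, 3, 10                             # neurons, patterns, mean connectivity

    xi = rng.choice([-1, 1], size=(P, N))            # stored patterns
    A = (rng.random((N, N)) < c / N).astype(float)   # sparse random graph
    A = np.triu(A, 1); A = A + A.T                   # symmetric, no self-links
    J = A * (xi.T @ xi) / c                          # Hebbian couplings on the graph

    def little_step(s):
        # Synchronous (Little) update: every spin uses the same previous state.
        return np.sign(J @ s + 1e-12)

    s = np.where(rng.random(N) < 0.1, -xi[0], xi[0])  # noisy cue of pattern 0
    for _ in range(20):
        s = little_step(s)
    print("overlap with pattern 0:", s @ xi[0] / N)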

    On the conditions for the existence of Perfect Learning and power law in learning from stochastic examples by Ising perceptrons

    In a previous letter, we studied learning from stochastic examples by perceptrons with Ising weights in the framework of statistical mechanics. Under the one-step replica symmetry breaking ansatz, the behaviours of the learning curves were classified according to some local property of the rules by which examples were drawn. Further, the conditions for the existence of Perfect Learning, together with other behaviours of the learning curves, were given. In this paper, we give the detailed derivation of these results and further arguments about Perfect Learning, together with extensive numerical calculations. Comment: 28 pages, 43 figures. Submitted to J. Phys.
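    A toy version of this setting, hedged: Ising (binary) weights, labels generated by a teacher and then randomly flipped ("stochastic examples"), and the minimum-training-error student found by exhaustive search; all sizes and the flip rate are illustrative, not the paper's.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)
    N, P, noise = 11, 40, 0.1                        # odd N avoids zero local fields
    w_teacher = rng.choice([-1, 1], size=N)          # teacher with Ising weights

    X = rng.choice([-1, 1], size=(P, N))
    y = np.sign(X @ w_teacher)
    y[rng.random(P) < noise] *= -1                   # labels flipped with probability `noise`

    best_err, best_w = P + 1, None
    for w in itertools.product([-1, 1], repeat=N):   # all 2^N Ising weight vectors
        w = np.asarray(w)
        err = int(np.count_nonzero(np.sign(X @ w) != y))
        if err < best_err:
            best_err, best_w = err, w

    print("training errors:", best_err, "teacher overlap:", best_w @ w_teacher / N)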

    Statistical Mechanics of Learning in the Presence of Outliers

    Using methods of statistical mechanics, we analyse the effect of outliers on the supervised learning of a classification problem. The learning strategy aims at selecting informative examples and discarding outliers. We compare two algorithms which perform the selection either in a soft or a hard way. When the fraction of outliers grows large, the estimation errors undergo a first-order phase transition. Comment: 24 pages, 7 figures (minor extensions added).
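    The abstract does not specify the two algorithms; the sketch below only illustrates the generic distinction between hard selection (discard suspected outliers outright) and soft selection (downweight them smoothly) when estimating a direction from contaminated data. All distributions and weighting functions are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    N, P, rho = 50, 400, 0.25                   # dimension, examples, outlier fraction
    B = np.ones(N) / np.sqrt(N)                 # direction to be learned (hypothetical)

    n_out = int(rho * P)
    X = rng.normal(size=(P, N)) + 1.5 * B       # informative examples clustered around B
    X[:n_out] = rng.normal(scale=4.0, size=(n_out, N))   # outliers: broad, uninformative

    def estimate(weights):
        J = (weights[:, None] * X).sum(axis=0)  # weighted Hebbian estimate of the direction
        return J / np.linalg.norm(J)

    r = np.linalg.norm(X, axis=1)
    J_hard = estimate((r < 1.2 * np.median(r)).astype(float))  # hard: keep or drop
    J_soft = estimate(np.exp(-(r / r.mean()) ** 2))            # soft: smooth downweighting

    print("overlap, hard selection:", B @ J_hard)
    print("overlap, soft selection:", B @ J_soft)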

    Statistical Mechanics of Support Vector Networks

    Using methods of statistical physics, we investigate the generalization performance of support vector machines (SVMs), which have recently been introduced as a general alternative to neural networks. For nonlinear classification rules, the generalization error saturates on a plateau when the number of examples is too small to properly estimate the coefficients of the nonlinear part. When trained on simple rules, we find that SVMs overfit only weakly. The performance of SVMs is strongly enhanced when the distribution of the inputs has a gap in feature space. Comment: REVTeX, 4 pages, 2 figures, accepted by Phys. Rev. Lett. (typos corrected).
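    An empirical sketch of the plateau effect using scikit-learn (an assumption of this note; the paper's analysis is analytical): a quadratic-kernel SVM trained on a simple, effectively linear rule at growing sample sizes, so the nonlinear part of the machine is underdetermined at small P.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    N = 20

    def teacher(X):
        # A simple (linear) target rule learned by a nonlinear machine.
        return np.sign(X[:, 0] + X[:, 1])

    Xte = rng.normal(size=(2000, N))
    yte = teacher(Xte)

    for P in (20, 80, 320, 1280):               # growing training-set sizes
        Xtr = rng.normal(size=(P, N))
        clf = SVC(kernel="poly", degree=2).fit(Xtr, teacher(Xtr))
        print(P, "test accuracy:", clf.score(Xte, yte))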

    Fixed Points of Hopfield Type Neural Networks

    The set of fixed points of the Hopfield-type network is under investigation. The connection matrix of the network is constructed according to the Hebb rule from the set of memorized patterns, which are treated as distorted copies of a standard vector. It is found that the dependence of the set of fixed points on the value of the distortion parameter can be described analytically. The obtained results are interpreted in terms of neural networks and the Ising model. Comment: RevTeX, 19 pages, 2 Postscript figures; the full version of the earlier brief report (cond-mat/9901251).
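    A small sketch of the construction described in the abstract (the parameter values are my own): the patterns are distorted copies of one standard vector, the connection matrix follows the Hebb rule, and the standard vector is tested for being a fixed point.

    import numpy as np

    rng = np.random.default_rng(4)
    N, P, a = 101, 20, 0.15                        # network size, patterns, distortion parameter

    e = np.ones(N)                                 # the standard vector
    xi = np.where(rng.random((P, N)) < a, -e, e)   # memorized patterns: distorted copies of e

    J = xi.T @ xi / N
    np.fill_diagonal(J, 0.0)                       # Hebb rule, no self-couplings

    s = np.sign(J @ e)                             # one synchronous sweep started from e
    print("standard vector is a fixed point:", bool(np.all(s == e)))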

    Correlated patterns in non-monotonic graded-response perceptrons

    The optimal capacity of graded-response perceptrons storing biased and spatially correlated patterns with non-monotonic input-output relations is studied. It is shown that only the structure of the output patterns is important for the overall performance of the perceptrons. Comment: 4 pages, 4 figures.
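    For intuition (an illustrative choice, not taken from the paper), a non-monotonic graded response could be

    \[ g(h) = \tanh(\beta h)\, e^{-h^2 / (2\theta^2)}, \]

    which is graded and increasing for small |h| but falls back toward zero for large inputs, unlike a monotonic sigmoid.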

    Phase transitions in optimal unsupervised learning

    We determine the optimal performance of learning the orientation of the symmetry axis of a set of P = alpha N points that are uniformly distributed in all directions but one on the N-dimensional sphere. The components along the symmetry-breaking direction, given by the unit vector B, are sampled from a mixture of two Gaussians of variable separation and width. The typical optimal performance is measured through the overlap R_opt = B.J*, where J* is the optimal guess of the symmetry-breaking direction. Within this general scenario, the learning curves R_opt(alpha) may present first-order transitions if the clusters are narrow enough. Close to these transitions, high-performance states can be obtained through the minimization of the corresponding optimal potential, although these solutions are metastable, and therefore not learnable, within the usual Bayesian scenario. Comment: 9 pages, 8 figures, submitted to PRE. This new version of the paper contains one new section, Bayesian versus optimal solutions, where we explain in detail the results supporting our claim that Bayesian learning may not be optimal. Figure 4 of the first submission was difficult to understand; we replaced it by two new figures (Figs. 4 and 5 in this new version) containing more detail.
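    A loose numerical sketch of the data model (the estimator below is a crude PCA guess, not the optimal or Bayesian J* of the paper; the separation and width values are illustrative):

    import numpy as np

    rng = np.random.default_rng(5)
    N, alpha = 100, 4.0
    P = int(alpha * N)
    B = np.zeros(N); B[0] = 1.0                # symmetry-breaking direction (w.l.o.g. e_1)

    sep, width = 1.0, 0.5                      # cluster separation and width (illustrative)
    along = rng.choice([-sep, sep], size=P) + width * rng.normal(size=P)
    X = np.column_stack([along, rng.normal(size=(P, N - 1))])
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the N-sphere

    C = X.T @ X / P                            # sample covariance of the points
    eigvals, V = np.linalg.eigh(C)
    J = V[:, -1]                               # direction of largest variance
    print("overlap R = |B.J|:", abs(B @ J))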

    On the center of mass of Ising vectors

    We show that the center of mass of Ising vectors that obey some simple constraints is again an Ising vector. Comment: 8 pages, 3 figures, LaTeX. Claims in connection with disordered systems have been withdrawn; more detailed description of the simulations; inset added to figure.
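    For concreteness (notation assumed here, not quoted from the paper): given P Ising vectors \xi^{\mu} \in \{-1, +1\}^N, their center of mass is

    \[ m = \frac{1}{P} \sum_{\mu=1}^{P} \xi^{\mu}, \]

    and the statement is that, for the constrained families the paper considers, m itself again has all components equal to \pm 1, even though in general its components only lie in [-1, 1].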

    Slowly evolving geometry in recurrent neural networks I: extreme dilution regime

    We study extremely diluted spin models of neural networks in which the connectivity evolves in time, although adiabatically slowly compared to the neurons, according to stochastic equations which on average aim to reduce frustration. The (fast) neurons and (slow) connectivity variables equilibrate separately, but at different temperatures. Our model is exactly solvable in equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e. recall of one pattern). These show that, as the connectivity temperature is lowered, the volume of the retrieval phase diverges and the fraction of misaligned spins is reduced. Still, one always retains a region in the retrieval phase where recall states other than the one corresponding to the 'condensed' pattern are locally stable, so the associative memory character of our model is preserved. Comment: 18 pages, 6 figures.
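    Two-timescale models of this kind typically acquire the following partition-sum structure (a standard result for coupled fast/slow spin systems, stated here as an assumption about the present model): the slow connectivity variables c, equilibrating at inverse temperature \tilde{\beta} against the free energy of the fast neurons at inverse temperature \beta, are governed by

    \[ Z_{\text{eff}} = \sum_{\{c\}} \bigl[ Z_{\beta}(\{c\}) \bigr]^{\,n}, \qquad n = \frac{\tilde{\beta}}{\beta} = \frac{T}{\tilde{T}}, \]

    so the equilibrium theory resembles a replica calculation with a finite, real replica number n fixed by the ratio of the two temperatures.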