
    The Little-Hopfield model on a Random Graph

    We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is finite and where the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision, and for sufficiently small values of the connectivity parameter, we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement. Comment: 14 pages, 4 figures
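
    A minimal numerical sketch of the synchronous (Little-Hopfield) dynamics described above, assuming Hebbian couplings diluted by a sparse random graph of mean connectivity c and noisy parallel Glauber updates; all sizes and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P, c = 1000, 3, 10           # neurons, stored patterns, mean connectivity
beta = 10.0                     # inverse temperature

# Store P random binary patterns with the Hebb rule.
xi = rng.choice([-1, 1], size=(P, N))

# Sparse symmetric random graph: each pair linked with probability c/N.
A = (rng.random((N, N)) < c / N).astype(float)
A = np.triu(A, 1)
A = A + A.T

J = A * (xi.T @ xi) / c         # diluted Hebbian couplings

# Little-Hopfield dynamics: ALL spins are updated in parallel.
s = xi[0] * rng.choice([1, -1], size=N, p=[0.8, 0.2])  # noisy cue
for _ in range(50):
    h = J @ s                   # local fields
    s = np.where(rng.random(N) < 1 / (1 + np.exp(-2 * beta * h)), 1, -1)

print("overlap with pattern 0:", s @ xi[0] / N)
```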

    Effects of Water Stress on Seed Production in Ruzi Grass (Brachiaria ruziziensis Germain and Everard)

    Water stress at different stages of reproductive development influenced seed yield in Ruzi grass differently. Under mild water stress, the earlier in reproductive development the stress was applied (before ear emergence), the faster the plants recovered and the less the ultimate damage to inflorescence structure and seed set, compared with water stress occurring at later stages, after inflorescences had emerged. Conversely, severe water stress before ear emergence damaged both inflorescence numbers and seed quality. Permanent damage to the reproductive structures resulted in deformed inflorescences. Moreover, basal vegetative tillers were stunted and were capable of only limited regrowth after re-watering.

    Thermodynamic properties of extremely diluted symmetric Q-Ising neural networks

    Using the replica-symmetric mean-field theory approach, the thermodynamic and retrieval properties of extremely diluted symmetric Q-Ising neural networks are studied. In particular, capacity-gain parameter and capacity-temperature phase diagrams are derived for Q = 3, 4 and Q = ∞. The zero-temperature results are compared with those obtained from a study of the dynamics of the model. Furthermore, the de Almeida-Thouless line is determined. Where appropriate, the difference with other Q-Ising architectures is outlined. Comment: 16 pages Latex including 6 eps-figures. Corrections, also in most of the figures, have been made
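
    A sketch of the zero-temperature Q-Ising update rule with gain parameter b: each neuron picks, among the Q equidistant states, the one minimising a single-site energy. For brevity the couplings here are dense Hebbian ones rather than the extremely diluted symmetric architecture analysed in the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

N, P, Q, b = 500, 5, 3, 0.5      # neurons, patterns, states, gain parameter
states = np.linspace(-1, 1, Q)   # Q equidistant spin values, e.g. [-1, 0, 1]

# Patterns drawn uniformly from the Q allowed states.
xi = rng.choice(states, size=(P, N))
a = np.mean(states**2)           # pattern activity
J = (xi.T @ xi) / (a * N)        # Hebb-type couplings
np.fill_diagonal(J, 0.0)

def update(s):
    """Zero-temperature Q-Ising rule: each spin minimises
    an input-dependent single-site energy -h*s + b*s^2."""
    h = J @ s
    # Energy of each candidate state for each neuron: shape (N, Q).
    E = -np.outer(h, states) + b * states**2
    return states[np.argmin(E, axis=1)]

s = xi[0].copy()
for _ in range(20):
    s = update(s)
print("retrieval overlap:", s @ xi[0] / (a * N))
```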

    Phase transitions in optimal unsupervised learning

    We determine the optimal performance of learning the orientation of the symmetry axis of a set of P = αN points that are uniformly distributed in all directions but one on the N-dimensional sphere. The components along the symmetry-breaking direction, given by the unit vector B, are sampled from a mixture of two Gaussians of variable separation and width. The typical optimal performance is measured through the overlap R_opt = B·J*, where J* is the optimal guess of the symmetry-breaking direction. Within this general scenario, the learning curves R_opt(α) may present first-order transitions if the clusters are narrow enough. Close to these transitions, high-performance states can be obtained through the minimization of the corresponding optimal potential, although these solutions are metastable, and therefore not learnable, within the usual Bayesian scenario. Comment: 9 pages, 8 figures, submitted to PRE. This new version of the paper contains one new section, Bayesian versus optimal solutions, where we explain in detail the results supporting our claim that Bayesian learning may not be optimal. Figure 4 of the first submission was difficult to understand; we replaced it by two new figures (Figs. 4 and 5 in this new version) containing more detail
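
    A hedged sketch of the data model just described: P = αN points on the N-sphere whose components along a hidden unit vector B follow a two-Gaussian mixture. The leading eigenvector of the empirical covariance serves here as a crude stand-in for the optimal estimator J* studied in the paper; rho and sep are illustrative parameter names.

```python
import numpy as np

rng = np.random.default_rng(2)

N, alpha = 200, 4.0
P = int(alpha * N)
rho, sep = 0.3, 1.0              # cluster width and separation (illustrative)

B = np.zeros(N)
B[0] = 1.0                       # hidden symmetry-breaking direction

# Components along B: mixture of two Gaussians; the rest: isotropic noise.
along = sep * rng.choice([-1, 1], P) + rho * rng.standard_normal(P)
perp = rng.standard_normal((P, N))
perp -= np.outer(perp @ B, B)    # project out the B component
X = np.outer(along, B) + perp
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the sphere

# Crude estimator: leading eigenvector of the data covariance.
w, V = np.linalg.eigh(X.T @ X)
J = V[:, -1]
print("overlap R =", abs(B @ J))
```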

    Slowly evolving geometry in recurrent neural networks I: extreme dilution regime

    We study extremely diluted spin models of neural networks in which the connectivity evolves in time, although adiabatically slowly compared to the neurons, according to stochastic equations which on average aim to reduce frustration. The (fast) neurons and (slow) connectivity variables equilibrate separately, but at different temperatures. Our model is exactly solvable in equilibrium. We obtain phase diagrams upon making the condensed ansatz (i.e. recall of one pattern). These show that, as the connectivity temperature is lowered, the volume of the retrieval phase diverges and the fraction of mis-aligned spins is reduced. Still, one always retains a region in the retrieval phase where recall states other than the one corresponding to the `condensed' pattern are locally stable, so the associative memory character of our model is preserved. Comment: 18 pages, 6 figures
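
    A toy sketch of the two-timescale idea, assuming a simple Metropolis move for the bonds: fast Glauber spins at inverse temperature beta equilibrate against a connectivity matrix updated far more rarely, at its own inverse temperature beta_c, in a way that tends to reduce frustration. The bond move and the schedule are illustrative, not the paper's exact ensemble (which, among other things, controls the mean connectivity).

```python
import numpy as np

rng = np.random.default_rng(3)

N, P = 200, 3
beta, beta_c = 2.0, 4.0          # spin and connectivity inverse temperatures
C = 8                            # mean connectivity of the initial graph

xi = rng.choice([-1, 1], size=(P, N))
K = (xi.T @ xi) / C              # Hebbian bond strengths
A = rng.random((N, N)) < C / N
A = np.triu(A, 1)
A = (A | A.T).astype(float)

def spin_sweep(s, A):
    """One sweep of fast Glauber dynamics at fixed geometry."""
    J = A * K
    for i in rng.permutation(N):
        h = J[i] @ s
        s[i] = 1 if rng.random() < 1 / (1 + np.exp(-2 * beta * h)) else -1
    return s

def bond_move(s, A):
    """Slow geometry: Metropolis move that on average reduces frustration."""
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        return A
    # Energy change of flipping bond (i,j) in H = -sum A_ij K_ij s_i s_j.
    dE = -(1 - 2 * A[i, j]) * K[i, j] * s[i] * s[j]
    if dE < 0 or rng.random() < np.exp(-beta_c * dE):
        A[i, j] = A[j, i] = 1 - A[i, j]
    return A

s = xi[0].copy()
for t in range(500):
    s = spin_sweep(s, A)
    if t % 10 == 0:              # connectivity evolves adiabatically slowly
        A = bond_move(s, A)
print("overlap:", s @ xi[0] / N)
```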

    Replicated Transfer Matrix Analysis of Ising Spin Models on `Small World' Lattices

    We calculate equilibrium solutions for Ising spin models on `small world' lattices, which are constructed by super-imposing random and sparse Poissonian graphs with finite average connectivity c onto a one-dimensional ring. The nearest-neighbour bonds along the ring are ferromagnetic, whereas those corresponding to the Poissonian graph are allowed to be random. Our models thus generally contain quenched connectivity and bond disorder. Within the replica formalism, calculating the disorder-averaged free energy requires the diagonalization of replicated transfer matrices. In addition to developing the general replica symmetric theory, we derive phase diagrams and calculate effective field distributions for two specific cases: that of uniform sparse long-range bonds (i.e. `small world' magnets), and that of (+J/-J) random sparse long-range bonds (i.e. `small world' spin-glasses). Comment: 22 pages, LaTeX, IOP macros, eps figures
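
    A sketch of the lattice construction described above: a ferromagnetic ring with a sparse Poissonian graph of mean connectivity c superimposed; for the spin-glass variant the long-range bonds take the values +J/-J at random. The names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

N, c, J0 = 1000, 2.0, 1.0        # ring size, mean long-range connectivity, bond

# Ferromagnetic nearest-neighbour ring.
bonds = {(i, i + 1): J0 for i in range(N - 1)}
bonds[(0, N - 1)] = J0           # close the ring

# Superimposed sparse Poissonian graph: each pair present with prob. c/N.
# Long-range bonds are +J/-J at random (use +J0 only for the magnet case).
for i in range(N):
    for j in range(i + 2, N):
        if (i, j) not in bonds and rng.random() < c / N:
            bonds[(i, j)] = J0 * rng.choice([1, -1])

print(f"{len(bonds) - N} long-range bonds on top of the ring")
```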

    Generalizing with perceptrons in case of structured phase- and pattern-spaces

    We investigate the influence of different kinds of structure on the learning behaviour of a perceptron performing a classification task defined by a teacher rule. The underlying pattern distribution is permitted to have spatial correlations. The prior distribution for the teacher coupling vectors itself is assumed to be nonuniform. Thus classification tasks of quite different difficulty are included. As learning algorithms we discuss Hebbian learning, Gibbs learning, and Bayesian learning with different priors, using methods from statistics and the replica formalism. We find that the Hebb rule is quite sensitive to the structure of the actual learning problem, failing asymptotically in most cases. By contrast, the behaviour of the more sophisticated methods of Gibbs and Bayes learning is influenced by the spatial correlations only in an intermediate regime of α, where α specifies the size of the training set. Concerning the Bayesian case we show how enhanced prior knowledge improves the performance. Comment: LaTeX, 32 pages with eps-figs, accepted by J Phys
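
    A sketch of the simplest case of the setting above: Hebbian learning of a teacher rule from P = αN patterns, here assuming i.i.d. isotropic inputs and a uniform teacher prior (the paper's interest is precisely the structured generalisations of this baseline). For isotropic inputs the generalization error follows from the teacher-student overlap R as ε = arccos(R)/π.

```python
import numpy as np

rng = np.random.default_rng(5)

N, alpha = 500, 2.0
P = int(alpha * N)

T = rng.standard_normal(N)
T /= np.linalg.norm(T)               # teacher coupling vector

X = rng.standard_normal((P, N))      # i.i.d. patterns (no spatial correlations)
y = np.sign(X @ T)                   # labels from the teacher rule

# Hebb rule: student couplings are the label-weighted pattern average.
J = (y @ X) / P
R = (J @ T) / np.linalg.norm(J)      # teacher-student overlap
eps = np.arccos(R) / np.pi           # generalization error, isotropic inputs
print(f"R = {R:.3f}, generalization error = {eps:.3f}")
```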

    Statistical Mechanics of Soft Margin Classifiers

    We study the typical learning properties of the recently introduced Soft Margin Classifiers (SMCs), learning realizable and unrealizable tasks, with the tools of Statistical Mechanics. We derive analytically the behaviour of the learning curves in the regime of very large training sets. We obtain exponential and power laws for the decay of the generalization error towards the asymptotic value, depending on the task and on general characteristics of the distribution of stabilities of the patterns to be learned. The optimal learning curves of the SMCs, which give the minimal generalization error, are obtained by tuning the coefficient controlling the trade-off between the error and the regularization terms in the cost function. If the task is realizable by the SMC, the optimal performance is better than that of a hard margin Support Vector Machine and is very close to that of a Bayesian classifier.Comment: 26 pages, 12 figures, submitted to Physical Review
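
    A minimal sketch of the SMC-style cost function named above, 0.5|w|² + C Σ max(0, 1 − y x·w), minimised by plain subgradient descent on a realizable teacher task; C is the coefficient whose tuning trades off the error and regularization terms. The optimizer and all parameter values are illustrative and not part of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(6)

N, P, C = 100, 400, 1.0            # C trades off error vs. regularization

w_t = rng.standard_normal(N)
w_t /= np.linalg.norm(w_t)         # teacher direction
X = rng.standard_normal((P, N))
y = np.sign(X @ w_t)               # realizable task

w = np.zeros(N)
for t in range(1, 2001):
    eta = 1.0 / t                  # decaying step size
    margins = y * (X @ w)
    viol = margins < 1             # patterns inside the soft margin
    # Subgradient of 0.5|w|^2 + C * sum_i max(0, 1 - y_i x_i.w).
    grad = w - C * (y[viol] @ X[viol])
    w -= eta * grad

print("training error:", np.mean(np.sign(X @ w) != y))
print("overlap with teacher:", w @ w_t / np.linalg.norm(w))
```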

    Retarded Learning: Rigorous Results from Statistical Mechanics

    We study learning of probability distributions characterized by an unknown symmetry direction. Based on an entropic performance measure and the variational method of statistical mechanics we develop exact upper and lower bounds on the scaled critical number of examples below which learning of the direction is impossible. The asymptotic tightness of the bounds suggests an asymptotically optimal method for learning nonsmooth distributions. Comment: 8 pages, 1 figure
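
    A numerical illustration of the retarded-learning phenomenon the bounds concern, assuming a simple Gaussian model with inflated variance gamma² along the hidden direction B and a PCA estimator: below a critical scaled number of examples α the estimate is essentially uncorrelated with B, and above it the overlap grows. Model and estimator are illustrative stand-ins, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(7)

N, gamma = 400, 1.5              # gamma: strength of anisotropy along B
B = np.zeros(N)
B[0] = 1.0                       # hidden symmetry direction

def overlap(alpha):
    """Overlap of the leading PCA direction with B at sample ratio alpha."""
    P = int(alpha * N)
    X = rng.standard_normal((P, N))
    X += (gamma - 1) * np.outer(X @ B, B)   # stretch variance along B
    _, V = np.linalg.eigh(X.T @ X / P)
    return abs(V[:, -1] @ B)

for a in (0.1, 0.3, 1.0, 3.0):
    print(f"alpha = {a:>4}: overlap = {overlap(a):.2f}")
```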