
    Online Learning with Ensembles

    Supervised online learning with an ensemble of students randomized by the choice of initial conditions is analyzed. For the case of the perceptron learning rule, asymptotically the same improvement in the generalization error of the ensemble compared to the performance of a single student is found as in Gibbs learning. For more optimized learning rules, however, using an ensemble yields no improvement. This is explained by showing that for any learning rule $f$ a transform $\tilde{f}$ exists, such that a single student using $\tilde{f}$ has the same generalization behaviour as an ensemble of $f$-students. Comment: 8 pages, 1 figure. Submitted to J. Phys.
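    The setting can be made concrete with a small simulation. The sketch below is an illustration only: the dimension, number of students, Gaussian teacher, and majority-vote combination are assumptions, not the paper's setup. It trains an ensemble of perceptron students that differ only in their random initial weights.

```python
# Minimal sketch (illustrative assumptions, not the paper's parameters):
# on-line perceptron learning by an ensemble of randomly initialized students.
import numpy as np

rng = np.random.default_rng(0)
N, STUDENTS, STEPS = 100, 11, 2000   # odd ensemble size avoids vote ties

teacher = rng.standard_normal(N)
students = rng.standard_normal((STUDENTS, N))  # randomized initial conditions

for _ in range(STEPS):
    x = rng.standard_normal(N) / np.sqrt(N)
    label = np.sign(teacher @ x)
    for w in students:                 # rows are views, updated in place
        if np.sign(w @ x) != label:    # perceptron rule: update on error only
            w += label * x

# One way to combine the ensemble: majority vote over student outputs
x_test = rng.standard_normal(N) / np.sqrt(N)
vote = np.sign(np.sum(np.sign(students @ x_test)))
```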

    The Little-Hopfield model on a Random Graph

    We study the Hopfield model on a random graph in scaling regimes where the average number of connections per neuron is a finite number and where the spin dynamics is governed by a synchronous execution of the microscopic update rule (Little-Hopfield model). We solve this model within replica symmetry, and by using bifurcation analysis we prove that the spin-glass/paramagnetic and the retrieval/paramagnetic transition lines of our phase diagram are identical to those of sequential dynamics. The first-order retrieval/spin-glass transition line follows by direct evaluation of our observables using population dynamics. Within the accuracy of numerical precision, and for sufficiently small values of the connectivity parameter, we find that this line coincides with the corresponding sequential one. Comparison with simulation experiments shows excellent agreement. Comment: 14 pages, 4 figures
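    For readers unfamiliar with the model, the sketch below shows one way to set up synchronous (Little-Hopfield) dynamics on a random graph with finite mean connectivity; the graph model, inverse temperature, and all sizes are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: synchronous (parallel) Glauber dynamics of a Hopfield
# network whose couplings live on a sparse random graph of mean degree c.
import numpy as np

rng = np.random.default_rng(1)
N, P, c = 500, 3, 10                 # neurons, stored patterns, mean connectivity

patterns = rng.choice([-1, 1], size=(P, N))
adj = (rng.random((N, N)) < c / N).astype(float)
adj = np.triu(adj, 1)
adj += adj.T                         # symmetric graph, no self-couplings

# Hebbian couplings restricted to the edges of the random graph
J = adj * (patterns.T @ patterns) / c

def little_step(s, beta=2.0):
    """One Little-Hopfield step: every spin is refreshed in parallel."""
    h = J @ s
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
    return np.where(rng.random(N) < p_up, 1, -1)

s = patterns[0].copy()
for _ in range(20):
    s = little_step(s)
overlap = (s @ patterns[0]) / N      # retrieval quality of the first pattern
```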

    On-Line AdaTron Learning of Unlearnable Rules

    We study the on-line AdaTron learning of linearly non-separable rules by a simple perceptron. Training examples are provided by a perceptron with a non-monotonic transfer function which reduces to the usual monotonic relation in a certain limit. We find that, although the on-line AdaTron learning is a powerful algorithm for the learnable rule, it does not give the best possible generalization error for unlearnable problems. Optimization of the learning rate is shown to greatly improve the performance of the AdaTron algorithm, leading to the best possible generalization error for a wide range of the parameter which controls the shape of the transfer function. Comment: RevTeX, 17 pages, 8 figures, to appear in Phys. Rev.
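    The on-line AdaTron rule itself is compact: on a misclassified normalized example, the weight update is Hebbian with a step size proportional to the magnitude of the student's local field. A minimal sketch, assuming a learnable (monotonic) teacher rather than the paper's non-monotonic one, and a fixed rather than optimized learning rate:

```python
# Hedged sketch of on-line AdaTron learning against a simple teacher.
import numpy as np

rng = np.random.default_rng(2)
N, STEPS, eta = 100, 5000, 1.0

teacher = rng.standard_normal(N)
w = rng.standard_normal(N)

for _ in range(STEPS):
    xi = rng.standard_normal(N)
    xi /= np.linalg.norm(xi)              # normalized example
    sigma = np.sign(teacher @ xi)         # teacher label
    h = w @ xi                            # student's local field
    if sigma * h <= 0:                    # update only on errors
        w += eta * abs(h) * sigma * xi    # AdaTron: step scales with |h|,
                                          # driving the field to zero for eta=1

# Generalization error of a perceptron is angle(teacher, student) / pi
R = np.clip(w @ teacher / (np.linalg.norm(w) * np.linalg.norm(teacher)), -1, 1)
eps = np.arccos(R) / np.pi
```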

    Effects of Water Stress on Seed Production in Ruzi Grass (Brachiaria ruziziensis Germain and Everard)

    Water stress at different stages of reproductive development influenced seed yield in Ruzi grass differently. Under mild water stress, the earlier the stress was applied during reproductive development (before ear emergence), the faster the plants recovered and the less the ultimate damage to inflorescence structure and seed set, compared with stress occurring at later stages, after inflorescences had emerged. In contrast, severe water stress before ear emergence caused severe damage to both inflorescence numbers and seed quality. Permanent damage to the reproductive structures resulted in deformed inflorescences. Moreover, basal vegetative tillers were stunted and were capable of only limited regrowth after re-watering.

    Thermodynamic properties of extremely diluted symmetric Q-Ising neural networks

    Using the replica-symmetric mean-field theory approach, the thermodynamic and retrieval properties of extremely diluted symmetric $Q$-Ising neural networks are studied. In particular, capacity-gain parameter and capacity-temperature phase diagrams are derived for $Q=3, 4$ and $Q=\infty$. The zero-temperature results are compared with those obtained from a study of the dynamics of the model. Furthermore, the de Almeida-Thouless line is determined. Where appropriate, the difference with other $Q$-Ising architectures is outlined. Comment: 16 pages LaTeX including 6 eps-figures. Corrections, also in most of the figures, have been made
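    As background (a standard convention in the $Q$-Ising literature, assumed here rather than quoted from the paper), the neurons take $Q$ equidistant values in $[-1, 1]$:

```latex
% Q equidistant spin states on [-1, 1] (standard Q-Ising convention)
\sigma_i \in \left\{ -1 + \frac{2(k-1)}{Q-1} \;:\; k = 1, \dots, Q \right\},
\qquad Q = 3: \ \sigma_i \in \{-1, 0, +1\}.
```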

    Multi-Choice Minority Game

    The generalization of the problem of adaptive competition, known as the minority game, to the case of $K$ possible choices for each player is addressed, and applied to a system of interacting perceptrons with input and output units of the $K$-state Potts-spin type. An optimal solution of this minority game, as well as the dynamic evolution of the players' adaptive strategies, is derived analytically for general $K$ and compared with numerical simulations. Comment: 5 pages, 2 figures, reorganized and clarified
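    The basic round of a $K$-choice minority game is easy to state: every player picks one of $K$ options, and the players on the least-crowded option win. A hedged sketch of one round (the paper's Potts-perceptron strategy dynamics are not modeled here, and the payoff scheme is an illustrative choice):

```python
# Hedged sketch of a single K-choice minority game round.
import numpy as np

rng = np.random.default_rng(3)
N_PLAYERS, K = 101, 3

choices = rng.integers(0, K, size=N_PLAYERS)    # each player's option
counts = np.bincount(choices, minlength=K)      # how crowded each option is
minority = np.argmin(counts)                    # least-chosen option wins
payoff = np.where(choices == minority, 1, -1)   # reward the minority side
```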

    Statistical Mechanics of Learning in the Presence of Outliers

    Using methods of statistical mechanics, we analyse the effect of outliers on the supervised learning of a classification problem. The learning strategy aims at selecting informative examples and discarding outliers. We compare two algorithms which perform the selection either in a soft or a hard way. When the fraction of outliers grows large, the estimation errors undergo a first-order phase transition. Comment: 24 pages, 7 figures (minor extensions added)
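    The soft/hard distinction can be illustrated with two example-weighting schemes; both functions below are illustrative assumptions, not the paper's algorithms.

```python
# Hedged illustration: "hard" selection discards suspected outliers outright,
# while "soft" selection down-weights them continuously.
import numpy as np

def hard_weights(margins, threshold=0.0):
    # keep an example iff its margin exceeds the threshold
    return (margins > threshold).astype(float)

def soft_weights(margins, beta=5.0):
    # sigmoid weighting: smoothly interpolates between keep and discard
    return 1.0 / (1.0 + np.exp(-beta * margins))
```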

    (S)-3-Dimethylamino-2-{(4S,5R)-5-[(R)-2,2-dimethyl-1,3-dioxolan-4-yl]-2,2-dimethyl-1,3-dioxolan-4-yl}-2-hydroxypropanoic acid

    The Kiliani reaction on 1-deoxy-(N,N-dimethylamino)-d-fructose, itself readily available from reaction of dimethylamine and d-glucose, proceeded to give access to the title β-sugar amino acid, C15H27NO7. X-ray crystallography determined the stereochemistry at the newly formed chiral center. There are two molecules in the asymmetric unit; they are related by a pseudo-twofold rotation axis and have very similar geometries, differing only in the conformation of one of the acetonide rings. All the acetonide rings adopt envelope conformations; the flap atom is oxygen in three of the rings, but carbon in one of them. There are two strong hydrogen bonds between the two independent molecules, and further weak hydrogen bonds link the molecules to form infinite chains running parallel to the a axis.

    Statistical Mechanics of Support Vector Networks

    Using methods of statistical physics, we investigate the generalization performance of support vector machines (SVMs), which have recently been introduced as a general alternative to neural networks. For nonlinear classification rules, the generalization error saturates on a plateau when the number of examples is too small to properly estimate the coefficients of the nonlinear part. When trained on simple rules, we find that SVMs overfit only weakly. The performance of SVMs is strongly enhanced when the distribution of the inputs has a gap in feature space. Comment: REVTeX, 4 pages, 2 figures, accepted by Phys. Rev. Lett. (typos corrected)
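    A hedged sketch of the flavor of such an experiment, training a kernel SVM on a simple (linearly separable) teacher rule; the library, kernel, and sample sizes are choices made for illustration, and the paper itself proceeds analytically rather than by simulation.

```python
# Hedged sketch: measure the generalization error of a quadratic-kernel SVM
# on a linear teacher, a "simple rule" in the sense of the abstract.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
N, P_TRAIN, P_TEST = 20, 200, 2000

teacher = rng.standard_normal(N)
X_train = rng.standard_normal((P_TRAIN, N))
X_test = rng.standard_normal((P_TEST, N))
y_train = np.sign(X_train @ teacher)
y_test = np.sign(X_test @ teacher)

# For a quadratic kernel, the linear teacher exercises only part of the
# machine's capacity, so only weak overfitting is expected.
clf = SVC(kernel="poly", degree=2).fit(X_train, y_train)
gen_error = np.mean(clf.predict(X_test) != y_test)
```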