
    Biologically inspired learning in a layered neural net

    A feed-forward neural net with adaptable synaptic weights and fixed, zero or non-zero threshold potentials is studied, in the presence of a global feedback signal that can take only two values, depending on whether the output of the network in reaction to its input is right or wrong. It is found, on the basis of four biologically motivated assumptions, that only two forms of learning are possible: Hebbian and Anti-Hebbian learning. Hebbian learning should take place when the output is right, while Anti-Hebbian learning should take place when the output is wrong. For the Anti-Hebbian part of the learning rule a particular choice is made, which guarantees an adequate average neuronal activity without the need to introduce, by hand, control mechanisms such as extremal dynamics. A network with realistic, i.e., non-zero threshold potentials is shown to perform its task of realizing the desired input-output relations best if it is sufficiently diluted, i.e., if only a relatively low fraction of all possible synaptic connections is realized.
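
    The two-mode rule lends itself to a compact illustration. Below is a minimal sketch in Python of the sign structure described in the abstract: a Hebbian update when the global feedback says the output is right, an Anti-Hebbian one when it is wrong. All names and the learning rate are hypothetical, and the paper's particular choice for the Anti-Hebbian term (the one that regulates average activity) is not reproduced here.

        import numpy as np

        def feedback_update(w, pre, post, output_is_right, eta=0.01):
            # Co-activity of post- and presynaptic units (outer product).
            hebb = np.outer(post, pre)
            if output_is_right:
                w += eta * hebb   # Hebbian: strengthen what produced a right output
            else:
                w -= eta * hebb   # Anti-Hebbian: weaken what produced a wrong output
            return w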

    Rotation of Late-Type Stars in Praesepe with K2

    We have Fourier analyzed 941 K2 light curves of likely members of Praesepe, measuring periods for 86% and increasing the number of known rotation periods (P) by nearly a factor of four. The distribution of P vs. (V-K), a mass proxy, has three different regimes: (V-K) < 1.3, where the rotation rate rapidly slows as mass decreases; 1.3 < (V-K) < 4.5, where the rotation rate slows more gradually as mass decreases; and (V-K) > 4.5, where the rotation rate rapidly increases as mass decreases. In this last regime, there is a bimodal distribution of periods, with few between ~2 and ~10 days. We interpret this to mean that once M stars start to slow down, they do so rapidly. The K2 period-color distribution in Praesepe (~790 Myr) is very different from that in the Pleiades (~125 Myr) for late F, G, K, and early-M stars; the overall distribution moves to longer periods and is better described by two line segments. For mid-M stars, the relationship has similarly broad scatter and is steeper in Praesepe. The diversity of light curves and of periodogram types is similar in the two clusters; about a quarter of the periodic stars in both clusters have multiple significant periods. Multi-periodic stars dominate among the higher masses, starting at a bluer color in Praesepe ((V-K) ~ 1.5) than in the Pleiades ((V-K) ~ 2.6). In Praesepe, there are relatively more light curves with two widely separated periods, ΔP > 6 days. Some of these could be examples of M-star binaries in which one star has spun down but the other has not.
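
    For unevenly sampled light curves like K2's, rotation periods are commonly read off a periodogram. The sketch below uses astropy's Lomb-Scargle implementation on a synthetic spotted-star signal; it illustrates period measurement in general, not the authors' pipeline, and the cadence, amplitude and period values are invented.

        import numpy as np
        from astropy.timeseries import LombScargle

        # Synthetic K2-like light curve: ~80 days, spot modulation at 11.3 d.
        t = np.sort(np.random.uniform(0.0, 80.0, 3000))           # days
        flux = (1.0 + 0.01 * np.sin(2 * np.pi * t / 11.3)
                    + 0.002 * np.random.randn(t.size))            # noise

        frequency, power = LombScargle(t, flux).autopower(
            minimum_frequency=1 / 40.0, maximum_frequency=1 / 0.1)
        best_period = 1.0 / frequency[np.argmax(power)]           # days
        print(f"strongest periodogram peak: {best_period:.2f} d")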

    A Heterosynaptic Learning Rule for Neural Networks

    In this article we introduce a novel stochastic Hebb-like learning rule for neural networks that is neurobiologically motivated. This learning rule combines features of unsupervised (Hebbian) and supervised (reinforcement) learning and is stochastic with respect to the selection of the time points at which a synapse is modified. Moreover, the learning rule does not only affect the synapse between the pre- and postsynaptic neuron, which is called homosynaptic plasticity, but also affects further, more remote synapses of the pre- and postsynaptic neurons. This more complex form of synaptic plasticity, called heterosynaptic plasticity, has recently come under investigation in neurobiology. We demonstrate that this learning rule is useful in training neural networks by learning parity functions, including the exclusive-or (XOR) mapping, in a multilayer feed-forward network. We find that our stochastic learning rule works well even in the presence of noise. Importantly, the mean learning time increases polynomially with the number of patterns to be learned, indicating efficient learning.
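
    A toy sketch of the rule's two distinctive ingredients, stochastic selection of update times and heterosynaptic spread, might look as follows in Python. The update probability, the spread factor and the way the reinforcement signal gates the sign are illustrative assumptions, not the paper's exact rule.

        import numpy as np

        rng = np.random.default_rng(0)

        def heterosynaptic_update(w, pre, post, reward,
                                  eta=0.05, p_update=0.3, spread=0.2):
            sign = 1.0 if reward else -1.0          # reinforcement gates the sign
            n_post, n_pre = w.shape
            for i in range(n_post):
                for j in range(n_pre):
                    # Stochastic selection of the time points of modification.
                    if pre[j] and post[i] and rng.random() < p_update:
                        dw = sign * eta * post[i] * pre[j]
                        w[i, j] += dw                    # homosynaptic change
                        w[i, :] += spread * dw / n_pre   # spread: other synapses of the postsynaptic neuron
                        w[:, j] += spread * dw / n_post  # spread: other synapses of the presynaptic neuron
            return w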

    Functional Optimization in Complex Excitable Networks

    We study the effect of varying wiring in excitable random networks in which connection weights change with activity, molding local resistance or facilitation due to fatigue. Dynamic attractors, corresponding to patterns of activity, are then easily destabilized according to three main modes, including one in which the activity shows chaotic hopping among the patterns. We describe phase transitions to this regime and show a monotonic dependence of the critical parameters on the heterogeneity of the wiring distribution. Such a correlation between topology and functionality implies, in particular, that tasks which require unstable behavior, such as pattern recognition, family discrimination and categorization, can be performed most efficiently on highly heterogeneous networks. A possible explanation for the abundance in nature of scale-free network topologies also follows.
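
    The destabilization mechanism, activity-dependent synaptic fatigue on top of Hebbian couplings, can be illustrated in a few lines of Python. The depression and recovery rates below are arbitrary; the point of the sketch is only that depleting the resources of active units destabilizes the retrieved pattern and lets the state hop among the stored ones.

        import numpy as np

        rng = np.random.default_rng(1)
        N, P = 200, 3
        xi = rng.choice([-1, 1], size=(P, N))       # stored activity patterns
        W = (xi.T @ xi) / N                         # Hebbian couplings
        np.fill_diagonal(W, 0.0)

        s = xi[0].copy()                            # start in pattern 0
        x = np.ones(N)                              # synaptic resources (fatigue)
        for t in range(2000):
            h = W @ (x * s)                         # fields through fatigued synapses
            s = np.where(h >= 0, 1, -1)             # parallel deterministic update
            x += 0.08 * (1.0 - x) - 0.25 * x * (s == 1)  # active units deplete, all recover
            overlaps = xi @ s / N                   # hopping shows up in these overlaps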

    Bump formation in a binary attractor neural network

    This paper investigates the conditions for the formation of local bumps in the activity of binary attractor neural networks with spatially dependent connectivity. We show that these formations are observed when an asymmetry between the activity during retrieval and during learning is imposed. An analytical approximation for the order parameters is derived. The corresponding phase diagram shows a relatively large and stable region where this effect is observed, although the critical storage and information capacities decrease drastically inside that region. We demonstrate that the stability of the network when starting from a bump formation is larger than its stability when starting even from the whole pattern. Finally, we show very good agreement between the analytical results and simulations performed for different topologies of the network.
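
    A toy version of the imposed asymmetry can be written down directly: store a binary pattern through distance-dependent couplings on a ring, then retrieve at a lower activity level than was used during learning. The kernel width, coding levels and quantile thresholding below are illustrative choices, not the paper's model.

        import numpy as np

        rng = np.random.default_rng(2)
        N = 400
        pos = np.arange(N)
        d = np.abs(pos[:, None] - pos[None, :])
        d = np.minimum(d, N - d)                    # distances on a ring
        G = np.exp(-d / 20.0)                       # spatially decaying connectivity

        p_learn = 0.3                               # activity level during learning
        xi = (rng.random(N) < p_learn).astype(float)
        W = G * np.outer(xi - p_learn, xi - p_learn)
        np.fill_diagonal(W, 0.0)

        s = xi.copy()
        for t in range(50):
            h = W @ s
            theta = np.quantile(h, 1 - 0.15)        # retrieval activity (15%) < learning (30%)
            s = (h >= theta).astype(float)          # activity contracts into a local bump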

    Action in cognition: the case of language

    Empirical research has shown that the processing of words and sentences is accompanied by activation of the brain's motor system in language users. The degree of precision observed in this activation seems to be contingent upon (1) the meaning of a linguistic construction and (2) the depth with which readers process that construction. In addition, neurological evidence shows a correspondence between a disruption in the neural correlates of overt action and the disruption of semantic processing of language about action. These converging lines of evidence can be taken to support the hypotheses that motor processes (1) are recruited to understand language that focuses on actions and (2) contribute a unique element to conceptual representation. This article explores the role of this motor recruitment in language comprehension. It concludes that extant findings are consistent with the theorized existence of multimodal, embodied representations of the referents of words and the meaning carried by language. Further, an integrative conceptualization of “fault tolerant comprehension” is proposed.

    Unstable Dynamics, Nonequilibrium Phases and Criticality in Networked Excitable Media

    Here we numerically study a model of excitable media, namely, a network with occasionally quiet nodes and connection weights that vary with activity on a short time scale. Even in the absence of stimuli, the model exhibits unstable dynamics, nonequilibrium phases, including one in which the global activity wanders irregularly among attractors, and 1/f noise as the system falls into its most irregular behavior. A net result is resilience, which yields an efficient search of the model's attractor space and can explain the origin of certain phenomenology in neural, genetic and ill-condensed matter systems. By extensive computer simulation we also address a previously conjectured relation between observed power-law distributions and the occurrence of a "critical state" during the functioning of (e.g.) cortical networks, and we describe the precise nature of such criticality in the model.
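
    The 1/f claim is the kind of statement that is straightforward to check on any simulated global-activity trace. A sketch of such a check, assuming a recorded time series called activity, is given below; the frequency band chosen for the fit is arbitrary.

        import numpy as np

        def spectrum_slope(activity, dt=1.0):
            # Low-frequency power-spectrum slope of a global-activity trace;
            # a slope near -1 corresponds to the 1/f noise discussed above.
            a = activity - activity.mean()
            power = np.abs(np.fft.rfft(a)) ** 2
            freq = np.fft.rfftfreq(a.size, d=dt)
            band = (freq > 0) & (freq < 0.1)        # fit only the low-frequency band
            slope, _ = np.polyfit(np.log(freq[band]), np.log(power[band]), 1)
            return slope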

    Learning by message-passing in networks of discrete synapses

    We show that a message-passing process makes it possible to store, in binary "material" synapses, a number of random patterns which almost saturates the information-theoretic bounds. We apply the learning algorithm to networks characterized by a wide range of different connection topologies and of a size comparable with that of biological systems (e.g., n ≃ 10^5–10^6). The algorithm can be turned into an on-line, fault-tolerant learning protocol of potential interest in modeling aspects of synaptic plasticity and in building neuromorphic devices.
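
    A drastically simplified on-line relative of such algorithms keeps an integer hidden state per binary synapse and clips it on errors; the actual message-passing procedure of the paper is considerably more powerful and reaches much higher storage. The sketch below is for a single binary perceptron with hypothetical sizes.

        import numpy as np

        rng = np.random.default_rng(3)
        N, P = 1001, 300
        xi = rng.choice([-1, 1], size=(P, N))       # random input patterns
        sigma = rng.choice([-1, 1], size=P)         # desired binary outputs

        h = np.zeros(N, dtype=int)                  # integer hidden states
        H = 10                                      # clipping bound
        for epoch in range(200):
            errors = 0
            for mu in rng.permutation(P):
                w = np.where(h >= 0, 1, -1)         # the binary "material" synapses
                if sigma[mu] * (w @ xi[mu]) <= 0:   # pattern mu not yet stored
                    h = np.clip(h + 2 * sigma[mu] * xi[mu], -H, H)
                    errors += 1
            if errors == 0:
                break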

    Supervised Learning in Multilayer Spiking Neural Networks

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms: it can be applied to neurons firing multiple spikes, and it can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as to complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike train encoding.
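
    The linearization idea behind such algorithms can be shown on a single neuron: approximate how the output spike time shifts with each weight and apply a delta rule on spike times. This single-neuron, single-spike toy is much weaker than the multilayer, multi-spike algorithm of the paper; the kernel, time grid and learning rate are invented.

        import numpy as np

        def eps(s, tau=5.0):
            # Alpha-shaped postsynaptic kernel; zero for s <= 0.
            return np.where(s > 0, (s / tau) * np.exp(1.0 - s / tau), 0.0)

        def train_spike_time(t_in, t_target, theta=1.0, eta=0.05, steps=200):
            w = np.full(t_in.size, 0.5)
            tgrid = np.linspace(0.0, 40.0, 4001)
            for _ in range(steps):
                u = (w[:, None] * eps(tgrid[None, :] - t_in[:, None])).sum(axis=0)
                crossed = np.nonzero(u >= theta)[0]
                if crossed.size == 0:
                    w += eta                  # no output spike: raise the drive
                    continue
                t_out = tgrid[crossed[0]]     # first threshold crossing
                # Linearized delta rule: a too-early spike (t_out < t_target)
                # lowers the weights and so delays the crossing, and vice versa.
                w += eta * (t_out - t_target) * eps(t_out - t_in)
                w = np.clip(w, 0.0, None)
            return w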