
    Analysis of data systems requirements for global crop production forecasting in the 1985 time frame

    Data systems concepts needed to implement global crop production forecasting, in an orderly transition from experimental to operational status in the 1985 time frame, were examined. Information needs of users were converted into data system requirements, and the influence of these requirements on the formulation of a conceptual data system was analyzed. Potential problem areas in meeting these data system requirements were identified in an iterative process.

    Stability of Negative Image Equilibria in Spike-Timing Dependent Plasticity

    We investigate the stability of negative image equilibria in mean synaptic weight dynamics governed by spike-timing dependent plasticity (STDP). The neural architecture of the model is based on the electrosensory lateral line lobe (ELL) of mormyrid electric fish, which forms a negative image of the reafferent signal from the fish's own electric discharge to optimize detection of external electric fields. We derive a necessary and sufficient condition for stability, for arbitrary postsynaptic potential functions and arbitrary learning rules. We then apply the general result to several examples of biological interest. Comment: 13 pages, revtex4; uses packages: graphicx, subfigure; 9 figures, 16 subfigures
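
    The negative-image mechanism can be illustrated with a small numerical sketch. The code below is not the paper's model; it only shows, for an assumed alpha-function PSP kernel, learning rate, and reafferent signal, how an anti-Hebbian mean-weight update drives the summed synaptic contribution toward a negative image of a predictable input.

        # Illustrative sketch only: PSP kernel, learning rate, and signal are assumptions.
        import numpy as np

        T, N = 200, 40                                   # time bins per cycle, number of synapses
        t = np.arange(T)
        reafferent = np.exp(-((t - 60) / 15.0) ** 2)     # predictable corollary-discharge signal

        delays = np.linspace(0, T - 1, N)                # each synapse contributes a delayed PSP
        tau = 10.0
        lag = np.maximum(t[None, :] - delays[:, None], 0.0)
        psp = (lag / tau) * np.exp(1.0 - lag / tau)      # alpha-function PSP per synapse

        w = np.zeros(N)
        lr = 0.05
        for _ in range(500):
            residual = reafferent + w @ psp              # what remains after cancellation
            w -= lr * (psp @ residual) / T               # anti-Hebbian: depress coincident synapses

        print("max residual after learning:", np.abs(reafferent + w @ psp).max())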

    Extracting non-linear integrate-and-fire models from experimental data using dynamic I–V curves

    The dynamic I–V curve method was recently introduced for the efficient experimental generation of reduced neuron models. The method extracts the response properties of a neuron while it is subject to a naturalistic stimulus that mimics in vivo-like fluctuating synaptic drive. The resulting history-dependent, transmembrane current is then projected onto a one-dimensional current–voltage relation that provides the basis for a tractable non-linear integrate-and-fire model. An attractive feature of the method is that it can be used in spike-triggered mode to quantify the distinct patterns of post-spike refractoriness seen in different classes of cortical neuron. The method is first illustrated using a conductance-based model and is then applied experimentally to generate reduced models of cortical layer-5 pyramidal cells and interneurons, in injected-current and injected-conductance protocols. The resulting low-dimensional neuron models—of the refractory exponential integrate-and-fire type—provide highly accurate predictions for spike times. The method therefore provides a useful tool for the construction of tractable models and rapid experimental classification of cortical neurons.
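
    A minimal sketch of the core projection step, assuming a recorded voltage trace V and injected current I_inj are available as NumPy arrays; the capacitance value, bin edges, and synthetic data below are illustrative assumptions, not values from the study.

        import numpy as np

        def dynamic_iv(V, I_inj, dt, C, v_bins):
            """Project the history-dependent transmembrane current onto a 1D I-V relation."""
            dVdt = np.gradient(V, dt)
            I_ion = I_inj - C * dVdt                     # current not taken up by the capacitance
            centers = 0.5 * (v_bins[:-1] + v_bins[1:])
            idx = np.digitize(V, v_bins) - 1
            mean_I = np.array([I_ion[idx == k].mean() if np.any(idx == k) else np.nan
                               for k in range(len(centers))])
            return centers, mean_I                       # basis for fitting an EIF-type model

        # Purely synthetic example data, for illustration only:
        dt, C = 1e-4, 200e-12
        V = -0.065 + 0.005 * np.random.randn(50000)
        I_inj = 1e-10 * np.random.randn(50000)
        centers, mean_I = dynamic_iv(V, I_inj, dt, C, np.linspace(-0.08, -0.05, 25))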

    Supervised Learning in Multilayer Spiking Neural Networks

    The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms, as it can be applied to neurons firing multiple spikes and it can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike-train encoding. Comment: 38 pages, 4 figures

    Desynchronization in diluted neural networks

    The dynamical behaviour of a weakly diluted, fully inhibitory network of pulse-coupled spiking neurons is investigated. Upon increasing the coupling strength, a transition from a regular to a stochastic-like regime is observed. In the weak-coupling phase, a periodic dynamics is rapidly approached, with all neurons firing with the same rate and mutually phase-locked. The strong-coupling phase is characterized by an irregular pattern, even though the maximum Lyapunov exponent is negative. The paradox is solved by drawing an analogy with the phenomenon of "stable chaos", i.e. by observing that the stochastic-like behaviour is "limited" to an exponentially long (with the system size) transient. Remarkably, the transient dynamics turns out to be stationary. Comment: 11 pages, 13 figures, submitted to Phys. Rev.
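
    A rough sketch of this kind of setup (not the paper's exact model): leaky integrate-and-fire units with instantaneous inhibitory pulses on a weakly diluted coupling matrix, with the irregularity of the resulting firing summarized by the coefficient of variation of the inter-spike intervals. All parameter values are illustrative; sweeping the coupling strength g would probe the regular-to-irregular transition described above.

        import numpy as np

        rng = np.random.default_rng(0)
        N, dilution, g, I_ext = 100, 0.05, 0.4, 1.3        # g: inhibitory coupling strength
        A = (rng.random((N, N)) > dilution).astype(float)  # weakly diluted, all-to-all inhibitory
        np.fill_diagonal(A, 0.0)

        dt, T = 1e-3, 50.0
        v = rng.random(N)
        spikes = [[] for _ in range(N)]
        for step in range(int(T / dt)):
            v += dt * (I_ext - v)                          # leaky integration, suprathreshold drive
            fired = v >= 1.0
            if fired.any():
                v[fired] = 0.0                             # reset
                v -= (g / N) * (A @ fired.astype(float))   # instantaneous inhibitory pulses
                for i in np.where(fired)[0]:
                    spikes[i].append(step * dt)

        isis = np.concatenate([np.diff(s) for s in spikes if len(s) > 2])
        print("CV of pooled inter-spike intervals:", isis.std() / isis.mean())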

    SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo

    The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool that facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study of and experimentation with complex systems). The engine proposed and implemented in Netlogo for simulating a functional SNN model is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
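
    As a rough indication of what such a simplified engine involves (a hypothetical Python sketch, not SpikingLab's NetLogo code), the classes below combine a leaky integrate-and-fire membrane, a synaptic delay queue, and a pair-based STDP trace; all constants are assumptions.

        import math
        from collections import deque

        class IFNeuron:
            def __init__(self, tau=20.0, v_thresh=1.0, v_reset=0.0):
                self.tau, self.v_thresh, self.v_reset = tau, v_thresh, v_reset
                self.v, self.trace = 0.0, 0.0            # membrane potential and STDP trace

            def step(self, input_current, dt=1.0):
                self.v += dt * (-self.v / self.tau + input_current)
                self.trace *= math.exp(-dt / self.tau)   # trace decays with the same toy constant
                if self.v >= self.v_thresh:
                    self.v = self.v_reset
                    self.trace += 1.0
                    return True
                return False

        class Synapse:
            def __init__(self, pre, post, weight=0.5, delay_steps=3, lr=0.01):
                self.pre, self.post, self.w, self.lr = pre, post, weight, lr
                self.queue = deque([0.0] * delay_steps)  # synaptic delay line

            def transmit(self, pre_spiked):
                self.queue.append(self.w if pre_spiked else 0.0)
                return self.queue.popleft()

            def stdp(self, pre_spiked, post_spiked):
                if post_spiked:                          # pre-before-post: potentiate
                    self.w += self.lr * self.pre.trace
                if pre_spiked:                           # post-before-pre: depress
                    self.w -= self.lr * self.post.trace
                self.w = min(max(self.w, 0.0), 1.0)

        # Toy usage: a driven presynaptic cell excites a postsynaptic cell through the synapse.
        pre, post = IFNeuron(), IFNeuron()
        syn = Synapse(pre, post)
        for _ in range(100):
            pre_spk = pre.step(0.08)
            post_spk = post.step(0.03 + syn.transmit(pre_spk))
            syn.stdp(pre_spk, post_spk)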

    Dynamic range in the C.elegans brain network

    We study external electrical perturbations and their responses in the brain dynamic network of the Caenorhabditis elegans soil worm, given by the connectome of its large somatic nervous system. Our analysis is inspired by a realistic experiment where one externally stimulates specific parts of the brain and studies the persistent neural activity triggered in other cortical regions. In this work, we perturb groups of neurons that form communities, identified by the walktrap community detection method, by trains of stereotypical electrical Poissonian impulses, and study the propagation of neural activity to other communities by measuring the corresponding dynamic ranges and Stevens law exponents. We show that when one perturbs specific communities, keeping the rest unperturbed, the external stimulations are able to propagate to some of them but not to all. There are also perturbations that do not trigger any response. We found that this depends on the initially perturbed community. Finally, we relate our findings for the former cases with low neural synchronization, self-criticality, and large information flow capacity, and interpret them as the ability of the brain network to respond to external perturbations when it works at criticality and its information flow capacity becomes maximal.
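
    The dynamic range measure used in such studies can be stated compactly: Delta = 10 log10(S_0.9 / S_0.1), where S_x is the stimulus intensity producing a response at fraction x of the response range. A short sketch follows, with a synthetic Stevens-like response curve standing in for real data.

        import numpy as np

        def dynamic_range(S, F, lo=0.1, hi=0.9):
            """Delta = 10*log10(S_hi / S_lo), with S_x the stimulus at fraction x of the response range."""
            F0, Fmax = F.min(), F.max()
            F_lo = F0 + lo * (Fmax - F0)
            F_hi = F0 + hi * (Fmax - F0)
            S_lo = np.interp(F_lo, F, S)                 # assumes F increases monotonically with S
            S_hi = np.interp(F_hi, F, S)
            return 10.0 * np.log10(S_hi / S_lo)

        S = np.logspace(-2, 2, 100)                      # stimulus (e.g. Poisson impulse) rates
        F = S ** 0.5 / (1.0 + S ** 0.5)                  # illustrative power-law response curve
        print("dynamic range (dB):", dynamic_range(S, F))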

    Adaptation Reduces Variability of the Neuronal Population Code

    Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their intervals of events. Here, we employ a master equation for general non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of spike-frequency adapting neurons this results in the regularization of the population activity and enhanced postsynaptic signal decoding. We confirm our theoretical results in a population of cortical neurons. Comment: 4 pages, 2 figures
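
    A toy illustration (not the paper's master-equation calculation) of why a slow adaptation variable regularizes the summed population activity: spikes drive up an adaptation variable that suppresses the future rate, producing negatively correlated counts and a reduced Fano factor. The rates, decay constant, and window sizes below are arbitrary choices.

        import numpy as np

        rng = np.random.default_rng(1)

        def population_counts(n_neurons, adapt_strength, n_windows=400, steps=200, base_rate=0.05):
            counts = np.zeros(n_windows)
            a = np.zeros(n_neurons)                      # slow adaptation variable per neuron
            for w in range(n_windows):
                c = 0
                for _ in range(steps):
                    rate = np.clip(base_rate - adapt_strength * a, 0.0, 1.0)
                    spikes = rng.random(n_neurons) < rate
                    a = 0.995 * a + spikes               # adaptation builds with each spike, decays slowly
                    c += spikes.sum()
                counts[w] = c
            return counts

        for s in (0.0, 0.01):
            c = population_counts(50, s)
            print(f"adaptation strength {s}: Fano factor = {c.var() / c.mean():.2f}")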

    Pattern formation in oscillatory complex networks consisting of excitable nodes

    Oscillatory dynamics of complex networks has recently attracted great attention. In this paper we study pattern formation in oscillatory complex networks consisting of excitable nodes. We find that there exist a few center nodes and small skeletons for most oscillations. Complicated and seemingly random oscillatory patterns can be viewed as well-organized target waves propagating from center nodes along the shortest paths, and the shortest loops passing through both the center nodes and their driver nodes play the role of oscillation sources. By analyzing simple skeletons, we are able to understand and predict various essential properties of the oscillations and effectively modulate the oscillations. These methods and results will give insights into pattern formation in complex networks, and provide suggestive ideas for studying and controlling oscillations in neural networks. Comment: 15 pages, 7 figures, to appear in Phys. Rev.
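
    One analysis step suggested above, sketched under assumptions: in a directed network, the shortest loop passing through a center node and its driver node can be taken as the driver-to-center edge plus the shortest directed path from the center back to the driver. The networkx-based example uses a random graph and arbitrary node roles purely for illustration.

        import networkx as nx

        def shortest_loop_length(G, center, driver):
            """Length of the shortest directed cycle containing the driver -> center edge."""
            if not G.has_edge(driver, center):
                raise ValueError("driver must project directly onto the center node")
            return 1 + nx.shortest_path_length(G, source=center, target=driver)

        G = nx.gnp_random_graph(60, 0.08, directed=True, seed=2)
        u, v = next(e for e in G.edges() if nx.has_path(G, e[1], e[0]))  # treat v as center, u as driver
        print("shortest loop length:", shortest_loop_length(G, center=v, driver=u))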