
    Spike-Train Responses of a Pair of Hodgkin-Huxley Neurons with Time-Delayed Couplings

    Model calculations have been performed on the spike-train response of a pair of Hodgkin-Huxley (HH) neurons coupled by recurrent excitatory-excitatory couplings with time delay. The coupled, excitable HH neurons are assumed to receive two kinds of spike-train inputs: a transient input consisting of $M$ impulses over a finite duration ($M$: an integer) and a sequential input with a constant interspike interval (ISI). The distribution of the output ISI $T_{\rm o}$ shows a rich variety of behavior depending on the coupling strength and the time delay. A comparison is made between the dependence of the output ISI on the transient inputs and that on the sequential inputs. Comment: 19 pages, 4 figures
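
    The delayed-coupling setup described above can be illustrated with a short simulation. The following is a minimal sketch, not the authors' code: two standard Hodgkin-Huxley neurons, a simple kinetic excitatory synapse, a fixed transmission delay, and a periodic (constant-ISI) pulse input to one neuron. All parameter values (coupling strength g_c, delay tau_d, input amplitude and period) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): two Hodgkin-Huxley neurons with
# delayed excitatory coupling, integrated by forward Euler. Neuron 0 receives
# a periodic (constant-ISI) pulse input; all parameter values are assumptions.
import numpy as np

# Standard HH gating rate functions (membrane potential V in mV)
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3            # uF/cm^2, mS/cm^2
ENa, EK, EL, Esyn = 50.0, -77.0, -54.4, 0.0       # mV (Esyn = 0: excitatory synapse)
g_c, tau_d = 0.1, 2.0                             # coupling strength (mS/cm^2), delay (ms)
dt, T = 0.01, 200.0                               # time step and total time (ms)
steps, buf = int(T / dt), int(tau_d / dt) + 1

V = np.full(2, -65.0)
n = alpha_n(V) / (alpha_n(V) + beta_n(V))         # gating variables at rest
m = alpha_m(V) / (alpha_m(V) + beta_m(V))
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
s = np.zeros(2)                                   # synaptic activation of each neuron
s_hist = np.zeros((buf, 2))                       # ring buffer holding s from ~tau_d ago
spikes = [[], []]

for k in range(steps):
    t = k * dt
    # Sequential input: 1 ms pulses every 20 ms (constant ISI) to neuron 0 only
    I_ext = np.array([10.0 if (t % 20.0) < 1.0 else 0.0, 0.0])
    s_del = s_hist[k % buf]                       # delayed activation (read before overwrite)
    I_syn = g_c * s_del[::-1] * (Esyn - V)        # each neuron is driven by the *other* one
    I_ion = gNa * m**3 * h * (ENa - V) + gK * n**4 * (EK - V) + gL * (EL - V)
    V_new = V + dt / C * (I_ion + I_syn + I_ext)
    n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
    m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
    s += dt * (5.0 / (1.0 + np.exp(-(V + 20.0) / 2.0)) * (1.0 - s) - s / 3.0)
    for i in range(2):
        if V[i] < 0.0 <= V_new[i]:                # upward zero crossing = spike
            spikes[i].append(t)
    s_hist[k % buf] = s
    V = V_new

# Output interspike intervals T_o of the two neurons
print([np.diff(sp) for sp in spikes])
```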

    Statistical Mechanics of Recurrent Neural Networks I. Statics

    A lecture-notes style review of the equilibrium statistical mechanics of recurrent neural networks with discrete and continuous neurons (e.g., Ising spins, coupled oscillators). To be published in the Handbook of Biological Physics (North-Holland). Accompanied by a similar review (part II) dealing with the dynamics. Comment: 49 pages, LaTeX

    Synchronization and coordination of sequences in two neural ensembles

    There are many types of neural networks involved in the sequential motor behavior of animals. For higher species, the control and coordination of the network dynamics is a function of the higher levels of the central nervous system, in particular the cerebellum. However, in many cases, especially for invertebrates, such coordination is the result of direct synaptic connections between small circuits. We show here that even the chaotic sequential activity of small model networks can be coordinated by electrotonic synapses connecting one or several pairs of neurons that belong to two different networks. As an example, we analyzed the coordination and synchronization of the sequential activity of two statocyst model networks of the marine mollusk Clione. The statocysts are gravity sensory organs that play a key role in postural control of the animal and in the generation of a complex hunting motor program. Each statocyst network was modeled by a small ensemble of neurons with Lotka-Volterra type dynamics and nonsymmetric inhibitory interactions. We studied how two such networks were synchronized by electrical coupling in the presence of an external signal which leads to winnerless competition among the neurons. We found that, as a function of the number and the strength of connections between the two networks, it is possible to coordinate and synchronize the sequences that each network generates with its own chaotic dynamics. Despite the chaotic dynamics, the coordination of the signals is established through an activation-sequence lock for those neurons that are active at a particular instant of time. This work was supported by National Institute of Neurological Disorders and Stroke Grant No. 7R01-NS-38022, National Science Foundation Grant No. EIA-0130708, Fundación BBVA, and Spanish MCyT Grant No. BFI2003-07276.
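
    A rough sense of the model class described above (generalized Lotka-Volterra rate units with nonsymmetric inhibition, two copies linked by diffusive "electrical" coupling on selected pairs) can be given by a small sketch. This is an illustrative toy, not the published statocyst model: the connectivity matrix, stimulus, and coupling strength are assumptions, and a random inhibition matrix only illustrates the coupling scheme rather than reproducing the paper's winnerless-competition dynamics.

```python
# Minimal sketch (not the authors' model): two small Lotka-Volterra networks
# with nonsymmetric inhibition, diffusively ("electrically") coupled through
# one pair of units. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 6                                              # neurons per network
rho = np.ones((N, N)) + 0.5 * rng.random((N, N))   # nonsymmetric inhibition (toy choice)
np.fill_diagonal(rho, 1.0)
stimulus = 1.0 + 0.1 * rng.random(N)               # external signal driving the competition
g_el = 0.05                                        # electrical coupling strength
coupled = [0]                                      # indices of the coupled neuron pair(s)

def lv_rhs(a, a_other):
    """Rate of change of activities a, given the partner network's activities."""
    da = a * (stimulus - rho @ a)                  # generalized Lotka-Volterra term
    for i in coupled:                              # diffusive term approximating a gap junction
        da[i] += g_el * (a_other[i] - a[i])
    return da

dt, T = 0.01, 500.0
a1 = rng.random(N) * 0.1
a2 = rng.random(N) * 0.1
trace1, trace2 = [], []
for _ in range(int(T / dt)):
    da1, da2 = lv_rhs(a1, a2), lv_rhs(a2, a1)
    a1 = np.clip(a1 + dt * da1, 0.0, None)         # activities stay non-negative
    a2 = np.clip(a2 + dt * da2, 0.0, None)
    trace1.append(a1.copy())
    trace2.append(a2.copy())

# Activation sequence: which unit is most active in each network at each instant
seq1 = np.argmax(trace1, axis=1)
seq2 = np.argmax(trace2, axis=1)
print("fraction of time both networks have the same most-active unit:",
      np.mean(seq1 == seq2))
```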

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.

    Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective

    Neural-symbolic computing has now become the subject of interest of both academic and industry research laboratories. Graph Neural Networks (GNNs) have been widely used in relational and symbolic domains, with widespread application in combinatorial optimization, constraint satisfaction, relational reasoning, and other scientific domains. The need for improved explainability, interpretability and trust of AI systems in general demands principled methodologies, as suggested by neural-symbolic computing. In this paper, we review the state of the art on the use of GNNs as a model of neural-symbolic computing. This includes the application of GNNs in several domains as well as their relationship to current developments in neural-symbolic computing. Comment: Updated version, draft of accepted IJCAI 2020 Survey Paper
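
    For readers unfamiliar with GNNs, the toy sketch below shows a single message-passing layer, the basic operation the survey builds on. It is illustrative only (plain NumPy, arbitrary adjacency and weights), not code from the paper.

```python
# Minimal sketch (illustrative, not from the survey): one GNN message-passing
# layer on a tiny relational graph. Adjacency and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, d = 4, 8
# A[v, u] = 1 means node u sends messages to node v
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
H = rng.normal(size=(n_nodes, d))          # initial node embeddings
W_self = rng.normal(size=(d, d)) * 0.1
W_msg = rng.normal(size=(d, d)) * 0.1

def gnn_layer(H, A):
    """h_v <- ReLU(W_self h_v + W_msg * sum of neighbour embeddings of v)."""
    messages = A @ H                        # sum incoming neighbour features
    return np.maximum(H @ W_self + messages @ W_msg, 0.0)

for _ in range(3):                          # a few propagation rounds
    H = gnn_layer(H, A)
print(H.shape)
```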

    Deep Virtual Networks for Memory Efficient Inference of Multiple Tasks

    Deep networks consume a large amount of memory by their nature. A natural question arises: can we reduce that memory requirement whilst maintaining performance? In particular, in this work we address the problem of memory-efficient learning for multiple tasks. To this end, we propose a novel network architecture producing multiple networks of different configurations, termed deep virtual networks (DVNs), for different tasks. Each DVN is specialized for a single task and structured hierarchically. The hierarchical structure, which contains multiple levels of hierarchy corresponding to different numbers of parameters, enables inference under multiple memory budgets. The building block of a deep virtual network is a disjoint collection of parameters of a network, which we call a unit. The lowest level of hierarchy in a deep virtual network is a single unit, and higher levels of hierarchy contain the lower levels' units plus additional units. Given a budget on the number of parameters, a different level of a deep virtual network can be chosen to perform the task. A unit can be shared by different DVNs, allowing multiple DVNs within a single network. In addition, shared units assist the target task with additional knowledge learned from other tasks. This cooperative configuration of DVNs makes it possible to handle different tasks in a memory-aware manner. Our experiments show that the proposed method outperforms existing approaches for multiple tasks. Notably, our method is more efficient than the others as it allows memory-aware inference for all tasks. Comment: CVPR 2019
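
    One way to picture the unit/level idea in the abstract above is the toy sketch below: a shared parameter pool is partitioned into disjoint "units", and each task's DVN runs at a chosen hierarchy level, i.e. using only the first few units. This is an interpretation for illustration, not the CVPR implementation; the layer shapes, the two hypothetical tasks, and all names are assumptions.

```python
# Minimal sketch (an interpretation, not the paper's implementation): "units"
# are disjoint slices of a layer's channels; a deep virtual network (DVN) for
# a task selects a hierarchy level, i.e. how many units it uses, so one set of
# shared parameters supports several memory budgets and tasks.
import numpy as np

rng = np.random.default_rng(0)
in_dim, unit_width, n_units, n_classes = 16, 8, 4, 3

# Shared parameter pool, partitioned column-wise into disjoint units.
W_pool = rng.normal(size=(in_dim, unit_width * n_units)) * 0.1
heads = {                      # per-task output heads (hypothetical tasks A and B)
    "task_A": rng.normal(size=(unit_width * n_units, n_classes)) * 0.1,
    "task_B": rng.normal(size=(unit_width * n_units, n_classes)) * 0.1,
}

def dvn_forward(x, task, level):
    """Run a task's DVN using only the first `level` units (its memory budget)."""
    width = unit_width * level
    h = np.maximum(x @ W_pool[:, :width], 0.0)     # hidden features from the selected units
    logits = h @ heads[task][:width]               # head restricted to the same units
    return logits

x = rng.normal(size=(2, in_dim))
print(dvn_forward(x, "task_A", level=1).shape)     # smallest budget: 1 unit
print(dvn_forward(x, "task_A", level=4).shape)     # full budget: all units
print(dvn_forward(x, "task_B", level=2).shape)     # another task sharing the same pool
```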