
    Dynamics of Neural Networks with Continuous Attractors

    We investigate the dynamics of continuous attractor neural networks (CANNs). Owing to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of stationary states. We systematically explore how this neutral stability facilitates the tracking performance of a CANN, a capability believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network's stationary states in state space. We quantify the distortions of the bump shape during tracking and study their effects on tracking performance. Results are obtained on the maximum speed at which a moving stimulus remains trackable, and on the reaction time to catch up with an abrupt change in the stimulus.
    Comment: 6 pages, 7 figures with 4 captions
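    The tracking behavior described in this abstract can be illustrated numerically. Below is a minimal sketch, not the paper's exact model: a one-dimensional CANN on a ring with a Gaussian interaction kernel, divisive global inhibition, and a stimulus moving at constant speed. All parameter values (N, a, k, tau, the stimulus strength) are illustrative assumptions.

    import numpy as np

    # Illustrative parameters (assumptions, not taken from the paper).
    N, a, k, tau, dt = 256, 0.5, 0.1, 1.0, 0.05
    x = np.linspace(-np.pi, np.pi, N, endpoint=False)
    dx = 2 * np.pi / N

    # Translation-invariant Gaussian kernel on the ring; the neutral
    # stability of the bump follows from J depending only on x - x'.
    d = np.angle(np.exp(1j * (x[:, None] - x[None, :])))
    J = np.exp(-d**2 / (2 * a**2))

    def simulate(speed, T=200.0):
        """Drive the network with a stimulus moving at `speed` rad per
        unit time; return the wrapped lag of the bump behind the stimulus."""
        u = np.exp(-x**2 / (2 * a**2))             # initial bump at 0
        lag = []
        for i in range(int(T / dt)):
            z = speed * i * dt                     # stimulus position
            dz = np.angle(np.exp(1j * (x - z)))
            I_ext = 0.5 * np.exp(-dz**2 / (2 * a**2))
            r = np.maximum(u, 0.0)**2
            r /= 1.0 + k * r.sum() * dx            # divisive global inhibition
            u += dt / tau * (-u + (J @ r) * dx + I_ext)
            bump = np.angle(np.sum(np.exp(1j * x) * np.maximum(u, 0.0)))
            lag.append(np.angle(np.exp(1j * (z - bump))))
        return np.array(lag)

    # Below some critical speed the bump settles to a constant lag behind
    # the stimulus; above it, tracking breaks down and the lag keeps growing.
    print("steady lag, slow stimulus:", simulate(0.01)[-100:].mean())
    print("steady lag, fast stimulus:", simulate(0.50)[-100:].mean())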

    Analysis of Oscillator Neural Networks for Sparsely Coded Phase Patterns

    We study a simple extended model of oscillator neural networks capable of storing sparsely coded phase patterns, in which information is encoded both in the mean firing rate and in the timing of spikes. Applying the methods of statistical neurodynamics to this model, we theoretically investigate its associative memory capability by evaluating its maximum storage capacities and deriving its basins of attraction. We show that, as in the Hopfield model, the storage capacity diverges as the activity level decreases. We consider several practically and theoretically important cases. For example, a dynamically adjusted threshold mechanism is found to enhance the retrieval ability of the associative memory. We also find that, under suitable conditions, the network can recall patterns even when patterns with different activity levels are stored at the same time. In addition, we examine robustness with respect to damage of the synaptic connections. The validity of these theoretical results is confirmed by reasonable agreement with numerical simulations.
    Comment: 23 pages, 11 figures
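    To make the encoding concrete, here is a minimal sketch of a phase-oscillator associative memory with sparse patterns. It is an assumption-laden stand-in, not the paper's model: patterns are a binary activity mask plus a random phase per active unit, couplings are a complex Hebbian outer product, and retrieval uses discrete phase updates; N, P, and the activity level f are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P, f = 400, 5, 0.2              # units, stored patterns, activity level

    # Sparse phase patterns: an activity mask plus a random phase for each
    # active unit (an illustrative encoding, not the paper's exact one).
    mask = rng.random((P, N)) < f
    theta = rng.uniform(0, 2 * np.pi, (P, N))
    xi = mask * np.exp(1j * theta)     # complex pattern vectors

    # Complex Hebbian couplings: J[i, j] = sum_mu xi[mu, i] * conj(xi[mu, j]).
    J = (xi.T @ xi.conj()) / (N * f)
    np.fill_diagonal(J, 0)

    def overlap(phi, mu):
        """Phase overlap with stored pattern mu, restricted to active units."""
        m = mask[mu]
        return np.abs(np.sum(np.exp(1j * (phi[m] - theta[mu, m])))) / m.sum()

    # Relax from a noisy version of pattern 0, updating active units only.
    phi = theta[0] + 0.8 * rng.standard_normal(N)
    active = mask[0]
    for _ in range(50):
        h = J @ (active * np.exp(1j * phi))   # complex local field
        phi[active] = np.angle(h[active])     # align each phase with its field
    print("overlap with pattern 0:", overlap(phi, 0))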

    Characterizing Self-Developing Biological Neural Networks: A First Step Towards their Application To Computing Systems

    Carbon nanotubes are often seen as the only alternative technology to silicon transistors. While they are the most likely short-term alternative, other longer-term options should be studied as well. Although contemplating biological neurons as an alternative component may seem preposterous at first sight, significant recent progress in CMOS-neuron interfaces suggests this direction may not be unrealistic; moreover, biological neurons are known to self-assemble into very large networks capable of complex information-processing tasks, something that has yet to be achieved with other emerging technologies. The first step toward designing computing systems on top of biological neurons is to build an abstract model of self-assembled biological neural networks, much as computer architects manipulate abstract models of transistors and circuits. In this article, we propose a first model of the structure of biological neural networks. We provide empirical evidence that this model matches the biological neural networks found in living organisms and exhibits the small-world graph properties commonly found in many large, self-organized systems, including biological neural networks. More importantly, we extract the simple local rules and characteristics governing the growth of such networks, enabling the development of potentially large but realistic biological neural networks, as would be needed for complex information-processing/computing tasks. Based on this model, future work will target the evolution and learning properties of such networks, and how they can be used to build computing systems.
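    The small-world property invoked in this abstract can be checked with standard graph statistics. The growth rules extracted in the article are not reproduced here; this sketch only illustrates the diagnostic itself, with a Watts-Strogatz graph standing in for the grown network (all sizes and probabilities are illustrative).

    import networkx as nx

    n, k, p = 1000, 10, 0.1    # nodes, ring neighbors, rewiring probability
    grown = nx.watts_strogatz_graph(n, k, p, seed=1)   # stand-in network
    rand = nx.gnm_random_graph(n, grown.number_of_edges(), seed=1)

    for name, g in [("small-world", grown), ("random", rand)]:
        if not nx.is_connected(g):                     # path length needs
            g = g.subgraph(max(nx.connected_components(g), key=len))
        C = nx.average_clustering(g)
        L = nx.average_shortest_path_length(g)
        print(f"{name:12s} C = {C:.3f}   L = {L:.2f}")

    # A small-world graph shows clustering C much larger than the random
    # baseline while the average path length L stays comparably short.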