
    Neural Information Processing: between synchrony and chaos

    The brain performs many different processing tasks, ranging from elaborate processes such as pattern recognition, memory, or decision-making to simpler functionalities such as linear filtering in image processing. Understanding the mechanisms by which the brain produces such a diverse range of cortical operations remains a fundamental problem in neuroscience. Recent empirical and theoretical results support the notion that the brain is naturally poised between ordered and chaotic states: because the largest number of metastable states exists near the transition, the brain there has access to the largest repertoire of behaviours. It is therefore of great interest to know which types of processing can be associated with the ordered and with the disordered states. Here we explain which processes are related to chaotic and to synchronized states, based on the study of in-silico implementations of biologically plausible neural systems. The measurements obtained reveal that synchronized cells (which can be understood as ordered states of the brain) are associated with non-linear computations, while uncorrelated neural ensembles are excellent information-transmission systems, able to implement linear transformations (such as the realization of convolution products) and to parallelize neural processes. From these results we propose a plausible interpretation of Hebbian and non-Hebbian learning rules as the biophysical mechanisms by which the brain creates ordered or chaotic ensembles depending on the desired functionality. The measurements obtained from the hardware implementation of different neural systems endorse the view that the brain works with two different states, ordered and chaotic, with complementary functionalities: non-linear processing (synchronized states) and information transmission and convolution (chaotic states).
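    The linear transformations attributed here to uncorrelated ensembles can be illustrated in software. The sketch below (plain NumPy, not the authors' hardware implementation; the exponential kernel and signal lengths are illustrative choices) verifies the defining properties of a convolution-based linear filter.

```python
import numpy as np

# Exponentially decaying kernel, a common illustrative model of a synaptic filter.
kernel = np.exp(-np.arange(20) / 5.0)

def linear_filter(signal):
    """Convolve an input signal with the kernel (a linear transformation)."""
    return np.convolve(signal, kernel)

rng = np.random.default_rng(0)
a = rng.standard_normal(100)
b = rng.standard_normal(100)

# Additivity: filtering a sum equals the sum of the filtered parts.
assert np.allclose(linear_filter(a + b), linear_filter(a) + linear_filter(b))
# Homogeneity: scaling the input scales the output by the same factor.
assert np.allclose(linear_filter(2.0 * a), 2.0 * linear_filter(a))
```

    These two properties (additivity and homogeneity) are exactly what distinguishes the convolution-like operations ascribed to uncorrelated ensembles from the non-linear computations ascribed to synchronized ones.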

    Synchronous Behavior of Two Coupled Electronic Neurons

    We report on experimental studies of synchronization phenomena in a pair of analog electronic neurons (ENs). The ENs were designed to reproduce the observed membrane-voltage oscillations of isolated biological neurons from the stomatogastric ganglion of the California spiny lobster Panulirus interruptus. The ENs are simple analog circuits which integrate four-dimensional differential equations representing the fast and slow subcellular mechanisms that produce the characteristic regular/chaotic spiking-bursting behavior of these cells. In this paper we study their dynamical behavior as we couple them in the same configurations as we have done for their counterpart biological neurons. The interconnections we use for these neural oscillators are both direct electrical connections and excitatory and inhibitory chemical connections, each realized by analog circuitry and suggested by biological examples. We provide quantitative evidence that the ENs and the biological neurons behave similarly when coupled in the same manner: each displays well-defined bifurcations in their mutual synchronization and regularization. We also report briefly on an experiment coupling biological neurons to four-dimensional ENs, which provides further ground for testing the validity of our numerical and electronic models of individual neural behavior. Taken as a whole, our experiments present interesting new examples of regularization and synchronization in coupled nonlinear oscillators.
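    The synchronization-by-electrical-coupling effect reported here can be reproduced numerically. The sketch below is a stand-in, not the authors' four-dimensional circuit equations: it uses the three-variable Hindmarsh-Rose model in its standard chaotic-bursting regime, with diffusive (gap-junction-like) coupling of illustrative strength, and shows two chaotic bursters converging onto a common trajectory.

```python
import numpy as np

def hindmarsh_rose_pair(g, steps=60000, dt=0.01):
    """Euler-integrate two electrically coupled Hindmarsh-Rose neurons.

    g is the diffusive (gap-junction) coupling strength; g = 0 leaves the
    neurons uncoupled. Returns the membrane-variable traces x1, x2.
    """
    a, b, c, d, r, s, x0, I = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6, 3.2
    # Distinct initial conditions, so any synchronization is dynamical.
    x1, y1, z1 = -1.0, 0.0, 2.0
    x2, y2, z2 = -0.5, 0.5, 2.5
    xs1, xs2 = [], []
    for _ in range(steps):
        dx1 = y1 - a * x1**3 + b * x1**2 - z1 + I + g * (x2 - x1)
        dy1 = c - d * x1**2 - y1
        dz1 = r * (s * (x1 - x0) - z1)
        dx2 = y2 - a * x2**3 + b * x2**2 - z2 + I + g * (x1 - x2)
        dy2 = c - d * x2**2 - y2
        dz2 = r * (s * (x2 - x0) - z2)
        x1, y1, z1 = x1 + dt * dx1, y1 + dt * dy1, z1 + dt * dz1
        x2, y2, z2 = x2 + dt * dx2, y2 + dt * dy2, z2 + dt * dz2
        xs1.append(x1)
        xs2.append(x2)
    return np.array(xs1), np.array(xs2)

x1, x2 = hindmarsh_rose_pair(g=1.0)
# Strong electrical coupling pulls the two membrane traces together.
sync_error = np.abs(x1[-5000:] - x2[-5000:]).mean()
assert sync_error < 0.05
```

    With the coupling set to zero the two traces decorrelate chaotically; above a critical coupling strength they synchronize completely, which is the qualitative bifurcation behavior the abstract describes for both the electronic and the biological neurons.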

    Neuro-memristive Circuits for Edge Computing: A Review

    The volume, veracity, variability, and velocity of data produced by the ever-growing network of sensors connected to the Internet pose challenges for the power management, scalability, and sustainability of cloud-computing infrastructure. Increasing the data-processing capability of edge-computing devices at lower power budgets can reduce several overheads for cloud-computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge-computing devices. We discuss why neuromorphic architectures are useful for edge devices and present the advantages, drawbacks, and open problems in the field of neuro-memristive circuits for edge computing.

    Resonate and Fire Neuron with Fixed Magnetic Skyrmions

    In the brain, the membrane potential of many neurons oscillates in a subthreshold, damped fashion, and these neurons fire when excited by an input frequency that nearly equals their eigenfrequency. In this work, we investigate theoretically the artificial implementation of such "resonate-and-fire" neurons by utilizing the magnetization dynamics of a fixed magnetic skyrmion in the free layer of a magnetic tunnel junction (MTJ). To make this nanomagnetic implementation of an artificial neuron fire, we propose to employ voltage control of magnetic anisotropy or voltage-generated strain as the input (spike or sinusoidal) signal, which modulates the perpendicular magnetic anisotropy (PMA). This results in continual expansion and shrinking (i.e. breathing) of the skyrmion core, mimicking the subthreshold oscillation. Any subsequent input pulse arriving at an interval close to the breathing period, or a sinusoidal input close to the eigenfrequency, drives the magnetization dynamics of the fixed skyrmion in a resonant manner. The time-varying electrical resistance of the MTJ due to this resonant oscillation of the skyrmion core is used to drive a complementary metal-oxide-semiconductor (CMOS) buffer circuit, which produces spike outputs. By rigorous micromagnetic simulation, we investigate the interspike timing dependence and the response to different excitatory and inhibitory input pulses. Finally, we show that such resonate-and-fire neurons have potential application in coupled nanomagnetic-oscillator-based associative memory arrays.
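    The skyrmion device itself requires micromagnetic simulation, but the resonate-and-fire principle it implements can be sketched with a much simpler abstraction: Izhikevich's complex-valued linear model, a damped oscillator that crosses its firing threshold only when driven near its eigenfrequency. All parameter values below are illustrative, not device parameters.

```python
import numpy as np

def resonate_and_fire(drive_freq, eigen_freq=1.0, damping=0.05,
                      threshold=1.0, amp=0.2, dt=0.01, t_max=300.0):
    """Count spikes of a resonate-and-fire neuron (Izhikevich-style).

    The state z evolves as z' = (-damping + 1j*eigen_freq) * z + input(t);
    a spike is emitted (and z is reset) when Im(z) reaches the threshold.
    """
    z = 0.0 + 0.0j
    spikes = 0
    for step in range(int(t_max / dt)):
        t = step * dt
        z += dt * ((-damping + 1j * eigen_freq) * z
                   + amp * np.sin(drive_freq * t))
        if z.imag >= threshold:
            spikes += 1
            z = 0.0 + 0.0j  # reset after firing
    return spikes

# Driving at the eigenfrequency builds up the subthreshold oscillation
# resonantly and elicits spikes...
assert resonate_and_fire(drive_freq=1.0) > 0
# ...while an off-resonance drive of identical amplitude stays subthreshold.
assert resonate_and_fire(drive_freq=3.0) == 0
```

    The skyrmion's breathing mode plays the role of the damped oscillation of z, and the CMOS buffer plays the role of the threshold-and-reset step.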

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information in their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network with those of systems in which the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully captured by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
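    The core mechanism, shared-input correlations canceled by negative feedback, already shows up in a static linear rate model far simpler than the paper's leaky integrate-and-fire simulations. The sketch below (population size, noise variances, and the feedback gain w are illustrative choices, not the paper's parameters) compares average pairwise correlations with the feedback loop open versus closed.

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons, n_samples, w = 100, 50000, 20.0  # w: inhibitory feedback gain

common = rng.standard_normal(n_samples)                # shared presynaptic input
private = rng.standard_normal((n_neurons, n_samples))  # independent private noise

# Open loop: every neuron sees the common input plus its own noise.
x_open = common + private

# Closed loop: x = common + private - w * mean(x), solved self-consistently
# per time step (the fixed point of the linear inhibitory feedback).
pop_mean = (common + private.mean(axis=0)) / (1.0 + w)
x_closed = common + private - w * pop_mean

def mean_pairwise_corr(x):
    """Average correlation coefficient over all distinct neuron pairs."""
    c = np.corrcoef(x)
    return c[np.triu_indices_from(c, k=1)].mean()

corr_open = mean_pairwise_corr(x_open)      # ~0.5: strong shared-input correlation
corr_closed = mean_pairwise_corr(x_closed)  # near zero: feedback suppresses it
assert corr_open > 0.4
assert abs(corr_closed) < 0.05
```

    Closing the loop both attenuates the common component (by the factor 1/(1+w)) and introduces weak negative correlations through the shared feedback signal; the two effects together leave the pairwise correlations near zero, mirroring the decorrelation the abstract describes.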

    Opening the “Black Box” of Silicon Chip Design in Neuromorphic Computing

    Neuromorphic computing, a bio-inspired computing architecture that transfers principles of neuroscience onto silicon chips, has the potential to achieve computation and energy efficiency on the level of mammalian brains. Meanwhile, three-dimensional (3D) integrated-circuit (IC) design with non-volatile memory crossbar arrays offers intrinsic vector-matrix computation with parallel-computing capability for neuromorphic designs. In this chapter, the state-of-the-art research trends in electronic circuit design for neuromorphic computing are introduced. Furthermore, a practical bio-inspired spiking neural network with a delay-feedback topology is discussed. In the endeavor to imitate how human beings process information, our fabricated spiking-neural-network chip is capable of processing analog signals directly, resulting in high energy efficiency at a small hardware-implementation cost. Mimicking the neurological structure of mammalian brains, the potential of the 3D-IC implementation technique with memristive synapses is investigated. Finally, applications to chaotic time-series prediction and video-frame recognition are demonstrated.
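    The crossbar's intrinsic vector-matrix computation is just Ohm's and Kirchhoff's laws: each column current is the conductance-weighted sum of the row voltages, so one analog read performs a whole matrix-vector product. A numerical sketch with a hypothetical 3x2 array (the conductance and voltage values are made up for illustration):

```python
import numpy as np

# Hypothetical memristor conductances (siemens): rows = inputs, cols = outputs.
G = np.array([[1e-6, 5e-6],
              [2e-6, 1e-6],
              [4e-6, 3e-6]])

# Input voltages applied to the rows (volts), encoding the input vector.
V = np.array([0.1, 0.2, 0.3])

# Ohm's law per device (I = G_ij * V_i) and Kirchhoff's current law per
# column wire (currents sum) give the column currents I_j = sum_i G_ij * V_i:
# the crossbar computes the vector-matrix product in a single analog step.
I = V @ G

assert np.allclose(I, [1.7e-6, 1.6e-6])
```

    In a digital system the same product costs one multiply-accumulate per synapse; in the crossbar all of them happen concurrently in the analog domain, which is the source of the parallelism claimed above.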

    Six networks on a universal neuromorphic computing substrate

    In this study, we present a highly configurable neuromorphic computing substrate and use it to emulate several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and its high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks which cover a broad spectrum of both structure and functionality.