
    Six networks on a universal neuromorphic computing substrate

    In this study, we present a highly configurable neuromorphic computing substrate and use it for emulating several types of neural networks. At the heart of this system lies a mixed-signal chip, with analog implementations of neurons and synapses and digital transmission of action potentials. Major advantages of this emulation device, which has been explicitly designed as a universal neural network emulator, are its inherent parallelism and high acceleration factor compared to conventional computers. Its configurability allows the realization of almost arbitrary network topologies and the use of widely varied neuronal and synaptic parameters. Fixed-pattern noise inherent to analog circuitry is reduced by calibration routines. An integrated development environment allows neuroscientists to operate the device without any prior knowledge of neuromorphic circuit design. As a showcase for the capabilities of the system, we describe the successful emulation of six different neural networks that cover a broad spectrum of both structure and functionality.
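    The calibration idea mentioned above can be illustrated with a minimal sketch: measure each analog neuron's deviation from a target value under fixed-pattern noise, then store a per-neuron correction. All names, sizes, and noise figures below are hypothetical; the actual calibration routines of the device are not described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: each analog neuron should sit at a common target
# threshold, but fixed-pattern noise (e.g. transistor mismatch) adds a
# static per-neuron offset; temporal noise varies from trial to trial.
N_NEURONS = 384
TARGET_THRESHOLD = 1.0                                  # arbitrary units
fixed_pattern_offset = rng.normal(0.0, 0.1, N_NEURONS)  # static mismatch

def measure_threshold(neuron_idx, n_trials=20):
    """Emulate repeated threshold measurements on one analog neuron."""
    trial_noise = rng.normal(0.0, 0.02, n_trials)
    samples = TARGET_THRESHOLD + fixed_pattern_offset[neuron_idx] + trial_noise
    return samples.mean()

# Calibration: estimate each neuron's offset and store a correction term
# that would be written to the chip's configuration memory.
correction = np.array(
    [TARGET_THRESHOLD - measure_threshold(i) for i in range(N_NEURONS)]
)

# The residual spread after applying the corrections is limited only by
# the measurement error, not by the fixed-pattern noise itself.
before = fixed_pattern_offset.std()
after = (fixed_pattern_offset + correction).std()
print(f"threshold spread: {before:.4f} before, {after:.4f} after calibration")
```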

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures in which memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses, together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
    Comment: Submitted to the Proceedings of the IEEE; a review of recently proposed neuromorphic computing platforms and systems.

    Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity

    Winner-Take-All (WTA) networks are recurrently connected populations of excitatory and inhibitory neurons that represent promising candidate microcircuits for implementing cortical computation. WTAs can perform powerful computations, ranging from signal restoration to state-dependent processing. However, such networks require fine-tuned connectivity parameters to keep the network dynamics within stable operating regimes. In this article, we show how such stability can emerge autonomously through an interaction of biologically plausible plasticity mechanisms that operate simultaneously on all excitatory and inhibitory synapses of the network. A weight-dependent plasticity rule is derived from the triplet spike-timing-dependent plasticity model, and its stabilization properties in the mean-field case are analyzed using contraction theory. Our main result provides simple constraints on the plasticity rule parameters, rather than on the weights themselves, which guarantee stable WTA behavior. The plastic network we present is able to adapt to changing input conditions and to dynamically adjust its gain, therefore exhibiting self-stabilization mechanisms that are crucial for maintaining stable operation in large networks of interconnected subunits. We show how distributed neural assemblies can adjust their parameters for stable WTA function autonomously while respecting anatomical constraints on neural wiring.
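    For readers unfamiliar with the underlying rule, the following is a minimal discrete-time sketch of the triplet STDP model (Pfister & Gerstner, 2006) from which the article derives its weight-dependent rule. Parameter values are illustrative placeholders, not the constrained values derived in the paper, and the hard weight bounds are an added simplification.

```python
import numpy as np

# Discrete-time triplet STDP: fast and slow traces on both sides of the
# synapse; depression is gated by a postsynaptic trace, potentiation by a
# presynaptic trace plus a slow postsynaptic (triplet) trace.
DT = 1e-3                                   # 1 ms time step
TAU_PLUS, TAU_X = 16.8e-3, 101e-3           # presynaptic trace constants
TAU_MINUS, TAU_Y = 33.7e-3, 125e-3          # postsynaptic trace constants
A2P, A3P = 5e-3, 6e-3                       # pair/triplet potentiation
A2M, A3M = 7e-3, 2e-4                       # pair/triplet depression

def run_triplet_stdp(pre_spikes, post_spikes, w=0.5):
    """Evolve one synaptic weight given boolean spike trains."""
    r1 = r2 = o1 = o2 = 0.0
    for pre, post in zip(pre_spikes, post_spikes):
        if pre:   # depression, gated by the fast postsynaptic trace
            w -= o1 * (A2M + A3M * r2)
        if post:  # potentiation, gated by the fast presynaptic trace
            w += r1 * (A2P + A3P * o2)
        # exponential decay of all traces, then increment on spikes
        r1 += -DT * r1 / TAU_PLUS + pre
        r2 += -DT * r2 / TAU_X + pre
        o1 += -DT * o1 / TAU_MINUS + post
        o2 += -DT * o2 / TAU_Y + post
        w = min(max(w, 0.0), 1.0)           # hard bounds (simplification)
    return w

rng = np.random.default_rng(0)
pre = rng.random(10_000) < 10 * DT          # ~10 Hz Poisson trains
post = rng.random(10_000) < 10 * DT
print(f"final weight: {run_triplet_stdp(pre, post):.3f}")
```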

    A Computational Investigation of Neural Dynamics and Network Structure

    With the overall goal of illuminating the relationship between neural dynamics and neural network structure, this thesis presents a) a computer model of a network infrastructure capable of global broadcast and competition, and b) a study of various convergence properties of spike-timing-dependent plasticity (STDP) in a recurrent neural network. The first part of the thesis explores the parameter space of a possible Global Neuronal Workspace (GNW) realised in a novel computational network model using stochastic connectivity. The structure of this model is analysed in light of the characteristic dynamics of a GNW: broadcast, reverberation, and competition. It is found that even with careful consideration of the balance between excitation and inhibition, the structural choices do not allow agreement with the GNW dynamics, and the implications of this are addressed. An additional level of competition – access competition – is added, discussed, and found to be more conducive to winner-takes-all competition. The second part of the thesis investigates the formation of synaptic structure due to neural and synaptic dynamics. From previous theoretical and modelling work, it is predicted that homogeneous stimulation in a recurrent neural network with STDP will create a self-stabilising equilibrium amongst synaptic weights, while heterogeneous stimulation will induce structured synaptic changes. A new factor in modulating the synaptic weight equilibrium is suggested from the experimental evidence presented: anti-correlation due to inhibitory neurons. It is observed that the synaptic equilibrium creates competition amongst synapses, and those specifically stimulated during heterogeneous stimulation win out. Further investigation is carried out in order to assess the effect that more complex STDP rules would have on synaptic dynamics, varying parameters of a trace STDP model. There is little qualitative effect on synaptic dynamics under low-frequency (< 25 Hz) conditions, justifying the use of simple STDP until further experimental or theoretical evidence suggests otherwise.
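    As a rough illustration of STDP-driven synaptic competition of the kind described, the sketch below uses a Song-Abbott-style additive pair rule with a slight depression bias: a stimulated subgroup of inputs fires correlated volleys and is expected to win out, while the remaining synapses drift down. The toy probabilistic postsynaptic unit and all parameters are stand-ins, not the thesis's recurrent network or trace model.

```python
import numpy as np

# 100 plastic inputs onto one probabilistic spiking unit; the first 20
# inputs ("stimulated") additionally fire shared volleys, so their spikes
# are mutually correlated and reliably precede postsynaptic spikes.
rng = np.random.default_rng(1)
DT, STEPS, N, N_STIM = 1e-3, 200_000, 100, 20
TAU = 20e-3                                 # STDP trace time constant
A_PLUS, A_MINUS = 0.005, 0.00525            # slight depression bias

w = np.full(N, 0.5)
pre_trace, post_trace = np.zeros(N), 0.0
for _ in range(STEPS):
    pre = rng.random(N) < 10.0 * DT         # 10 Hz independent baseline
    if rng.random() < 10.0 * DT:            # 10 Hz shared volley for the
        pre[:N_STIM] = True                 # stimulated subgroup
    pre_trace = pre_trace * (1 - DT / TAU) + pre
    post = rng.random() < min(0.03 * (w @ pre), 1.0)
    w -= A_MINUS * post_trace * pre         # post-before-pre pairings: LTD
    if post:
        w += A_PLUS * pre_trace             # pre-before-post pairings: LTP
    post_trace = post_trace * (1 - DT / TAU) + post
    np.clip(w, 0.0, 1.0, out=w)

print(f"stimulated mean weight:   {w[:N_STIM].mean():.2f}")  # expected high
print(f"unstimulated mean weight: {w[N_STIM:].mean():.2f}")  # expected low
```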

    Competition through selective inhibitory synchrony

    Models of cortical neuronal circuits commonly depend on inhibitory feedback to control gain, provide signal normalization, and selectively amplify signals using winner-take-all (WTA) dynamics. Such models generally assume that excitatory and inhibitory neurons are able to interact easily, because their axons and dendrites are co-localized in the same small volume. However, quantitative neuroanatomical studies of the dimensions of axonal and dendritic trees of neurons in the neocortex show that this co-localization assumption is not valid. In this paper we describe a simple modification to the WTA circuit design that permits the effects of distributed inhibitory neurons to be coupled through synchronization, and so allows a single WTA to be distributed widely in cortical space, well beyond the arborization of any single inhibitory neuron, and even across different cortical areas. We prove by non-linear contraction analysis, and demonstrate by simulation, that distributed WTA subsystems combined by such inhibitory synchrony are inherently stable. We show analytically that synchronization is substantially faster than winner selection. This circuit mechanism allows networks of independent WTAs to fully or partially compete with each other.
    Comment: In press at Neural Computation; 4 figures.
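    The coupling idea can be caricatured in a few lines of rate-based code: two small WTA modules whose inhibitory units also receive the other module's excitatory activity behave as one distributed WTA, so a single global winner emerges. Parameters are illustrative, and the paper's contraction-theory stability conditions are not verified here.

```python
import numpy as np

# Two WTA modules, each with two excitatory units and one inhibitory unit.
DT, STEPS, TAU = 1e-4, 50_000, 10e-3
W_EE, W_EI, W_IE = 1.2, 1.0, 2.0     # self-excitation, E->I, I->E
W_SYNC = 1.0                         # cross-module drive onto inhibition

relu = lambda v: np.maximum(v, 0.0)
x = np.zeros(4)                      # two excitatory units per module
inh = np.zeros(2)                    # one inhibitory unit per module
drive = np.array([1.0, 0.8, 0.9, 0.7])   # unit 0 gets the strongest input

for _ in range(STEPS):
    dx = (-x + relu(drive + W_EE * x - W_IE * np.repeat(inh, 2))) / TAU
    e_sum = x.reshape(2, 2).sum(axis=1)      # excitatory activity per module
    # Synchronizing coupling: each inhibitory unit also sees the other
    # module's excitation, making the inhibition effectively global.
    dinh = (-inh + relu(W_EI * e_sum + W_SYNC * e_sum[::-1])) / TAU
    x += DT * dx
    inh += DT * dinh

print("excitatory rates:", np.round(x, 3))   # unit 0 should be the sole winner
```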

    Neurosystems: brain rhythms and cognitive processing

    Neuronal rhythms are ubiquitous features of brain dynamics, and are highly correlated with cognitive processing. However, the relationship between the physiological mechanisms producing these rhythms and the functions associated with the rhythms remains mysterious. This article investigates the contributions of rhythms to basic cognitive computations (such as filtering signals by coherence and/or frequency) and to major cognitive functions (such as attention and multi-modal coordination). We offer support to the premise that the physiology underlying brain rhythms plays an essential role in how these rhythms facilitate some cognitive operations.
    Funding: 098352 - Wellcome Trust; 5R01NS067199 - NINDS NIH HHS.
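    The notion of "filtering signals by coherence" can be demonstrated with a short, purely didactic example (not a model from the article): two noisy signals sharing a 40 Hz gamma-band component are coherent near 40 Hz and incoherent elsewhere.

```python
import numpy as np
from scipy.signal import coherence

FS = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 20.0, 1.0 / FS)
rng = np.random.default_rng(7)

# A shared gamma-band oscillation buried in independent noise.
shared_gamma = np.sin(2 * np.pi * 40.0 * t)
sig_a = shared_gamma + rng.normal(0, 1.0, t.size)
sig_b = shared_gamma + rng.normal(0, 1.0, t.size)

# Magnitude-squared coherence picks out the shared 40 Hz component.
f, coh = coherence(sig_a, sig_b, fs=FS, nperseg=1024)
peak = f[np.argmax(coh)]
print(f"peak coherence {coh.max():.2f} at {peak:.1f} Hz")  # ~40 Hz expected
```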

    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability, and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond that required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
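    The workflow's measure-distort-compensate structure can be sketched schematically. In the toy example below, the "functionality measure" is the total mean synaptic drive of a Poisson input population, the distortion is multiplicative fixed-pattern weight variation, and the compensation is a global rescaling; all of this is a hypothetical stand-in for the ESS benchmarks, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
N_SYN, W_NOMINAL, RATE_IN = 100, 0.1, 10.0   # synapses, weight, input Hz

def output_drive(weights):
    """Functionality measure: total mean synaptic drive (arbitrary units)."""
    return weights.sum() * RATE_IN

target = output_drive(np.full(N_SYN, W_NOMINAL))

# Distortion: multiplicative fixed-pattern weight variation (~30% spread),
# mimicking analog parameter variations on a hardware substrate.
distorted = W_NOMINAL * rng.lognormal(mean=0.0, sigma=0.3, size=N_SYN)
print(f"deviation before compensation: "
      f"{100 * (output_drive(distorted) / target - 1):+.1f}%")

# Compensation: a global rescaling restores the mean drive, even though
# per-synapse variability (and any higher-order effects) remain.
compensated = distorted * (target / output_drive(distorted))
print(f"deviation after compensation:  "
      f"{100 * (output_drive(compensated) / target - 1):+.1f}%")
```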