2 research outputs found

    An efficient automated parameter tuning framework for spiking neural networks

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a speedup of 65× for the GPU implementation over the CPU implementation, or 0.35 h per generation on the GPU vs. 23.5 h per generation on the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
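    The core loop of an EA-based tuner like the one described can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `fitness` function here is a toy stand-in for the GPU-evaluated tuning-curve objective, and the population size, mutation scale, and selection scheme are hypothetical choices.

    ```python
    import random

    def fitness(params):
        # Hypothetical stand-in objective: in the actual framework this would
        # run the SNN on the GPU and score its tuning-curve responses.
        # Here, fitness is maximal when every parameter is near 0.5.
        return -sum((p - 0.5) ** 2 for p in params)

    def evolve(n_params=4, pop_size=20, generations=50,
               mutation_sigma=0.1, seed=0):
        rng = random.Random(seed)
        # Random initial population of candidate parameter vectors in [0, 1].
        pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=fitness, reverse=True)
            parents = scored[: pop_size // 2]          # truncation selection
            children = []
            for p in parents:
                # Gaussian mutation, clipped to the parameter bounds.
                child = [min(1.0, max(0.0, g + rng.gauss(0, mutation_sigma)))
                         for g in p]
                children.append(child)
            pop = parents + children                   # elitism: parents survive
        return max(pop, key=fitness)

    best = evolve()
    ```

    In the framework described above, the expensive step is evaluating `fitness` for each individual, which is why offloading those SNN simulations to the GPU yields the reported per-generation speedup.
    
    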

    A large-scale neural network model of the influence of neuromodulatory levels on working memory and behavior

    The dorsolateral prefrontal cortex (dlPFC), which is regarded as the primary site for visuospatial working memory in the brain, is significantly modulated by dopamine (DA) and norepinephrine (NE). DA and NE originate in the ventral tegmental area (VTA) and locus coeruleus (LC), respectively, and have been shown to have an ‘inverted-U’ dose-response profile in dlPFC, where the level of arousal and decision-making performance is a function of DA and NE concentrations. Moreover, there appears to be a sweet spot, in terms of the level of DA and NE activation, which allows for optimal working memory and behavioral performance. When either DA or NE is too high, input to the PFC is essentially blocked. When either DA or NE is too low, PFC network dynamics become noisy and activity levels diminish. Mechanisms for how this occurs have been suggested; however, they have not been tested in a large-scale model with neurobiologically plausible network dynamics. Also, DA and NE levels have not been simultaneously manipulated experimentally, which is difficult to achieve in vivo due to strong bi-directional connections between the VTA and LC. To address these issues, we built a spiking neural network model that includes D1, α2A, and α1 receptors. The model was able to match the inverted-U profiles that have been shown experimentally for differing levels of DA and NE. Furthermore, we were able to make predictions about what working memory and behavioral deficits may occur during simultaneous manipulation of DA and NE outside of their optimal levels. Specifically, when DA levels were low and NE levels were high, cues could not be held in working memory due to increased noise. On the other hand, when DA levels were high and NE levels were low, incorrect decisions were made due to weak overall network activity. We also show that lateral inhibition in working memory may play a more important role in increasing signal-to-noise ratio than increasing recurrent excitatory input.
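    The ‘inverted-U’ dose-response profile described above can be illustrated with a simple function. This is a hedged sketch, not the paper's receptor model: modeling performance as a Gaussian of log-concentration, with hypothetical `optimum` and `width` parameters, just captures the qualitative shape in which performance peaks at an intermediate neuromodulator level and falls off when DA or NE is too low or too high.

    ```python
    import math

    def inverted_u(concentration, optimum=1.0, width=0.5):
        # Toy inverted-U profile: performance is a Gaussian of the
        # log-ratio between the current concentration and the optimum.
        # 'optimum' and 'width' are illustrative, not fitted values.
        log_ratio = math.log(concentration / optimum)
        return math.exp(-(log_ratio ** 2) / (2 * width ** 2))

    # Performance is highest at the optimum and degrades on either side:
    # inverted_u(1.0) -> 1.0, while inverted_u(0.2) and inverted_u(5.0)
    # are both much smaller.
    ```

    A two-dimensional version of this shape (one axis per neuromodulator) is the kind of surface the model's simultaneous DA/NE manipulations trace out.
    
    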