13 research outputs found
A Compositionality Machine Realized by a Hierarchic Architecture of Synfire Chains
The composition of complex behavior is thought to rely on the concurrent and sequential activation of simpler action components, or primitives. Systems of synfire chains have previously been proposed to account for either the simultaneous or the sequential aspects of compositionality; however, the compatibility of the two aspects has so far not been addressed. Moreover, the simultaneous activation of primitives has up until now only been investigated in the context of reactive computations, i.e., the perception of stimuli. In this study we demonstrate how a hierarchical organization of synfire chains is capable of generating both aspects of compositionality for proactive computations such as the generation of complex and ongoing action. To this end, we develop a network model consisting of two layers of synfire chains. Using simple drawing strokes as a visualization of abstract primitives, we map the feed-forward activity of the upper level synfire chains to motion in two-dimensional space. Our model is capable of producing drawing strokes that are combinations of primitive strokes by binding together the corresponding chains. Moreover, when the lower layer of the network is constructed in a closed-loop fashion, drawing strokes are generated sequentially. The generated pattern can be random or deterministic, depending on the connection pattern between the lower level chains. We propose quantitative measures for simultaneity and sequentiality, revealing a wide parameter range in which both aspects are fulfilled. Finally, we investigate the spiking activity of our model to propose candidate signatures of synfire chain computation in measurements of neural activity during action execution
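The mapping from chain activity to drawing strokes can be caricatured in a few lines. The group counts, step vectors, and the summation used for "binding" below are illustrative stand-ins for the paper's spiking-network mechanics, not its actual model:

```python
# Abstract sketch of a feed-forward synfire chain: activity propagates group
# by group, and each active group moves a pen by a small 2D step, tracing a
# stroke. All parameters here are illustrative, not the paper's.

def run_chain(step_vectors, start=(0.0, 0.0)):
    """Propagate activity along the chain; each group's activation moves the
    pen by that group's step vector, yielding a trajectory in 2D space."""
    x, y = start
    trajectory = [(x, y)]
    for dx, dy in step_vectors:      # one entry per chain group, in firing order
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

def bind(chain_a, chain_b):
    """Caricature of 'binding' two primitive chains: simultaneous activation
    sums their step vectors group by group, composing a combined stroke."""
    return [(ax + bx, ay + by) for (ax, ay), (bx, by) in zip(chain_a, chain_b)]

# Two primitive strokes: a horizontal line and a vertical line.
horizontal = [(1.0, 0.0)] * 4
vertical = [(0.0, 1.0)] * 4

diagonal = run_chain(bind(horizontal, vertical))  # composed diagonal stroke
```

Sequential composition, in this caricature, would correspond to running one chain's trajectory after another from the previous end point, with the order set by the connectivity between lower-level chains.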
Limits to the Development of Feed-Forward Structures in Large Recurrent Neuronal Networks
Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper, we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify biologically motivated candidate adaptations to the balanced random network model that might enable it
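As a rough illustration of what "weight-dependent STDP" means in this setting, here is a minimal pair-based update rule in which depression scales with the current weight and potentiation with the remaining headroom; the time constants and learning rates are arbitrary placeholders, not values from the study:

```python
import math

# Minimal sketch of a pair-based, weight-dependent STDP update for a single
# synapse. The multiplicative form below is one common choice; all constants
# are illustrative placeholders.

TAU_PLUS = 20.0    # ms, potentiation window
TAU_MINUS = 20.0   # ms, depression window
A_PLUS = 0.01      # potentiation learning rate
A_MINUS = 0.012    # depression learning rate
W_MAX = 1.0        # upper weight bound

def stdp_update(w, t_pre, t_post):
    """Return the updated weight after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate, scaled by remaining headroom
        w += A_PLUS * (W_MAX - w) * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depress, scaled by the current weight
        w -= A_MINUS * w * math.exp(dt / TAU_MINUS)
    return min(max(w, 0.0), W_MAX)

w_pot = stdp_update(0.5, t_pre=0.0, t_post=10.0)  # causal pairing: increase
w_dep = stdp_update(0.5, t_pre=10.0, t_post=0.0)  # acausal pairing: decrease
```

The weight dependence is the relevant feature here: it pulls weights toward an intermediate fixed point, which is central to the instability argument the paper develops for recurrent networks.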
Meeting the Memory Challenges of Brain-Scale Network Simulation
The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity, and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we discover that as the network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of key contributing components to memory saturation and prediction of the effects of potential improvements to code before any implementation takes place. 
As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project
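The kind of linear per-node memory model described above can be sketched as follows; the coefficients are hypothetical placeholders, not NEST's measured per-object costs:

```python
# Sketch of a linear memory model: the footprint of one compute node is a
# fixed base cost plus a cost per locally represented neuron and per local
# synapse. All coefficients are hypothetical placeholders.

def memory_per_node(n_neurons, n_synapses, n_nodes,
                    base_bytes=64e6,        # simulator infrastructure per node
                    bytes_per_neuron=1500,  # local neuron objects and buffers
                    bytes_per_synapse=48):  # local synapse objects
    """Predicted memory footprint (bytes) of a single compute node, assuming
    neurons and synapses are distributed evenly across nodes."""
    local_neurons = n_neurons / n_nodes
    local_synapses = n_synapses / n_nodes
    return (base_bytes
            + local_neurons * bytes_per_neuron
            + local_synapses * bytes_per_synapse)

# A local cortical network (1e5 neurons, 1e9 synapses) on 1024 nodes:
m = memory_per_node(1e5, 1e9, 1024)
```

Fitting such coefficients once lets one predict whether the per-synapse or per-node base term will saturate memory at a target scale, before any code is changed.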
Is a 4-bit synaptic weight resolution enough? - Constraints on enabling spike-timing dependent plasticity in neuromorphic hardware
Large-scale neuromorphic hardware systems typically bear the trade-off between detail level and required chip resources. Especially when implementing spike-timing-dependent plasticity, reduction in resources leads to limitations as compared to floating point precision. By design, a natural modification that saves resources would be reducing synaptic weight resolution. In this study, we give an estimate for the impact of synaptic weight discretization on different levels, ranging from random walks of individual weights to computer simulations of spiking neural networks. The FACETS wafer-scale hardware system offers a 4-bit resolution of synaptic weights, which is shown to be sufficient within the scope of our network benchmark. Our findings indicate that increasing the resolution may not even be useful in light of further restrictions of customized mixed-signal synapses. In addition, variations due to production imperfections are investigated and shown to be uncritical in the context of the presented study. Our results represent a general framework for setting up and configuring hardware-constrained synapses. We suggest how weight discretization could be considered for other backends dedicated to large-scale simulations. Thus, our proposition of a good hardware verification practice may give rise to synergy effects between hardware developers and neuroscientists
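A minimal sketch of what 4-bit weight discretization entails, assuming a grid of 16 evenly spaced levels over a fixed weight range with nearest-level rounding (the actual FACETS hardware mapping may differ):

```python
# Sketch of discretizing continuous synaptic weights onto a 4-bit grid, as a
# hardware back end with 16 weight levels would require. The weight range and
# rounding scheme are illustrative assumptions.

N_BITS = 4
N_LEVELS = 2 ** N_BITS  # 16 representable weight values

def discretize(w, w_min=0.0, w_max=1.0):
    """Map a continuous weight to the nearest of 16 evenly spaced levels."""
    w = min(max(w, w_min), w_max)              # clip to the hardware range
    step = (w_max - w_min) / (N_LEVELS - 1)    # spacing between levels
    level = round((w - w_min) / step)          # integer level 0..15
    return w_min + level * step

w_hw = discretize(0.7)   # snaps to the nearest representable level
```

The worst-case rounding error is half a level spacing, which is the quantity a study like this must show to be harmless at the network level.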
Layer-Dependent Attentional Processing by Top-down Signals in a Visual Cortical Microcircuit Model
A vast amount of information about the external world continuously flows into the brain, whereas its capacity to process such information is limited. Attention enables the brain to allocate its resources of information processing to selected sensory inputs for reducing its computational load, and effects of attention have been extensively studied in visual information processing. However, how the microcircuit of the visual cortex processes attentional information from higher areas remains largely unknown. Here, we explore the complex interactions between visual inputs and an attentional signal in a computational model of the visual cortical microcircuit. Our model not only successfully accounts for previous experimental observations of attentional effects on visual neuronal responses, but also predicts contrasting differences in the attentional effects of top-down signals between cortical layers: attention to a preferred stimulus of a column enhances neuronal responses of layers 2/3 and 5, the output stations of cortical microcircuits, whereas attention suppresses neuronal responses of layer 4, the input station of cortical microcircuits. We demonstrate that the specific modulation pattern of layer-4 activity, which emerges from inter-laminar synaptic connections, is crucial for a rapid shift of attention to a currently unattended stimulus. Our results suggest that top-down signals act differently on different layers of the cortical microcircuit
Compositionality in neural control: an interdisciplinary study of scribbling movements in primates
This article discusses the compositional structure of hand movements by analyzing and modeling neural and behavioral data obtained from experiments where a monkey (Macaca fascicularis) performed scribbling movements induced by a search task. Using geometrically based approaches to movement segmentation, it is shown that the hand trajectories are composed of elementary segments that are primarily parabolic in shape. The segments could be categorized into a small number of classes on the basis of decreasing intra-class variance over the course of training. A separate classification of the neural data employing a hidden Markov model showed a coincidence of the neural states with the behavioral categories. An additional analysis of both types of data by a data mining method provided evidence that the neural activity patterns underlying the behavioral primitives were formed by sets of specific and precise spike patterns. A geometric description of the movement trajectories, together with precise neural timing data indicates a compositional variant of a realistic synfire chain model. This model reproduces the typical shapes and temporal properties of the trajectories; hence the structure and composition of the primitives may reflect meaningful behavior
CoCoMac 2.0 and the future of tract-tracing databases
The CoCoMac database contains the results of published axonal tract-tracing studies in the macaque brain. The combined data are used to construct the macaque macro-connectome. We discuss the redevelopment of CoCoMac and compare it to six connectome-related projects: two resources that provide online access to raw tracing data in rodents, a connectome viewer for advanced 3D graphics, a partial but highly detailed rat connectome, a brain data management system that generates custom connectivity matrices, and a software package that covers the complete pipeline from connectivity data to large-scale brain simulations. The 2nd edition of CoCoMac features many enhancements over the original. For example, a search wizard is provided for full access to all tables. Connectivity matrices are computed on demand in a user-selected nomenclature. An online data entry system is available as a preview, and is to become a generic solution for community-driven manual data entry. We end with the question whether tract-tracing will remain the gold standard to uncover the wiring of brains, thereby mentioning developments in human connectome construction, tracer substances, polarized light imaging and serial block-face scanning electron microscopy
Finite post synaptic potentials cause a fast neuronal response
A generic property of the communication between neurons is the exchange of pulses at discrete time points, the action potentials. However, the prevalent theory of spiking neuronal networks of integrate-and-fire model neurons relies on two assumptions: the superposition of many afferent synaptic impulses is approximated by Gaussian white noise, equivalent to a vanishing magnitude of the synaptic impulses, and the transfer of time-varying signals by neurons is assessable by linearization. Going beyond both approximations, we find that in the presence of synaptic impulses the response to transient inputs differs qualitatively from previous predictions. It is instantaneous rather than exhibiting low-pass characteristics, depends non-linearly on the amplitude of the impulse, is asymmetric for excitation and inhibition, and is promoted by a characteristic level of synaptic background noise. These findings resolve contradictions between the earlier theory and experimental observations. Here we review the recent theoretical progress that enabled these insights. We explain why the membrane potential near threshold is sensitive to properties of the afferent noise and show how this shapes the neural response. A further extension of the theory to time evolution in discrete steps quantifies simulation artifacts and yields improved methods to cross-check results
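One way to see why finite impulse amplitudes permit an instantaneous response: any neuron whose membrane potential already lies within one impulse amplitude of threshold fires immediately when an extra impulse arrives, and this fraction shrinks toward zero in the diffusion limit. The sketch below uses a uniform stand-in for the membrane-potential distribution, not the stationary density of the full theory:

```python
import random

# Sketch: fraction of neurons that cross threshold instantaneously when one
# extra impulse of amplitude j (mV) arrives. Membrane potentials are drawn
# from a uniform stand-in distribution below threshold; the threshold value
# and amplitudes are illustrative.

def instant_response_prob(j, v_th=15.0, v_lo=0.0, n=100000, seed=1):
    """Monte Carlo estimate of the probability that an impulse of size j
    pushes the membrane potential across threshold immediately."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if rng.uniform(v_lo, v_th) + j >= v_th)
    return hits / n

p_finite = instant_response_prob(j=1.5)    # finite impulse: sizeable fraction
p_tiny = instant_response_prob(j=0.015)    # near-diffusion limit: far smaller
```

In the diffusion limit the impulse amplitude goes to zero, so this immediate-firing probability vanishes and only the low-pass filtered response of linearized theory remains, which is the contrast the review draws out.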