214 research outputs found

    Cell assembly dynamics of sparsely-connected inhibitory networks: a simple model for the collective activity of striatal projection neurons

    Striatal projection neurons form a sparsely-connected inhibitory network, and this arrangement may be essential for the appropriate temporal organization of behavior. Here we show that a simplified, sparse inhibitory network of Leaky-Integrate-and-Fire neurons can reproduce some key features of striatal population activity, as observed in brain slices [Carrillo-Reid et al., J. Neurophysiology 99 (2008) 1435-1450]. In particular, we develop a new metric to determine the conditions under which sparse inhibitory networks form anti-correlated cell assemblies with time-varying activity of individual cells. We find that under these conditions the network displays an input-specific sequence of cell assembly switching that effectively discriminates similar inputs. Our results support the proposal [Ponzi and Wickens, PLoS Comp Biol 9 (2013) e1002954] that GABAergic connections between striatal projection neurons allow stimulus-selective, temporally-extended sequential activation of cell assemblies. Furthermore, we show how altered intrastriatal GABAergic signaling may produce aberrant network-level information processing in disorders such as Parkinson's and Huntington's diseases.
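    As a rough illustration of the kind of model the abstract describes, the sketch below simulates a sparsely-connected, purely inhibitory network of leaky integrate-and-fire neurons in Python. All parameters (network size, connection probability, time constants, drive) are illustrative assumptions, and the paper's cell-assembly metric and input-discrimination analysis are not reproduced here.

```python
# Minimal sketch of a sparse inhibitory LIF network (illustrative parameters,
# not the published model).
import numpy as np

rng = np.random.default_rng(0)

N = 200              # number of projection neurons (assumed)
p_conn = 0.05        # sparse connection probability (assumed)
tau_m = 20.0         # membrane time constant, ms
v_th, v_reset = 1.0, 0.0
g_inh = 0.3          # inhibitory pulse strength (assumed)
dt, T = 0.1, 1000.0  # integration step and duration, ms

# Sparse, purely inhibitory connectivity: W[i, j] < 0 if neuron j projects to i.
W = -g_inh * (rng.random((N, N)) < p_conn)
np.fill_diagonal(W, 0.0)

v = rng.random(N) * v_th                 # random initial membrane potentials
I_ext = 1.05 + 0.10 * rng.random(N)      # heterogeneous suprathreshold drive

spikes = []                              # (time, neuron) pairs for a raster
for step in range(int(T / dt)):
    fired = v >= v_th
    v[fired] = v_reset
    spikes.extend((step * dt, i) for i in np.flatnonzero(fired))
    # Euler step: leak toward rest, constant drive, delta-pulse inhibition.
    v += dt / tau_m * (-v + I_ext) + W @ fired.astype(float)
```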

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the as yet unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
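    One recurring technical question in this literature is whether an empirical distribution (for example, of neuronal avalanche sizes) really follows a power law. The sketch below shows a standard maximum-likelihood estimate of a continuous power-law exponent on synthetic data; it is a generic illustration, not an analysis from the essay.

```python
# Maximum-likelihood power-law exponent estimate (continuous case) on
# synthetic "avalanche sizes"; illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Draw samples from p(x) ~ x^(-alpha) for x >= x_min via inverse transform.
alpha_true, x_min, n = 2.5, 1.0, 10_000
u = rng.random(n)
sizes = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# MLE for the exponent of a continuous power law above x_min.
alpha_hat = 1.0 + n / np.sum(np.log(sizes / x_min))
print(f"estimated exponent: {alpha_hat:.2f} (true value {alpha_true})")
```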

    An optimised deep spiking neural network architecture without gradients

    We present an end-to-end trainable, modular, event-driven neural architecture that uses local synaptic and threshold adaptation rules to perform transformations between arbitrary spatio-temporal spike patterns. The architecture represents a highly abstracted model of existing Spiking Neural Network (SNN) architectures. The proposed Optimized Deep Event-driven Spiking neural network Architecture (ODESA) can simultaneously learn hierarchical spatio-temporal features at multiple arbitrary time scales. ODESA performs online learning without the use of error back-propagation or the calculation of gradients. Through the use of simple local adaptive selection thresholds at each node, the network rapidly learns to appropriately allocate its neuronal resources at each layer for any given problem without using a real-valued error measure. These adaptive selection thresholds are the central feature of ODESA, ensuring network stability and remarkable robustness to noise as well as to the selection of initial system parameters. Network activations are inherently sparse due to a hard Winner-Take-All (WTA) constraint at each layer. We evaluate the architecture on existing spatio-temporal datasets, including the spike-encoded IRIS and TIDIGITS datasets, as well as a novel set of tasks based on International Morse Code that we created. These tests demonstrate the hierarchical spatio-temporal learning capabilities of ODESA. Through these tests, we demonstrate that ODESA can optimally solve practical and highly challenging hierarchical spatio-temporal learning tasks with the minimum possible number of computing nodes.
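    The abstract does not spell out the ODESA update equations, so the sketch below is only a toy illustration of the named ingredients: a hard winner-take-all layer whose nodes adapt their weights and selection thresholds with purely local rules and no gradients. The learning rates, input statistics, and specific update forms are assumptions, not the published algorithm.

```python
# Toy hard-WTA layer with local threshold adaptation (NOT the ODESA rule).
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out = 16, 4
W = rng.random((n_out, n_in))                 # feed-forward weights
W /= np.linalg.norm(W, axis=1, keepdims=True)
thresh = np.full(n_out, 0.5)                  # per-node selection thresholds
eta_w, eta_t = 0.05, 0.01                     # learning rates (assumed)

for _ in range(2000):
    # Random sparse binary vectors stand in for event-driven spike features.
    x = (rng.random(n_in) < 0.2).astype(float)
    act = W @ x
    winner = int(np.argmax(act))              # hard winner-take-all
    if act[winner] >= thresh[winner]:
        # The winner pulls its weights toward the input (local, Hebbian-style)
        # and raises its threshold, becoming more selective for this feature.
        W[winner] += eta_w * (x - W[winner])
        W[winner] /= np.linalg.norm(W[winner])
        thresh[winner] += eta_t * (act[winner] - thresh[winner])
    else:
        # Nothing claimed the input: relax all thresholds a little so the
        # layer can allocate a node to novel patterns over time.
        thresh -= eta_t
```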

    Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers

    This PhD thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers. This is opposed to the view, prevalent among neuroscientists today, of biological neurons as simple and mainly spatial pattern recognizers. In this thesis, I attempt to demonstrate that this is an important distinction, predominantly because the above-mentioned computational properties of single neurons have far-reaching implications for the various brain circuits that neurons compose and for how information is encoded by neuronal activity in the brain. Namely, these particular "low-level" details at the single-neuron level have substantial system-wide ramifications. In the introduction we highlight the main components that comprise a neural microcircuit that can perform useful computations and illustrate the interdependence of these components from a system perspective. In chapter 1 we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns using a very simple, biologically plausible learning rule. In chapter 3, we use the differentiable deep network analog of a realistic cortical neuron as a tool to approximate the gradient of the output of the neuron with respect to its input, and use this capability in an attempt to teach the neuron to perform a nonlinear XOR operation. In chapter 4 we expand on chapter 3 to describe the extension of our ideas to neuronal networks composed of many realistic biological spiking neurons that represent either small microcircuits or entire brain regions.
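    Chapter 2's specific learning rule is not given in the abstract; as a stand-in, the sketch below uses a tempotron-style rule (Gütig and Sompolinsky, 2006), one simple, biologically plausible way for a single neuron to learn to fire for one spatio-temporal input spike pattern and stay silent for another. Kernel time constants, learning rate, and threshold are illustrative assumptions.

```python
# Tempotron-style single-neuron learning of spatio-temporal spike patterns.
import numpy as np

rng = np.random.default_rng(3)

n_syn, T = 10, 100.0                 # synapses and pattern duration (ms)
tau_m, tau_s = 15.0, 3.75            # PSP kernel time constants (assumed)
t_grid = np.arange(0.0, T, 1.0)

def psp(delay):
    """Double-exponential postsynaptic potential, zero for negative delays."""
    d = np.maximum(delay, 0.0)
    k = np.exp(-d / tau_m) - np.exp(-d / tau_s)
    return np.where(delay >= 0.0, k, 0.0)

# Normalize so the kernel's analytic peak equals 1.
t_star = tau_m * tau_s / (tau_m - tau_s) * np.log(tau_m / tau_s)
V0 = 1.0 / psp(np.array([t_star]))[0]

def potential(w, spike_times):
    """Membrane trace given one presynaptic spike time per synapse."""
    return V0 * (w[:, None] * psp(t_grid[None, :] - spike_times[:, None])).sum(axis=0)

# Two fixed spatio-temporal patterns: fire for A (label 1), stay silent for B.
patterns = [rng.uniform(0.0, T, n_syn) for _ in range(2)]
labels = [1, 0]
w = rng.normal(0.0, 0.05, n_syn)
theta, lr = 1.0, 0.05

for _ in range(500):
    for spike_times, label in zip(patterns, labels):
        v = potential(w, spike_times)
        if (v.max() >= theta) != bool(label):
            # Nudge each weight by its PSP value at the time of peak potential,
            # pushing the peak above (label 1) or below (label 0) threshold.
            t_peak = t_grid[np.argmax(v)]
            w += (lr if label else -lr) * V0 * psp(t_peak - spike_times)
```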

    Comparative Connectomics.

    We introduce comparative connectomics, the quantitative study of cross-species commonalities and variations in brain network topology, which aims to discover general principles of the network architecture of nervous systems and to identify species-specific features of brain connectivity. By comparing connectomes derived from simple to more advanced species, we identify two conserved themes of wiring: the tendency to organize network topology into communities that serve specialized functionality, and the general drive to enable high topological integration by investing neural resources in short communication paths, hubs, and rich clubs. Within the space of wiring possibilities that conform to these common principles, we argue that differences in connectome organization between closely related species support adaptations in cognition and behavior.
    We thank Lianne Scholtens, Jim Rilling, and Tom Schoenemann for discussions and comments. MPvdH was supported by a VENI grant (#451-12-001) from the Netherlands Organization for Scientific Research (NWO) and a Fellowship of MQ. This is the author accepted manuscript; the final version is available from Elsevier via https://doi.org/10.1016/j.tics.2016.03.00
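    The two conserved wiring themes correspond to standard graph measures, which the sketch below computes with networkx on a synthetic small-world graph standing in for a connectome: community structure and modularity for the first theme, and characteristic path length, hub degrees, and the rich-club coefficient for the second. This is a generic illustration, not the authors' data or analysis pipeline.

```python
import networkx as nx

# Synthetic small-world graph standing in for a connectome (not real data).
G = nx.connected_watts_strogatz_graph(n=100, k=8, p=0.1, seed=0)

# Theme 1: modular community structure.
communities = nx.algorithms.community.greedy_modularity_communities(G)
Q = nx.algorithms.community.modularity(G, communities)

# Theme 2: integration through short paths, hubs, and a rich club.
L = nx.average_shortest_path_length(G)
hub_degrees = sorted((d for _, d in G.degree), reverse=True)[:10]
rc = nx.rich_club_coefficient(G, normalized=False)   # degree -> coefficient
k_top = max(rc)

print(f"{len(communities)} communities, modularity Q = {Q:.2f}")
print(f"characteristic path length = {L:.2f}")
print(f"ten highest node degrees: {hub_degrees}")
print(f"rich-club coefficient at k = {k_top}: {rc[k_top]:.2f}")
```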

    Micro-connectomics: probing the organization of neuronal networks at the cellular scale.

    Defining the organizational principles of neuronal networks at the cellular scale, or micro-connectomics, is a key challenge of modern neuroscience. In this Review, we focus on graph theoretical parameters of micro-connectome topology, often informed by economical principles that conceptually originated with Ramón y Cajal's conservation laws. First, we summarize results from studies in intact small organisms and in samples from larger nervous systems. We then evaluate the evidence for an economical trade-off between biological cost and functional value in the organization of neuronal networks. Various results suggest that many aspects of neuronal network organization are indeed the outcome of competition between these two fundamental selection pressures.
    This work was supported by the National Institute for Health Research (NIHR) Cambridge Biomedical Research Centre. This is the author accepted manuscript. It is currently under an indefinite embargo pending publication by the Nature Publishing Group.
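    The cost-value trade-off can be made concrete with a toy spatial example: the sketch below compares a network wired only with the shortest possible connections against a random network with the same number of edges, measuring total wiring length (a proxy for biological cost) and global efficiency (a proxy for functional value). The construction and parameters are assumptions for illustration, not an analysis from the Review.

```python
# Toy cost-value comparison for spatially embedded networks.
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n, n_edges = 60, 180
pos = rng.random((n, 2))                           # neurons in a unit square
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]

def network_from(edges):
    """Build a graph from an edge list and report its total wiring length."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    G.add_edges_from(edges)
    return G, sum(dist[i, j] for i, j in edges)

# Cost-minimizing wiring: keep only the shortest candidate connections.
cheap_edges = sorted(pairs, key=lambda e: dist[e])[:n_edges]
# Cost-blind wiring: the same number of edges chosen at random.
rand_edges = [pairs[k] for k in rng.choice(len(pairs), n_edges, replace=False)]

for name, edges in [("cheapest", cheap_edges), ("random", rand_edges)]:
    G, cost = network_from(edges)
    eff = nx.global_efficiency(G)                  # topological "value"
    print(f"{name:8s} wiring cost = {cost:6.1f}   global efficiency = {eff:.2f}")
```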