82 research outputs found

    Functional Brain Oscillations: How Oscillations Facilitate Information Representation and Code Memories

    The overall aim of the modelling work within this thesis is to lend theoretical evidence to empirical findings from the brain oscillations literature. We therefore hope to solidify and expand the notion that precise spike timing through oscillatory mechanisms facilitates communication, learning, information processing and information representation within the brain. The primary hypothesis of this thesis is that it can be shown computationally that neural desynchronisations can allow information content to emerge. We do this using two neural network models, the first of which shows how differential rates of neuronal firing can indicate when a single item is being actively represented. The second model expands this notion by creating a complementary timing mechanism, thus enabling the emergence of qualitative temporal information when a pattern of items is being actively represented. The secondary hypothesis of this thesis is that it can also be shown computationally that oscillations might play a functional role in learning. Both of the models presented within this thesis propose a sparsely coded and fast-learning hippocampal region that engages in the binding of novel episodic information. The first model demonstrates how active cortical representations enable learning to occur in their hippocampal counterparts via a phase-dependent learning rule. The second model expands this notion, creating hierarchical temporal sequences to encode the relative temporal position of cortical representations. We demonstrate in both of these models how cortical brain oscillations might provide a gating function to the representation of information, whilst complementary hippocampal oscillations might provide distinct phasic reference points for learning.
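The phase-dependent learning rule mentioned above can be illustrated with a toy sketch (our own construction, not the thesis implementation, and all numbers are assumptions): a Hebbian weight update is applied only while an intrinsic oscillation sits near its preferred learning phase.

```python
# Toy sketch of a phase-dependent learning rule (not the thesis model):
# a Hebbian update is gated by the instantaneous phase of a theta-band
# oscillation, so learning occurs only near the preferred phase peak.
import math

def phase_dependent_update(w, pre, post, t, freq_hz=8.0, lr=0.1):
    """Hebbian update gated by the phase of an 8 Hz oscillation."""
    phase = (2.0 * math.pi * freq_hz * t) % (2.0 * math.pi)
    gate = max(0.0, math.cos(phase))   # nonzero only near the phase peak
    return w + lr * gate * pre * post

w = 0.0
for step in range(100):
    t = step * 0.01                    # 10 ms steps, 1 s in total
    w = phase_dependent_update(w, pre=1.0, post=1.0, t=t)
print(round(w, 3))  # the weight grows, but only during favourable phases
```

At the oscillation trough the gate is zero, so coincident pre- and postsynaptic activity there leaves the weight unchanged; this is the sense in which the oscillation provides a distinct phasic reference point for learning.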

    The Sync-Fire/deSync Model: modelling the reactivation of dynamic memories from cortical alpha oscillations

    We propose a neural network model to explore how humans can learn and accurately retrieve temporal sequences, such as melodies, movies, or other dynamic content. We identify target memories by their neural oscillatory signatures, as shown in recent human episodic memory paradigms. Our model comprises three plausible components for the binding of temporal content, where each component imposes unique limitations on the encoding and representation of that content. A cortical component actively represents sequences through the disruption of an intrinsically generated alpha rhythm, where a desynchronisation marks information-rich operations as the literature predicts. A binding component converts each event into a discrete index, enabling repetitions through a sparse encoding of events. A timing component – consisting of an oscillatory “ticking clock” made up of hierarchical synfire chains – discretely indexes a moment in time. By encoding the absolute timing between discretised events, we show how one can use cortical desynchronisations to dynamically detect unique temporal signatures as they are reactivated in the brain. We validate this model by simulating a series of events where sequences are uniquely identifiable by analysing phasic information, as several recent EEG/MEG studies have shown. As such, we show how one can encode and retrieve complete episodic memories where the quality of such memories is modulated by the following: alpha gatekeepers to content representation; binding limitations that induce a blink in temporal perception; and nested oscillations that provide preferential learning phases in order to temporally sequence events.
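The hierarchical "ticking clock" can be sketched in a heavily reduced form (our own abstraction, not the authors' code): each synfire chain is collapsed to a pointer that advances one pool per tick, and a carry from the fast chain into a slower one makes the pair of active pools a unique index of the current moment, like the hands of a clock.

```python
# Reduced sketch of a two-level synfire "ticking clock" (assumed toy):
# each chain is a pointer advancing one pool per tick; when the fast
# chain wraps, the slow chain advances, so (slow, fast) uniquely
# indexes a discrete moment in time.

class SynfireClock:
    def __init__(self, fast_pools=10, slow_pools=10):
        self.fast_pools = fast_pools
        self.slow_pools = slow_pools
        self.fast = 0   # currently active pool of the fast chain
        self.slow = 0   # currently active pool of the slow chain

    def tick(self):
        """Advance the fast chain one pool; carry into the slow chain."""
        self.fast = (self.fast + 1) % self.fast_pools
        if self.fast == 0:                        # fast chain wrapped
            self.slow = (self.slow + 1) % self.slow_pools

    def moment(self):
        """Discrete index of the current moment."""
        return self.slow * self.fast_pools + self.fast

clock = SynfireClock()
stamps = []
for t in range(25):
    if t in (3, 17):          # two "events" to be bound to their time
        stamps.append(clock.moment())
    clock.tick()

print(stamps)  # -> [3, 17]: each event receives a unique temporal index
```

Binding an event to the clock state active at encoding is what allows the absolute timing between discretised events to be recovered at retrieval.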

    Decorrelation of neural-network activity by inhibitory feedback

    Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent theoretical and experimental studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. By means of a linear network model and simulations of networks of leaky integrate-and-fire neurons, we show that shared-input correlations are efficiently suppressed by inhibitory feedback. To elucidate the effect of feedback, we compare the responses of the intact recurrent network and systems where the statistics of the feedback channel are perturbed. The suppression of spike-train correlations and population-rate fluctuations by inhibitory feedback can be observed both in purely inhibitory and in excitatory-inhibitory networks. The effect is fully understood by a linear theory and is already apparent at the macroscopic level of the population-averaged activity. At the microscopic level, shared-input correlations are suppressed by spike-train correlations: in purely inhibitory networks, they are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
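The linear intuition can be sketched with a toy rate model (our own construction, with assumed parameters, not the paper's network): every unit receives the same shared input plus private noise, and inhibitory feedback of the population mean subtracts the shared fluctuation, leaving pairwise correlations near zero or slightly negative.

```python
# Toy linear rate model (assumed parameters, not the paper's network):
# units share a common input; instantaneous inhibitory feedback of the
# population mean cancels the shared fluctuation and decorrelates pairs.
import random

def simulate(n_units=20, steps=20000, g=0.0, seed=1):
    """Return activity traces of two units, with feedback gain g."""
    rng = random.Random(seed)
    t0, t1 = [], []
    for _ in range(steps):
        shared = rng.gauss(0.0, 1.0)                    # common input
        priv = [rng.gauss(0.0, 1.0) for _ in range(n_units)]
        # x_i = shared + priv_i - g * mean(x), solved self-consistently:
        mean_x = (shared + sum(priv) / n_units) / (1.0 + g)
        x = [shared + p - g * mean_x for p in priv]
        t0.append(x[0]); t1.append(x[1])
    return t0, t1

def corr(a, b):
    """Pearson correlation coefficient of two equal-length traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

open_loop = corr(*simulate(g=0.0))    # no feedback: strong correlation
closed_loop = corr(*simulate(g=4.0))  # inhibitory feedback suppresses it
print(round(open_loop, 2), round(closed_loop, 2))
```

With equal shared and private noise power the open-loop correlation sits near 0.5, while the closed-loop value is close to zero, mirroring the cancellation of shared-input correlations by the feedback channel.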

    A reafferent and feed-forward model of song syntax generation in the Bengalese finch

    Adult Bengalese finches generate a variable song that obeys a distinct and individual syntax. The syntax is gradually lost over a period of days after deafening and is recovered when hearing is restored. We present a spiking neuronal network model of song syntax generation and its loss, based on the assumption that the syntax is stored in reafferent connections from the auditory to the motor control area. Propagating synfire activity in HVC codes for individual syllables of the song, and priming signals from the auditory network reduce the competition between syllables to allow only those transitions that are permitted by the syntax. Both imprinting of song syntax within HVC and the interaction of the reafferent signal with an efference copy of the motor command are sufficient to explain the gradual loss of syntax in the absence of auditory feedback. The model also reproduces for the first time experimental findings on the influence of altered auditory feedback on song syntax generation, and predicts song- and species-specific low-frequency components in the LFP. This study illustrates how sequential compositionality following a defined syntax can be realized in networks of spiking neurons.
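The priming idea can be reduced to a toy competition model (our own construction; the syllables, transition table, and weights are all hypothetical): syllables compete for activation, reafferent priming boosts only syntax-permitted transitions, and removing the priming (deafening) lets the competition degrade toward chance.

```python
# Hypothetical toy of reafferent priming (not the paper's spiking model):
# auditory priming biases the syllable competition toward transitions
# permitted by a stored syntax; without priming, transitions randomise.
import random

SYLLABLES = ["a", "b", "c", "d"]
SYNTAX = {"a": {"b"}, "b": {"c", "d"}, "c": {"a"}, "d": {"a"}}  # permitted

def next_syllable(current, priming, rng):
    # every syllable competes with baseline weight 1; priming adds a
    # strong bias to the transitions the syntax permits
    weights = [1.0 + (priming if s in SYNTAX[current] else 0.0)
               for s in SYLLABLES]
    return rng.choices(SYLLABLES, weights=weights)[0]

def sing(priming, length=2000, seed=0):
    rng = random.Random(seed)
    song, current = [], "a"
    for _ in range(length):
        current = next_syllable(current, priming, rng)
        song.append(current)
    return song

def syntax_score(song):
    """Fraction of transitions permitted by the stored syntax."""
    ok = sum(1 for u, v in zip(song, song[1:]) if v in SYNTAX[u])
    return ok / (len(song) - 1)

hearing = syntax_score(sing(priming=50.0))  # intact reafferent priming
deaf = syntax_score(sing(priming=0.0))      # priming lost after deafening
print(round(hearing, 2), round(deaf, 2))
```

The hearing bird's song follows the syntax almost perfectly, while the "deafened" run obeys it only as often as chance allows, which is the qualitative signature the model explains.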

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
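The evaluation step can be illustrated with a minimal sketch (our own construction, not the framework's actual API): results from a virtual or prototype hardware device are compared against a reference software simulation of the same benchmark, here reduced to per-neuron firing rates and a tolerance check.

```python
# Illustrative sketch of the benchmark-evaluation step (hypothetical
# helper functions, not the framework's API): compare hardware-generated
# spike data against a reference software simulation.

def firing_rates(spike_times, duration_s):
    """Map neuron id -> firing rate in Hz from lists of spike times."""
    return {nid: len(times) / duration_s for nid, times in spike_times.items()}

def compare(reference, hardware, duration_s, tolerance_hz=1.0):
    """Return per-neuron rate deviations and a pass/fail verdict."""
    ref = firing_rates(reference, duration_s)
    hw = firing_rates(hardware, duration_s)
    deviations = {nid: abs(ref[nid] - hw.get(nid, 0.0)) for nid in ref}
    return deviations, max(deviations.values()) <= tolerance_hz

# reference simulator vs. a slightly distorted hardware emulation
reference = {0: [0.1, 0.5, 0.9], 1: [0.2, 0.7]}
hardware = {0: [0.12, 0.48, 0.88], 1: [0.25]}
deviations, ok = compare(reference, hardware, duration_s=1.0)
print(deviations, ok)
```

In the real workflow the comparison would operate on full PyNN-described benchmark models and richer statistics than rates, but the structure – deploy, simulate twice, diff – is the same.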

    Computing with Synchrony


    Functional relevance of inhibitory and disinhibitory circuits in signal propagation in recurrent neuronal networks

    Cell assemblies are considered to be physiological as well as functional units in the brain. A repetitive and stereotypical sequential activation of many neurons has been observed, but the mechanisms underlying it are not well understood. Feedforward networks, such as synfire chains, with pools of excitatory neurons unidirectionally connected and facilitating signal transmission in a cascade-like fashion, were proposed to model such sequential activity. When embedded in a recurrent network, these were shown to destabilise the whole network’s activity, challenging the suitability of the model. Here, we investigate a feedforward chain of excitatory pools enriched by inhibitory pools that provide disynaptic feedforward inhibition. We show that when embedded in a recurrent network of spiking neurons, such an augmented chain is capable of robust signal propagation. We then investigate the influence of overlapping two chains on the signal transmission as well as the stability of the host network. While shared excitatory pools turn out to be detrimental to global stability, inhibitory overlap implicitly realises the motif of lateral inhibition, which, if moderate, maintains stability, but if substantial, silences the whole network activity including the signal. Addition of a disinhibitory pathway along the chain proves to rescue the signal transmission by transforming a strong inhibitory wave into a disinhibitory one, which specifically guards the excitatory pools from receiving excessive inhibition and thereby allows them to remain responsive to the forthcoming activation. Disinhibitory circuits not only improve the signal transmission, but can also control it via a gating mechanism. We demonstrate that by manipulating the firing threshold of the disinhibitory neurons, the signal transmission can be enabled or completely blocked. This mechanism corresponds to cholinergic modulation, which was shown to be signalled by volume as well as phasic transmission and to variably target classes of neurons. Furthermore, we show that modulation of the feedforward inhibition circuit can promote the generation of spontaneous replay in the absence of external inputs. This mechanism, however, tends to also cause global instabilities. Overall, these results underscore the importance of inhibitory neuron populations in controlling signal propagation in cell assemblies as well as global stability. Specific inhibitory circuits, when controlled by neuromodulatory systems, can robustly guide or block the signals and invoke replay. This amounts to evidence that the population of interneurons is diverse and can be best categorised by neurons’ specific circuit functions as well as their responsiveness to neuromodulators.
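The gating mechanism can be reduced to a toy sketch (our own binary-unit abstraction, not the thesis's spiking model): a disinhibitory unit silences the feedforward inhibition whenever its firing threshold is low enough for the neuromodulatory drive to recruit it, so raising that threshold blocks propagation along the chain.

```python
# Toy sketch of disinhibitory gating (binary units, assumed numbers,
# not the thesis's spiking network): a disinhibitory unit suppresses
# the feedforward inhibition onto each pool; raising its firing
# threshold (less cholinergic drive) closes the gate.

def propagate(n_pools, disinhib_threshold, drive=1.0):
    """Return activity of the last pool after a pulse into pool 0."""
    e = 1.0  # pulse into the first excitatory pool
    for _ in range(n_pools - 1):
        inh = 1.0          # strong feedforward inhibition onto next pool
        # the disinhibitory unit fires iff the modulatory drive exceeds
        # its threshold, and then silences the feedforward inhibition
        if drive >= disinhib_threshold:
            inh = 0.0
        e = 1.0 if (e - inh) > 0.5 else 0.0   # next pool's activation
    return e

print(propagate(10, disinhib_threshold=0.5))  # gate open: signal arrives (1.0)
print(propagate(10, disinhib_threshold=2.0))  # gate closed: signal blocked (0.0)
```

A single scalar (the disinhibitory threshold) thus switches the chain between transmitting and blocking, which is the essence of the proposed neuromodulatory gate.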

    Spikes, synchrony, sequences and Schistocerca's sense of smell


    Repeating Spatial-Temporal Motifs of CA3 Activity Dependent on Engineered Inputs from Dentate Gyrus Neurons in Live Hippocampal Networks.

    Anatomical and behavioral studies, and in vivo and slice electrophysiology of the hippocampus, suggest specific functions of the dentate gyrus (DG) and the CA3 subregions, but the underlying activity dynamics and repeatability of information processing remain poorly understood. To approach this problem, we engineered separate living networks of DG and CA3 neurons that develop connections through 51 tunnels for axonal communication. Growing these networks on top of an electrode array enabled us to determine whether the subregion dynamics were separable and repeatable. We found spontaneous development of polarized propagation of 80% of the activity in the native direction from DG to CA3 and different spike and burst dynamics for these subregions. Spatial-temporal differences emerged when the relationships of target CA3 activity were categorized according to the number and timing of inputs from the apposing network. Compared to times of CA3 activity when there was no recorded tunnel input, DG input led to CA3 activity bursts that were 7× more frequent, increased in amplitude and extended in temporal envelope. Logistic regression indicated that a high number of tunnel inputs predicts CA3 activity with 90% sensitivity and 70% specificity. Compared to no tunnel input, patterns of >80% tunnel inputs from DG specified different patterns of first-to-fire neurons in the CA3 target well. Clustering dendrograms revealed repeating motifs of three or more patterns at up to 17 sites in CA3 that were importantly associated with specific spatial-temporal patterns of tunnel activity. The number of these motifs recorded in 3 min was significantly higher than shuffled spike activity and not seen above chance in control networks in which CA3 was apposed to CA3 or DG to DG. Together, these results demonstrate spontaneous input-dependent repeatable coding of distributed activity in CA3 networks driven by engineered inputs from DG networks. These functional configurations at measured times of activation (motifs) emerge from anatomically accurate feed-forward connections from DG through tunnels to CA3.
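The logistic-regression prediction can be illustrated in a stripped-down form (the coefficients below are made up for illustration; the abstract does not report them): the number of tunnel inputs from DG is mapped through a logistic function to the probability of a CA3 activity event.

```python
# Illustrative logistic model with made-up coefficients (the study's
# fitted values are not given in the abstract): the count of tunnel
# inputs from DG predicts the probability of CA3 activity.
import math

def p_ca3_active(n_tunnel_inputs, w=0.8, b=-3.0):
    """Logistic prediction: more tunnel inputs -> higher CA3 probability."""
    return 1.0 / (1.0 + math.exp(-(w * n_tunnel_inputs + b)))

# classify "CA3 active" when the predicted probability exceeds 0.5
for n in (0, 2, 4, 8):
    print(n, round(p_ca3_active(n), 2))
```

Sweeping a decision threshold over such predictions against the recorded CA3 activity is what yields the reported sensitivity and specificity figures.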

    Learning spatiotemporal signals using a recurrent spiking network that discretizes time

    Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neural substrate may be used by the brain to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic, biologically plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory biophysical neurons drives a read-out layer: the dynamics of the driver recurrent network are trained to encode time, which is then mapped through the read-out neurons to encode another dimension, such as space or phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons, which follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant, and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
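The architecture can be sketched in a heavily reduced form (our own abstraction, not the paper's biophysical model): the recurrent network is collapsed to a "clock" whose state one-hot codes the current time bin, and Hebbian learning binds each bin to whichever read-out neuron a teacher signal activates, so stepping the clock replays the spatial sequence.

```python
# Reduced sketch of the clock-plus-readout idea (assumed abstraction,
# not the paper's spiking model): the recurrent network is a one-hot
# time-bin clock; Hebbian learning binds bins to read-out neurons.

N_BINS, N_READOUT = 8, 4
target = [0, 0, 1, 2, 2, 3, 1, 0]   # spatial pattern to learn, one per bin

# Hebbian learning: potentiate the weight between the active time bin
# and the read-out neuron driven by the teacher signal
w = [[0.0] * N_READOUT for _ in range(N_BINS)]
for t in range(N_BINS):
    w[t][target[t]] += 1.0

def replay():
    """Spontaneous replay: step the clock, read out the strongest unit."""
    return [max(range(N_READOUT), key=lambda j: w[t][j])
            for t in range(N_BINS)]

print(replay())  # reproduces the learned sequence
```

Because time and content are decoupled, the same clock can be re-bound to a different target sequence by relearning only the read-out weights, which is the reuse of a single neural substrate the abstract highlights.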