510 research outputs found

    Towards a Brain-inspired Information Processing System: Modelling and Analysis of Synaptic Dynamics

    Biological neural systems (BNS) in general, and the central nervous system (CNS) in particular, exhibit strikingly efficient computational power along with an extremely flexible and adaptive basis for acquiring and integrating new knowledge. Gaining more insight into the actual mechanisms of information processing within the BNS and their computational capabilities is a core objective of modern computer science, the computational sciences and neuroscience. Among the main reasons for this effort to understand the brain is the prospect of improving the quality of life of people suffering from partial or complete loss of brain or spinal cord functions. Brain-computer interfaces (BCI), neural prostheses and similar approaches are potential solutions, either helping these patients through therapy or advancing rehabilitation. There is, however, a significant lack of knowledge regarding the basic information processing within the CNS. Without a better understanding of the fundamental operations or sequences leading to cognitive abilities, applications like BCI or neural prostheses will keep struggling to find a proper and systematic way to help patients in this regard. To gain more insight into these basic information processing methods, this thesis presents an approach that makes a formal distinction between the essence of being intelligent (as in the brain) and the classical class of artificial intelligence, e.g. expert systems. This approach investigates the underlying mechanisms that allow the CNS to perform a massive number of computational tasks with sustainable efficiency and flexibility. This is the essence of being intelligent: being able to learn, adapt and invent. The approach used in the thesis at hand is based on the hypothesis that the brain, or more specifically a biological neural circuit in the CNS, is a dynamic system (network) that features emergent capabilities.
These capabilities can be imported into spiking neural networks (SNN) by emulating the dynamic neural system. Emulating the dynamic system requires simulating both the inner workings of the system and the framework in which it performs information processing tasks. Thus, this work comprises two main parts. The first part introduces a proper and novel dynamic synaptic model as a vital constituent of the inner workings of the dynamic neural system. This model represents a balanced integration between the needed biophysical detail and computational economy. Being a biophysical model is important so that the abilities of the target dynamic system can be inherited, and being simple is needed to allow for large-scale simulations and for future hardware implementation. In addition, the energy-related aspects of synaptic dynamics are studied and linked to the behaviour of networks seeking stable states of activity. The second part of the thesis is consequently concerned with importing the processing framework of the dynamic system into the environment of SNN. This part of the study investigates the well-established concept of binding by synchrony to solve the information binding problem, and proposes the concept of synchrony states within SNN. The concept of computing with states is extended to investigate a computational model based on finite-state machines and reservoir computing. Biologically plausible validations of the introduced model and frameworks are performed. The results and discussion of these validations indicate that this study represents a significant advance toward deepening our knowledge of the mechanisms underpinning the computational power of the CNS. Furthermore, it shows a roadmap for adopting biological computational capabilities in computational science in general and in biologically inspired spiking neural networks in particular.
Large-scale simulations and the development of neuromorphic hardware are work in progress and future work. Among the applications of the introduced work are neural prostheses and bionic automation systems.
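The abstract does not spell out the synaptic model itself; as an illustration of what a "biophysically grounded yet computationally inexpensive" dynamic synapse can look like, here is a minimal sketch in the spirit of the well-known Tsodyks-Markram short-term plasticity formulation. All parameters and the particular facilitation variant are hypothetical, not taken from the thesis:

```python
def tsodyks_markram(spike_times, dt=0.1, t_max=200.0,
                    U=0.5, tau_rec=100.0, tau_fac=50.0, A=1.0):
    """Phenomenological dynamic synapse (Tsodyks-Markram style sketch).

    x: fraction of available synaptic resources (depression variable)
    u: utilization of resources (facilitation variable)
    Returns the synaptic efficacy A * u * x evaluated at each spike.
    Times are in ms; forward-Euler integration with step dt.
    """
    n = int(t_max / dt)
    x, u = 1.0, U
    spike_steps = {round(t / dt) for t in spike_times}
    efficacies = []
    for i in range(n):
        # continuous recovery of resources and decay of utilization
        x += dt * (1.0 - x) / tau_rec
        u += dt * (U - u) / tau_fac
        if i in spike_steps:
            u += U * (1.0 - u)          # facilitation jump on a spike
            efficacies.append(A * u * x)
            x -= u * x                  # resource depletion by release
    return efficacies

# A regular 20 Hz train: efficacy depresses over successive spikes
eff = tsodyks_markram([10.0, 60.0, 110.0, 160.0])
```

With these (hypothetical) time constants the synapse is net-depressing: each successive spike in the train finds fewer recovered resources, so the per-spike efficacy falls, which is the kind of activity-dependent behaviour a static weight cannot express.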

    The spectro-contextual encoding and retrieval theory of episodic memory.

    The spectral fingerprint hypothesis, which posits that different frequencies of oscillations underlie different cognitive operations, provides one account of how interactions between brain regions support perceptual and attentive processes (Siegel et al., 2012). Here, we explore and extend this idea to the domain of human episodic memory encoding and retrieval. Incorporating findings from the synaptic to the cognitive levels of organization, we argue that spectrally precise cross-frequency coupling and phase synchronization promote the formation of hippocampal-neocortical cell assemblies that form the basis for episodic memory. We suggest that both cell assembly firing patterns and the global pattern of brain oscillatory activity within hippocampal-neocortical networks represent the contents of a particular memory. Drawing upon the ideas of context reinstatement and multiple trace theory, we argue that memory retrieval is driven by internal and/or external factors that recreate the frequency-specific oscillatory patterns that occur during episodic encoding. These ideas are synthesized into a novel model of episodic memory (the spectro-contextual encoding and retrieval theory, or "SCERT") that provides several testable predictions for future research.

    Brain at work: time, sparseness and superposition principles

    Abstract: Many studies have explored mechanisms through which the brain encodes sensory inputs to allow coherent behavior. The brain could identify stimuli via a hierarchical stream of activity leading to a cardinal neuron responsive to one particular object. The opportunity to record from numerous neurons offered investigators the capability of examining the functioning of many cells simultaneously. These approaches suggested encoding processes that are parallel rather than serial. Binding the many features of a stimulus may be accomplished through an induced synchronization of cells' action potentials. These interpretations are supported by experimental data and offer many advantages, but also several shortcomings. We argue for a coding mechanism based on a sparse synchronization paradigm. We show that synchronization of spikes is a fast and efficient mode of encoding the representation of objects based on feature bindings. We introduce the view that sparse synchronization coding presents an interesting avenue for probing brain encoding mechanisms, as it allows the functional establishment of multilayered and time-conditioned neuronal networks, or multislice networks. We propose a model based on integrate-and-fire spiking neurons.
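The integrate-and-fire neuron mentioned at the end of the abstract is a standard model; a minimal leaky integrate-and-fire sketch (textbook form, with hypothetical parameter values, not the authors' specific configuration) looks like this:

```python
def lif_simulate(input_current, dt=0.1, tau_m=20.0, v_rest=-70.0,
                 v_thresh=-54.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire neuron driven by an input current.

    Membrane dynamics: tau_m * dV/dt = -(V - v_rest) + r_m * I,
    integrated with forward Euler; on crossing v_thresh the neuron
    emits a spike and the voltage is reset to v_reset.
    Units: ms, mV, nA, megaohm. Returns spike times and the trace.
    """
    v = v_rest
    spikes, trace = [], []
    for i, current in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * current) / tau_m
        if v >= v_thresh:
            spikes.append(i * dt)   # spike time in ms
            v = v_reset             # reset after the spike
        trace.append(v)
    return spikes, trace

# Constant 2 nA drive for 100 ms produces a regular spike train
current = [2.0] * 1000
spike_times, _ = lif_simulate(current)
```

In a synchrony-based scheme like the one argued for here, what carries information is not each neuron's rate but the coincidence of such spike times across a sparse subset of neurons.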

    A Model of Stimulus-Specific Neural Assemblies in the Insect Antennal Lobe

    It has been proposed that synchronized neural assemblies in the antennal lobe of insects encode the identity of olfactory stimuli. In response to an odor, some projection neurons exhibit synchronous firing, phase-locked to the oscillations of the field potential, whereas others do not. Experimental data indicate that neural synchronization and field oscillations are induced by fast GABAA-type inhibition, but it remains unclear how desynchronization occurs. We hypothesize that slow inhibition plays a key role in desynchronizing projection neurons. Because synaptic noise is believed to be the dominant factor that limits neuronal reliability, we consider a computational model of the antennal lobe in which a population of oscillatory neurons interacts through unreliable GABAA and GABAB inhibitory synapses. From theoretical analysis and extensive computer simulations, we show that transmission failures at slow GABAB synapses make the neural response unpredictable. Depending on the balance between GABAA and GABAB inputs, particular neurons may either synchronize or desynchronize. These findings suggest a wiring scheme that triggers stimulus-specific synchronized assemblies. Inhibitory connections are set by Hebbian learning and selectively activated by stimulus patterns to form a spiking associative memory whose storage capacity is comparable to that of classical binary-coded models. We conclude that fast inhibition acts in concert with slow inhibition to reformat the glomerular input into odor-specific synchronized neural assemblies.
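The unreliable synapses that drive this model are commonly modelled as Bernoulli trials on each presynaptic spike; a minimal sketch (release probability, weight and function name are hypothetical, for illustration only):

```python
import random

def unreliable_ipsps(spike_times, p_release=0.7, weight=-0.5, seed=0):
    """Sketch of synaptic unreliability: each presynaptic spike releases
    transmitter with probability p_release (an independent Bernoulli
    trial); failures produce no postsynaptic potential at all.
    Returns the (time, amplitude) events that were actually transmitted.
    A fixed seed makes the failure pattern reproducible across runs."""
    rng = random.Random(seed)
    return [(t, weight) for t in spike_times if rng.random() < p_release]

# Over a long spike train, roughly p_release of the spikes get through;
# which particular spikes fail varies from trial to trial (seed to seed)
events = unreliable_ipsps(list(range(100)), p_release=0.7)
```

It is exactly this trial-to-trial variability in which inhibitory events arrive that, in the model above, makes the slow GABAB pathway a source of unpredictable desynchronization.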

    Variable binding by synaptic strength change

    Variable binding is a difficult problem for neural networks. Two new mechanisms for binding by synaptic change are presented; in both, bindings are erased and can be reused. The first is based on the commonly used learning mechanism of permanent change of synaptic weight, and the second on synaptic change that decays. Both are biologically motivated models. Simulations of binding on a paired-association task are shown, with the first mechanism succeeding with a 97.5% F-score and the second performing perfectly. Further simulations show that binding by decaying synaptic change copes with cross-talk and can be used for compositional semantics. It can be inferred that binding by permanent change accounts for these as well, but it faces the stability-plasticity dilemma. Two other existing binding mechanisms, synchrony and active links, are compatible with these new mechanisms. All four mechanisms are compared and integrated in a Cell Assembly theory.
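The second mechanism, binding by decaying synaptic change, can be sketched very compactly: co-active units get a transient weight boost, and passive decay later erases the binding so the synapse can be reused. The update rule and parameters below are an illustrative guess at the idea, not the paper's actual model:

```python
def decaying_binding_step(weights, pre_active, post_active,
                          bind_increment=1.0, decay=0.9):
    """One time step of binding by decaying synaptic change (a sketch):
    every weight decays multiplicatively toward zero, and co-active
    pre/post pairs receive a transient binding increment. Because the
    change decays, old bindings fade and the synapses become reusable."""
    new_w = {}
    for (pre, post), w in weights.items():
        w = w * decay                      # passive decay erases old bindings
        if pre in pre_active and post in post_active:
            w += bind_increment            # transient Hebbian-like binding
        new_w[(pre, post)] = w
    return new_w

# Bind variable X to filler A, then let the binding fade over 10 steps
w = {("X", "A"): 0.0, ("X", "B"): 0.0}
w = decaying_binding_step(w, {"X"}, {"A"})      # X-A bound, X-B untouched
for _ in range(10):
    w = decaying_binding_step(w, set(), set())  # decay only
```

The contrast with the first mechanism is visible here: with permanent weight change the X-A weight would stay at its bound value forever, which is what creates the stability-plasticity dilemma the abstract mentions.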

    Representational Switching by Dynamical Reorganization of Attractor Structure in a Network Model of the Prefrontal Cortex

    The prefrontal cortex (PFC) plays a crucial role in flexible cognitive behavior by representing task-relevant information in its working memory. Working memory with sustained neural activity is described as a neural dynamical system composed of multiple attractors, each of which corresponds to an active state of a cell assembly representing a fragment of information. Recent studies have revealed that the PFC not only represents multiple sets of information but also switches between multiple representations and transforms one set of information into another depending on a given task context. This representational switching between different sets of information is possibly generated endogenously by flexible network dynamics, but the details of the underlying mechanisms are unclear. Here we propose a dynamically reorganizable attractor network model based on certain internal changes in synaptic connectivity, or short-term plasticity. We construct a network model based on a spiking neuron model with dynamical synapses, which can qualitatively reproduce experimentally demonstrated representational switching in the PFC when a monkey was performing a goal-oriented action-planning task. The model holds the multiple sets of information that are required for action planning before and after representational switching through reconfiguration of functional cell assemblies. Furthermore, we analyzed the population dynamics of this model with a mean-field model and show that the changes in cell assembly configuration correspond to changes in attractor structure that can be viewed as a bifurcation process of the dynamical system. This dynamical reorganization of a neural network could be a key to uncovering the mechanism of flexible information processing in the PFC.

    Complex Events Initiated by Individual Spikes in the Human Cerebral Cortex

    Synaptic interactions between neurons of the human cerebral cortex had not been directly studied to date. We recorded the first dataset, to our knowledge, on the synaptic effect of identified human pyramidal cells on various types of postsynaptic neurons, and reveal complex events triggered by individual action potentials in the human neocortical network. Brain slices were prepared from nonpathological samples of cortex that had to be removed for the surgical treatment of brain areas beneath association cortices of 58 patients aged 18 to 73 y. Simultaneous triple and quadruple whole-cell patch clamp recordings were performed, testing mono- and polysynaptic potentials in target neurons following a single action potential fired by layer 2/3 pyramidal cells, and the temporal structure of events and the underlying mechanisms were analyzed. In addition to monosynaptic postsynaptic potentials, individual action potentials in presynaptic pyramidal cells initiated long-lasting (37 ± 17 ms) sequences of events in the network, lasting an order of magnitude longer than detected previously in other species. These event series were composed of specifically alternating glutamatergic and GABAergic postsynaptic potentials and required selective spike-to-spike coupling from pyramidal cells to GABAergic interneurons, producing concomitant inhibitory as well as excitatory feed-forward action of GABA. Single action potentials of human neurons are sufficient to recruit Hebbian-like neuronal assemblies that are proposed to participate in cognitive processes.

    Neural models of learning and visual grouping in the presence of finite conduction velocities

    The hypothesis of object binding-by-synchronization in the visual cortex has been supported by recent experiments in awake monkeys. They demonstrated coherence among gamma-activities (30–90 Hz) of local neural groups and its perceptual modulation according to the rules of figure-ground segregation. Interactions within and between these neural groups are based on axonal spike conduction with finite velocities. Physiological studies confirmed that the majority of transmission delays are comparable to the temporal scale defined by gamma-activity (11–33 ms). How do these finite velocities influence the development of synaptic connections within and between visual areas? What is the relationship between the range of gamma-coherence and the velocity of signal transmission? Are these large temporal delays compatible with the recently discovered phenomenon of gamma-waves traveling across larger parts of the primary visual cortex? The refinement of connections in the immature visual cortex depends on temporal Hebbian learning to adjust synaptic efficacies between spiking neurons. The impact of constant, finite axonal spike conduction velocities on this process was investigated using a set of topographic network models. Random spike trains with a confined temporal correlation width mimicked cortical activity before visual experience. After learning, the lateral connectivity within one network layer became spatially restricted, the width of the connection profile being directly proportional to the lateral conduction velocity. Furthermore, restricted feedforward divergence developed between neurons of two successive layers. The size of this connection profile matched the lateral connection profile of the lower-layer neurons. The mechanism in this network model is suitable to explain the emergence of larger receptive fields at higher visual areas while preserving a retinotopic mapping.
The influence of finite conduction velocities on the local generation of gamma-activities and their spatial synchronization was investigated in a model of a mature visual area. Sustained input and local inhibitory feedback were sufficient for the emergence of coherent gamma-activity that extended across a few millimeters. Conduction velocities had a direct impact on the frequency of gamma-oscillations, but affected neither gamma-power nor the spatial extent of gamma-coherence. Adding long-range horizontal connections between excitatory neurons, as found in layer 2/3 of the primary visual cortex, increased the spatial range of gamma-coherence. The range was maximal for zero transmission delays and, for all distances, attenuated with finite, decreasing lateral conduction velocities. Below a velocity of 0.5 m/s, gamma-power and gamma-coherence were even smaller than without these connections, i.e., slow horizontal connections actively desynchronized neural populations. In conclusion, the enhancement of gamma-coherence by horizontal excitatory connections critically depends on fast conduction velocities. Coherent gamma-activity in the primary visual cortex and in the accompanying models was found to cover only small regions of the visual field. This challenges the role of gamma-synchronization in solving the binding problem for larger object representations. Further analysis of the previous model revealed that the patches of coherent gamma-activity (1.8 mm half-height decline) were part of more globally occurring gamma-waves, which coupled over much larger distances (6.3 mm half-height decline). The model gamma-waves observed here are very similar to those found in the primary visual cortex of awake monkeys, indicating that local recurrent inhibition and restricted horizontal connections with finite axonal velocities are sufficient requirements for their emergence.
Since the model is thus in accordance with the connectivity and gamma-processes of the primary visual cortex, the results support the hypothesis that gamma-waves provide a generalized concept for object binding in the visual cortex.
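The core quantity in the learning part of this study is how an axonal conduction delay shifts the timing relation that a temporal Hebbian rule sees. A minimal sketch of that interaction (the exponential window form and all parameters are hypothetical illustrations, not the model's actual rule):

```python
import math

def hebbian_delta(pre_spike, post_spike, distance_mm, velocity_m_s,
                  a_plus=0.05, a_minus=0.05, tau=20.0):
    """Temporal Hebbian weight change with a finite axonal delay.

    The presynaptic spike arrives at the synapse delayed by
    distance / velocity (note mm / (m/s) conveniently equals ms).
    The weight update then follows an exponential window on the
    difference between the postsynaptic spike and the arrival time:
    arrival-before-post potentiates, post-before-arrival depresses."""
    delay_ms = distance_mm / velocity_m_s     # mm / (m/s) == ms
    dt = post_spike - (pre_spike + delay_ms)  # post minus arrival time
    if dt >= 0:
        return a_plus * math.exp(-dt / tau)   # potentiation
    return -a_minus * math.exp(dt / tau)      # depression

# The same spike pair can potentiate or depress depending on velocity
fast = hebbian_delta(0.0, 5.0, distance_mm=2.0, velocity_m_s=1.0)  # 2 ms delay
slow = hebbian_delta(0.0, 5.0, distance_mm=2.0, velocity_m_s=0.2)  # 10 ms delay
```

This sign flip with distance/velocity is the mechanism behind the spatially restricted connection profiles described above: beyond a certain lateral distance, the delay pushes correlated spike pairs into the depressing side of the window, and those connections are pruned.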

    Formation of feedforward networks and frequency synchrony by spike-timing-dependent plasticity

    Spike-timing-dependent plasticity (STDP) with asymmetric learning windows is commonly found in the brain and useful for a variety of spike-based computations such as input filtering and associative memory. A natural consequence of STDP is the establishment of causality, in the sense that a neuron learns to fire with a lag after specific presynaptic neurons have fired. The effect of STDP on synchrony is elusive because spike synchrony implies unitary spike events of different neurons rather than a causal delayed relationship between neurons. We explore how synchrony can be facilitated by STDP in oscillator networks with a pacemaker. We show that STDP with asymmetric learning windows leads to self-organization of feedforward networks starting from the pacemaker. As a result, STDP drastically facilitates frequency synchrony. Even though differences in spike times are lessened as a result of synaptic plasticity, a finite time lag remains, so that perfect spike synchrony is not realized. In contrast to traditional mechanisms of large-scale synchrony based on the mutual interaction of coupled neurons, the route to synchrony discovered here is the enslavement of downstream neurons by upstream ones. Facilitation of such feedforward synchrony does not occur for STDP with symmetric learning windows.
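The asymmetry that drives the feedforward self-organization can be seen directly in the standard exponential STDP window (a common textbook form, shown here for illustration with hypothetical amplitudes and time constants):

```python
import math

def stdp_window(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Asymmetric STDP learning window, dt = t_post - t_pre (ms).

    Pre-before-post (dt > 0) potentiates the synapse; post-before-pre
    (dt <= 0) depresses it. The sign flip at dt = 0 is the asymmetry:
    a symmetric window would return the same value for +dt and -dt."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # potentiation (LTP)
    return -a_minus * math.exp(dt / tau_minus)      # depression (LTD)

# Reversing the spike order flips the sign of the weight change,
# which is what carves causal, feedforward structure out of the network
ltp = stdp_window(+10.0)   # pre leads post -> strengthen
ltd = stdp_window(-10.0)   # post leads pre -> weaken
```

Because the same spike pair strengthens the connection in one direction while weakening it in the other, bidirectional couplings are gradually resolved into one-way chains, consistent with the pacemaker-rooted feedforward networks reported above.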

    The Local Field Potential Reflects Surplus Spike Synchrony

    The oscillatory nature of the cortical local field potential (LFP) is commonly interpreted as a reflection of synchronized network activity, but its relationship to observed transient coincident firing of neurons on the millisecond time-scale remains unclear. Here we present experimental evidence to reconcile the notions of synchrony at the level of neuronal spiking and at the mesoscopic scale. We demonstrate that only in time intervals of excess spike synchrony are coincident spikes better entrained to the LFP than predicted by the locking of the individual spikes. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations but contribute only a fraction of their spikes to temporally precise spike configurations, suggesting a dual coding scheme of rate and synchrony. This finding provides direct evidence for the hypothesized relation that precise spike synchrony constitutes a major temporally and spatially organized component of the LFP. Revealing that transient spike synchronization correlates not only with behavior but also with a mesoscopic brain signal corroborates its relevance in cortical processing.