
    Network Plasticity as Bayesian Inference

    General results from statistical learning theory suggest that not only brain computations but also brain plasticity should be understood as probabilistic inference, yet a concrete model of this has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum-likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains, from a functional perspective, a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared quite puzzling. (Comment: 33 pages, 5 figures; the supplement is available on the author's web page, http://www.igi.tugraz.at/kappe)
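
    The paper's central claim, that plasticity samples network configurations from a posterior rather than converging to maximum-likelihood parameters, can be illustrated with a Langevin-dynamics toy model. The sketch below is not the authors' network model: the one-weight linear-Gaussian setup, the step size eta, and all variable names are illustrative assumptions, chosen so the sampled distribution can be checked against a closed-form posterior.

```python
# A minimal sketch (not the paper's network model): a single synaptic weight
# updated with Langevin dynamics, so its stationary distribution is the
# posterior p(w | data) rather than a maximum-likelihood point estimate.
# Toy linear-Gaussian setup: y ~ N(w * x, sigma^2), Gaussian prior on w.
import numpy as np

rng = np.random.default_rng(0)

w_true, sigma = 1.5, 0.5                       # assumed ground truth
x = rng.normal(size=50)
y = w_true * x + sigma * rng.normal(size=50)

prior_mean, prior_var = 0.0, 1.0               # prior over the weight

def grad_log_posterior(w):
    grad_prior = -(w - prior_mean) / prior_var
    grad_lik = np.sum((y - w * x) * x) / sigma**2
    return grad_prior + grad_lik

eta = 1e-3                                     # step size
w, samples = 0.0, []
for step in range(30000):
    noise = np.sqrt(2 * eta) * rng.normal()    # intrinsic stochasticity of plasticity
    w += eta * grad_log_posterior(w) + noise   # Langevin update
    if step > 5000:                            # discard burn-in
        samples.append(w)

# Closed-form posterior for this conjugate model, for comparison.
post_var = 1.0 / (1.0 / prior_var + np.sum(x**2) / sigma**2)
post_mean = post_var * (prior_mean / prior_var + np.sum(x * y) / sigma**2)
print(f"sampled mean/var: {np.mean(samples):.3f} / {np.var(samples):.4f}")
print(f"exact   mean/var: {post_mean:.3f} / {post_var:.4f}")
```

    Because the update adds noise of variance 2*eta alongside the gradient of the log posterior, the weight's stationary distribution approximates the posterior itself, so priors and learned evidence are merged automatically, which is the behavior the abstract attributes to stochastic synapses.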

    Information processing in biological complex systems: a view to bacterial and neural complexity

    This thesis studies information processing in biological complex systems from the perspective of dynamical complexity (the degree of statistical independence of a system as a whole with respect to its components, due to its causal structure). In particular, we investigate the influence of signaling functions in cell-to-cell communication in bacterial and neural systems. For each case, we determine the spatial and causal dependencies in the system dynamics from an information-theoretic point of view and relate them to the system's physiological capabilities. The main research content is presented in three chapters. First, we study a previous theoretical work on synchronization, multi-stability, and clustering of a population of coupled synthetic genetic oscillators via quorum sensing. We provide an extensive numerical analysis of the spatio-temporal interactions and determine conditions under which the causal structure of the system leads to high dynamical complexity in terms of the associated metrics. Our results indicate that this complexity is maximally receptive at transitions between dynamical regimes, and maximized for transient multi-cluster oscillations associated with chaotic behaviour. Next, we introduce a model of a neuron-astrocyte network with bidirectional coupling using glutamate-induced calcium signaling. This study focuses on the impact of astrocyte-mediated potentiation on synaptic transmission. Our findings suggest that the information generated by the joint activity of the population of neurons is irreducible to the neurons' independent contributions, owing to the role of astrocytes. We relate these results to the shared information modulated by the spike synchronization imposed by the bidirectional feedback between neurons and astrocytes. We show that dynamical complexity is maximized when there is a balance between spike correlation and spontaneous spiking activity. Finally, the previous observations on neuron-glial signaling are extended to a large-scale system with community structure. Here we use a multi-scale approach to account for spatiotemporal features of astrocytic signaling coupled with clusters of neurons. We investigate the interplay of astrocytes and spike-timing-dependent plasticity, at local and global scales, in the emergence of complexity and neuronal synchronization. We demonstrate the utility of astrocytes and learning in improving the encoding of external stimuli, as well as their ability to favour the integration of information at synaptic timescales, yielding a high intrinsic causal structure at the system level. Our approach and observations point to potential effects of astrocytes in sustaining more complex information processing in the neural circuitry.
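
    As a concrete handle on "dynamical complexity" as used above, the sketch below estimates total correlation (how far the joint distribution of a system departs from the product of its parts) under a Gaussian approximation, for a toy population of mean-field-coupled AR(1) units. The toy dynamics, the Gaussian estimator, and all parameters are assumptions for illustration; the thesis's own metrics and models may differ.

```python
# A Gaussian estimator of total correlation, one simple proxy for the
# "dynamical complexity" discussed above: the statistical dependence of a
# system as a whole relative to its components. The coupled AR(1) units are
# an illustrative stand-in for the thesis's bacterial/neural models.
import numpy as np

rng = np.random.default_rng(1)

def total_correlation_gaussian(X):
    """X: (T, N) time series. TC = sum_i H(X_i) - H(X) under a Gaussian fit (nats)."""
    cov = np.cov(X, rowvar=False)
    sum_marginals = 0.5 * np.sum(np.log(np.diag(cov)))   # entropy constants cancel
    joint = 0.5 * np.linalg.slogdet(cov)[1]
    return sum_marginals - joint

def simulate(coupling, N=5, T=20000):
    """Mean-field-coupled AR(1) units; stronger coupling -> more shared dynamics."""
    X = np.zeros((T, N))
    for t in range(1, T):
        mean_field = X[t - 1].mean()
        X[t] = 0.9 * X[t - 1] + coupling * (mean_field - X[t - 1]) + 0.1 * rng.normal(size=N)
    return X

for c in (0.0, 0.3, 0.8):
    print(f"coupling={c:.1f}  total correlation={total_correlation_gaussian(simulate(c)):.3f}")
```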

    Dual Roles for Spike Signaling in Cortical Neural Populations

    A prominent feature of signaling in cortical neurons is the randomness of the action potential. The output of a typical pyramidal cell can be well fit with a Poisson model, and variations in the Poisson rate have repeatedly been shown to be correlated with stimuli. However, while the rate provides a very useful characterization of neural spike data, it may not be the most fundamental description of the signaling code. Recent data showing γ-frequency-range multi-cell action potential correlations, together with spike-timing-dependent plasticity, are spurring a re-examination of the classical model, since precise timing codes imply that the generation of spikes is essentially deterministic. Could the observed Poisson randomness and timing determinism reflect two separate modes of communication, or do they somehow derive from a single process? We investigate in a timing-based model whether the apparent incompatibility between these probabilistic and deterministic observations may be resolved by examining how spikes could be used in the underlying neural circuits. The crucial component of this model draws on dual roles for spike signaling. In learning receptive fields from ensembles of inputs, spikes need to behave probabilistically, whereas for fast signaling of individual stimuli, they need to behave deterministically. Our simulations show that this combination is possible if deterministic signals using γ latency coding are probabilistically routed through different members of a cortical cell population at different times. This model exhibits standard features characteristic of Poisson models, such as orientation tuning and exponential interval histograms. In addition, it makes testable predictions that follow from the γ latency coding.
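
    The routing idea in this abstract can be made concrete with a few lines of simulation. This is a hedged sketch in which the gamma-cycle length, pool size, and latency map are invented for illustration: within every cycle the spike latency is a deterministic function of the stimulus, but the spike is assigned to a random member of the pool, so any single neuron's interval histogram comes out approximately exponential.

```python
# Dual-role sketch: within each gamma cycle the spike latency is a
# *deterministic* function of the stimulus, but the spike is *probabilistically
# routed* to one member of a cortical pool, so each single neuron's spike train
# looks Poisson-like. Cycle length, pool size, and the latency map are
# illustrative assumptions, not the paper's fitted parameters.
import numpy as np

rng = np.random.default_rng(2)

GAMMA_PERIOD = 25.0     # ms, ~40 Hz gamma cycle
POOL_SIZE = 20          # neurons sharing the routed signal
N_CYCLES = 200000

def latency(stimulus):
    """Deterministic latency code: stronger stimulus -> earlier spike in the cycle."""
    return GAMMA_PERIOD * (1.0 - stimulus)          # stimulus in (0, 1]

stimulus = 0.7
spike_times = [[] for _ in range(POOL_SIZE)]
for cycle in range(N_CYCLES):
    neuron = rng.integers(POOL_SIZE)                # probabilistic routing
    t = cycle * GAMMA_PERIOD + latency(stimulus)    # deterministic timing
    spike_times[neuron].append(t)

# A single neuron's inter-spike intervals are close to exponential (mean ~= std),
# even though every spike carries exact latency information about the stimulus.
isis = np.diff(spike_times[0])
print(f"ISI mean = {isis.mean():.1f} ms, ISI std = {isis.std():.1f} ms")
```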

    Spatiotemporal Computations of an Excitable and Plastic Brain: Neuronal Plasticity Leads to Noise-Robust and Noise-Constructive Computations

    It is a long-established fact that neuronal plasticity plays a central role in generating neural function and computation. Nevertheless, no unifying account exists of how neurons in a recurrent cortical network learn to compute on temporally and spatially extended stimuli, even though such stimuli constitute the norm, rather than the exception, of the brain's input. Here, we introduce a geometric theory of learning spatiotemporal computations through neuronal plasticity. To that end, we rigorously formulate the problem of neural representations as a relation in space between stimulus-induced neural activity and the asymptotic dynamics of excitable cortical networks. Backed up by computer simulations and numerical analysis, we show that two canonical and widespread forms of neuronal plasticity, spike-timing-dependent synaptic plasticity and intrinsic plasticity, are both necessary for creating neural representations such that these computations become realizable. Interestingly, the effects of these forms of plasticity on the emerging neural code relate to properties necessary for both combating and utilizing noise. The neural dynamics also exhibits features of the most likely stimulus in the network's spontaneous activity. These properties of the spatiotemporal neural code resulting from plasticity, having their grounding in nature, further consolidate the biological relevance of our findings.
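
    For readers who want the two plasticity rules named above in operational form, here is a minimal, assumption-laden sketch: a standard pair-based STDP window plus intrinsic plasticity modeled as homeostatic threshold adaptation. The specific constants and the single-synapse framing are illustrative, not the paper's rigorous formulation.

```python
# Toy forms of the two rules named above: pair-based STDP (potentiate when the
# presynaptic spike precedes the postsynaptic one, depress otherwise) and
# intrinsic plasticity as a homeostatic firing threshold. All constants and
# the single-synapse framing are illustrative assumptions.
import numpy as np

TAU_PLUS = TAU_MINUS = 20.0          # ms, STDP time constants
A_PLUS, A_MINUS = 0.010, 0.012       # learning-rate amplitudes

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # causal pairing: potentiation
    return -A_MINUS * np.exp(dt / TAU_MINUS)      # anti-causal pairing: depression

class IntrinsicPlasticity:
    """Adapt a neuron's threshold so its firing rate tracks a target rate."""
    def __init__(self, theta=1.0, target_rate=0.05, lr=1e-3):
        self.theta, self.target_rate, self.lr = theta, target_rate, lr

    def update(self, spiked):
        # Fired above target -> raise threshold; below target -> lower it.
        self.theta += self.lr * ((1.0 if spiked else 0.0) - self.target_rate)

# Usage: a +10 ms (pre-before-post) pairing strengthens the synapse,
# a -10 ms pairing weakens it.
print(f"dw(+10 ms) = {stdp_dw(10.0):+.4f}, dw(-10 ms) = {stdp_dw(-10.0):+.4f}")
```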

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? (This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.)

    Memory capacity in the hippocampus

    Neural assemblies in the hippocampus encode positions. During rest, the hippocampus replays sequences of neural activity seen during awake behavior. This replay is linked to memory consolidation and mental exploration of the environment. Recurrent networks can be used to model the replay of sequential activity, and multiple sequences can be stored in the synaptic connections. To achieve a high memory capacity, recurrent networks require a pattern separation mechanism. One such mechanism is global remapping, observed in place cell populations. A place cell fires at a particular position of an environment and is silent elsewhere; multiple place cells usually cover an environment with their firing fields. Small changes in the environment or in the context of a behavioral task can cause global remapping, i.e. profound changes in place cell firing fields: some cells cease firing, formerly silent cells gain a place field, and other place cells move their firing field and change their peak firing rate. The effect is strong enough to make global remapping a viable pattern separation mechanism.
    We model two mechanisms that improve the memory capacity of recurrent networks. First, the effect of inhibition on replay in a recurrent network is modeled using binary neurons and binary synapses. A mean-field approximation is used to determine the optimal parameters for the inhibitory neuron population, and numerical simulations of the full model were carried out to verify the predictions of the mean-field model. A second model analyzes a hypothesized global remapping mechanism in which grid cell firing is used as feed-forward input to place cells. Grid cells have multiple firing fields in the same environment, arranged in a hexagonal grid, and can be used in a model as feed-forward inputs to place cells to produce place fields. In these grid-to-place cell models, shifts in the grid cell firing patterns cause remapping in the place cell population. We analyze the capacity of such a system to create sets of separated patterns, i.e. how many different spatial codes can be generated. The limiting factor is the set of synapses connecting grid cells to place cells. To assess their capacity, we produce different place codes in place and grid cell populations by shuffling place field positions and shifting the grid fields of grid cells. We then use Hebbian learning to increase the synaptic weights between grid and place cells for each pair of grid and place codes. The capacity limit is reached when synaptic interference makes it impossible to produce a place code with sufficient spatial acuity from grid cell firing. Additionally, the place fields should remain compact, or sparse from a coding standpoint; as more environments are stored, this sparseness is lost. Interestingly, place cells lose the sparseness of their firing fields much earlier than their spatial acuity.
    For the sequence replay model, we are able to increase capacity in a simulated recurrent network by including an inhibitory population, and we show that capacity is improved even in this more complicated case. We observe oscillations in the average activity of both excitatory and inhibitory neuron populations, which grow stronger at the capacity limit. In addition, at the capacity limit, rather than a sudden failure of replay, we find that sequences are replayed transiently for a couple of time steps before failing.
Analyzing the remapping model, we find that, as we store more spatial codes in the synapses, the sparseness of place fields is lost first; only later do we observe a decay in the spatial acuity of the code. We found two ways to maintain sparse place fields while achieving a high capacity: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of the cells in each environment. We present scaling predictions that suggest that hundreds of thousands of spatial codes can be produced by this pattern separation mechanism. The effect of inhibition on the replay model is twofold: capacity is increased, and the graceful transition from full replay to failure allows for higher capacities when using short sequences. Additional mechanisms not explored in this model could be at work to concatenate these short sequences, or could perform more complex operations on them. The interplay of excitatory and inhibitory populations gives rise to oscillations, which are strongest at the capacity limit; this suggests how a memory mechanism can cause hippocampal oscillations as observed in experiments. In the remapping model, we showed that the sparseness of place cell firing constrains the capacity of this pattern separation mechanism. Grid codes outperform place codes regarding spatial acuity, as shown in Mathis et al. (2012). Our model shows that the grid-to-place transformation does not harness the full spatial information of the grid code, in order to maintain sparse place fields. This suggests that the two codes are independent, and communication between the areas might be mostly for synchronization. High spatial acuity seems to be a specialization of the grid code, while the place code is more suitable for memory tasks.
In a detailed model of hippocampal replay, we show that feedback inhibition can increase the number of sequences that can be replayed. The effect of inhibition on capacity is determined using a mean-field model, and the results are verified with numerical simulations of the full network. Transient replay is found at the capacity limit, accompanied by oscillations that resemble sharp wave ripples in the hippocampus. In a second model, hippocampal replay of neuronal activity is linked to memory consolidation and mental exploration; replay is, furthermore, a potential neural correlate of episodic memory. To model hippocampal sequence replay, recurrent neural networks are used. The memory capacity of such networks is of great interest for determining their biological feasibility, and any mechanism that improves capacity has explanatory power. We investigate two such mechanisms. The first is global, unspecific feedback inhibition in the recurrent network; in a simplified mean-field model we show that capacity is indeed improved. The second mechanism that increases memory capacity is pattern separation. In the spatial context of hippocampal place cell firing, global remapping is one way to achieve pattern separation. Changes in the environment or in the context of a task cause global remapping, during which place cell firing changes in unpredictable ways: cells shift their place fields or fully cease firing, and formerly silent cells acquire place fields. Global remapping can be triggered by subtle changes in the grid cells that give feed-forward input to hippocampal place cells.
We investigate the capacity of the underlying synaptic connections, defined as the number of different environments that can be represented at a given spatial acuity. We find two conditions essential for achieving a high capacity with sparse place fields: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of the cells in each environment. We also find that the sparsity of place fields, rather than spatial acuity, is the constraining factor of the model. Since the hippocampal place code is sparse, we conclude that the hippocampus does not fully harness the spatial information available in the grid code; the two codes of space might thus serve different purposes.
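
    The grid-to-place capacity experiment described above can be caricatured in a short simulation: shuffle place field centers and shift grid phases per environment, accumulate Hebbian grid-to-place weights, and watch readout degrade with the number of stored environments. The population sizes, tuning curves, and error measure below are assumptions for illustration, not the thesis's actual model.

```python
# Caricature of the grid-to-place capacity experiment: per environment, draw
# phase-shifted grid codes and shuffled place field centers, accumulate
# Hebbian grid->place weights, then test how well the first environment's
# place fields can still be read out as more environments are stored.
import numpy as np

rng = np.random.default_rng(3)
N_POS, N_GRID, N_PLACE = 100, 80, 60

def grid_code(shifts):
    """Grid cells: periodic tuning over a 1-D track, phase-shifted per environment."""
    periods = np.linspace(5, 20, N_GRID)
    x = np.arange(N_POS)
    return np.cos(2 * np.pi * (x[None, :] - shifts[:, None]) / periods[:, None])

def place_code(centers):
    """Place cells: one Gaussian firing field each, at a shuffled center."""
    x = np.arange(N_POS)
    return np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 3.0) ** 2)

for n_env in (1, 5, 20, 40):
    W = np.zeros((N_PLACE, N_GRID))
    environments = []
    for _ in range(n_env):
        G = grid_code(rng.uniform(0, 20, N_GRID))     # shifted grid fields
        centers = rng.permutation(N_POS)[:N_PLACE]    # shuffled place fields
        W += place_code(centers) @ G.T / N_POS        # Hebbian association
        environments.append((G, centers))

    G, centers = environments[0]          # recall the first stored environment
    est = np.argmax(W @ G, axis=1)        # decoded place field peaks
    err = np.minimum(np.abs(est - centers), N_POS - np.abs(est - centers))
    print(f"{n_env:2d} environments stored: median peak error = {np.median(err):.1f} positions")
```

    Even this toy version reproduces the qualitative point of the abstract: recall of any one environment degrades as interference from additional stored codes accumulates in the shared synapses.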

    Towards a unified approach

    "Decision-making in the presence of uncertainty is a pervasive computation. Latent variable decoding—inferring hidden causes underlying visible effects—is commonly observed in nature, and it is an unsolved challenge in modern machine learning. On many occasions, animals need to base their choices on uncertain evidence; for instance, when deciding whether to approach or avoid an obfuscated visual stimulus that could be either a prey or a predator. Yet, their strategies are, in general, poorly understood. In simple cases, these problems admit an optimal, explicit solution. However, in more complex real-life scenarios, it is difficult to determine the best possible behavior. The most common approach in modern machine learning relies on artificial neural networks—black boxes that map each input to an output. This input-output mapping depends on a large number of parameters, the weights of the synaptic connections, which are optimized during learning.(...)