
    A Bayesian model for identifying hierarchically organised states in neural population activity

    Neural population activity in cortical circuits is not solely driven by external inputs, but is also modulated by endogenous states. These cortical states vary on multiple timescales and also across areas and layers of the neocortex. To understand information processing in cortical circuits, we need to understand the statistical structure of internal states and their interaction with sensory inputs. Here, we present a statistical model for extracting hierarchically organized neural population states from multi-channel recordings of neural spiking activity. We model population states using a hidden Markov decision tree with state-dependent tuning parameters and a generalized linear observation model. Using variational Bayesian inference, we estimate the posterior distribution over parameters from population recordings of neural spike trains. On simulated data, we show that we can identify the underlying sequence of population states over time and reconstruct the ground-truth parameters. Using extracellular population recordings from visual cortex, we find that a model with two levels of population states outperforms a generalized linear model without state-dependence, as well as models which include only a binary state. Finally, modelling state-dependence via our model also improves the accuracy with which sensory stimuli can be decoded from the population response.
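    The observation side of this model can be made concrete with a short sketch. The fragment below evaluates the likelihood of spike counts under a state-switching Poisson GLM with the forward algorithm; it assumes a flat state sequence, an exponential nonlinearity, and already-estimated parameters, whereas the paper fits a hierarchical hidden Markov decision tree by variational Bayes. All names (Y, X, W, log_pi, log_A) are illustrative placeholders, not the authors' code.

        import numpy as np

        def _logsumexp_cols(M):
            # Stable log-sum-exp over the first axis of M.
            m = M.max(axis=0)
            return m + np.log(np.exp(M - m).sum(axis=0))

        def forward_log_likelihood(Y, X, W, log_pi, log_A):
            """Y: (T, N) spike counts; X: (T, D) stimulus covariates;
            W: (K, D, N) per-state GLM weights; log_pi: (K,) initial state
            log-probabilities; log_A: (K, K) transition log-probabilities."""
            T, K = Y.shape[0], W.shape[0]
            log_obs = np.empty((T, K))
            for k in range(K):
                eta = X @ W[k]                                  # (T, N) log intensity
                log_obs[:, k] = (Y * eta - np.exp(eta)).sum(1)  # Poisson, log(y!) dropped
            alpha = log_pi + log_obs[0]                         # forward recursion in log space
            for t in range(1, T):
                alpha = log_obs[t] + _logsumexp_cols(alpha[:, None] + log_A)
            return np.logaddexp.reduce(alpha)                   # log p(Y | parameters)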

    Hidden Markov models predict the future choice better than a PSTH-based method

    Beyond average firing rate, other measurable signals of neuronal activity are fundamental to an understanding of behavior. Recently, hidden Markov models (HMMs) have been applied to neural recordings and have described how neuronal ensembles process information by going through sequences of different states. Such collective dynamics are impossible to capture by just looking at the average firing rate. To estimate how well HMMs can decode information contained in single trials, we compared HMMs with a recently developed classification method based on the peristimulus time histogram (PSTH). The accuracy of the two methods was tested using the activity of prefrontal neurons recorded while two monkeys were engaged in a strategy task. In this task, the monkeys had to select one of three spatial targets based on an instruction cue and on their previous choice. We show that by using the single trial's neural activity in a period preceding action execution, both models were able to classify the monkeys' choice with above-chance accuracy. Moreover, the HMM was significantly more accurate than the PSTH-based method, even in cases in which the HMM performance was low, although always above chance. Furthermore, the accuracy of both methods was related to the number of neurons exhibiting spatial selectivity within an experimental session. Overall, our study shows that neural activity is better described when more than the mean activity of individual neurons is considered, and that the study of signals beyond the average firing rate is therefore fundamental to an understanding of the dynamics of neuronal ensembles.
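    As a hedged sketch of how such a single-trial comparison can be run: fit one HMM per choice and assign a held-out trial to the choice whose model gives the highest likelihood. The Poisson emission model, the binning, and every name below are assumptions for illustration; the PSTH-based competitor is not shown.

        import numpy as np

        def trial_log_likelihood(counts, log_pi, log_A, log_rates):
            """counts: (T, N) binned spikes for one trial; log_rates: (K, N)
            per-state log firing rates (Poisson emissions, constants dropped)."""
            log_obs = counts @ log_rates.T - np.exp(log_rates).sum(axis=1)  # (T, K)
            alpha = log_pi + log_obs[0]
            for t in range(1, counts.shape[0]):
                M = alpha[:, None] + log_A          # transition scores
                m = M.max(axis=0)
                alpha = log_obs[t] + m + np.log(np.exp(M - m).sum(axis=0))
            return np.logaddexp.reduce(alpha)

        def decode_choice(trial, models):
            """models: {choice: (log_pi, log_A, log_rates)}, one HMM per target;
            returns the choice whose HMM best explains the trial."""
            return max(models, key=lambda c: trial_log_likelihood(trial, *models[c]))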

    Network Plasticity as Bayesian Inference

    General results from statistical learning theory suggest understanding not only brain computations, but also brain plasticity, as probabilistic inference. However, a model for this has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains, from a functional perspective, a number of experimental findings on stochastic aspects of synaptic plasticity that previously appeared quite puzzling.
    Comment: 33 pages, 5 figures; the supplement is available on the author's web page http://www.igi.tugraz.at/kappe
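    The core idea, stochastic plasticity as posterior sampling, reduces to Langevin dynamics over synaptic parameters: a drift along the gradient of the log posterior plus diffusion. The sketch below is a minimal illustration with a Gaussian prior and a stand-in likelihood gradient; the prior width, temperature, and the quadratic pull toward 0.5 are arbitrary assumptions, not the paper's network model.

        import numpy as np

        rng = np.random.default_rng(0)
        sigma_prior = 1.0           # prior standard deviation over parameters
        eta, temp = 1e-3, 1.0       # step size and sampling temperature
        w = rng.normal(size=100)    # synaptic parameters

        def grad_log_likelihood(w):
            # Placeholder for a task-derived plasticity signal, d/dw log p(data|w).
            return -(w - 0.5)

        for step in range(10_000):
            drift = -w / sigma_prior**2 + grad_log_likelihood(w)  # d/dw log posterior
            noise = np.sqrt(2 * eta * temp) * rng.normal(size=w.shape)
            w += eta * drift + noise   # Langevin step: w samples the posterior

    Because the diffusion term never vanishes, the parameters do not converge to a point estimate but keep moving through high-posterior network configurations, which is how the model accounts for ongoing spine motility and continuous compensation for perturbations.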

    One-hot Generalized Linear Model for Switching Brain State Discovery

    Exposing meaningful and interpretable neural interactions is critical to understanding neural circuits. Interactions inferred from neural signals primarily reflect functional interactions. In a long experiment, subject animals may experience different stages defined by the experiment, stimuli, or behavioral states, and hence functional interactions can change over time. To model dynamically changing functional interactions, prior work employs state-switching generalized linear models with hidden Markov models (i.e., HMM-GLMs). However, we argue that these lack biological plausibility, as functional interactions are shaped and confined by the underlying anatomical connectome. Here, we propose a novel prior-informed state-switching GLM. We introduce both a Gaussian prior and a one-hot prior over the GLM in each state; the priors are learnable. We show that the learned prior captures the state-constant interaction, shedding light on the underlying anatomical connectome and revealing more likely physical neuron interactions. The state-dependent interaction modeled by each GLM offers traceability to capture functional variations across multiple brain states. Our methods effectively recover true interaction structures in simulated data, achieve the highest predictive likelihood on real neural datasets, and render interaction structures and hidden states more interpretable when applied to real neural data.
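    One plausible reading of this decomposition, written as a penalized objective: each state's coupling weights are tied by a Gaussian prior to a shared, learnable matrix capturing the state-constant (putatively anatomical) interactions. The loss below is an illustrative M-step-style objective, not the authors' implementation; the one-hot prior is omitted, and gamma, lam, and all shapes are assumptions.

        import numpy as np

        def penalized_nll(Y, X, W_states, W_shared, gamma, lam):
            """Y: (T, N) spike counts; X: (T, D) covariates (spike history etc.);
            W_states: (K, D, N) per-state GLM weights; W_shared: (D, N) learnable
            prior mean; gamma: (T, K) state posteriors from the HMM E-step."""
            nll = 0.0
            for k in range(W_states.shape[0]):
                eta = X @ W_states[k]                       # (T, N) log intensity
                nll -= (gamma[:, k:k+1] * (Y * eta - np.exp(eta))).sum()
                # Gaussian prior ties each state's couplings to the shared matrix.
                nll += lam * ((W_states[k] - W_shared) ** 2).sum()
            return nll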

    Contributions of synaptic filters to models of synaptically stored memory

    The question of how neural systems encode memories in one shot without immediately disrupting previously stored information has puzzled theoretical neuroscientists for years, and it is the central topic of this thesis. Previous work on this topic has proposed that synapses update probabilistically in response to plasticity-inducing stimuli, effectively delaying the degradation of old memories in the face of ongoing memory storage. Indeed, experiments have shown that synapses do not immediately respond to plasticity-inducing stimuli, since these must be presented many times before synaptic plasticity is expressed. Such a delay could be due to the stochastic nature of synaptic plasticity, or perhaps because induction signals are integrated before overt strength changes occur. The latter approach has previously been applied to control fluctuations in neural development by low-pass filtering induction signals before plasticity is expressed. In this thesis we consider memory dynamics in a mathematical model with synapses that integrate plasticity induction signals up to a threshold before expressing plasticity. We report novel recall dynamics and considerable improvements in memory lifetimes compared with a prominent model of synaptically stored memory. With integrating synapses, the memory trace initially rises before reaching a maximum and then falls. The memory signal dissociates into separate oblivescence and reminiscence components, with reminiscence initially dominating recall. Furthermore, we find that integrating synapses possess natural timescales that can be used to consider the transition to late-phase plasticity under spaced repetition patterns known to lead to optimal storage conditions. We find that threshold-crossing statistics differentiate between massed and spaced memory repetition patterns. However, isolated integrative synapses obtain an insufficient statistical sample to detect the stimulation pattern within a few memory repetitions. We extend the model to consider the cooperation of well-known intracellular signalling pathways in detecting storage conditions by utilizing the profile of postsynaptic depolarization. We find that neuron-wide signalling and local synaptic signals can be combined to detect optimal storage conditions that lead to stable forms of plasticity in a synapse-specific manner. These models can be further extended to consider heterosynaptic and neuromodulatory interactions for late-phase plasticity.
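    The integrate-to-threshold mechanism is easy to state as a runnable sketch: induction signals accumulate in a hidden counter, and the expressed synaptic strength changes only when the counter crosses a threshold, after which the filter resets. The threshold value, the binary strength, and the random induction stream below are illustrative choices, not the thesis's calibrated parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        theta = 5                   # expression threshold of the synaptic filter
        counter, strength = 0, +1   # accumulated induction signal; expressed strength
        history = []

        for _ in range(1000):
            signal = rng.choice([-1, +1])   # potentiating (+1) or depressing (-1) event
            counter += signal               # integrate the induction signal...
            if counter >= theta:            # ...and express plasticity only at threshold
                strength, counter = +1, 0
            elif counter <= -theta:
                strength, counter = -1, 0
            history.append(strength)

    Because many induction events are needed per expressed change, stored strengths are overwritten more slowly under ongoing storage, which is the effect on memory lifetimes that the thesis quantifies.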