230 research outputs found

    Computational neuroimaging strategies for single patient predictions

    Neuroimaging increasingly exploits machine learning techniques in an attempt to achieve clinically relevant single-subject predictions. An alternative to machine learning, which tries to establish predictive links between features of the observed data and clinical variables, is the deployment of computational models for inferring the (patho)physiological and cognitive mechanisms that generate behavioural and neuroimaging responses. This paper discusses the rationale behind a computational approach to neuroimaging-based single-subject inference, focusing on its potential for characterising disease mechanisms in individual subjects and mapping these characterisations to clinical predictions. Following an overview of two main approaches – Bayesian model selection and generative embedding – which can link computational models to individual predictions, we review how these methods accommodate heterogeneity in psychiatric and neurological spectrum disorders, help avoid erroneous interpretations of neuroimaging data, and establish a link between a mechanistic, model-based approach and the statistical perspectives afforded by machine learning.
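
    As a concrete illustration of the generative-embedding approach mentioned above, here is a minimal, hypothetical Python sketch of mine (not the paper's pipeline): a toy two-parameter autoregressive model stands in for a real generative model such as a DCM, each subject is embedded in the space of fitted parameters, and a standard classifier operates on those parameters rather than on the raw time series.

```python
# Generative embedding, sketched: fit a simple generative model to each
# subject's time series, then classify subjects in the space of fitted
# model parameters instead of the raw-data space.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def simulate_subject(a, b, n=200):
    """Toy AR(2) 'neuroimaging' time series; stands in for real data."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a * x[t - 1] + b * x[t - 2] + 0.1 * rng.standard_normal()
    return x

def fit_generative_model(x):
    """Least-squares fit of x[t] = a*x[t-1] + b*x[t-2]; returns (a, b)."""
    X = np.column_stack([x[1:-1], x[:-2]])
    coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    return coef

# Two synthetic 'patient groups' that differ in their generative parameters.
subjects = [simulate_subject(0.8, -0.2) for _ in range(30)] + \
           [simulate_subject(0.5, 0.1) for _ in range(30)]
labels = np.array([0] * 30 + [1] * 30)

# Generative embedding: each subject becomes a point in parameter space.
features = np.array([fit_generative_model(s) for s in subjects])
print("CV accuracy:", cross_val_score(SVC(), features, labels, cv=5).mean())
```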

    The emergence of synchrony in networks of mutually inferring neurons

    This paper considers the emergence of generalised synchrony in ensembles of coupled self-organising systems, such as neurons. We start from the premise that any self-organising system complies with the free energy principle, by virtue of placing an upper bound on its entropy. Crucially, the free energy principle allows one to interpret biological systems as inferring the state of their environment or external milieu. An emergent property of this inference is synchronisation among an ensemble of systems that infer each other. Here, we investigate the implications for neuronal dynamics by simulating neuronal networks, where each neuron minimises its free energy. We cast the ensuing ensemble dynamics in terms of inference and show that cardinal behaviours of neuronal networks – both in vivo and in vitro – can be explained by this framework. In particular, we test the hypotheses that (i) generalised synchrony is an emergent property of free energy minimisation, thereby explaining synchronisation in the resting brain; (ii) desynchronisation is induced by exogenous input, thereby explaining event-related desynchronisation; and (iii) structure learning emerges in response to causal structure in exogenous input, thereby explaining functional segregation in real neuronal systems.
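
    The following toy simulation illustrates the first hypothesis under strong simplifying assumptions of mine (not the paper's model): each unit performs noisy gradient descent on a local quadratic "free energy" whose predictions are its neighbours' states, and the ensemble drifts toward generalised synchrony.

```python
# Toy ensemble: each unit descends the gradient of a local quadratic
# "free energy" F_i = 0.5 * sum_j (x_i - x_j)^2, i.e. it treats its
# neighbours' states as predictions of its own state. Shared
# minimisation pulls the ensemble toward generalised synchrony.
import numpy as np

rng = np.random.default_rng(1)
n_units, steps, lr = 8, 500, 0.05
x = rng.standard_normal(n_units)          # random initial states

for _ in range(steps):
    grad = n_units * x - x.sum()          # dF_i/dx_i = sum_j (x_i - x_j)
    x -= lr * grad + 0.01 * rng.standard_normal(n_units)  # noisy descent

print("state dispersion after descent:", x.std())  # near 0: synchronised
```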

    Multi-Scale Information, Network, Causality, and Dynamics: Mathematical Computation and Bayesian Inference to Cognitive Neuroscience and Aging

    The human brain is estimated to contain 100 billion or so neurons and 10 thousand times as many connections. Neurons never function in isolation: each of them is connected to 10,000 others, and they interact extensively every millisecond. Brain cells are organized into neural circuits, often in a dynamic way, processing specific types of information and providing th…

    Learning with Surprise: Theory and Applications

    Everybody knows what it feels like to be surprised. Surprise raises our attention and is crucial for learning. It is a ubiquitous concept whose traces have been found in both neuroscience and machine learning. However, a comprehensive theory has not yet been developed that addresses fundamental problems about surprise: (1) surprise is difficult to quantify. How should we measure the level of surprise when we encounter an unexpected event? What is the link between surprise and startle responses in behavioral biology? (2) the key role of surprise in learning is somewhat unclear. We believe that surprise drives attention and modifies learning; but how should surprise be incorporated into general paradigms of learning? and (3) can we develop a biologically plausible theory that explains how surprise can be neurally calculated and implemented in the brain? I propose a theoretical framework to address these issues. The framework has three components: (1) a subjective, confidence-adjusted measure of surprise that can be used for quantification purposes; (2) a surprise-minimization learning rule that models the role of surprise in learning by balancing the relative contributions of new and old data to inference about the world; and (3) a surprise-modulated Hebbian plasticity rule that can be implemented in both artificial and spiking neural networks. The proposed online rule links surprise to the activity of the neuromodulatory system in the brain and belongs to the class of neo-Hebbian plasticity rules. My work on the foundations of surprise provides a suitable framework for future studies on learning with surprise. Reinforcement learning methods can be enhanced by incorporating the proposed theory of surprise. The theory could ultimately become interesting for the analysis of fMRI and EEG data. It may also inspire new synaptic plasticity rules that are under the simultaneous control of reward and surprise. Moreover, the proposed theory can be used to make testable predictions about the time course of the neural substrate of surprise (e.g., noradrenaline), and suggests behavioral experiments that can be performed on real animals to study surprise-related neural activity.
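
    A minimal sketch of the surprise-modulated learning idea, using my own stand-in definitions rather than the thesis's actual ones: surprise is taken as the negative log-probability of an observation under the current Gaussian belief, and a squashed version of it sets the learning rate, so unexpected data shift the balance from old beliefs toward new evidence.

```python
# Surprise-modulated online estimation of a drifting mean.
# Surprise = -log p(x | current belief); high surprise boosts the
# learning rate, so unexpected samples update the belief more strongly.
# The exact surprise measure and modulation here are illustrative.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.0, 1.0          # current Gaussian belief about the world
base_lr = 0.05

def surprise(x, mu, sigma):
    """Shannon surprise: negative log-likelihood under the belief."""
    return 0.5 * ((x - mu) / sigma) ** 2 + np.log(sigma * np.sqrt(2 * np.pi))

true_mean = 0.0
for t in range(2000):
    if t == 1000:
        true_mean = 5.0                     # abrupt change -> high surprise
    x = true_mean + rng.standard_normal()
    s = surprise(x, mu, sigma)
    lr = base_lr + (1 - base_lr) * np.tanh(s / 10)  # surprise raises the rate
    mu += lr * (x - mu)                     # new data vs. old belief trade-off

print("final estimate:", mu)  # tracks the post-change mean near 5
```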

    Statistical approaches for synaptic characterization

    Synapses are fascinatingly complex transmission units. One of the fundamental features of synaptic transmission is its stochasticity, as neurotransmitter release exhibits variability and possible failures. It is also quantised: postsynaptic responses to presynaptic stimulations are built up of several similar quanta of current, each of them arising from the release of one presynaptic vesicle. Moreover, synapses are dynamic transmission units, as their activity depends on the history of previous spikes and stimulations, a phenomenon known as synaptic plasticity. Finally, synapses exhibit a very broad range of dynamics, features, and connection strengths, depending on neuromodulator concentration [5], the age of the subject [6], their localization in the CNS or the PNS, and the type of neuron [7].

    Addressing the complexity of synaptic transmission is a relevant problem for both biologists and theoretical neuroscientists. From a biological perspective, a finer understanding of transmission mechanisms would make it possible to study synapse-related diseases, or to determine the locus of plasticity and homeostasis. From a theoretical perspective, different normative explanations for synaptic stochasticity have been proposed, including its possible role in uncertainty encoding, energy-efficient computation, or generalization during learning. A precise description of synaptic transmission will be critical for the validation of these theories and for understanding the functional relevance of this probabilistic and dynamical release. A central issue, common to all these areas of research, is the problem of synaptic characterization. Synaptic characterization (also called synaptic interrogation [8]) refers to a set of methods for exploring synaptic functions, inferring the value of synaptic parameters, and assessing features such as plasticity and modes of release. This doctoral work sits at the crossroads of experimental and theoretical neuroscience: its main aim is to develop statistical tools and methods to improve synaptic characterization, and hence to bring quantitative solutions to biological questions. In this thesis, we focus on model-based approaches to quantifying synaptic transmission, for which different methods are reviewed in Chapter 3. By fitting a generative model of postsynaptic currents to experimental data, it is possible to infer the value of the synapse's parameters. By performing model selection, we can compare different models of a synapse and thus quantify its features. The main goal of this thesis is thus to develop theoretical and statistical tools to improve the efficiency of both model fitting and model selection.

    A first question that often arises when recording synaptic currents is how to precisely observe and measure quantal transmission. As mentioned above, synaptic transmission has been observed to be quantised: the opening of a single presynaptic vesicle (and the release of the neurotransmitters it contains) creates a stereotypical postsynaptic current q, which is called the quantal amplitude. As the number of activated presynaptic vesicles increases, the total postsynaptic current increases in step-like increments of amplitude q. Hence, at chemical synapses, the postsynaptic responses to presynaptic stimulations are built up of k quanta of current, where k is a random variable corresponding to the number of open vesicles. Excitatory postsynaptic currents (EPSCs) thus follow a multimodal distribution, where each component has its mean located at a multiple kq, with k ∈ ℕ, and a width corresponding to the recording noise σ. If σ is large with respect to q, these components fuse into a unimodal distribution, precluding the identification of quantal transmission and the computation of q. How can we characterize the regime of parameters in which quantal transmission can be identified? This question led us to define a practical identifiability criterion for statistical models, which is presented in Chapter 4. In doing so, we also derive a mean-field approach for fast likelihood computation (Appendix A) and discuss the possibility of using the Bayesian Information Criterion (a classically used model selection criterion) with correlated observations (Appendix B).

    A second question, especially relevant for experimentalists, is how to optimally stimulate the presynaptic cell in order to maximize the informativeness of the recordings. The parameters of a chemical synapse (namely, the number of presynaptic vesicles N, their release probability p, the quantal amplitude q, the short-term depression time constant τD, etc.) cannot be measured directly, but can be estimated from the synapse's postsynaptic responses to evoked stimuli. However, these estimates critically depend on the stimulation protocol being used. For instance, if inter-spike intervals are too large, no short-term plasticity will appear in the recordings; conversely, too high a stimulation frequency will deplete the presynaptic vesicles and yield poorly informative postsynaptic currents. How can we perform Optimal Experiment Design (OED) for synaptic characterization? We developed an Efficient Sampling-Based Bayesian Active Learning (ESB-BAL) framework, which is efficient enough to be used in real-time biological experiments (Chapter 5), and we propose a link between our definition of practical identifiability and Optimal Experiment Design for model selection (Chapter 6).

    Finally, a third biological question to which we aim to bring a theoretical answer is how to make sense of the observed organization of synaptic proteins. Microscopy observations have shown that presynaptic release sites and postsynaptic receptors are organized in ring-like patterns, which are disrupted upon genetic mutations. In Chapter 7, we propose a normative approach to this protein organization, and suggest that it might optimize a certain biological cost function (e.g., the mean current or SNR after vesicle release). The different theoretical tools and methods developed in this thesis are general enough to be applicable not only to synaptic characterization, but also to different experimental settings and systems studied in physiology. Overall, we expect to democratize and simplify the use of quantitative and normative approaches in biology, thus reducing the cost of experimentation in physiology and paving the way to more systematic and automated experimental designs.
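
    A minimal generative sketch of the quantal picture described above, under assumptions that go beyond the abstract: a binomial release model with N sites, release probability p, quantal amplitude q, and Gaussian recording noise σ; the parameter recovery shown is a crude grid search, not the thesis's actual estimators.

```python
# Binomial-release generative model of evoked postsynaptic currents:
# k ~ Binomial(N, p) vesicles release, each contributing a quantum q,
# and the recorded amplitude is k*q plus Gaussian noise sigma.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N, p, q, sigma = 10, 0.4, 1.5, 0.3      # ground-truth synaptic parameters

def simulate_epscs(n_trials):
    k = rng.binomial(N, p, size=n_trials)           # vesicles released
    return k * q + sigma * rng.standard_normal(n_trials)

def log_likelihood(data, N, p, q, sigma):
    """Mixture over k in {0..N}: sum_k Binom(k | N, p) * Normal(kq, sigma)."""
    k = np.arange(N + 1)
    w = stats.binom.pmf(k, N, p)                    # mixture weights
    comp = stats.norm.pdf(data[:, None], loc=k * q, scale=sigma)
    return np.log(comp @ w).sum()

data = simulate_epscs(500)
# Crude grid search over release probability, other parameters held fixed.
grid = np.linspace(0.1, 0.9, 81)
p_hat = grid[np.argmax([log_likelihood(data, N, g, q, sigma) for g in grid])]
print("estimated p:", p_hat)            # close to the true value 0.4
```

    Note that the mixture components in log_likelihood fuse into a single mode when sigma is large relative to q, which is exactly the identifiability problem the thesis formalizes.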

    Stochastic variational learning in recurrent spiking networks

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike-train histories, and the derived learning rule has the form of a local spike-timing-dependent plasticity (STDP) rule modulated by global factors (neuromodulators) conveying information about "novelty" on statistically rigorous grounds. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose an experiment that could be performed with animals in order to test the dynamics of the predicted novelty signal.
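
    A schematic of the three-factor structure the abstract describes, with my own toy stand-ins: a local Hebbian eligibility trace gated by a global scalar "novelty" factor. The trace dynamics and the novelty signal are illustrative placeholders, not the derived rule.

```python
# Three-factor plasticity sketch: a local pre*post eligibility trace,
# gated by a global "novelty" factor, drives the weight update.
#   e <- decay * e + post (x) pre      (local, synapse-specific)
#   w <- w + lr * novelty * e          (global factor gates consolidation)
import numpy as np

rng = np.random.default_rng(4)
n_pre, n_post = 20, 5
w = 0.1 * rng.standard_normal((n_post, n_pre))
e = np.zeros_like(w)                    # eligibility traces
lr, decay = 0.01, 0.9

for step in range(1000):
    pre = (rng.random(n_pre) < 0.1).astype(float)       # presynaptic spikes
    post = (rng.random(n_post) < 0.1).astype(float)     # postsynaptic spikes
    e = decay * e + np.outer(post, pre)                 # local Hebbian trace
    novelty = rng.random()       # placeholder for a global novelty signal
    w += lr * novelty * e                               # gated consolidation

print("mean weight after learning:", w.mean())
```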

    Prediction error dependent changes in brain connectivity during associative learning

    One of the foundations of associative learning theory is that surprising events drive learning by signalling the need to update one's beliefs. It has long been suggested that plasticity of connection strengths between neurons underlies the learning of predictive associations: neural units encoding associated entities change their connectivity to encode the learned associative strength. Surprisingly, previous imaging studies have focused on correlations between regional brain activity and variables of learning models, but have neglected how these variables are reflected in changes in interregional connectivity. Dynamic Causal Models (DCMs) of neuronal populations and their effective connectivity provide a novel technique to investigate such learning-dependent changes in connection strengths. In the work presented here, I embedded computational learning models into DCMs to investigate how computational processes are reflected by changes in connectivity. These novel models were then used to explain fMRI data from three associative learning studies. The first study integrated a Rescorla-Wagner model into a DCM, using an incidental learning paradigm where auditory cues predicted the presence or absence of visual stimuli. Results showed that, even for behaviourally irrelevant probabilistic associations, prediction errors drove the consolidation of connection strengths between the auditory and visual areas. In the second study, I combined a Bayesian observer model and a nonlinear DCM, using an fMRI paradigm where auditory cues differentially predicted visual stimuli, to investigate how predictions about sensory stimuli influence motor responses. Here, the degree of striatal prediction error activity controlled the plasticity of visuo-motor connections. In a third study, I used a nonlinear DCM and data from a fear learning study to demonstrate that prediction error activity in the amygdala exerts a modulatory influence on visuo-striatal connections. Though postulated by many models and theories of learning, to our knowledge the work presented in this thesis constitutes the first direct report that prediction errors can modulate connection strength.
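
    The Rescorla-Wagner model named above updates associative strength by a prediction error; a minimal sketch follows, in which the coupling of that error to a connection strength is my own illustrative assumption rather than the thesis's estimated DCM.

```python
# Rescorla-Wagner learning: V <- V + alpha * (outcome - V), where the
# prediction error delta = outcome - V is also used here to gate a toy
# plasticity term on an inter-regional connection strength.
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.2            # learning rate
V = 0.0                # associative strength of cue -> outcome
conn = 0.5             # illustrative auditory -> visual connection strength
eta = 0.05             # plasticity rate for the connection

for trial in range(200):
    outcome = float(rng.random() < 0.8)   # cue predicts outcome on 80% of trials
    delta = outcome - V                   # prediction error
    V += alpha * delta                    # Rescorla-Wagner update
    conn += eta * abs(delta) * (1 - conn) # error-gated consolidation (toy)

print(f"V = {V:.2f}, connection = {conn:.2f}")
```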

    Unconscious Inference in Neuronal Networks: Electrophysiology and Learning Theory (神経回路網における無意識的推論 : 電気生理と学習理論)

    Type of degree: Doctoral degree (course-based). Examination committee: (Chair) Professor Yasuhiko Jimbo (神保 泰彦), The University of Tokyo; Professor Toru Torii (鳥居 徹), The University of Tokyo; Visiting Professor Tomoki Fukai (深井 朋樹), RIKEN; Associate Professor Kiyoshi Kotani (小谷 潔), The University of Tokyo; Lecturer Hirokazu Takahashi (高橋 宏知), The University of Tokyo. University of Tokyo (東京大学)