
    Neural decoding with visual attention using sequential Monte Carlo for leaky integrate-and-fire neurons

    How the brain makes sense of a complicated environment is an important question, and a first step is to be able to reconstruct the stimulus that gives rise to an observed brain response. Neural coding relates neurobiological observations to external stimuli using computational methods. Encoding refers to how a stimulus affects the neuronal output, and entails constructing a neural model and estimating its parameters. Decoding refers to reconstructing the stimulus that led to a given neuronal output. Existing decoding methods rarely explain neuronal responses to complicated stimuli in a principled way. Here we perform neural decoding for a mixture of multiple stimuli using the leaky integrate-and-fire model of neural spike trains, under the visual attention hypothesis of probability mixing, in which the neuron attends to only a single stimulus at any given time. We assume either a parallel or a serial visual search mechanism when decoding multiple simultaneous neurons. We consider one or several stochastic stimuli following Ornstein-Uhlenbeck processes, and dynamic neuronal attention that switches according to discrete Markov processes. To decode stimuli in such situations, we develop several sequential Monte Carlo particle methods for the different settings. The likelihood of the observed spike trains is obtained from the first-passage-time probabilities computed by solving the Fokker-Planck equations. We show that the stochastic stimuli can be successfully decoded by sequential Monte Carlo, and that the particle methods perform differently depending on the number of observed spike trains, the number of stimuli, model complexity, etc. The proposed decoding methods, which analyze neural data through psychological theories of visual attention, provide new perspectives for understanding the brain.
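The generative model named in this abstract, an Ornstein-Uhlenbeck stimulus driving a leaky integrate-and-fire neuron, can be illustrated with a minimal forward simulation. This is a sketch only: all parameter values (`theta`, `mu`, `sigma`, `tau_m`, the thresholds) are illustrative assumptions, not the thesis's settings, and the decoding machinery (particle filters, Fokker-Planck likelihoods) is not shown.

```python
import numpy as np

def simulate_ou_lif(T=1.0, dt=1e-3, theta=5.0, mu=1.5, sigma=0.5,
                    tau_m=0.02, v_thresh=1.0, v_reset=0.0, seed=0):
    """Simulate an Ornstein-Uhlenbeck stimulus driving a leaky
    integrate-and-fire neuron; return the stimulus path and spike times."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)              # OU stimulus path
    x[0] = mu
    v = v_reset                  # membrane potential
    spikes = []
    for t in range(1, n):
        # OU update (Euler-Maruyama): dx = theta*(mu - x) dt + sigma dW
        x[t] = x[t-1] + theta * (mu - x[t-1]) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
        # LIF update: dv = (x - v)/tau_m dt; fire and reset at threshold
        v += (x[t] - v) / tau_m * dt
        if v >= v_thresh:
            spikes.append(t * dt)
            v = v_reset
    return x, spikes
```

In the decoding problem the stimulus path `x` is hidden and only `spikes` is observed; a particle filter would propagate candidate OU paths and weight them by the spike-train likelihood.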

    Dynamic models of brain imaging data and their Bayesian inversion

    This work is about understanding the dynamics of neuronal systems, in particular with respect to brain connectivity. It addresses complex neuronal systems by looking at neuronal interactions and their causal relations. These systems are characterized using a generic approach to the dynamical-systems analysis of brain signals: dynamic causal modelling (DCM). DCM is a technique for inferring directed connectivity among brain regions, which distinguishes between a neuronal and an observation level. DCM is a natural extension of the convolution models used in the standard analysis of neuroimaging data. This thesis develops biologically constrained and plausible models, informed by anatomical and physiological principles. Within this framework, it uses mathematical formalisms of neural-mass, mean-field and ensemble dynamic causal models as generative models for observed neuronal activity. These models allow for the evaluation of intrinsic neuronal connections and of higher-order statistics of neuronal states, using Bayesian estimation and inference. Critically, it employs Bayesian model selection (BMS) to discover the best among several equally plausible models. In the first part of this thesis, a two-state DCM for functional magnetic resonance imaging (fMRI) is described, in which each region can model selective changes in both extrinsic and intrinsic connectivity. The second part is concerned with how the sigmoid activation function of neural-mass models (NMMs) can be understood in terms of the variance or dispersion of neuronal states. The third part presents a mean-field model (MFM) of neuronal dynamics as observed with magneto- and electroencephalographic (M/EEG) data. In the final part, the MFM is used as a generative model in a DCM for M/EEG and compared to the NMM using Bayesian model selection.
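Bayesian model selection, as used throughout this thesis, scores competing generative models by their (approximate) model evidence. A minimal sketch of the idea, using a BIC approximation for linear-Gaussian models instead of DCM's variational free energy (the function names and design matrices below are hypothetical):

```python
import numpy as np

def bic_log_evidence(y, X):
    """Approximate log model evidence of a linear-Gaussian model
    y = X b + e via a BIC-style score: fit term minus a complexity
    penalty of 0.5 * k * log(n) for k parameters and n samples."""
    n, k = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]       # maximum-likelihood fit
    rss = np.sum((y - X @ b) ** 2)                 # residual sum of squares
    return -0.5 * n * np.log(rss / n) - 0.5 * k * np.log(n)

def select_model(y, designs):
    """Pick the design matrix with the highest approximate evidence."""
    scores = [bic_log_evidence(y, X) for X in designs]
    return int(np.argmax(scores)), scores
```

The penalty term embodies the same Occam principle as BMS: among models that fit equally well, the one with fewer effective parameters wins.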

    Hemodynamic Deconvolution Demystified: Sparsity-Driven Regularization at Work

    Deconvolution of the hemodynamic response is an important step towards accessing the short timescales of brain activity recorded by functional magnetic resonance imaging (fMRI). Although conventional deconvolution algorithms have been around for a long time (e.g., Wiener deconvolution), recent state-of-the-art methods based on sparsity-pursuing regularization are attracting increasing interest for investigating brain dynamics and connectivity with fMRI. This technical note revisits the main concepts underlying two such methods, Paradigm Free Mapping and Total Activation, in the most accessible way. Despite their apparent differences in formulation, these methods are theoretically equivalent, as they represent the synthesis and analysis sides of the same problem, respectively. We demonstrate this equivalence in practice with their best-available implementations, using both simulations with different signal-to-noise ratios and experimental fMRI data acquired during a motor task and at rest. We evaluate the parameter settings that lead to equivalent results, and showcase the potential of these algorithms compared with other common approaches. This note is useful for practitioners interested in gaining a better understanding of state-of-the-art hemodynamic deconvolution, and aims to answer questions that practitioners often have regarding the differences between the two methods.
    Comment: 18 pages, 6 figures, submitted to Apertur
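In the synthesis formulation that underlies Paradigm Free Mapping, deconvolution is posed as an l1-regularized least-squares problem: minimize 0.5*||y - Hs||^2 + lam*||s||_1, where H convolves the activity-inducing signal s with the hemodynamic response. A minimal sketch using plain ISTA (iterative soft-thresholding); the actual toolboxes use more elaborate solvers and hemodynamic kernels, and `ista_deconvolve` with its parameters is purely illustrative:

```python
import numpy as np

def ista_deconvolve(y, h, lam=0.1, n_iter=500):
    """Sparse deconvolution by ISTA: gradient step on the quadratic
    fit term, then elementwise soft-thresholding for the l1 penalty.
    H is the lower-triangular Toeplitz matrix of causal convolution."""
    n = len(y)
    H = np.zeros((n, n))
    for i in range(len(h)):
        H += np.diag(np.full(n - i, h[i]), -i)     # place h[i] on subdiagonal i
    L = np.linalg.norm(H, 2) ** 2                  # Lipschitz const. of gradient
    s = np.zeros(n)
    for _ in range(n_iter):
        grad = H.T @ (H @ s - y)                   # gradient of 0.5*||y - Hs||^2
        z = s - grad / L
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return s
```

The analysis (Total Activation) side regularizes a differential operator applied to the signal instead, but, as the note argues, the two reach equivalent solutions.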

    Hierarchical Neural Computation in the Mammalian Visual System

    Our visual system can efficiently extract behaviorally relevant information from ambiguous and noisy luminance patterns. Although we know much about the anatomy and physiology of the visual system, it remains obscure how the computation performed by individual visual neurons is constructed from the underlying neural circuits. In this thesis, I designed novel statistical modeling approaches to study hierarchical neural computation, using electrophysiological recordings from several stages of the mammalian visual system. In Chapter 2, I describe a two-stage nonlinear model that characterizes both the synaptic currents and the spike responses of retinal ganglion cells with unprecedented accuracy. I found that excitatory synaptic currents to ganglion cells are well described by excitatory inputs multiplied by divisive suppression, and that spike responses can be explained with the addition of a second stage comprising a spiking nonlinearity and refractoriness. The structure of the model was inspired by known elements of the retinal circuit, and implies that presynaptic inhibition from amacrine cells is an important mechanism underlying ganglion cell computation. In Chapter 3, I describe a hierarchical stimulus-processing model of MT neurons in the context of a naturalistic optic flow stimulus. The model incorporates relevant nonlinear properties of upstream V1 processing and explains MT neuron responses to complex motion stimuli. MT neuron responses are shown to be best predicted from distinct excitatory and suppressive components. The direction-selective suppression can impart selectivity of MT neurons to complex velocity fields and contribute to improved estimation of the three-dimensional velocity of moving objects. In Chapter 4, I present an extended model of MT neurons that includes both the stimulus-processing component and network activity reflected in local field potentials (LFPs). A significant fraction of the trial-to-trial variability of MT neuron responses is predictable from the LFPs, in both passive fixation and a motion discrimination task. Moreover, the choice-related variability of MT neuron responses can be explained by their phase preferences in low-frequency LFP bands. These results suggest an important role for network activity in cortical function. Together, they demonstrate that it is possible to infer the nature of neural computation from physiological recordings using statistical modeling approaches.
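The "excitatory inputs multiplied by divisive suppression" cascade described for retinal ganglion cells can be sketched in a few lines. This is a generic stand-in for that model class, not the fitted model from the thesis: the filters, the saturation constant `alpha`, and the power-law exponent `beta` are all illustrative assumptions.

```python
import numpy as np

def two_stage_response(stim, w_exc, w_sup, alpha=1.0, beta=2.0):
    """Two-stage LN cascade: a rectified excitatory drive is divided by
    a rectified suppressive signal, then passed through a power-law
    spiking nonlinearity to produce a firing rate."""
    exc = np.maximum(stim @ w_exc, 0.0)      # excitatory filter output, rectified
    sup = np.maximum(stim @ w_sup, 0.0)      # suppressive filter output, rectified
    current = exc / (1.0 + alpha * sup)      # divisive suppression
    rate = np.maximum(current, 0.0) ** beta  # spiking nonlinearity
    return rate
```

Setting `alpha=0` switches the suppression off, which makes the model's key structural claim easy to probe: with suppression on, the predicted rate can never exceed the suppression-free rate.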

    Sample Path Analysis of Integrate-and-Fire Neurons

    Computational neuroscience is concerned with answering two intertwined questions, both based on the assumption that spatio-temporal patterns of spikes form the universal language of the nervous system. First, what function does a specific neural circuit perform in the elaboration of a behavior? Second, how do neural circuits process behaviorally relevant information? Non-linear systems analysis has proven instrumental in understanding the coding strategies of early neural processing in various sensory modalities. Yet, at higher levels of integration, it fails to help in deciphering the responses of assemblies of neurons to complex naturalistic stimuli. While neural activity can be assumed to be primarily driven by the stimulus at early stages of processing, at the cortical level the intrinsic activity of neural circuits interacts with their high-dimensional input and transforms it in a stochastic, non-linear fashion. As a consequence, any attempt to fully understand the brain through a systems-analysis approach becomes illusory. However, it is increasingly advocated that neural noise plays a constructive role in neural processing, facilitating information transmission. This prompts us to seek insight into the neural code by studying the stochasticity of neuronal activity, which is viewed as biologically relevant. Such an endeavor requires guiding theoretical principles to assess the potential benefits of neural noise. In this context, meeting the requirements of biological relevance and computational tractability while providing a stochastic description of neural activity prescribes the adoption of the integrate-and-fire model. In this thesis, grounded in the path-wise description of neuronal activity, we further the stochastic analysis of the integrate-and-fire model through a combination of numerical and theoretical techniques. To begin, we expand upon the path-wise construction of linear diffusions, which offers a natural setting to describe leaky integrate-and-fire neurons, as inhomogeneous Markov chains. Based on the theoretical analysis of the first-passage problem, we then explore the interplay between internal neuronal noise and the statistics of injected perturbations at the single-unit level, and examine its implications for neural coding. At the population level, we also develop an exact event-driven implementation of a Markov network of perfect integrate-and-fire neurons with both time-delayed and instantaneous interactions and arbitrary topology. We hope our approach will provide new paradigms for understanding how sensory inputs perturb neural intrinsic activity, and will help accomplish the goal of developing a new technique for identifying relevant patterns of population activity. From a perturbative perspective, our study shows how injecting frozen noise in different flavors can help characterize internal neuronal noise, which is presumably functionally relevant to information processing. From a simulation perspective, our event-driven framework is amenable to scrutinizing the stochastic behavior of simple recurrent motifs as well as the temporal dynamics of large-scale networks under spike-timing-dependent plasticity.
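Event-driven simulation of perfect integrate-and-fire neurons exploits the fact that between events the membrane potential grows linearly, so the next threshold crossing is known in closed form and no time stepping is needed. The sketch below, which is not the thesis's implementation, uses a priority queue of events and a per-neuron version counter to discard crossing predictions that became stale when a delayed spike arrived; all parameters are illustrative.

```python
import heapq

def simulate_pif_network(weights, drive, delay, theta=1.0, t_max=10.0):
    """Event-driven simulation of perfect integrate-and-fire neurons with
    delayed interactions. weights[i][j] is the jump applied to neuron j
    when neuron i fires, delivered after `delay`."""
    n = len(drive)
    v = [0.0] * n                 # membrane potentials
    last_t = [0.0] * n            # time of each neuron's last state update
    seq = [0] * n                 # version counter to invalidate stale events
    pq = []                       # (time, kind, neuron, payload) min-heap
    spikes = []

    def schedule_crossing(i, now):
        if drive[i] > 0.0:        # only driven neurons cross on their own
            heapq.heappush(pq, (now + (theta - v[i]) / drive[i], 0, i, seq[i]))

    def fire(i, t):
        spikes.append((t, i))
        v[i] = 0.0                # reset to baseline
        seq[i] += 1
        for j in range(n):        # schedule delayed deliveries to targets
            if weights[i][j] != 0.0:
                heapq.heappush(pq, (t + delay, 1, j, weights[i][j]))
        schedule_crossing(i, t)

    for i in range(n):
        schedule_crossing(i, 0.0)
    while pq and pq[0][0] <= t_max:
        t, kind, i, payload = heapq.heappop(pq)
        v[i] += drive[i] * (t - last_t[i])   # integrate linearly up to event
        last_t[i] = t
        if kind == 0:                        # predicted threshold crossing
            if payload == seq[i]:            # still valid for current state?
                fire(i, t)
        else:                                # synaptic arrival: jump by weight
            v[i] += payload
            seq[i] += 1                      # invalidate scheduled crossing
            if v[i] >= theta:
                fire(i, t)
            else:
                schedule_crossing(i, t)
    return spikes
```

Because every state change is processed at its exact event time, the spike times are exact up to floating-point precision rather than discretized to a simulation grid.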

    Bayesian Parametric Receptive-Field Identification from Sparse or Noisy Data

    Characterizing the stimulus selectivity of sensory neurons is an important step towards understanding how information about the world is represented in the brain. However, this is a computationally challenging task, in particular due to the probabilistic nature of the relationship between external stimuli and neural responses and the high dimensionality of the space of natural stimuli. State-of-the-art receptive-field identification methods based on empirical Bayes scale poorly to high-dimensional settings, and computationally efficient implementations rely on stringent assumptions about the spike-generation process. Furthermore, these models fail to provide principled credible intervals for experimentally relevant parameters, making it hard to propagate uncertainty for hypothesis testing in regimes of sparse or noisy data. Here, we present a fully Bayesian approach to identifying receptive fields in sparse data regimes, which also provides a principled quantification of the estimation uncertainty. We take advantage of the fact that, for many sensory areas, there are canonical models that explain how neurons encode their inputs into firing rates. These models usually rely on few, interpretable parameters and can be used to constrain the space of receptive fields that can explain the data. While such models may not be flexible enough to capture all nuances of a particular receptive field, they can be effective for obtaining a fast characterization of the encoding properties of a neuron. We perform Bayesian inference directly on these model parameters and show that we can detect the presence of a receptive field with a few tenths of the measured spikes under physiological conditions. Furthermore, we investigate how different amounts of data constrain the model parameters, and we illustrate how a fully Bayesian approach can be used to test competing hypotheses and to characterize a dataset of real, sparsely sampled neurons. In this work our focus is on modeling neurons in visual cortical areas, but our flexible approach has the potential to generalize to neurons in other brain areas with different input-output properties.
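Inference directly on a few interpretable parameters of a canonical encoding model can be illustrated with a grid posterior: for a one-parameter Gaussian tuning curve with Poisson spiking, the posterior over the preferred stimulus follows from the Poisson log-likelihood evaluated on a grid. This toy example is an assumption-laden sketch (the tuning-curve form, `width`, `gain`, and `base` are all hypothetical), not the models or inference machinery of the thesis.

```python
import numpy as np

def posterior_preferred_stim(stim, counts, grid, width=0.5, gain=5.0, base=0.5):
    """Grid posterior over the preferred stimulus mu of a Gaussian tuning
    curve rate(s; mu) = base + gain * exp(-(s - mu)^2 / (2 width^2)),
    assuming Poisson spike counts and a flat prior on the grid."""
    log_post = np.zeros(len(grid))
    for k, mu in enumerate(grid):
        rate = base + gain * np.exp(-(stim - mu) ** 2 / (2 * width ** 2))
        # Poisson log-likelihood, dropping the mu-independent log(counts!) term
        log_post[k] = np.sum(counts * np.log(rate) - rate)
    log_post -= log_post.max()              # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

Because the full normalized posterior is available, credible intervals for the parameter come for free, which is exactly the kind of uncertainty quantification the abstract argues point-estimate methods lack.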

    Statistical approaches for resting state fMRI data analysis

    This doctoral dissertation investigates methodology for exploring brain dynamics from resting-state fMRI data. A standard resting-state fMRI study gives rise to massive amounts of noisy data with a complicated spatio-temporal correlation structure. There are two main objectives in the analysis of these noisy data: establishing the link between neural activity and the measured signal, and determining distributed brain networks that correspond to brain function. These measures can then be used as indicators of psychological, cognitive or pathological states. Two main issues are addressed: retrieving and interpreting the hemodynamic response function (HRF) at rest, and dealing with the redundancy inherent to fMRI data. Novel approaches are introduced, discussed and validated on simulated data and on real datasets, in health and disease, in order to track the modulation of brain dynamics and of the HRF across different pathophysiological conditions.
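The HRF, the link between neural activity and the measured BOLD signal discussed above, is commonly modeled as a double-gamma function convolved with the neural time course. The sketch below uses one common SPM-style parameterization with unit dispersion; the exact shape parameters (`a1`, `a2`, `ratio`) vary between packages and are illustrative here.

```python
import numpy as np
from math import gamma

def double_gamma_hrf(t, a1=6.0, a2=16.0, ratio=1/6):
    """Canonical double-gamma HRF: a gamma-shaped peak minus a scaled,
    later gamma-shaped undershoot, normalized to unit peak."""
    t = np.asarray(t, dtype=float)
    peak = t ** (a1 - 1) * np.exp(-t) / gamma(a1)
    undershoot = t ** (a2 - 1) * np.exp(-t) / gamma(a2)
    h = peak - ratio * undershoot
    return h / h.max()

def bold_prediction(neural, dt=0.5, hrf_len=32.0):
    """Predict the BOLD time course by convolving a neural activity
    time course (sampled every dt seconds) with the HRF."""
    t = np.arange(0.0, hrf_len, dt)
    h = double_gamma_hrf(t)
    return np.convolve(neural, h)[: len(neural)]
```

Deconvolution, the first of the two issues named in the abstract, is the inverse of `bold_prediction`: recovering `neural` from the measured signal, which is ill-posed because the HRF smooths over several seconds.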

    29th Annual Computational Neuroscience Meeting: CNS*2020

    Meeting abstracts. This publication was funded by OCNS. The Supplement Editors declare that they have no competing interests. Virtual | 18-22 July 2020

    Modelling human choices: MADeM and decision‑making

    Research supported by FAPESP 2015/50122-0 and DFG-GRTK 1740/2. RP and AR are also part of the Research, Innovation and Dissemination Center for Neuromathematics FAPESP grant (2013/07699-0). RP is supported by a FAPESP scholarship (2013/25667-8). ACR is partially supported by a CNPq fellowship (grant 306251/2014-0)