
    Transient cognitive dynamics, metastability, and decision making

    Transient Cognitive Dynamics, Metastability, and Decision Making. Rabinovich et al. PLoS Computational Biology. 2008. 4(5). doi:10.1371/journal.pcbi.1000072. The idea that cognitive activity can be understood using nonlinear dynamics has been discussed intensively for the last 15 years. One of the popular points of view is that metastable states play a key role in the execution of cognitive functions. Experimental and modeling studies suggest that most of these functions are the result of transient activity of large-scale brain networks in the presence of noise. Such transients may consist of sequential switching between different metastable cognitive states. The main problem faced when using dynamical theory to describe transient cognitive processes is the fundamental contradiction between the reproducibility and flexibility of transient behavior. In this paper, we propose a theoretical description of transient cognitive dynamics based on the interaction of functionally dependent metastable cognitive states. The mathematical image of such transient activity is a stable heteroclinic channel, i.e., a set of trajectories in the vicinity of a heteroclinic skeleton that consists of saddles and unstable separatrices connecting their surroundings. We suggest a basic mathematical model, a strongly dissipative dynamical system, and formulate the conditions for the robustness and reproducibility of cognitive transients that satisfy the competing requirements for stability and flexibility. Based on this approach, we describe an effective solution to the problem of sequential decision making, represented as a fixed-time game: a player takes sequential actions in a changing noisy environment so as to maximize a cumulative reward. As we predict and verify in computer simulations, noise plays an important role in optimizing the gain. This work was supported by ONR N00014-07-1-0741. PV acknowledges support from Spanish BFU2006-07902/BFI and CAM S-SEM-0255-2006.
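
    The strongly dissipative model referred to above is usually written as generalized Lotka-Volterra (winnerless competition) equations, in which asymmetrically inhibiting modes produce a stable heteroclinic channel. The following minimal Python sketch illustrates that construction; the number of modes, the growth rates, the inhibition matrix, and the noise level are assumed for illustration and are not the parameters used in the paper.

    import numpy as np

    # Generalized Lotka-Volterra sketch of a stable heteroclinic channel:
    #   dx_i/dt = x_i * (sigma_i - sum_j rho_ij * x_j) + noise.
    # Asymmetric inhibition (weak onto the successor, strong onto everyone else)
    # yields winnerless competition: sequential switching between metastable states.

    rng = np.random.default_rng(0)
    n = 4                                        # number of competing cognitive modes (assumed)
    sigma = np.array([1.0, 1.05, 0.95, 1.0])     # growth rates (assumed)
    rho = np.full((n, n), 2.0)                   # strong mutual inhibition (assumed)
    np.fill_diagonal(rho, 1.0)                   # self-inhibition
    for i in range(n):
        rho[(i + 1) % n, i] = 0.5                # each mode inhibits its successor only weakly

    dt, steps, noise = 0.01, 60_000, 1e-6
    x = np.full(n, 0.1) + 0.01 * rng.random(n)
    trace = np.empty((steps, n))
    for t in range(steps):
        dx = x * (sigma - rho @ x)
        x = np.clip(x + dt * dx + np.sqrt(dt) * noise * rng.standard_normal(n), 1e-12, None)
        trace[t] = x

    # The dominant mode switches in a reproducible cyclic order, the signature of a
    # heteroclinic sequence of metastable states; the noise level controls the switching times.
    dominant = np.argmax(trace, axis=1)
    print(dominant[np.r_[True, np.diff(dominant) != 0]][:12])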

    Dynamics of biologically informed neural mass models of the brain

    This book contributes to the development and analysis of computational models that help us understand brain function. The mean activity of a brain area is modeled mathematically so as to strike a balance between tractability and biological plausibility. Neural mass models (NMM) are used to describe switching between qualitatively different regimes (such as those due to pharmacological interventions, epilepsy, sleep, or context-induced state changes), and to explain resonance phenomena in a photic driving experiment. Describing varying states as an ordered sequence yields a principled scheme for modeling complex phenomena on multiple time scales. The NMM is matched to the photic driving experiment routinely applied in the diagnosis of diseases such as epilepsy, migraine, schizophrenia, and depression. The model reproduces the clinically relevant entrainment effect, and predictions are made for improving the experimental setting.

    The present work contributes to the development and analysis of computational models for understanding brain function. The mean activity of a brain area is modeled in a way that is analytically simple yet biologically plausible. On the basis of a neural mass model (NMM), switches between oscillatory regimes (e.g., state changes induced pharmacologically or by epilepsy, sleep, or context) are described as an ordered sequence, and resonance phenomena in a photic driving experiment are explained. This NMM can produce very complex dynamics (e.g., chaos) within biologically plausible parameter ranges. To assess its behavior, the NMM is fully analyzed and classified with respect to bifurcations as a function of constant inputs and characteristic time constants. This makes it possible to describe changing regimes as an ordered sequence generated by specific input trajectories. A principle is presented for representing complex phenomena through processes on different time scales. Since the inputs are frequently periodic, owing to rhythmic stimuli and the intrinsic rhythms of neuronal populations, the behavior of the NMM is characterized as a function of the intensity and frequency of a periodic stimulation by means of the associated Lyapunov spectra and time series. On the basis of the largest Lyapunov exponents, the NMM is matched to the photic driving experiment. This experiment is routinely applied in the diagnosis of disorders such as epilepsy, migraine, schizophrenia, and depression. Applying the proposed NMM reproduces the entrainment effect that is decisive for diagnosis, and predictions are made for improving the experimental protocol.
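
    As a concrete illustration of the class of neural mass models discussed here, the sketch below integrates the widely used Jansen-Rit formulation with standard literature parameter values and a sinusoidal input standing in for photic driving. It is an assumption-laden illustration of the modeling level, not the specific model analyzed in this work, and the drive amplitude and frequency are arbitrary.

    import numpy as np

    # Jansen-Rit neural mass model with standard literature parameters, driven by a
    # periodic input as a stand-in for photic stimulation. The output y1 - y2
    # approximates the mean membrane potential of the pyramidal population (EEG-like).

    A, B = 3.25, 22.0        # excitatory / inhibitory synaptic gains (mV)
    a, b = 100.0, 50.0       # inverse synaptic time constants (1/s)
    C = 135.0
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
    e0, v0, r = 2.5, 6.0, 0.56

    def S(v):
        """Population sigmoid: mean membrane potential -> mean firing rate."""
        return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

    def jansen_rit(t, y, p_mean=120.0, drive_amp=60.0, drive_freq=10.0):
        y0, y1, y2, y3, y4, y5 = y
        p = p_mean + drive_amp * np.sin(2.0 * np.pi * drive_freq * t)  # assumed photic drive
        return np.array([
            y3,
            y4,
            y5,
            A * a * S(y1 - y2) - 2.0 * a * y3 - a**2 * y0,
            A * a * (p + C2 * S(C1 * y0)) - 2.0 * a * y4 - a**2 * y1,
            B * b * C4 * S(C3 * y0) - 2.0 * b * y5 - b**2 * y2,
        ])

    dt, T = 1e-4, 5.0
    ts = np.arange(0.0, T, dt)
    y = np.zeros(6)
    eeg = np.empty(ts.size)
    for i, t in enumerate(ts):       # simple Euler integration
        y = y + dt * jansen_rit(t, y)
        eeg[i] = y[1] - y[2]         # pyramidal population potential

    # Inspect eeg for entrainment: for suitable drive intensity and frequency the
    # output locks to (sub)harmonics of the stimulation frequency.
    print(eeg[-5:].round(3))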

    Structure Learning in Coupled Dynamical Systems and Dynamic Causal Modelling

    Identifying a coupled dynamical system out of many plausible candidates, each of which could serve as the underlying generator of some observed measurements, is a profoundly ill-posed problem that commonly arises when modelling real-world phenomena. In this review, we detail a set of statistical procedures for inferring the structure of nonlinear coupled dynamical systems (structure learning), which has proved useful in neuroscience research. A key focus here is the comparison of competing models of (i.e., hypotheses about) network architectures and implicit coupling functions in terms of their Bayesian model evidence. These methods are collectively referred to as dynamic causal modelling (DCM). We focus on a relatively new approach that is proving remarkably useful; namely, Bayesian model reduction (BMR), which enables rapid evaluation and comparison of models that differ in their network architecture. We illustrate the usefulness of these techniques by modelling neurovascular coupling (the cellular pathways linking neuronal and vascular systems), whose function is an active focus of research in neurobiology and the imaging of coupled neuronal systems.
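
    To make the idea of scoring competing network architectures by their Bayesian model evidence concrete, the following sketch compares two linear-Gaussian models of the same data through their exact log marginal likelihoods. The design matrices, prior variance, and noise level are hypothetical, and this analytic toy stands in for, rather than reproduces, the variational evidence and Bayesian model reduction used in DCM.

    import numpy as np

    # Compare two hypothetical "architectures" (sets of couplings/predictors) by the
    # exact log evidence of a linear-Gaussian model:
    #   y = X w + e,  w ~ N(0, alpha I),  e ~ N(0, sigma2 I)
    #   => p(y | M) = N(y; 0, sigma2 I + alpha X X^T)

    rng = np.random.default_rng(1)
    n = 200
    x1 = rng.standard_normal(n)
    x2 = rng.standard_normal(n)
    y = 0.9 * x1 + 0.1 * rng.standard_normal(n)   # data generated by x1 alone (assumed)

    def log_evidence(X, y, alpha=1.0, sigma2=0.01):
        """Exact log marginal likelihood of a zero-mean Bayesian linear model."""
        Cov = sigma2 * np.eye(len(y)) + alpha * X @ X.T
        _, logdet = np.linalg.slogdet(Cov)
        quad = y @ np.linalg.solve(Cov, y)
        return -0.5 * (len(y) * np.log(2.0 * np.pi) + logdet + quad)

    M_full = np.column_stack([x1, x2])   # "full" architecture: both couplings present
    M_red = x1[:, None]                  # "reduced" architecture: one coupling switched off

    lf, lr = log_evidence(M_full, y), log_evidence(M_red, y)
    print(f"log evidence, full model:    {lf:.1f}")
    print(f"log evidence, reduced model: {lr:.1f}")
    print(f"log Bayes factor (reduced - full): {lr - lf:.1f}")  # positive favours the reduced model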

    Robust Transient Dynamics and Brain Functions

    In the last few decades, several concepts of dynamical systems theory (DST) have guided psychologists, cognitive scientists, and neuroscientists to rethink sensory-motor behavior and embodied cognition. A critical step in the progress of applying DST to the brain (supported by modern methods of brain imaging and multi-electrode recording techniques) has been the transfer of its initial success in motor behavior to mental function, i.e., perception, emotion, and cognition. Open questions from research in genetics, ecology, brain sciences, and related fields have changed DST itself and led to the discovery of a new dynamical phenomenon: reproducible and robust transients that are at the same time sensitive to informational signals. The goal of this review is to describe a new mathematical framework, heteroclinic sequential dynamics, for understanding self-organized activity in the brain that can explain certain aspects of robust itinerant behavior. Specifically, we discuss a hierarchy of coarse-grained models of mental dynamics in the form of kinetic equations of modes. These modes compete for resources at three levels: (i) within the same modality, (ii) among different modalities from the same family (like perception), and (iii) among modalities from different families (like emotion and cognition). The analysis of the conditions for robustness, i.e., the structural stability of transient (sequential) dynamics, allows us to explain phenomena such as the finite capacity of our sequential working memory, a vital cognitive function, and to find specific dynamical signatures, i.e., different kinds of instabilities, of several brain functions and mental diseases.
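
    For the Lotka-Volterra mode equations commonly used in this framework, a standard robustness condition is that every saddle in the heteroclinic chain is dissipative: its saddle value, the magnitude of the weakest contracting eigenvalue divided by the single expanding one, must exceed one. The sketch below checks that condition for an assumed growth-rate vector and inhibition matrix; the numbers are illustrative and not taken from the review.

    import numpy as np

    # Robustness check for a heteroclinic chain in the Lotka-Volterra mode equations
    #   dx_i/dt = x_i * (sigma_i - sum_j rho_ij x_j),   rho_ii = 1.
    # At the saddle x = sigma_k e_k the eigenvalues are
    #   -sigma_k (along direction k)   and   sigma_j - rho_jk sigma_k (j != k).
    # The chain forms a stable heteroclinic channel if each saddle value
    #   nu_k = |weakest contracting eigenvalue| / (expanding eigenvalue)
    # exceeds one.

    sigma = np.array([1.0, 1.05, 0.95, 1.0])   # growth rates (assumed)
    n = sigma.size
    rho = np.full((n, n), 2.0)                 # strong mutual inhibition (assumed)
    np.fill_diagonal(rho, 1.0)
    for i in range(n):
        rho[(i + 1) % n, i] = 0.5              # weak inhibition of each mode's successor

    for k in range(n):
        lams = np.array([-sigma[k] if j == k else sigma[j] - rho[j, k] * sigma[k]
                         for j in range(n)])
        expanding = lams[lams > 0]
        contracting = -lams[lams < 0]
        if expanding.size != 1:
            print(f"saddle {k}: more than one unstable direction, chain is broken")
            continue
        nu = contracting.min() / expanding[0]
        verdict = "dissipative" if nu > 1 else "NOT dissipative"
        print(f"saddle {k}: saddle value nu = {nu:.2f} ({verdict})")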

    Dynamic models of brain imaging data and their Bayesian inversion

    This work is about understanding the dynamics of neuronal systems, in particular with respect to brain connectivity. It addresses complex neuronal systems by looking at neuronal interactions and their causal relations. These systems are characterized using a generic approach to dynamical system analysis of brain signals: dynamic causal modelling (DCM). DCM is a technique for inferring directed connectivity among brain regions, which distinguishes between a neuronal and an observation level. DCM is a natural extension of the convolution models used in the standard analysis of neuroimaging data. This thesis develops biologically constrained and plausible models, informed by anatomical and physiological principles. Within this framework, it uses mathematical formalisms of neural mass, mean-field, and ensemble dynamic causal models as generative models for observed neuronal activity. These models allow for the evaluation of intrinsic neuronal connections and high-order statistics of neuronal states, using Bayesian estimation and inference. Critically, it employs Bayesian model selection (BMS) to discover the best among several equally plausible models. In the first part of this thesis, a two-state DCM for functional magnetic resonance imaging (fMRI) is described, in which each region can model selective changes in both extrinsic and intrinsic connectivity. The second part is concerned with how the sigmoid activation function of neural mass models (NMM) can be understood in terms of the variance or dispersion of neuronal states. The third part presents a mean-field model (MFM) for neuronal dynamics as observed with magneto- and electroencephalographic data (M/EEG). In the final part, the MFM is used as a generative model in a DCM for M/EEG and compared to the NMM using Bayesian model selection.
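
    The neuronal level of a DCM for fMRI is conventionally written as the bilinear state equation dx/dt = (A + sum_j u_j B_j) x + C u, where A is the endogenous (fixed) connectivity, each B_j encodes how input j modulates connections, and C encodes the driving inputs. The following sketch integrates this equation for two hypothetical regions; the connectivity values and input timing are assumptions for illustration, and the haemodynamic (balloon) model mapping neuronal states to BOLD is omitted.

    import numpy as np

    # Bilinear neuronal state equation used in DCM for fMRI:
    #   dx/dt = (A + sum_j u_j(t) B_j) x + C u(t)
    # Two hypothetical regions; one input drives region 0 and, while "on",
    # additionally strengthens the forward connection 0 -> 1.

    A = np.array([[-0.5,  0.0],    # endogenous connectivity (self-decay on the diagonal)
                  [ 0.2, -0.5]])
    B = np.array([[ 0.0,  0.0],    # modulation of the 0 -> 1 connection by the input
                  [ 0.3,  0.0]])
    C = np.array([[ 1.0],          # driving input enters region 0 only
                  [ 0.0]])

    dt, T = 0.01, 60.0
    ts = np.arange(0.0, T, dt)
    u = ((ts % 20.0) < 10.0).astype(float)[:, None]   # boxcar input: 10 s on, 10 s off (assumed)

    x = np.zeros(2)
    states = np.empty((ts.size, 2))
    for i in range(ts.size):
        J = A + u[i, 0] * B                           # effective connectivity at time t
        x = x + dt * (J @ x + C @ u[i])
        states[i] = x

    print(states[::1000].round(3))   # neuronal states of both regions sampled every 10 s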

    The Dynamic Brain: From Spiking Neurons to Neural Masses and Cortical Fields

    The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space–time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole-brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials, to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the physical sciences.
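
    One of the simplest mesoscopic descriptions referred to above is a Wilson-Cowan-type mean-field model of coupled excitatory and inhibitory populations. The sketch below integrates such a model; the coupling constants and sigmoid parameters follow the classic Wilson-Cowan values, but the refractory terms are dropped and the drives and time constant are assumptions, so the regime (oscillation versus fixed point) may require tuning. It illustrates the modeling level rather than any specific model from the review.

    import numpy as np

    # Wilson-Cowan-type mean-field model: mean firing rates E(t) and I(t) of coupled
    # excitatory and inhibitory populations, a minimal mesoscopic description
    # (the refractory terms of the original formulation are omitted).

    def S(x, a, theta):
        return 1.0 / (1.0 + np.exp(-a * (x - theta)))

    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0      # coupling constants (classic values)
    a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7   # sigmoid slopes and thresholds (classic values)
    P, Q = 1.25, 0.0                            # external drives (assumed)
    tau = 10.0                                  # population time constant in ms (assumed)

    dt, steps = 0.05, 40_000
    E, I = 0.1, 0.05
    trace = np.empty((steps, 2))
    for t in range(steps):
        dE = (-E + S(c1 * E - c2 * I + P, a_e, th_e)) / tau
        dI = (-I + S(c3 * E - c4 * I + Q, a_i, th_i)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace[t] = E, I

    # Inspect the E/I trajectory: depending on the drive P the populations settle
    # into a fixed point or a limit cycle (an emergent population oscillation).
    print(trace[::4000].round(3))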

    Can we identify non-stationary dynamics of trial-to-trial variability?

    Identifying the sources of apparent variability in non-stationary scenarios is a fundamental problem in many biological data analysis settings. For instance, neurophysiological responses to the same task often vary from each repetition of the same experiment (trial) to the next. The origin and functional role of this observed variability is one of the fundamental questions in neuroscience. The nature of such trial-to-trial dynamics, however, remains largely elusive to current data analysis approaches. A range of strategies has been proposed in modalities such as electroencephalography, but gaining fundamental insight into the latent sources of trial-to-trial variability in neural recordings remains a major challenge. In this paper, we present a proof-of-concept study of the analysis of trial-to-trial variability dynamics founded on non-autonomous dynamical systems. At this initial stage, we evaluate the capacity of a simple statistic based on the behaviour of trajectories in classification settings, the trajectory coherence, to identify trial-to-trial dynamics. First, we derive the conditions leading to observable changes in datasets generated by a compact dynamical system (the Duffing equation). This canonical system plays the role of a ubiquitous model of non-stationary supervised classification problems. Second, we estimate the coherence of class trajectories in an empirically reconstructed space of system states. We show how this analysis can discern variations attributable to non-autonomous deterministic processes from stochastic fluctuations. The analyses are benchmarked using simulated data and two different real datasets that have been shown to exhibit attractor dynamics. As an illustrative example, we focus on the analysis of ensemble dynamics in the rat frontal cortex during a decision-making task. The results suggest that, in line with recent hypotheses, it is a deterministic trend rather than internal noise that most likely underlies the observed trial-to-trial variability. Thus, the empirical tool developed in this study potentially allows us to infer the source of variability in in-vivo neural recordings.
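
    The sketch below illustrates the kind of analysis described: trials are generated from a Duffing oscillator whose forcing amplitude drifts slowly across trials (a deterministic, non-autonomous trend) and from one perturbed only by noise, and a simple across-trial statistic separates the two cases. The drift schedule, noise level, and the statistic itself are stand-ins for illustration; the statistic is not the paper's trajectory-coherence measure.

    import numpy as np

    # Toy trial-to-trial variability example with the Duffing equation
    #   x'' + delta x' + alpha x + beta x^3 = gamma cos(omega t).
    # "Drift" trials let gamma creep upward across trials (deterministic trend);
    # "noise" trials keep gamma fixed and add stochastic forcing instead.

    rng = np.random.default_rng(2)
    delta, alpha, beta, omega = 0.3, 1.0, 1.0, 1.2
    dt, steps, n_trials = 0.01, 4_000, 30

    def run_trial(gamma, noise_std):
        x, v = 0.0, 0.0
        traj = np.empty(steps)
        for i in range(steps):
            acc = -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * i * dt)
            v += dt * acc + noise_std * np.sqrt(dt) * rng.standard_normal()
            x += dt * v
            traj[i] = x
        return traj

    drift_trials = np.array([run_trial(0.30 + 0.004 * k, 0.0) for k in range(n_trials)])
    noise_trials = np.array([run_trial(0.30, 0.05) for _ in range(n_trials)])

    # Crude stand-in for a trajectory statistic: the RMS amplitude of each trial and
    # its correlation with the trial index. An ordered change across trials points to
    # a deterministic trend; unstructured scatter points to stochastic fluctuations.
    for name, trials in [("drift", drift_trials), ("noise", noise_trials)]:
        rms = np.sqrt((trials[:, steps // 2:] ** 2).mean(axis=1))   # discard the transient
        r = np.corrcoef(np.arange(n_trials), rms)[0, 1]
        print(f"{name}: std of RMS across trials = {rms.std():.3f}, corr(RMS, trial) = {r:.2f}")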

    Genetic determination and layout rules of visual cortical architecture

    The functional architecture of the primary visual cortex is set up by neurons that preferentially respond to visual stimuli with contours of a specific orientation in visual space. In primates and placental carnivores, orientation preference is arranged into continuous and roughly repetitive (iso-)orientation domains. Exceptions are pinwheels, which are surrounded by all orientation preferences. The configuration of pinwheels adheres to quantitative species-invariant statistics, the common design. This common design most likely evolved independently at least twice over the past 65 million years, which might indicate a functionally advantageous trait. The possible acquisition of environment-dependent functional traits by genes, the Baldwin effect, makes it conceivable that visual cortical architecture is partially or redundantly encoded by genetic information. In this conception, genetic mechanisms support the emergence of visual cortical architecture or even establish it under unfavorable environments. In this dissertation, I examine the capability of genetic mechanisms to encode visual cortical architecture and mathematically dissect the pinwheel configuration under measurement noise as well as in different geometries. First, I theoretically explore possible roles of genetic mechanisms in visual cortical development that were previously excluded from theoretical research, mostly because the information capacity of the genome appeared too small to contain a blueprint for wiring up the cortex. For the first time, I provide a biologically plausible scheme for quantitatively encoding functional visual cortical architecture in genetic information that circumvents the alleged information bottleneck. Key ingredients of this mechanism are active transport and trans-neuronal signaling, as well as the joint dynamics of morphogens and the connectome. This theory provides predictions for experimental tests and thus may help to clarify the relative importance of genes and environment for complex human traits. Second, I disentangle the link between orientation domain ensembles and the species-invariant pinwheel statistics of the common design. This examination highlights informative measures of pinwheel configurations for model benchmarking. Third, I mathematically investigate the susceptibility of the pinwheel configuration to measurement noise. The results give rise to a method for extrapolating pinwheel densities to the zero-noise limit and provide an approximate analytical expression for confidence regions of pinwheel centers. Thus, this work facilitates high-precision measurements and enhances benchmarking for devising more accurate models of visual cortical development. Finally, I shed light on genuinely three-dimensional properties of functional visual cortical architectures. I devise maximum entropy models of three-dimensional functional visual cortical architectures in different geometries. This theory enables the examination of possible evolutionary transitions between different functional architectures, for which intermediate organizations might still exist.
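
    A common reference point for pinwheel statistics of the kind analyzed here is an orientation map modeled as a superposition of plane waves with random directions and phases (a Gaussian random field), whose pinwheels are the zeros of the complex field. The sketch below generates such a map and counts pinwheels via the winding of the field's phase around each grid plaquette; the map size, column spacing, and number of waves are assumptions, and this ensemble is an illustration rather than one of the dissertation's models.

    import numpy as np

    # Orientation preference map as a superposition of N plane waves with random
    # directions and phases: z(x) = sum_j exp(i (k_j . x + phi_j)).
    # The preferred orientation is angle(z)/2; pinwheels are the zeros of z.

    rng = np.random.default_rng(3)
    L, grid, lam, N = 8.0, 256, 1.0, 64     # map size, resolution, column spacing, #waves (assumed)
    xs = np.linspace(0.0, L, grid, endpoint=False)
    X, Y = np.meshgrid(xs, xs, indexing="ij")

    k = 2.0 * np.pi / lam
    z = np.zeros((grid, grid), dtype=complex)
    for theta, phi in zip(rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 2 * np.pi, N)):
        z += np.exp(1j * (k * (np.cos(theta) * X + np.sin(theta) * Y) + phi))

    orientation_map = np.angle(z) / 2.0     # preferred orientation in [-pi/2, pi/2)

    # Count pinwheels as plaquettes around which the phase of z winds by +-2*pi.
    def wrap(d):
        return (d + np.pi) % (2.0 * np.pi) - np.pi

    ph = np.angle(z)
    winding = (wrap(ph[1:, :-1] - ph[:-1, :-1]) + wrap(ph[1:, 1:] - ph[1:, :-1]) +
               wrap(ph[:-1, 1:] - ph[1:, 1:]) + wrap(ph[:-1, :-1] - ph[:-1, 1:]))
    pinwheels = int(np.sum(np.abs(winding) > np.pi))

    density = pinwheels / (L / lam) ** 2    # pinwheels per hypercolumn area lam^2
    print(f"pinwheels: {pinwheels}, density per lam^2: {density:.2f} (close to pi for such random maps)")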