
    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters; these make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, the first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail, and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
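As an illustration only (not taken from the review above): the kind of nonlinear dynamical neuron model this literature builds on can be sketched with the classic FitzHugh-Nagumo equations. The parameter values below are conventional textbook choices, and the spike-counting heuristic is an arbitrary illustrative measure of rhythmic behavior.

```python
def fitzhugh_nagumo(I_ext=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=20000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:
         dv/dt = v - v^3/3 - w + I_ext
         dw/dt = (v + a - b*w) / tau
    Returns the membrane-potential trace v(t)."""
    v, w = -1.0, 1.0
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w + I_ext
        dw = (v + a - b * w) / tau
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = fitzhugh_nagumo()
# Count upward zero-crossings as a crude proxy for rhythmic spiking.
spikes = sum(1 for p, q in zip(trace, trace[1:]) if p < 0.0 <= q)
```

With this drive current the model sits on a limit cycle and fires periodically, a minimal instance of the rhythmic behaviors the review discusses.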

    Coding and information-processing mechanisms in networks based on neural signatures

    Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones. Defense date: 21-02-202

    Modeling biophysical and neural circuit bases for core cognitive abilities evident in neuroimaging patterns: hippocampal mismatch, mismatch negativity, repetition positivity, and alpha suppression of distractors

    This dissertation develops computational models to address outstanding problems in the domain of expectation-related cognitive processes and their neuroimaging markers in functional MRI or EEG. The new models reveal a way to unite diverse phenomena within a common framework focused on dynamic neural encoding shifts, which can arise from robust interactive effects of M-currents and chloride currents in pyramidal neurons. By specifying efficient, biologically realistic circuits that achieve predictive coding (e.g., Friston, 2005), these models bridge neuronal biophysics, systems neuroscience, and theories of cognition. Chapter one surveys the data types and neural processes to be examined, and outlines the Dynamically Labeled Predictive Coding (DLPC) framework developed during the research. Chapter two models hippocampal prediction and mismatch using the DLPC framework. Chapter three presents extensions to the model that allow its application to modeling neocortical EEG genesis. Simulations of this extended model illustrate how dynamic encoding shifts can produce Mismatch Negativity (MMN) phenomena, including pharmacological effects on MMN reported for humans or animals. Chapters four and five describe new modeling studies of possible neural bases for alpha-induced information suppression, a phenomenon associated with active ignoring of stimuli. Two models explore the hypothesis that in simple rate-based circuits, information suppression might be a robust effect of neural saturation states arising near peaks of resonant alpha oscillations. A new proposal is also introduced for how the basal ganglia may control onset and offset of alpha-induced information suppression. Although these rate models could reproduce many experimental findings, they fell short of reproducing a key electrophysiological finding: phase-dependent reduction in spiking activity correlated with power in the alpha frequency band. Therefore, chapter five also specifies how a DLPC model, adapted from the neocortical model developed in chapter three, can provide an expectation-based model of alpha-induced information suppression that exhibits phase-dependent spike reduction during alpha-band oscillations. The model can thus explain experimental findings that were not reproduced by the rate models. The final chapter summarizes the main theses, results, and basic research implications, then suggests future directions, including expanded models of neocortical mismatch, applications to artificial neural networks, and the introduction of reward circuitry.
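A hedged sketch, not the dissertation's DLPC circuits: the general predictive-coding computation it builds on (after Friston, 2005) can be reduced to a representation updated to cancel a prediction error. The scalar input, generative weight, and learning rate below are arbitrary illustrative values.

```python
def predictive_coding_step(x, r, W, lr=0.1):
    """One inference step: the representation r is nudged to reduce
    the prediction error e = x - W*r (scalar case for clarity)."""
    e = x - W * r          # prediction error (the mismatch signal)
    r = r + lr * W * e     # error-driven update of the representation
    return r, e

x, W = 2.0, 0.5            # observed input and fixed generative weight
r = 0.0                    # initial internal representation
errors = []
for _ in range(200):
    r, e = predictive_coding_step(x, r, W)
    errors.append(abs(e))
```

Iterating the update drives the error toward zero as the representation comes to predict the input, which is the sense in which mismatch signals like MMN index an unexplained stimulus.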

    Functional Brain Oscillations: How Oscillations Facilitate Information Representation and Code Memories

    The overall aim of the modelling work within this thesis is to lend theoretical evidence to empirical findings from the brain oscillations literature. We therefore hope to solidify and expand the notion that precise spike timing through oscillatory mechanisms facilitates communication, learning, information processing and information representation within the brain. The primary hypothesis of this thesis is that it can be shown computationally that neural de-synchronisations can allow information content to emerge. We do this using two neural network models, the first of which shows how differential rates of neuronal firing can indicate when a single item is being actively represented. The second model expands this notion by creating a complementary timing mechanism, thus enabling the emergence of qualitative temporal information when a pattern of items is being actively represented. The secondary hypothesis of this thesis is that it can also be shown computationally that oscillations might play a functional role in learning. Both of the models presented within this thesis propose a sparsely coded and fast-learning hippocampal region that engages in the binding of novel episodic information. The first model demonstrates how active cortical representations enable learning to occur in their hippocampal counterparts via a phase-dependent learning rule. The second model expands this notion, creating hierarchical temporal sequences to encode the relative temporal position of cortical representations. We demonstrate in both of these models how cortical brain oscillations might provide a gating function to the representation of information, whilst complementary hippocampal oscillations might provide distinct phasic reference points for learning.
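Illustrative sketch only, not the thesis's models: a phase-dependent learning rule of the general kind described above gates Hebbian plasticity by the phase of an ongoing oscillation, so the same pre/post activity potentiates at the oscillation peak and depresses at the trough. The learning rate and the clamp at zero are arbitrary choices for the demonstration.

```python
import math

def phase_dependent_update(w, pre, post, phase, lr=0.05):
    """Hebbian update gated by oscillation phase: potentiation near the
    peak (cos(phase) > 0), depression near the trough (cos(phase) < 0).
    Weights are clamped at zero from below."""
    return max(0.0, w + lr * math.cos(phase) * pre * post)

w_peak = w_trough = 0.2
for _ in range(10):
    # Identical coincident activity, presented at opposite phases.
    w_peak = phase_dependent_update(w_peak, 1.0, 1.0, 0.0)
    w_trough = phase_dependent_update(w_trough, 1.0, 1.0, math.pi)
```

The two synapses see the same activity but diverge purely because of when it arrives relative to the oscillation, which is the gating role the abstract attributes to cortical and hippocampal rhythms.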

    27th Annual Computational Neuroscience Meeting (CNS*2018): Part One


    Exploration of Neural Structures for Dynamic System Control

    Biological neural systems are powerful mechanisms for controlling biological systems. While the complexity of biological neural networks makes exact simulation intractable, several key aspects lend themselves to implementation on computational systems. This thesis constructs a discrete event neural network simulation that implements aspects of biological neural networks. A combined genetic programming/simulated annealing approach is utilized to design network structures that function as regulators for continuous-time dynamic systems in the presence of process noise when simulated using a discrete event neural simulation. Methods of constructing such networks are analyzed, including examination of the final network structure and the algorithm used to construct the networks. The parameters of the network simulation are also analyzed, as well as the interface between the network and the dynamic system. This analysis provides insight into the construction of networks for more complicated control applications.
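A toy sketch of the optimisation half of the approach the abstract describes: simulated annealing searching a controller parameter for a noisy plant. The scalar plant, quadratic cost, and cooling schedule below are illustrative assumptions, not the thesis's discrete event network or its genetic-programming component.

```python
import math
import random

random.seed(0)

def cost(gain, steps=200, dt=0.05):
    """Average squared state of the unstable plant dx/dt = (1 - gain)*x
    plus process noise, regulated by proportional feedback `gain`."""
    x, total = 1.0, 0.0
    for _ in range(steps):
        x += dt * (1.0 - gain) * x + math.sqrt(dt) * random.gauss(0.0, 0.1)
        total += x * x
    return total / steps

def anneal(iters=300, temp0=1.0):
    """Simulated annealing over the gain: always accept improvements,
    accept worse candidates with Boltzmann probability at temperature
    `temp`, which cools linearly over the run."""
    gain = 0.0
    current = cost(gain)
    best_gain, best_cost = gain, current
    for k in range(iters):
        temp = temp0 * (1.0 - k / iters) + 1e-3
        cand = gain + random.gauss(0.0, 0.5)
        c = cost(cand)
        if c < current or random.random() < math.exp((current - c) / temp):
            gain, current = cand, c
            if c < best_cost:
                best_gain, best_cost = cand, c
    return best_gain, best_cost

best_gain, best_cost = anneal()
```

Starting from an uncontrolled (diverging) plant, the annealer settles on a stabilising gain, mirroring on a small scale how stochastic search can shape a regulator without gradient information.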