
    Information processing in a midbrain visual pathway

    Visual information is processed in the brain via intricate interactions between neurons. We investigated a midbrain visual pathway (the optic tectum and its isthmic nuclei) that is motion sensitive and is thought to be part of the attentional system. We determined the physiological properties of individual neurons, as well as their synaptic connections, with intracellular recordings. We reproduced the center-surround receptive field structure of tectal neurons in a dynamical recurrent feedback loop. In a computational model, we showed that anti-topographic inhibitory feedback could mediate competitive stimulus selection in a complex visual scene, and we investigated the dynamics of this competitive selection in a rate model. The isthmotectal feedback loop gates the transfer of information from the tectum to the thalamic nucleus rotundus. We discussed the role of a localized feedback projection in the gating mechanism with both experimental and numerical approaches, and further analyzed the dynamics of the isthmotectal system by considering the propagation delays between its components. We conclude that the isthmotectal system is involved in attention-like competitive stimulus selection and controls information coding in the motion-sensitive SGC-I neurons by modulating retino-tectal synaptic transmission.
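    The competitive selection described above can be illustrated with a minimal rate model in which all excitatory units drive a shared inhibitory unit that feeds back subtractively onto every unit, a stand-in for the anti-topographic inhibitory projection. All parameters and the network size below are illustrative assumptions, not fitted to the isthmotectal data:

```python
import numpy as np

def competitive_selection(inputs, w_inh=5.0, tau=10.0, dt=0.1, t_max=500.0):
    """Rate units compete through a shared inhibitory unit that pools all
    excitatory activity and feeds back subtractively (a stand-in for the
    anti-topographic inhibitory feedback). Parameters are illustrative."""
    r = np.zeros(len(inputs))      # excitatory firing rates
    inh = 0.0                      # pooled inhibitory feedback
    for _ in range(int(t_max / dt)):
        drive = np.maximum(inputs - w_inh * inh, 0.0)  # rectified net drive
        r += dt / tau * (-r + drive)
        inh += dt / tau * (-inh + r.sum())
    return r

# The strongest stimulus wins; with inhibition this strong, its
# competitors are suppressed to (near) zero.
rates = competitive_selection(np.array([1.0, 0.8, 0.3]))
```

    With weaker inhibitory gain the same loop produces soft, graded competition instead of winner-take-all selection, which is one axis along which such a rate model can be explored.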

    Dynamic Control of Network Level Information Processing through Cholinergic Modulation

    Acetylcholine (ACh) release is a prominent neurochemical marker of arousal state within the brain. Changes in ACh are associated with changes in neural activity and information processing, though its exact role and the mechanisms through which it acts are unknown. Here I show that the dynamic changes in ACh levels associated with arousal state control the information processing functions of networks through effects on the degree of spike-frequency adaptation (SFA), an activity-dependent decrease in excitability, synchronizability, and neuronal resonance displayed by single cells. Using numerical modeling, I develop mechanistic explanations for how control of these properties shifts network activity from a stable high-frequency spiking pattern to a traveling wave of activity. This transition mimics the change in brain dynamics seen between high-ACh states, such as waking and rapid eye movement (REM) sleep, and low-ACh states such as non-REM (NREM) sleep. A corresponding, and related, transition in network-level memory recall also occurs as ACh modulates neuronal SFA. When ACh is at its highest levels (waking), all memories are stably recalled; as ACh is decreased (REM), weakly encoded memories destabilize in the model while strong memories remain stable. At levels of ACh that match slow-wave sleep (SWS), no encoded memories are stably recalled. This results from a competition between SFA and excitatory input strength, and provides a mechanism for neural networks to control the representation of underlying synaptic information. Finally, I show that during low-ACh conditions, oscillatory dynamics allow external inputs to be properly stored in and recalled from synaptic weights. Taken together, this work demonstrates that dynamic neuromodulation is critical for the regulation of information processing tasks in neural networks. These results suggest that ACh is capable of switching networks between two distinct information processing modes.
    Rate coding of information is facilitated during high-ACh conditions, and phase coding of information is facilitated during low-ACh conditions. I further propose that ACh levels control whether a network is in one of three functional states: high ACh (active waking), optimized for encoding new information or for the stable representation of relevant memories; mid ACh (resting state or REM), optimized for encoding connections between currently stored memories or for searching the catalog of stored memories; and low ACh (NREM), optimized for renormalization of synaptic strength and memory consolidation. This work provides mechanistic insight into the role of dynamic changes in ACh levels in the encoding, consolidation, and maintenance of memories within the brain. (PhD thesis, Neuroscience, University of Michigan, Horace H. Rackham School of Graduate Studies; https://deepblue.lib.umich.edu/bitstream/2027.42/147503/1/roachjp_1.pd)
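    The central mechanism, ACh controlling spike-frequency adaptation, can be sketched with a leaky integrate-and-fire neuron carrying a spike-triggered adaptation current. Treating the adaptation increment b as inversely related to ACh level follows the abstract's premise; the specific parameter values below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

def lif_sfa_spike_count(b, I=2.0, t_max=1000.0, dt=0.1):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current w. The increment b models SFA strength: low b ~ high ACh
    (adaptation suppressed), high b ~ low ACh. Units and values are
    illustrative only."""
    tau_m, tau_w = 20.0, 200.0     # membrane / adaptation time constants (ms)
    v_th, v_reset = 1.0, 0.0       # dimensionless threshold and reset
    v, w, spikes = 0.0, 0.0, 0
    for _ in range(int(t_max / dt)):
        v += dt / tau_m * (-v + I - w)   # adaptation subtracts from the drive
        w += dt / tau_w * (-w)           # adaptation current decays slowly
        if v >= v_th:
            v = v_reset
            w += b                       # spike-triggered adaptation increment
            spikes += 1
    return spikes

high_ach = lif_sfa_spike_count(b=0.0)   # no adaptation: sustained fast spiking
low_ach = lif_sfa_spike_count(b=0.3)    # strong adaptation: firing slows down
```

    The same subtraction of accumulated adaptation from the excitatory drive is what, at the network level, lets SFA strength decide which attractors survive.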

    Olfactory object recognition, segmentation, adaptation, target seeking, and discrimination by the network of the olfactory bulb and cortex: computational model and experimental data

    Mammals are poor at individuating the separate components of odor mixtures, except when the components enter the environment serially or when there is top-down expectation. Li proposed in 1990 an odor segmentation mechanism using centrifugal feedback from the olfactory cortex to the olfactory bulb. This feedback suppresses the bulbar responses to ongoing, already recognized odors, so that a subsequently added foreground odor can be singled out for recognition. Additionally, the feedback can depend on context so as to, for example, enhance sensitivity to a target odor or improve discrimination between similar odors. I review experimental data that have since emerged in relation to the computational predictions and implications, and suggest experiments to test the model further.
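    A toy version of the proposed segmentation mechanism: the cortex recognizes the odor already present, and centrifugal feedback subtracts its bulbar pattern so that a newly added odor dominates the residual. The random patterns and the similarity-based "recognition" step below are hypothetical placeholders for bulbar and cortical dynamics, not the model's actual equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bulbar activation patterns for two odors (random stand-ins).
memory = {"A": rng.random(50), "B": rng.random(50)}

def best_match(pattern, memory):
    """Cortical 'recognition' as a nearest-template lookup by cosine similarity."""
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(memory, key=lambda name: cos(pattern, memory[name]))

# Odor A is present first and gets recognized; centrifugal feedback then
# suppresses its bulbar pattern, so the newly added odor B dominates the
# residual and can be recognized in turn.
mixture = memory["A"] + memory["B"]
residual = np.maximum(mixture - memory["A"], 0.0)  # feedback suppression of A
```

    Without the subtraction step, the mixture itself is ambiguous between the two templates, which is the segmentation problem the feedback is meant to solve.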

    The response of cortical neurons to in vivo-like input current: theory and experiment: II. Time-varying and spatially distributed inputs

    The response of a population of neurons to time-varying synaptic inputs can show a rich phenomenology that is hardly predictable from the membrane's inherent time constants. For example, a network of neurons in a state of spontaneous activity can respond significantly more rapidly than each single neuron taken individually. Under the assumption that the statistics of the synaptic input are the same across a population of similarly behaving neurons (the mean-field approximation), it is possible to greatly simplify the study of neural circuits, both in the case in which the statistics of the input are stationary (reviewed in La Camera et al. in Biol Cybern, 2008) and in the case in which they are time-varying and unevenly distributed over the dendritic tree. Here, we review theoretical and experimental results on the single-neuron properties that are relevant for the dynamical collective behavior of a population of neurons. We focus on the response of integrate-and-fire neurons and real cortical neurons to long-lasting, noisy, in vivo-like stationary inputs, and show how the theory can predict the observed rhythmic activity of cultures of neurons. We then show how cortical neurons adapt on multiple time scales in response to input with stationary statistics in vitro. Next, we review how it is possible to study the general response properties of a neural circuit to time-varying inputs by estimating the response of single neurons to noisy sinusoidal currents. Finally, we address the dendrite-soma interactions in cortical neurons leading to gain modulation and spike bursts, and show how these effects can be captured by a two-compartment integrate-and-fire neuron. Most of the experimental results reviewed in this article have been successfully reproduced by simple integrate-and-fire model neurons.
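    The stationary "in vivo-like" input regime discussed here can be sketched by driving a leaky integrate-and-fire neuron with a noisy current of mean mu and noise amplitude sigma; with the mean below threshold, spikes are fluctuation-driven and the output rate is controlled by the noise, which is the regime the mean-field treatment describes. All parameter values are illustrative:

```python
import numpy as np

def lif_rate(mu, sigma, t_max=5000.0, dt=0.1, seed=1):
    """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by
    Gaussian noise with mean mu and amplitude sigma (Euler-Maruyama
    integration). A sketch of the in vivo-like stationary input regime;
    parameters are illustrative."""
    rng = np.random.default_rng(seed)
    tau_m, v_th, v_reset = 20.0, 1.0, 0.0   # ms, dimensionless voltage
    v, spikes = 0.0, 0
    n = int(t_max / dt)
    noise = rng.standard_normal(n)
    for i in range(n):
        v += dt / tau_m * (mu - v) + sigma * np.sqrt(dt / tau_m) * noise[i]
        if v >= v_th:
            v = v_reset
            spikes += 1
    return spikes / (t_max / 1000.0)

# Subthreshold mean (mu < v_th): spikes are caused by fluctuations,
# so the rate grows with the noise amplitude sigma.
r_low = lif_rate(mu=0.8, sigma=0.1)
r_high = lif_rate(mu=0.8, sigma=0.4)
```

    Sweeping mu and sigma of such a neuron traces out the stationary response function on which the population-level (mean-field) predictions are built.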

    Encoding of Naturalistic Stimuli by Local Field Potential Spectra in Networks of Excitatory and Inhibitory Neurons

    Recordings of local field potentials (LFPs) reveal that the sensory cortex displays rhythmic activity and fluctuations over a wide range of frequencies and amplitudes. Yet, the role of this kind of activity in encoding sensory information remains largely unknown. To understand the rules of translation between the structure of sensory stimuli and the fluctuations of cortical responses, we simulated a sparsely connected network of excitatory and inhibitory neurons modeling a local cortical population, and we determined how the LFPs generated by the network encode information about input stimuli. We first considered simple static and periodic stimuli and then naturalistic input stimuli based on electrophysiological recordings from the thalamus of anesthetized monkeys watching natural movie scenes. We found that the simulated network produced stimulus-related LFP changes that were in striking agreement with the LFPs obtained from the primary visual cortex. Moreover, our results demonstrate that the network encoded static input spike rates into gamma-range oscillations generated by inhibitory–excitatory neural interactions and encoded slow dynamic features of the input into slow LFP fluctuations mediated by stimulus–neural interactions. The model cortical network processed dynamic stimuli with naturalistic temporal structure by using low and high response frequencies as independent communication channels, again in agreement with recent reports from visual cortex responses to naturalistic movies. One potential function of this frequency decomposition into independent information channels operated by the cortical network may be that of enhancing the capacity of the cortical column to encode our complex sensory environment.
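    The gamma-range rhythm generated by the inhibitory-excitatory loop can be illustrated with a two-variable rate model instead of the full spiking network: when recurrent excitation makes the E-I fixed point an unstable spiral, the population settles into a limit cycle at tens of hertz. All couplings, biases, and time constants below are assumptions chosen for illustration, not the paper's network:

```python
import numpy as np

def ei_rate_model(t_max=600.0, dt=0.05):
    """Excitatory-inhibitory rate model with sigmoidal gain. The biases P
    and Q place the unique fixed point mid-sigmoid, where the E-I loop
    makes it an unstable spiral, so activity settles into a limit cycle.
    Parameters are illustrative."""
    tau_e, tau_i = 2.0, 4.0          # ms; inhibition slower than excitation
    a, b, c = 8.0, 6.0, 6.0          # E->E, I->E, E->I coupling strengths
    P, Q = -1.0, -3.0                # bias inputs
    f = lambda x: 1.0 / (1.0 + np.exp(-x))
    E, I = 0.6, 0.6                  # start off the fixed point (0.5, 0.5)
    trace = []
    for _ in range(int(t_max / dt)):
        E += dt / tau_e * (-E + f(a * E - b * I + P))
        I += dt / tau_i * (-I + f(c * E + Q))
        trace.append(E)
    return np.array(trace)

trace = ei_rate_model()
steady = trace[int(300.0 / 0.05):]                # discard the transient
freqs = np.fft.rfftfreq(steady.size, d=0.05e-3)   # sample step in seconds
peak_hz = freqs[np.abs(np.fft.rfft(steady - steady.mean())).argmax()]
```

    Linearizing around the fixed point gives an oscillation frequency of roughly 60 Hz for these values; the nonlinear limit cycle runs somewhat slower but stays in the tens-of-hertz band, which is the sense in which an E-I loop acts as a gamma generator.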

    Biophysically grounded mean-field models of neural populations under electrical stimulation

    Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network's state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths, which can then be propagated up to the population description. We show how, on the edge of bifurcation, direct electrical inputs cause network state transitions, such as turning on and off oscillations of the population rate. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models. Comment: A Python package with an implementation of the AdEx mean-field model can be found at https://github.com/neurolib-dev/neurolib; code for simulation and data analysis can be found at https://github.com/caglarcakan/stimulus_neural_population
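    A single AdEx neuron, the building block of the mean-field model, can be simulated directly. The sketch below uses the standard Brette-Gerstner parameter values and shows how switching a direct input current on and off switches the neuron between silence and repetitive (adapting) firing; the mean-field reduction and the electric-field coupling themselves are not reproduced here:

```python
import numpy as np

def adex_spike_count(I_ext, t_max=500.0, dt=0.05):
    """Single adaptive exponential integrate-and-fire (AdEx) neuron with a
    constant input current I_ext (pA), integrated with forward Euler.
    Parameters follow the standard Brette-Gerstner values."""
    C, gL, EL = 281.0, 30.0, -70.6          # pF, nS, mV
    VT, DT, Vr = -50.4, 2.0, -70.6          # mV
    a, b, tau_w = 4.0, 80.5, 144.0          # nS, pA, ms
    V, w, spikes = EL, 0.0, 0
    for _ in range(int(t_max / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT)
              - w + I_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= -40.0:                       # spike detected: reset and adapt
            V = Vr
            w += b
            spikes += 1
    return spikes

# Turning a suprathreshold direct input on and off switches the neuron
# between silence and repetitive adapting firing.
off, on = adex_spike_count(0.0), adex_spike_count(1000.0)
```

    In the mean-field picture, a field-induced current plays the role of I_ext for the whole population, which is how stimulation near a bifurcation can switch network states.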

    The Dynamic Brain in Action: Cortical Oscillations and Coordination Dynamics

    Cortical oscillations are electrical activities with a rhythmic and/or repetitive nature, generated spontaneously and in response to stimuli. The study of cortical oscillations has become an area of converging interests over the last two decades and has deepened our understanding of their physiological basis across different behavioral states. Experimental and modeling work has taught us that there is a wide diversity of cellular and circuit mechanisms underlying the generation of cortical rhythms. A widely diverse set of functions has been attributed to synchronous oscillations, but their significance in cognition is better appraised in the more general framework of correlations between the spike times of neurons. Oscillations are a core mechanism for adjusting neuronal interactions and shaping the temporal coordination of neural activity. In the first part of this thesis, we review essential features of cortical oscillations in membrane potentials and local field potentials recorded from a turtle ex vivo preparation. We then develop a simple computational model that reproduces the observed features. This modeling investigation suggests a plausible mechanism for rhythmogenesis based on cellular and circuit properties. The second part of the thesis concerns temporal coordination dynamics quantified by signal and noise correlations. Here, again, we present a computational model to show how temporal coordination and synchronous oscillations can be sewn together and, more importantly, which biophysical ingredients are necessary for a network to reproduce the observed coordination dynamics.

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA

    Learning and adaptation in brain machine interfaces

    Balancing subject learning and decoder adaptation is central to increasing brain machine interface (BMI) performance. We addressed these complementary aspects in two studies: (1) a learning study, in which mice modulated “beta” band activity to control a 1D auditory cursor, and (2) an adaptive decoding study, in which a simple recurrent artificial neural network (RNN) decoded intended saccade targets of monkeys. In the learning study, three mice successfully increased beta band power following trial initiations, and specifically increased beta burst durations from 157 ms to 182 ms, likely contributing to performance. Though the task did not explicitly require specific movements, all three mice appeared to modulate beta activity via active motor control, with consistent relationships between vibrissal motor cortex multiunit activity and local field potentials and contralateral whisker pad electromyograms. The increased burst durations may therefore be a direct result of increased motor activity. These findings suggest that only a subset of beta rhythm phenomenology can be volitionally modulated (e.g. the tonic “hold” beta), limiting the possible set of successful beta neuromodulation strategies. In the adaptive decoding study, RNNs decoded delay period activity in oculomotor and working memory regions while monkeys performed a delayed saccade task. Adaptive decoding sessions began with brain-controlled trials using pre-trained RNN models, in contrast to static decoding sessions in which 300-500 initial eye-controlled training trials were performed. Closed-loop RNN decoding performance was lower than predicted by offline simulations. More consistent delay period activity and saccade paths across trials were associated with higher decoding performance.
    Despite the advantage of consistency, one monkey’s delay period activity patterns changed over the first week of adaptive decoding, and the other monkey’s saccades were more erratic during adaptive decoding than during static decoding sessions. It is possible that eliminating the eye-controlled training trials in the altered session paradigm led to either frustration or exploratory learning, causing the neural and behavioral changes. Considering neural control and decoder adaptation of BMIs in these studies, future work should improve the “two-learner” subject-decoder system by better modeling the interaction between underlying brain states (and possibly their modulation) and the neural signatures representing desired outcomes.
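    The beta-burst comparison (157 ms vs 182 ms) implies a simple measurement: threshold the beta-band power envelope and time the contiguous supra-threshold epochs. A sketch with a synthetic envelope; the threshold and sampling rate are illustrative, not the study's actual settings:

```python
import numpy as np

def burst_durations(envelope, threshold, dt_ms):
    """Return the durations (ms) of contiguous supra-threshold epochs in a
    band-power envelope, one entry per burst."""
    durations, run = [], 0
    for above in envelope > threshold:
        if above:
            run += 1               # extend the current burst
        elif run:
            durations.append(run * dt_ms)
            run = 0                # burst ended at this sample
    if run:                        # envelope ended mid-burst
        durations.append(run * dt_ms)
    return durations

# Synthetic envelope sampled at 1 kHz with two bursts of 150 ms and 180 ms.
env = np.zeros(1000)
env[100:250] = 1.0
env[500:680] = 1.0
d = burst_durations(env, threshold=0.5, dt_ms=1.0)  # → [150.0, 180.0]
```

    On real data the envelope would come from band-pass filtering and rectifying the LFP, and the threshold is typically set relative to a per-session baseline; both choices affect the measured durations.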

    Neural correlates of conscious visual processing

    The objective of the current thesis is to evaluate the role of alpha band activity and neural trial-to-trial variability in conscious visual perception, and their relationship to each other. We investigate these measures in electrophysiological recordings of monkeys as well as in the electroencephalogram (EEG) of humans, using a generalized flash suppression (GFS) paradigm.