
    Channel noise from both slow adaptation currents and fast currents is required to explain spike-response variability in a sensory neuron

    Spike-timing variability has a large effect on neural information processing. However, for many systems little is known about the noise sources causing the spike-response variability. Here we investigate potential sources of spike-response variability in auditory receptor neurons of locusts, a classic insect model system. At low spike frequencies, our data show negative interspike-interval (ISI) correlations and ISI distributions that match the inverse Gaussian distribution. These findings can be explained by a white-noise source that interacts with an adaptation current. At higher spike frequencies, more strongly peaked distributions and positive ISI correlations appear, as expected from a canonical model of suprathreshold firing driven by temporally correlated (i.e., colored) noise. Simulations of a minimal conductance-based model of the auditory receptor neuron with stochastic ion channels exclude the delayed rectifier as a possible noise source. Our analysis suggests channel noise from an adaptation current and the receptor or sodium current as the main sources for the colored and white noise, respectively. By comparing the ISI statistics with generic models, we find strong evidence for two distinct noise sources. Our approach does not involve any dendritic or somatic recordings that may harm the delicate workings of many sensory systems. It could be applied to various other types of neurons, in which channel noise dominates the fluctuations that shape the neuron's spike statistics.
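
A minimal numerical check of the white-noise mechanism described above: the first-passage times of a drift-diffusion (perfect integrate-and-fire) process to a fixed threshold follow an inverse Gaussian distribution, so simulated ISIs should reproduce its mean θ/μ and variance θσ²/μ³. The parameters below are illustrative, not fitted to locust data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Perfect integrate-and-fire driven by white noise: dV = mu*dt + sigma*dW,
# with a spike (and reset) when V crosses theta.  First-passage times of this
# drift-diffusion process are inverse Gaussian with mean theta/mu and
# variance theta*sigma**2/mu**3.
mu, sigma, theta, dt = 1.0, 0.5, 1.0, 1e-3

def first_passage(chunk=4096):
    """Simulate one ISI: integrate noisy drift until the threshold is hit."""
    t, v = 0.0, 0.0
    while True:
        steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(chunk)
        path = v + np.cumsum(steps)
        hit = np.nonzero(path >= theta)[0]
        if hit.size:
            return t + (hit[0] + 1) * dt
        t += chunk * dt
        v = path[-1]

isis = np.array([first_passage() for _ in range(2000)])
print(isis.mean(), isis.var())   # theory: 1.0 and 0.25 for these parameters
```

The empirical ISI mean and variance should land close to the inverse Gaussian predictions, up to discretization bias of order σ√dt.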

    Neural Dynamics as Sampling: A Model for Stochastic Computation in Recurrent Networks of Spiking Neurons

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, for both discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
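
As a point of reference for the sampling framework, here is a sketch of MCMC sampling from a small Boltzmann distribution. Note that the abstract singles out Gibbs sampling as the textbook method that spiking dynamics cannot implement directly, so this is the baseline being improved upon, not the paper's non-reversible chain; weights and biases are arbitrary.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Target: Boltzmann distribution p(z) ∝ exp(b·z + z^T W z / 2) over z ∈ {0,1}^3.
# Gibbs sampling updates one unit at a time from its conditional, a sigmoid of
# the local field (loosely, a "membrane potential").
W = np.array([[0.0, 1.0, -1.0], [1.0, 0.0, 0.5], [-1.0, 0.5, 0.0]])
b = np.array([-0.5, 0.2, 0.1])

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

z = np.zeros(3)
counts = np.zeros(3)
n_steps = 200000
for step in range(n_steps):
    k = step % 3                    # sweep through units
    u = b[k] + W[k] @ z             # local field of unit k
    z[k] = float(rng.random() < sigmoid(u))
    counts += z
marginals = counts / n_steps        # time-averaged marginals of the chain

# Exact marginals by enumerating all 2^3 states
states = np.array(list(product([0, 1], repeat=3)), dtype=float)
logp = states @ b + 0.5 * np.einsum('si,ij,sj->s', states, W, states)
p = np.exp(logp - logp.max())
p /= p.sum()
exact = p @ states
print(marginals, exact)
```

For this tiny model the chain's time-averaged marginals converge to the exact ones, which is the sense in which stochastic network activity can "be" inference.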

    Multivariate Multiscale Analysis of Neural Spike Trains

    This dissertation introduces new methodologies for the analysis of neural spike trains. Biological properties of the nervous system, and how they are reflected in neural data, can motivate specific analytic tools. Some of these biological aspects motivate multiscale frameworks, which allow for simultaneous modelling of the local and global behaviour of neurons. Chapter 1 provides the preliminary background on the biology of the nervous system and details the concept of information and randomness in the analysis of neural spike trains. It also provides the reader with a thorough literature review on the current statistical models in the analysis of neural spike trains. The material presented in the next six chapters (2-7) has been the focus of three papers, which have either already been published or are being prepared for publication. It is demonstrated in Chapters 2 and 3 that the multiscale complexity penalized likelihood method, introduced in Kolaczyk and Nowak (2004), is a powerful model for the simultaneous modelling of spike trains with biological properties from different time scales. To detect the periodic spiking activities of neurons, two periodic models from the literature, Bickel et al. (2007, 2008) and Shao and Li (2011), were combined and modified in a multiscale penalized likelihood model. The contributions of these chapters are (1) employing a powerful visualization tool, the inter-spike interval (ISI) plot, (2) combining the multiscale method of Kolaczyk and Nowak (2004) with the periodic models of Bickel et al. (2007, 2008) and Shao and Li (2011) to introduce the so-called additive and multiplicative models for the intensity function of neural spike trains, and introducing a cross-validation scheme to estimate their tuning parameters, (3) providing numerical bootstrap confidence bands for the multiscale estimate of the intensity function, and (4) studying the effect of time scale on the statistical properties of spike counts.
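
A toy version of the multiscale idea in Chapters 2 and 3: recursively split the observation window in half only when a penalized likelihood gain justifies the extra complexity, yielding a piecewise-constant intensity estimate. This is a simplified sketch in the spirit of complexity-penalized multiscale estimation, not the exact method of the thesis; the rate profile, penalty, and thinning constants are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# True intensity: piecewise constant, jumping from 5 to 20 spikes/s at T/2.
T = 64.0
rate = lambda t: 5.0 + 15.0 * (t > T / 2)

# Generate an inhomogeneous Poisson spike train by thinning a rate-20 process.
t_cand = np.cumsum(rng.exponential(1 / 20.0, 3000))
t_cand = t_cand[t_cand < T]
spikes = t_cand[rng.random(t_cand.size) < rate(t_cand) / 20.0]

def estimate(lo, hi, pen=3.0):
    """Recursive dyadic partition: split only if the likelihood gain > pen."""
    n = int(np.sum((spikes >= lo) & (spikes < hi)))
    mid = 0.5 * (lo + hi)
    n1 = int(np.sum((spikes >= lo) & (spikes < mid)))
    n2 = n - n1

    def ll(k, length):          # Poisson log-likelihood up to a constant
        return k * np.log(k / length) if k > 0 else 0.0

    gain = ll(n1, mid - lo) + ll(n2, hi - mid) - ll(n, hi - lo)
    if hi - lo <= 1.0 or gain < pen:
        return [(lo, hi, n / (hi - lo))]      # (start, end, rate estimate)
    return estimate(lo, mid, pen) + estimate(mid, hi, pen)

segments = estimate(0.0, T)
print(segments)
```

The estimator keeps one coarse segment where the rate is flat and refines around the change point, which is the "simultaneous local and global modelling" the abstract describes.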
Motivated by neural integration phenomena, as well as adjustments for the neural refractory period, Chapters 4 and 5 study the Skellam process and introduce the Skellam Process with Resetting (SPR). Introducing SPR and its application in the analysis of neural spike trains is one of the major contributions of this dissertation. This stochastic process is biologically plausible, and unlike the Poisson process, it does not suffer from a limited dependency structure. It also has multivariate generalizations for the simultaneous analysis of multiple spike trains. A computationally efficient recursive algorithm for the estimation of the parameters of SPR is introduced in Chapter 5. Except for the literature review at the beginning of Chapter 4, the rest of the material within these two chapters is original. The specific contributions of Chapters 4 and 5 are (1) introducing the Skellam Process with Resetting as a statistical tool to analyze neural spike trains and studying its properties, including all theorems and lemmas provided in Chapter 4, (2) giving two fairly standard definitions of the Skellam process (homogeneous and inhomogeneous) and proving their equivalence, (3) deriving the likelihood function based on the observable data (spike trains) and developing a computationally efficient recursive algorithm for parameter estimation, and (4) studying the effect of time scales on the SPR model. The challenging problem of multivariate analysis of neural spike trains is addressed in Chapter 6. As far as we know, the multivariate models available in the literature suffer from limited dependency structures. In particular, modelling negative correlation among spike trains is a challenging problem. To address this issue, the multivariate Skellam distribution, as well as the multivariate Skellam process, both of which have flexible dependency structures, are developed.
Chapter 6 also introduces a multivariate version of the Skellam Process with Resetting (MSPR), and a so-called profile-moment likelihood estimation of its parameters. This chapter generalizes the results of Chapters 4 and 5, and therefore, except for the brief literature review provided at the beginning of the chapter, the remainder of the material is original work. In particular, the contributions of this chapter are (1) introducing the multivariate Skellam distribution, (2) introducing two definitions of the multivariate Skellam process, in both homogeneous and inhomogeneous cases, and proving their equivalence, (3) introducing the Multivariate Skellam Process with Resetting (MSPR) to simultaneously model spike trains from an ensemble of neurons, and (4) utilizing the so-called profile-moment likelihood method to compute estimates of the parameters of MSPR. The discussion of the developed methodologies, as well as the "next steps", is outlined in Chapter 7.
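
The Skellam process at the heart of Chapters 4-6 is the difference of two independent Poisson processes, a natural model for a membrane potential driven by excitatory and inhibitory input streams; the resetting variant emits a spike and resets when a threshold is reached. A minimal sketch with illustrative rates and threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# Skellam process X(t) = N_e(t) - N_i(t): difference of two Poisson processes
# with rates lam_e (excitatory) and lam_i (inhibitory).
# Theory: E[X(t)] = (lam_e - lam_i) t,  Var[X(t)] = (lam_e + lam_i) t.
lam_e, lam_i, theta, dt, T = 30.0, 10.0, 5, 1e-3, 200.0

n_bins = int(T / dt)
exc = rng.poisson(lam_e * dt, n_bins)   # excitatory events per bin
inh = rng.poisson(lam_i * dt, n_bins)   # inhibitory events per bin

# Plain Skellam process (no resetting)
x_free = np.cumsum(exc - inh)

# Skellam Process with Resetting: integrate to threshold, spike, reset to 0.
spikes, v = [], 0
for i in range(n_bins):
    v += exc[i] - inh[i]
    if v >= theta:
        spikes.append(i * dt)
        v = 0
print(x_free[-1], len(spikes))
```

With a net drift of 20 events/s and a threshold of 5, the resetting process fires at roughly 4 spikes/s, so about 800 spikes are expected over 200 s.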

    Models of Causal Inference in the Elasmobranch Electrosensory System: How Sharks Find Food

    We develop a theory of how the functional design of the electrosensory system in sharks reflects the inevitability of noise in high-precision measurements, and how the central nervous system may have developed an efficient solution to the problem of inferring parameters of stimulus sources, such as their location, via Bayesian neural computation. We use the Finite Element Method to examine how the electrical properties of shark tissues and the geometrical configuration of both the shark body and the electrosensory array act to focus weak electric fields in the aquatic environment, so that the majority of the voltage drop is signalled across the electrosensory cells. We analyse snapshots of two ethologically relevant stimuli: localized prey-like dipole electric sources, and uniform electric fields resembling motion-induced and other fields encountered in the ocean. We demonstrate that self-movement (or self-state) not only affects the measured field, by perturbing the self-field, but also affects the external field. Electrosensory cells provide input to central brain regions via primary afferent nerves. Inspection of elasmobranch electrosensory afferent spike trains and inter-spike interval distributions indicates that they typically have fairly regular spontaneous inter-spike intervals with skewed Gaussian-like variability. However, because electrosensory afferent neurons converge onto secondary neurons, we consider the convergent input a "super afferent": the pulse train received by a target neuron approaches a Poisson process with shorter mean intervals as the number of independent convergent spike trains increases. We implement a spiking neural particle filter which takes simulated electrosensory "super afferent" spike trains and can successfully infer the fixed Poisson parameter, or the equivalent real-world state: distance to a source.
The circuit obtained by converting the mathematical model to a network structure bears a striking resemblance to the cerebellar-like hindbrain circuits of the dorsal octavolateral nucleus. The elasmobranchs’ ability to sense electric fields down to a limit imposed by thermodynamics seems extraordinary. However, we predict that the theories presented here generalize to other sensory systems, particularly the other octavolateralis senses which share cerebellar-like circuitry, suggesting that the cerebellum itself also plays a role in dynamic state estimation.
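
The inference step can be illustrated without spiking machinery: a grid approximation of the Bayesian posterior over source distance, updated from Poisson spike counts, is a simplified, non-spiking stand-in for the particle filter described above. The mapping rate = k / distance³ (mimicking dipole-field falloff) and all constants are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical encoding: afferent firing rate falls off as k / distance**3.
k, true_dist, dt, n_bins = 64.0, 2.0, 0.01, 20000
true_rate = k / true_dist**3                  # 8 spikes/s at distance 2

dist_grid = np.linspace(0.5, 5.0, 200)        # candidate distances
rates = k / dist_grid**3                      # rate implied by each candidate

# Observed spike counts in 20000 bins (200 s of "super afferent" input)
counts = rng.poisson(true_rate * dt, n_bins)

# Poisson log-likelihood accumulated over all bins, flat prior on the grid
# (the n! term is constant across the grid and is dropped).
log_post = counts.sum() * np.log(rates * dt) - rates * dt * n_bins
post = np.exp(log_post - log_post.max())
post /= post.sum()
est = dist_grid[np.argmax(post)]
print(est)
```

The posterior mode converges to the true distance as observations accumulate, which is the real-world state the spiking filter recovers.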

    Neuronal oscillations: from single-unit activity to emergent dynamics and back

    The main objective of this thesis is to better understand information processing in neuronal networks in the presence of subthreshold oscillations. Most neurons propagate their electrical activity via chemical synapses, which are only activated when the electric current that passes through them surpasses a certain threshold. Therefore, the fast and intense discharges produced at the neuronal soma (the action potentials, or spikes) are considered the basic unit of neuronal information. The neuronal code is understood, then, as a binary language that expresses any message (sensory stimulus, memories, etc.) in a train of action potentials. However, no cognitive function rests on the dynamics of a single neuron. Circuits of thousands of interconnected neurons give rise to certain rhythms, revealed in collective activity measures such as electroencephalograms (EEG) and local field potentials (LFP). Synchronization of the action potentials of individual cells, triggered by stochastic fluctuations of the synaptic currents, causes this periodicity at the network level. To understand whether these rhythms are involved in the neuronal code, we studied three situations. First, in Chapter 2, we showed how an open chain of neurons with an intrinsically oscillatory membrane potential filters a periodic signal arriving at one of its ends. The response of each neuron (to spike or not) depends on its phase, so that each cell receives a message filtered by the preceding one. In addition, each presynaptic action potential causes a phase change in the postsynaptic neuron that depends on its position in phase space. The incoming periods able to synchronize the subthreshold oscillations are those that keep the phase of arrival of the action potentials fixed along the chain; for the original message to reach the last neuron intact, this phase must also allow the discharge of the transmembrane voltage.
In the second case, we studied a neuronal network with both near-neighbor and long-range connections, in which the subthreshold oscillations emerge from the collective activity apparent in the synaptic currents (or, equivalently, in the LFP). The inhibitory neurons impose a rhythm on the excitability of the network: in episodes when inhibition is low, the likelihood of a global discharge of the neuronal population is high. In Chapter 3 we show how this rhythm opens a gap in the discharge frequency of the neurons: they either fire isolated, well-spaced spikes or fire bursts of high intensity. The LFP phase determines the state of the neuronal network, coding the activity of the population: its minima indicate the simultaneous discharge of many neurons that have occasionally crossed the excitability threshold due to a global decrease of inhibition, while its maxima indicate the coexistence of bursts at different points of the network due to local decreases of inhibition during global states of excitation. This dynamic is possible thanks to the dominance of inhibition over excitation. In Chapter 4 we consider coupling between two neuronal networks in order to study the interaction between different rhythms. The oscillations indicate recurrence in the synchronization of collective activity, so that during these time windows a population optimizes its impact on a target network.
When the rhythms of the emitter and receiver populations differ significantly, the communication efficiency decreases, because the phases of maximum response of the two LFP signals do not maintain a constant difference. Finally, in Chapter 5 we studied how the collective oscillations typical of the sleep state give rise to the phenomenon of stochastic coherence: for an optimal noise intensity, modulated by the excitability of the network, the LFP reaches maximal regularity, leading to a refractory period of the neuronal population. In summary, this thesis presents scenarios of interaction between action potentials, characteristic of the dynamics of individual neurons, and subthreshold oscillations, an outcome of the coupling between cells and ubiquitous in the dynamics of neuronal populations. The results provide functionality for these emergent rhythms, which synchronize and modulate neuronal discharges and regulate the communication between neuronal networks.
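
The Chapter 4 observation about mismatched rhythms can be illustrated with idealized sinusoidal LFPs: the phase relation between two populations is constant only when their frequencies match, which is the condition for efficient communication. The frequencies below are illustrative.

```python
import numpy as np

# Two populations oscillating at f1 and f2: the phase difference between their
# (idealized, pure-sinusoid) LFPs is constant only when the rhythms match.
t = np.linspace(0, 10, 10000)

def phase_locking(f1, f2):
    dphi = 2 * np.pi * (f1 - f2) * t       # phase difference over time
    # Phase-locking value: 1 for a constant phase relation, ~0 for drift.
    return abs(np.mean(np.exp(1j * dphi)))

print(phase_locking(40.0, 40.0))   # matched gamma rhythms
print(phase_locking(40.0, 47.0))   # detuned rhythms
```

Matched rhythms give a phase-locking value of 1 (a fixed window of maximal response), while detuned rhythms average the phase difference out toward zero.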

    Information Encoding by Individual Neurons and Groups of Neurons in the Primary Visual Cortex

    How is information about visual stimuli encoded into the responses of neurons in the cerebral cortex? In this thesis, I describe the analysis of data recorded simultaneously from groups of up to eight nearby neurons in the primary visual cortices of anesthetized macaque monkeys. The goal is to examine the degree to which visual information is encoded into the times of action potentials in those responses (as opposed to the overall rate), and also into the identity of the neuron that fires each action potential (as opposed to the average activity across a group of nearby neurons). The data are examined with techniques modified from systems analysis, statistics, and information theory. The results are compared with expectations from simple statistical models of action-potential firing and from models that are more physiologically realistic. The major findings are: (1) that cortical responses are not renewal processes with time-varying firing rates, which means that information can indeed be encoded in the detailed timing of action potentials; (2) that these neurons encode the contrast of visual stimuli primarily into the time difference between stimulus and response onset, which is known as the latency; (3) that this so-called temporal coding serves as a mechanism by which the brain might discriminate among stimuli that evoke similar firing rates; (4) that action potentials preceded by interspike intervals of different durations can encode different features of a stimulus; (5) that the rate of overall information transmission can depend on the type of stimulus in a manner that differs from one neuron to the next; (6) that the rate at which information is transmitted specifically about stimulus contrast depends little on stimulus type; (7) that a substantial fraction of the information rate can be confounded among multiple stimulus attributes; and, most importantly, (8) that averaging together the responses of multiple nearby neurons leads to a significant loss of information that increases as more neurons are considered. These results should serve as a basis for direct investigation into the cellular mechanisms by which the brain extracts and processes the information carried in neuronal responses.
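
The information-theoretic machinery behind these findings can be illustrated with a plug-in estimate of the mutual information between a binary stimulus and a spike count, computed from the joint histogram. The rates and trial numbers are invented for the example, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(5)

def mutual_info(joint):
    """Plug-in mutual information (bits) from a joint count table."""
    joint = joint / joint.sum()
    ps = joint.sum(1, keepdims=True)      # stimulus marginal
    pr = joint.sum(0, keepdims=True)      # response marginal
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

n_trials, max_count = 5000, 20
counts = np.zeros((2, max_count + 1))
for s, lam in enumerate([2.0, 8.0]):      # two stimuli, different mean counts
    resp = np.minimum(rng.poisson(lam, n_trials), max_count)
    counts[s] = np.bincount(resp, minlength=max_count + 1)

info = mutual_info(counts)
print(info)    # bits per trial, upper-bounded by H(S) = 1 bit
```

Because the two count distributions overlap slightly, the information falls a little short of the 1-bit ceiling set by the stimulus entropy; pooling responses across neurons, as in finding (8), would shrink it further.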

    Self-organized Criticality in Neural Networks by Inhibitory and Excitatory Synaptic Plasticity

    Neural networks show intrinsic ongoing activity even in the absence of information processing and task-driven activities. This spontaneous activity has been reported to have specific characteristics ranging from scale-free avalanches in microcircuits to the power-law decay of the power spectrum of oscillations in coarse-grained recordings of large populations of neurons. The emergence of scale-free activity and power-law distributions of observables has encouraged researchers to postulate that the neural system is operating near a continuous phase transition. At such a phase transition, changes in control parameters or the strength of the external input lead to a change in the macroscopic behavior of the system. On the other hand, at a critical point, critical slowing down makes phenomenological mesoscopic modeling of the system realizable. Two distinct types of phase transitions have been suggested as the operating point of the neural system, namely active-inactive and synchronous-asynchronous phase transitions. In contrast to normal phase transitions, in which fine-tuning of the control parameter(s) is required to bring the system to the critical point, neural systems should be supplemented with self-tuning mechanisms that adaptively keep the system near the critical point (or critical region) in phase space. In this work, we introduce a self-organized critical model of the neural network. We consider the dynamics of sparsely connected excitatory and inhibitory (EI) populations of spiking leaky integrate-and-fire neurons with conductance-based synapses. Ignoring inhomogeneities and internal fluctuations, we first analyze the mean-field model. We choose the strength of the external excitatory input and the average strength of excitatory-to-excitatory synapses as control parameters of the model and analyze the bifurcation diagram of the mean-field equations.
We focus on bifurcations in the low firing rate regime, in which the quiescent state loses stability through saddle-node or Hopf bifurcations. In particular, at the Bogdanov-Takens (BT) bifurcation point, the intersection of the Hopf and saddle-node bifurcation lines of the 2D dynamical system, the network shows avalanche dynamics with power-law avalanche size and duration distributions. This matches the characteristics of low-firing-rate spontaneous activity in the cortex. By linearizing the gain functions and the excitatory and inhibitory nullclines, we can approximate the location of the BT bifurcation point. This point in the control parameter space corresponds to an internal balance of excitation and inhibition and a slight excess of external excitatory input to the excitatory population. Due to the tight balance of average excitation and inhibition currents, the firing of the individual cells is fluctuation-driven. Around the BT point, the spiking of neurons is a Poisson process and the population-averaged membrane potential is approximately at the middle of the operating interval [V_rest, V_th]. Moreover, the EI network is close to both the oscillatory and the active-inactive phase transition regimes. Next, we consider self-tuning of the system at this critical point. The self-organizing parameter in our network is the balance of the opposing forces of the inhibitory and excitatory populations' activities, and the self-organizing mechanisms are long-term synaptic plasticity and short-term depression of the synapses. The former tunes the overall strength of the excitatory and inhibitory pathways close to the balanced regime of these currents, while the latter, based on the finite amount of synaptic resources in brain areas, acts as an adaptive mechanism that tunes micro-populations of neurons subjected to fluctuating external inputs to attain the balance over a wider range of external input strengths.
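
The fluctuation-driven, near-Poisson firing described above can be reproduced with a single leaky integrate-and-fire neuron receiving balanced excitatory and inhibitory Poisson input: the mean currents cancel, so spikes are driven purely by fluctuations and the interspike-interval coefficient of variation approaches that of a Poisson process. All constants are illustrative, not taken from the model in the text.

```python
import numpy as np

rng = np.random.default_rng(4)

# Leaky integrate-and-fire neuron with exactly balanced E and I Poisson input.
dt, tau, v_th, v_reset = 1e-4, 0.02, 1.0, 0.0
rate_e = rate_i = 8000.0        # summed presynaptic rates (spikes/s)
w_e, w_i = 0.04, -0.04          # balanced synaptic weights (mean input = 0)

n_steps = int(100.0 / dt)       # 100 s of simulated time
n_e = rng.poisson(rate_e * dt, n_steps)
n_i = rng.poisson(rate_i * dt, n_steps)

v, spikes = 0.0, []
for step in range(n_steps):
    v += -v / tau * dt + w_e * n_e[step] + w_i * n_i[step]
    if v >= v_th:
        spikes.append(step * dt)
        v = v_reset
isis = np.diff(spikes)
cv = isis.std() / isis.mean()
print(len(spikes), cv)
```

The threshold sits roughly two standard deviations above the resting potential here, so firing is sparse and irregular, with a CV near 1.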
Using the Poisson firing assumption, we propose a microscopic Markovian model which captures the internal fluctuations in the network due to the finite size and matches the macroscopic mean-field equation by coarse-graining. Near the critical point, a phenomenological mesoscopic model for the excitatory and inhibitory fields of activity is possible due to the time-scale separation of slowly changing variables and fast degrees of freedom. We show that the mesoscopic model corresponding to the neural field model near the local Bogdanov-Takens bifurcation point matches the Langevin description of the directed percolation process. Tuning the system to the critical point can be achieved by coupling fast population dynamics with slow adaptive gain and synaptic weight dynamics, which make the system wander around the phase transition point. Therefore, by introducing short-term and long-term synaptic plasticity, we propose a self-organized critical stochastic neural field model.
Contents:
1. Introduction
   1.1. Scale-free Spontaneous Activity
        1.1.1. Nested Oscillations in the Macro-scale Collective Activity
        1.1.2. Up and Down States Transitions
        1.1.3. Avalanches in Local Neuronal Populations
   1.2. Criticality and Self-organized Criticality in Systems out of Equilibrium
        1.2.1. Sandpile Models
        1.2.2. Directed Percolation
   1.3. Critical Neural Models
        1.3.1. Self-Organizing Neural Automata
        1.3.2. Criticality in the Mesoscopic Models of Cortical Activity
   1.4. Balance of Inhibition and Excitation
   1.5. Functional Benefits of Being in the Critical State
   1.6. Arguments Against the Critical State of the Brain
   1.7. Organization of the Current Work
2. Single Neuron Model
   2.1. Impulse Response of the Neuron
   2.2. Response of the Neuron to the Constant Input
   2.3. Response of the Neuron to the Poisson Input
        2.3.1. Potential Distribution of a Neuron Receiving Poisson Input
        2.3.2. Firing Rate and Interspike Intervals’ CV Near the Threshold
        2.3.3. Linear Poisson Neuron Approximation
3. Interconnected Homogeneous Population of Excitatory and Inhibitory Neurons
   3.1. Linearized Nullclines and Different Dynamic Regimes
   3.2. Logistic Function Approximation of Gain Functions
   3.3. Dynamics Near the BT Bifurcation Point
   3.4. Avalanches in the Region Close to the BT Point
   3.5. Stability Analysis of the Fixed Points in the Linear Regime
   3.6. Characteristics of Avalanches
4. Long-Term and Short-Term Synaptic Plasticity Rules Tune the EI Population Close to the BT Bifurcation Point
   4.1. Long-Term Synaptic Plasticity by STDP Tunes Synaptic Weights Close to the Balanced State
   4.2. Short-Term Plasticity and Up-Down States Transition
5. Interconnected Network of EI Populations: Wilson-Cowan Neural Field Model
6. Stochastic Neural Field
   6.1. Finite-Size Fluctuations in a Single EI Population
   6.2. Stochastic Neural Field with a Tuning Mechanism to the Critical State
7. Conclusion
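
The avalanche phenomenology above can be caricatured with a Galton-Watson branching process, a standard minimal model of neuronal avalanches (not the thesis's EI network): each active unit activates a Poisson number of descendants with mean m, the branching ratio. Subcritical branching (m < 1) gives mean avalanche size 1/(1-m), and m → 1 produces the power-law size distributions associated with criticality.

```python
import numpy as np

rng = np.random.default_rng(7)

def avalanche_size(m, cap=10**6):
    """Total activations in one avalanche of a Poisson branching process."""
    size, active = 1, 1
    while active > 0 and size < cap:
        active = rng.poisson(m * active)   # descendants of the active units
        size += active
    return size

# Subcritical case: theory predicts mean size 1 / (1 - m) = 10 for m = 0.9.
sizes = np.array([avalanche_size(0.9) for _ in range(20000)])
print(sizes.mean())
```

Raising m toward 1 fattens the tail of the size distribution toward the power law that self-organizing mechanisms like those in Chapter 4 are meant to maintain.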