
    A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons

    We present a biophysical approach for coupling neural network activity, as resulting from proper dipole currents of cortical pyramidal neurons, to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space, and thereby for the dendritic field potential that contributes to the local field potential of a neural population. This work satisfies the widespread dipole assumption, which is motivated by the "open-field" configuration of the dendritic field potential around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with the ad hoc model of Mazzoni et al. [Mazzoni, A., S. Panzeri, N. K. Logothetis, and N. Brunel (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Computational Biology 4 (12), e1000239], and conclude that our biophysically motivated approach yields a substantial improvement. (31 pages, 4 figures)
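Since the abstract builds on networks of leaky integrate-and-fire models, a minimal sketch of the single-neuron dynamics may help orient the reader. The parameter values below are generic textbook choices, not the paper's.

```python
import numpy as np

def simulate_lif(i_in, dt=1e-4, tau=0.02, v_rest=-0.065, v_thresh=-0.050,
                 v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire neuron driven by an input current i_in (A).

    Returns the membrane potential trace (V) and the spike times (s).
    """
    v = np.full(len(i_in), v_rest)
    spikes = []
    for t in range(1, len(i_in)):
        # Leaky integration of the input current toward rest.
        v[t] = v[t - 1] + (-(v[t - 1] - v_rest) + r_m * i_in[t - 1]) * dt / tau
        if v[t] >= v_thresh:      # threshold crossing: emit a spike...
            spikes.append(t * dt)
            v[t] = v_reset        # ...and reset the membrane potential
    return v, spikes

# A constant suprathreshold current (2 nA for 0.5 s) yields regular spiking.
v, spikes = simulate_lif(np.full(5000, 2e-9))
```

In the paper's observation model, the dipole current generating the field potential would be derived from the compartmental currents of such units rather than from the spikes alone.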

    Dwelling Quietly in the Rich Club: Brain Network Determinants of Slow Cortical Fluctuations

    For more than a century, cerebral cartography has been driven by investigations of the structural and morphological properties of the brain across spatial scales, and of the temporal and functional phenomena that emerge from these underlying features. The next era of brain mapping will be driven by studies that consider both of these components of brain organization simultaneously, elucidating their interactions and dependencies. Using this guiding principle, we explored the origin of slowly fluctuating patterns of synchronization within the topological core of brain regions known as the rich club, implicated in the regulation of mood and introspection. We find that a constellation of densely interconnected regions constituting the rich club (including the anterior insula, amygdala, and precuneus) plays a central role in promoting a stable, dynamical core of spontaneous activity in the primate cortex. The slow time scales are well matched to the regulation of internal visceral states, corresponding to the somatic correlates of mood and anxiety. In contrast, the surrounding "feeder" cortical regions show unstable, rapidly fluctuating dynamics that are likely crucial for fast perceptual processes. We discuss these findings in relation to psychiatric disorders and the future of connectomics. (35 pages, 6 figures)
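The rich club mentioned above is conventionally quantified by the rich-club coefficient: the density of edges among nodes whose degree exceeds a threshold k. A minimal sketch, using an invented toy graph rather than any connectome data:

```python
import numpy as np

def rich_club_coefficient(adj, k):
    """Rich-club coefficient of an undirected, unweighted graph: the edge
    density of the subgraph spanned by nodes with degree greater than k."""
    degrees = adj.sum(axis=0)
    rich = np.where(degrees > k)[0]
    n = len(rich)
    if n < 2:
        return np.nan  # coefficient undefined for fewer than two rich nodes
    sub = adj[np.ix_(rich, rich)]
    return sub.sum() / (n * (n - 1))  # sub.sum() counts each edge twice

# Toy example: a triangle of hubs, each with one extra leaf attached.
# With k=1 the three hub nodes form a fully connected rich club.
adj = np.zeros((6, 6), dtype=int)
for a, b in [(0, 1), (1, 2), (0, 2), (0, 3), (1, 4), (2, 5)]:
    adj[a, b] = adj[b, a] = 1
phi = rich_club_coefficient(adj, k=1)
```

Empirical studies typically normalize this coefficient against degree-preserving random graphs before claiming rich-club organization.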

    Mean field modelling of human EEG: application to epilepsy

    Aggregated electrical activity from brain regions, recorded via the electroencephalogram (EEG), reveals that the brain is never at rest, producing a spectrum of ongoing oscillations that change with behavioural state and neurological condition. In particular, this thesis focusses on the pathological oscillations associated with absence seizures, which typically affect 2–16 year old children. Investigations of the cellular and network mechanisms of absence seizures have implicated abnormalities in cortical and thalamic activity in their generation, providing much insight into the potential cause of this disease. A number of competing hypotheses have been suggested, but the precise cause has yet to be determined. This work attempts to explain these abnormal rhythms by considering a physiologically based, macroscopic continuum mean-field model of the brain's electrical activity. The methodology taken in this thesis is to assume that many of the physiological details of the involved brain structures can be aggregated into continuum state variables and parameters. This has the advantage of indirectly encapsulating into state variables and parameters many known physiological mechanisms underlying the genesis of epilepsy, which reduces the complexity of the problem. That is, a macroscopic description of the brain structures involved in epilepsy is taken, and by scanning the parameters of the model, state changes in the system can be identified. Thus, this work demonstrates how changes in brain state, as determined from the EEG, can be understood via dynamical state changes in the model, providing an explanation of absence seizures. Furthermore, key observations from both the model and the EEG data motivate a number of model reductions.
These reductions provide approximate solutions of the seizure oscillations and a better understanding of the periodic oscillations arising from the involved brain regions. Local analysis of the oscillations is performed by employing dynamical systems theory, which provides necessary and sufficient conditions for their appearance. Finally, local and global stability is proved for the reduced model in a reduced region of the parameter space. The results obtained in this thesis can be extended, and suggestions are provided for future progress in this area.
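To give a flavour of the mean-field modelling style described above, here is a minimal two-population (excitatory/inhibitory) rate model with the classic Wilson-Cowan coupling constants. It is purely illustrative and is not the thesis's cortico-thalamic model.

```python
import numpy as np

def wilson_cowan(p_drive, dt=1e-3, tau=0.01, n_steps=5000):
    """Two-population mean-field rate model with the classic Wilson-Cowan
    coupling constants; p_drive is the external input to the excitatory
    population."""
    def s(x, a, theta):  # sigmoidal firing-rate function
        return 1.0 / (1.0 + np.exp(-a * (x - theta)))
    e, i = 0.1, 0.1
    trace = np.empty(n_steps)
    for t in range(n_steps):
        de = (-e + s(16.0 * e - 12.0 * i + p_drive, 1.3, 4.0)) * dt / tau
        di = (-i + s(15.0 * e - 3.0 * i, 2.0, 3.7)) * dt / tau
        e, i = e + de, i + di
        trace[t] = e  # excitatory activity as a crude EEG proxy
    return trace

# Scanning p_drive moves the model between quiescent, oscillatory, and
# saturated regimes; this kind of parameter scan is how state changes
# are identified in mean-field models of seizures.
trace = wilson_cowan(p_drive=1.25)
```

The thesis's model is far richer (cortico-thalamic structure, spatial continuum), but the principle of identifying dynamical regimes by scanning parameters is the same.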

    On the Electrodynamics of Neural Networks

    We present a microscopic approach for the coupling of cortical activity, as resulting from proper dipole currents of pyramidal neurons, to the electromagnetic field in extracellular fluid in the presence of diffusion and Ohmic conduction. Starting from a full-fledged three-compartment model of a single pyramidal neuron, including shunting and dendritic propagation, we derive an observation model for dendritic dipole currents in extracellular space, and thereby for the dendritic field potential that contributes to the local field potential of a neural population. Under reasonable simplifications, we then derive a leaky integrate-and-fire model for the dynamics of a neural network, which facilitates comparison with existing neural network and observation models. In particular, we compare our results with a related model by means of numerical simulations. Performing a continuum limit, neural activity becomes represented by a neural field equation, while an observation model for electric field potentials is obtained from the interaction of cortical dipole currents with the charge density in non-resistive extracellular space, as described by the Nernst-Planck equation. Our work consistently satisfies the widespread dipole assumption discussed in the neuroscientific literature.
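For reference, the Nernst-Planck description of ion transport invoked above takes the standard textbook form (generic notation, not necessarily the paper's; F, R, and T are the Faraday constant, gas constant, and temperature):

```latex
% Flux and continuity for ion species k with concentration c_k,
% diffusion coefficient D_k, valence z_k, and electric potential \phi:
\mathbf{j}_k = -D_k \left( \nabla c_k + \frac{z_k F}{R T}\, c_k \nabla \phi \right),
\qquad
\frac{\partial c_k}{\partial t} = -\nabla \cdot \mathbf{j}_k .
```

The first term is Fickian diffusion; the second is electromigration in the potential gradient, which is what couples charge density to the field in non-resistive extracellular space.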

    Computational Properties of Cerebellar Nucleus Neurons: Effects of Stochastic Ion Channel Gating and Input Location

    The function of the nervous system is shaped by the refined integration of synaptic inputs taking place at the single-neuron level. Gain modulation is a computational principle used widely across the brain, in which the response of a neuronal unit to one set of inputs is affected in a multiplicative fashion by a second set of inputs, without any effect on its selectivity. The arithmetic operations performed by pyramidal cells in cortical brain areas have been well characterised, along with the underlying mechanisms at the level of networks and cells, for instance background synaptic noise and dendritic saturation. However, despite the vast amount of research on the cerebellum and its function, little is known about the neuronal computations carried out by its cellular components. A particular area of interest is the cerebellar nuclei, the main output gate of the cerebellum to the brain stem and cortical areas. The aim of this thesis is to contribute to an understanding of the arithmetic operations performed by neurons in the cerebellar nuclei. Focus is placed on two putative determinants: the location of the synaptic input and the presence of channel noise. To analyse the effect of channel noise, the known voltage-gated ion channels of a cerebellar nucleus neuron model are translated to stochastic Markov formalisms and their electrophysiological behaviour is compared to that of their deterministic Hodgkin-Huxley counterparts. The findings demonstrate that in most cases the behaviour of the stochastic channels matches the reference deterministic models, with the notable exception of voltage-gated channels with fast kinetics. Two potential explanations are suggested for this discrepancy. Firstly, channels with fast kinetics are strongly affected by the artefactual loss of gating events in the simulation, caused by the use of a finite-length time step.
While this effect can be mitigated, in part, by using very small time steps, the second source of simulation artefacts is the rectification of the distribution of open channels when the channel kinetics allow the generation of a window current with a temporally averaged equilibrium close to zero. Further, stochastic gating is implemented in a realistic cerebellar nucleus neuronal model. The resulting stochastic model exhibits probabilistic spiking and an output rate similar to that of the corresponding deterministic cerebellar nucleus neuronal model. However, the outcomes of this thesis indicate that the computational properties of the cerebellar nucleus neuronal model are independent of the presence of ion channel noise. The main result of this thesis is that the synaptic input location determines the single-neuron computational properties, in both the cerebellar nucleus and layer Vb pyramidal neuronal models. The extent of multiplication increases systematically with distance from the soma for the cerebellar nucleus, but not for the layer Vb pyramidal neuron, where it is smaller than would be expected for the distance from the soma. For both neurons, the underlying mechanism is related to the combined effect of the nonlinearities introduced by dendritic saturation and the synaptic input noise. However, while excitatory inputs to perisomatic areas of the cerebellar nucleus undergo additive operations and those to distal areas multiplicative ones, in the layer Vb pyramidal neuron the integration of the excitatory driving input is always multiplicative. In addition, the change in gain is sensitive to the synchronicity of the excitatory synaptic input in the layer Vb pyramidal neuron, but not in the cerebellar nucleus neuron. These observations indicate that the same gain control mechanism might be utilised in distinct ways, in different computational contexts and across different areas, based on the neuronal type and its function.
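The translation of a gating variable into a stochastic Markov scheme can be illustrated with a two-state (closed/open) channel population updated binomially at each time step; the rates and channel counts below are invented for illustration. Note how the per-step transition probabilities alpha*dt and beta*dt are exactly where a finite time step under-counts fast gating events, the first artefact discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_gate(n_channels, alpha, beta, dt, n_steps):
    """Two-state (closed <-> open) channel population, updated binomially.

    alpha, beta: opening and closing rates (1/s). Each step, every closed
    channel opens with probability alpha*dt and every open channel closes
    with probability beta*dt; transitions faster than dt are missed.
    """
    n_open = 0
    trace = np.empty(n_steps)
    for t in range(n_steps):
        opened = rng.binomial(n_channels - n_open, min(alpha * dt, 1.0))
        closed = rng.binomial(n_open, min(beta * dt, 1.0))
        n_open += opened - closed
        trace[t] = n_open / n_channels
    return trace

# The open fraction fluctuates around alpha / (alpha + beta) = 0.25,
# with channel noise shrinking as n_channels grows.
trace = stochastic_gate(n_channels=1000, alpha=100.0, beta=300.0,
                        dt=1e-5, n_steps=20000)
```

Realistic channel models have more states per channel (multiple gates), but the binomial-update principle and its time-step artefact carry over directly.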

    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a two-terminal passive component with a dynamic resistance that depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide non-practitioners in the field of memristive circuits and their connection to machine learning and neural computation. (Perspective paper for MDPI Technologies; 43 pages)
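A minimal sketch of "a dynamic resistance depending on an internal parameter", loosely following the HP-style model with linear dopant drift; the parameter values are illustrative, not taken from any particular device.

```python
import numpy as np

def simulate_memristor(v_in, dt, r_on=100.0, r_off=16e3, mu=1e-14, d=1e-8):
    """HP-style memristor with linear dopant drift: the resistance is a
    mixture of r_on and r_off weighted by the internal state x in [0, 1],
    and x drifts in proportion to the current through the device."""
    x = 0.5                      # normalized dopant front position
    i_out = np.empty(len(v_in))
    for t, v in enumerate(v_in):
        r = r_on * x + r_off * (1.0 - x)
        i_out[t] = v / r
        x += mu * r_on / d ** 2 * i_out[t] * dt  # state drift ~ charge flow
        x = min(max(x, 0.0), 1.0)                # hard bounds on the state
    return i_out

# A sinusoidal drive traces the pinched hysteresis loop characteristic of
# memristive devices: the i-v curve always passes through the origin.
t = np.linspace(0.0, 1.0, 10000)
i_out = simulate_memristor(np.sin(2 * np.pi * t), dt=t[1] - t[0])
```

The hard clipping of x is the simplest window function; published models use smoother windows to avoid the boundary sticking this introduces.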

    Recognizing recurrent neural networks (rRNN): Bayesian inference for recurrent neural networks

    Recurrent neural networks (RNNs) are widely used in computational neuroscience and machine learning applications. In an RNN, each neuron computes its output as a nonlinear function of its integrated input. While the importance of RNNs, especially as models of brain processing, is undisputed, it is also widely acknowledged that the computations in standard RNN models may be an over-simplification of what real neuronal networks compute. Here, we suggest that the RNN approach may be made both neurobiologically more plausible and computationally more powerful by its fusion with Bayesian inference techniques for nonlinear dynamical systems. In this scheme, we use an RNN as a generative model of dynamic input caused by the environment, e.g. speech or kinematics. Given this generative RNN model, we derive Bayesian update equations that can decode its output. Critically, these updates define a 'recognizing RNN' (rRNN), in which neurons compute and exchange prediction and prediction error messages. The rRNN has several desirable features that a conventional RNN does not have, for example, fast decoding of dynamic stimuli and robustness to initial conditions and noise. Furthermore, it implements a predictive coding scheme for dynamic inputs. We suggest that the Bayesian inversion of recurrent neural networks may be useful both as a model of brain function and as a machine learning tool. We illustrate the use of the rRNN with an application to the online decoding (i.e. recognition) of human kinematics.
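The flavour of the recognition dynamics (predict from the generative model, then correct with a prediction-error message) can be sketched as follows. This toy filter is only an illustration of the predictive-coding idea; it is not the Bayesian update equations derived in the paper, and the generative RNN, noise level, and gain k are all invented.

```python
import numpy as np

def rrnn_decode(ys, w, k=0.5):
    """Filter noisy observations of an assumed generative RNN: predict the
    next state with the generative model, then correct the estimate with
    the prediction error (a crude stand-in for a Bayesian update)."""
    x = np.zeros(w.shape[0])
    estimates = []
    for y in ys:
        pred = np.tanh(w @ x)      # generative model's prediction
        x = pred + k * (y - pred)  # correct with the prediction error message
        estimates.append(x.copy())
    return np.array(estimates)

# Generate a trajectory from a random generative RNN, observe it with noise,
# and decode: the corrected estimates should track the latent states better
# than the raw observations do.
rng = np.random.default_rng(1)
w = rng.normal(scale=0.4, size=(4, 4))
x = rng.normal(size=4)
truth, ys = [], []
for _ in range(300):
    x = np.tanh(w @ x)
    truth.append(x)
    ys.append(x + 0.1 * rng.normal(size=4))
truth, ys = np.array(truth), np.array(ys)
est = rrnn_decode(ys, w)
```

In the paper, the correction gain is derived from the generative model and noise statistics rather than fixed by hand, which is what makes the scheme a proper Bayesian inversion.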