
    Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes

    There is mounting experimental evidence that brain-state-specific neural mechanisms, supported by connectomic architectures, serve to combine past and contextual knowledge with the current, incoming flow of evidence (e.g. from sensory systems). Such mechanisms are distributed across multiple spatial and temporal scales and require dedicated support at the level of individual neurons and synapses. A prominent feature of the neocortex is the structure of large, deep pyramidal neurons, which show a peculiar separation between an apical dendritic compartment and a basal dendritic/peri-somatic compartment, with distinctive patterns of incoming connections and brain-state-specific activation mechanisms: apical-amplification, -isolation and -drive, associated with wakefulness, deeper NREM sleep stages and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning spiking networks are based on single-compartment neurons that lack a description of mechanisms for combining apical and basal/somatic information. This work aims to provide the computational community with a two-compartment spiking neuron model that includes features essential for supporting brain-state-specific learning, together with a piece-wise linear transfer function (ThetaPlanes) at the highest abstraction level, to be used in large-scale bio-inspired artificial intelligence systems. A machine learning algorithm, constrained by a set of fitness functions, selected the parameters defining neurons expressing the desired apical mechanisms.
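    The apical-amplification and apical-isolation regimes described above can be caricatured in a few lines of code. The sketch below is a minimal illustration, not the paper's fitted model: the function `theta_planes`, its gain and threshold constants, and the leaky two-compartment integration are all hypothetical stand-ins, and the apical-drive regime is omitted.

```python
import numpy as np

def theta_planes(v_basal, v_apical, gain=2.0, theta=1.0):
    """Hypothetical piecewise-linear transfer function: basal drive is
    amplified when apical activation exceeds a threshold
    (apical-amplification) and passed through unchanged otherwise
    (apical-isolation). Names and constants are illustrative."""
    amp = np.where(v_apical > theta, gain, 1.0)
    return np.maximum(0.0, amp * v_basal)

def simulate(basal_in, apical_in, dt=1.0, tau=10.0):
    """Two leaky compartments integrated with forward Euler; the output
    rate is read from the piecewise-linear combination of the two
    compartment voltages."""
    vb = va = 0.0
    rates = []
    for ib, ia in zip(basal_in, apical_in):
        vb += dt / tau * (-vb + ib)
        va += dt / tau * (-va + ia)
        rates.append(float(theta_planes(vb, va)))
    return np.array(rates)
```

    With a constant basal input, strong apical input roughly doubles the steady-state output (amplification), while sub-threshold apical input leaves the basal signal unchanged (isolation).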

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or set of linear filters, followed by a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and the gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships connecting the change of the gain with respect to both mean and variance to the receptive fields obtained by reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain-modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
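    The linear/nonlinear pipeline and its estimation by reverse correlation can be sketched directly. The example below is illustrative, not taken from the paper: the exponential filter shape, the sigmoidal gain curve, and all constants are assumptions, but the recovery step is the standard spike-triggered average on a Gaussian white-noise stimulus.

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise stimulus and spikes from a known linear/nonlinear model.
T, L = 50_000, 20
stim = rng.standard_normal(T)
true_filter = np.exp(-np.arange(L) / 5.0)      # assumed linear filter
true_filter /= np.linalg.norm(true_filter)

# Generator signal: stimulus passed through the linear filter.
g = np.convolve(stim, true_filter)[:T]

# Gain curve: sigmoidal firing probability, scaled to a modest spike rate.
p_spike = 0.1 / (1.0 + np.exp(-(g - 1.0) / 0.5))
spikes = rng.random(T) < p_spike

# Empirical filter recovered by reverse correlation (spike-triggered average).
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= L]
sta = np.mean([stim[t - L + 1:t + 1][::-1] for t in spike_times], axis=0)
sta /= np.linalg.norm(sta)
```

    For Gaussian white noise, the normalized spike-triggered average converges to the true filter direction as the spike count grows, which is what makes the empirical linear/nonlinear model well defined in the first place.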

    Modeling the coupling of action potential and electrodes

    The present monograph is a study of pulse propagation in nerves. Its main project is the modeling and simulation of action potential propagation in a neuron and its interaction with electrodes in the context of neurochip applications. In the first part, I work with an adapted FitzHugh-Nagumo model derived from the Hodgkin-Huxley model. The second part turns the spotlight onto the drawbacks of the Hodgkin-Huxley model and brings forth an alternative: the soliton model. The purpose is to comprehend the role of the membrane state in pulse propagation.
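    The FitzHugh-Nagumo reduction mentioned above is compact enough to state in full. The sketch below uses the standard textbook parameter values and a plain forward-Euler integration; it is a generic illustration of the model, not the adapted version used in the monograph.

```python
import numpy as np

def fitzhugh_nagumo(I, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=20_000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations
        dv/dt = v - v**3 / 3 - w + I
        dw/dt = (v + a - b * w) / tau
    with standard parameter values; returns the voltage trace v(t)."""
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for i in range(steps):
        v += dt * (v - v**3 / 3.0 - w + I)
        w += dt * (v + a - b * w) / tau
        vs[i] = v
    return vs
```

    With I = 0.5 the model fires a periodic train of pulses; with I = 0 it relaxes to rest, illustrating the excitable regime that makes it a useful caricature of Hodgkin-Huxley dynamics.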

    Ion Channel Density Regulates Switches between Regular and Fast Spiking in Soma but Not in Axons

    The threshold firing frequency of a neuron is a characterizing feature of its dynamical behaviour, in turn determining its role in the oscillatory activity of the brain. Two main types of dynamics have been identified in brain neurons. Type 1 dynamics (regular spiking) shows a continuous relationship between frequency and stimulation current (f-Istim) and, thus, an arbitrarily low frequency at threshold current; Type 2 (fast spiking) shows a discontinuous f-Istim relationship and a minimum threshold frequency. In a previous study of a hippocampal neuron model, we demonstrated that its dynamics could be of both Type 1 and Type 2, depending on ion-channel density. In the present study we analyse the effect of varying channel density on the threshold firing frequency in two well-studied axon membranes, namely the frog myelinated axon and the squid giant axon. Moreover, we analyse the hippocampal neuron model in more detail. The models are all based on voltage-clamp studies, and thus comprise experimentally measurable parameters. The choice of analysing the effects of channel-density modifications is due to their physiological and pharmacological relevance. We show, using bifurcation analysis, that both axon models display exclusively Type 2 dynamics, independently of ion-channel density. Nevertheless, both models have a region in the channel-density plane characterized by an N-shaped steady-state current-voltage relationship (a prerequisite for Type 1 dynamics, and associated with this type of dynamics in the hippocampal model). In summary, our results suggest that the hippocampal soma and the two axon membranes represent two distinct kinds of membranes: membranes with channel-density-dependent switching between Type 1 and Type 2 dynamics, and membranes with channel-density-independent dynamics. The difference between the two membrane types suggests functional differences, compatible with a more flexible role for the soma membrane than for the axon membrane.
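    The Type 1 signature described above, an f-Istim curve that rises continuously from zero at the threshold current, can be demonstrated numerically. The sketch below computes the f-Istim curve of a quadratic integrate-and-fire neuron, the canonical Type 1 model; it is an illustration of the concept, not one of the voltage-clamp-based axon or hippocampal models analysed in the paper.

```python
import numpy as np

def qif_rate(I, dt=1e-3, t_max=50.0, v_reset=-10.0, v_th=10.0):
    """Firing rate of a quadratic integrate-and-fire neuron,
    dv/dt = v**2 + I, the canonical Type 1 (continuous-onset) model.
    Spikes are registered at v_th, after which v resets."""
    v, n_spikes = v_reset, 0
    for _ in range(int(t_max / dt)):
        v += dt * (v * v + I)
        if v >= v_th:
            v = v_reset
            n_spikes += 1
    return n_spikes / t_max

currents = [0.0, 0.1, 0.5, 1.0, 2.0]
rates = [qif_rate(I) for I in currents]
```

    At the rheobase the rate is zero, and just above it the rate is arbitrarily low; a Type 2 membrane, by contrast, would jump to a nonzero minimum frequency at threshold.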

    The Interplay of Architecture and Correlated Variability in Neuronal Networks

    This much is certain: neurons are coupled, and they exhibit covariations in their output. The extent of each has no single answer. Moreover, the strength of neuronal correlations in particular has been a subject of hot debate within the neuroscience community over the past decade, as advancing recording techniques have made available a wealth of new, sometimes seemingly conflicting, datasets. The impact of connectivity and the resulting correlations on the ability of animals to perform necessary tasks is even less well understood. In order to answer the relevant questions, novel approaches must be developed. This work focuses on three somewhat distinct, but inseparably coupled, crucial avenues of research within the broader field of computational neuroscience. First, there is a need for tools which can be applied, both by experimentalists and theorists, to understand how networks transform their inputs. In turn, these tools will allow neuroscientists to tease apart the structure which underlies network activity. The Generalized Thinning and Shift framework, presented in Chapter 4, addresses this need. Next, taking for granted a general understanding of network architecture as well as some grasp of the behavior of its individual units, we must be able to reverse the activity-to-structure relationship and understand instead how network structure determines dynamics. We achieve this in Chapters 5 through 7, where we present an application of linear response theory yielding an explicit approximation of correlations in integrate-and-fire neuronal networks. This approximation reveals the explicit relationship between correlations, structure, and marginal dynamics. Finally, we must strive to understand the functional impact of network dynamics and architecture on the tasks that a neural network performs. This need motivates our analysis of a biophysically detailed model of the blow fly visual system in Chapter 8. Our hope is that the work presented here represents significant advances in multiple directions within the field of computational neuroscience.
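    The thinning idea behind frameworks like the Generalized Thinning and Shift construction mentioned above can be sketched in a few lines. The version below is a deliberately minimal special case, assuming a single common mother process and zero-lag copies (the full framework also shifts copied spikes in time); the function name and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def thinned_pair(rate, corr, t_max):
    """Two correlated Poisson spike trains built by independently
    thinning a common 'mother' process: each mother spike is kept in
    each train with probability p = corr, giving marginal rate `rate`
    and a spike-count correlation of about corr at any bin size."""
    p = corr
    n_mother = rng.poisson(rate / p * t_max)
    mother = np.sort(rng.uniform(0.0, t_max, n_mother))
    return [mother[rng.random(n_mother) < p] for _ in range(2)]

t_max = 200.0
t1, t2 = thinned_pair(rate=20.0, corr=0.3, t_max=t_max)
edges = np.arange(0.0, t_max + 0.1, 0.1)   # 100 ms count bins
c1 = np.histogram(t1, edges)[0]
c2 = np.histogram(t2, edges)[0]
rho = np.corrcoef(c1, c2)[0, 1]
```

    Given a bin's mother count M, each train's count is Binomial(M, p), so the count covariance is p² Var(M) and the count variance is p E[M], making the correlation coefficient equal to p independently of bin size; this tunability is what makes thinning useful for probing how networks transform correlated inputs.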

    Computation in the high-conductance state
