Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes
There is mounting experimental evidence that brain-state-specific neural mechanisms, supported by connectomic architectures, serve to combine past and contextual knowledge with the current, incoming flow of evidence (e.g. from sensory systems). Such mechanisms are distributed across multiple spatial and temporal scales and require dedicated support at the level of individual neurons and synapses. A prominent feature of the neocortex is the structure of large, deep pyramidal neurons, which show a peculiar separation between an apical dendritic compartment and a basal dendritic/peri-somatic compartment, with distinctive patterns of incoming connections and brain-state-specific activation mechanisms, namely apical-amplification, -isolation and -drive, associated with wakefulness, deeper NREM sleep stages and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning in spiking networks are based on single-compartment neurons that lack mechanisms for combining apical and basal/somatic information. This work aims to provide the computational community with a two-compartment spiking neuron model that includes features essential for supporting brain-state-specific learning, together with a piece-wise linear transfer function (ThetaPlanes) at the highest abstraction level, to be used in large-scale bio-inspired artificial intelligence systems. A machine learning algorithm, constrained by a set of fitness functions, selected the parameters defining neurons expressing the desired apical mechanisms. (Comment: 19 pages, 38 figures.)
Intrinsic gain modulation and adaptive neural coding
In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. In the case that the underlying
system is fixed, we derive expressions relating the change of the gain with
respect to both mean and variance to the receptive fields obtained from
reverse correlation on a white noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity. (Comment: 24 pages, 4 figures, 1 supporting information.)
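The sampling procedure the abstract refers to can be sketched by estimating the linear filter of an LN model from white-noise stimulation via the spike-triggered average; the exponential ground-truth filter and sigmoid gain curve below are illustrative choices, not the conductance-based models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise stimulus driving a known LN system: exponential filter + sigmoid gain.
T, L = 200_000, 40
stim = rng.normal(0.0, 1.0, T)
true_filter = np.exp(-np.arange(L) / 8.0)
true_filter /= np.linalg.norm(true_filter)

drive = np.convolve(stim, true_filter)[:T]            # linear stage
p_spike = 1.0 / (1.0 + np.exp(-4.0 * (drive - 1.0)))  # nonlinear gain curve
spikes = rng.random(T) < p_spike                      # Bernoulli spiking

# Reverse correlation: average the stimulus window preceding each spike (STA).
spike_times = np.nonzero(spikes)[0]
spike_times = spike_times[spike_times >= L]
sta = np.mean([stim[t - L + 1:t + 1][::-1] for t in spike_times], axis=0)
sta /= np.linalg.norm(sta)

# For Gaussian white noise the STA recovers the filter up to scale.
similarity = float(np.dot(sta, true_filter))
```

Resampling the gain curve as the stimulus variance changes is then a matter of binning the filtered stimulus against the observed spike probability, which is how the empirical LN model in the abstract tracks variance-dependent gain.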
Nonlinear resonance and excitability in interconnected systems
Engineering design amounts to developing components and interconnecting them to obtain a desired behaviour. While in the context of equilibrium dynamics there is a well-developed theory that can account for robustness and optimality in this process, we still lack a corresponding methodology for nonequilibrium dynamics, and in particular for oscillatory behaviours. With the aim of fostering such a theory, this thesis studies two basic interconnections in the contexts of nonlinear resonance and excitability, two phenomena with the potential of encompassing a large number of applications.
The first interconnection is considered in the context of vibration absorption. It corresponds to coupling two Duffing oscillators, the prototypical example of a nonlinear resonator. Of primary interest is the frequency response of the system, which quantifies its behaviour in the presence of harmonic forcing. The analysis focuses on how isolated families of solutions appear and merge with a main one. Using singularity theory, it is possible to organise these solutions in the space of parameters and to delimit their presence through numerical methods.
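A frequency response of this kind can be illustrated numerically by sweeping the forcing frequency of a single damped Duffing oscillator and recording steady-state amplitudes; the parameter values below are illustrative, not taken from the thesis, and for a hardening spring (positive cubic term) the resonance peak leans toward higher frequencies:

```python
import math

def duffing_amplitude(omega, delta=0.2, alpha=1.0, beta=0.3, F=0.3,
                      dt=0.01, t_transient=200.0, t_measure=100.0):
    """Steady-state amplitude of x'' + delta*x' + alpha*x + beta*x**3 = F*cos(omega*t),
    integrated with classical RK4; amplitude = max |x| after transients decay."""
    def f(t, x, v):
        return v, F * math.cos(omega * t) - delta * v - alpha * x - beta * x ** 3

    x, v, t = 0.0, 0.0, 0.0
    n_trans, n_meas = int(t_transient / dt), int(t_measure / dt)
    amp = 0.0
    for i in range(n_trans + n_meas):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + dt / 2, x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(t + dt / 2, x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if i >= n_trans:                 # measure only after transients
            amp = max(amp, abs(x))
    return amp

# Near the (shifted) resonance the response dwarfs the off-resonance amplitudes.
freqs = [0.5, 1.0, 2.0]
amps = [duffing_amplitude(w) for w in freqs]
```

Sweeping a denser frequency grid in both directions (continuing each run from the previous state) would additionally expose the hysteresis and isolated solution branches that the singularity-theoretic analysis organises.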
The second interconnection studied in this dissertation appears in the context of excitable circuits. Combining a fast excitable system and a slower oscillatory system that share a similar structure naturally leads to bursting. The resulting system has a slow-fast structure that can be leveraged in the analysis. The first step of this analysis is a novel slow-fast model of bistability between a rest state and a spiking attractor. Following this, the analysis moves to the complete interconnection, in particular to how it can generate different patterns of bursting activity.
Modeling the coupling of action potentials and electrodes
The present monograph is a study of pulse propagation in nerves. The main project of this work is the modeling and simulation of action potential propagation in a neuron and its interaction with electrodes in the context of neurochip applications. In the first part, I work with an adapted FitzHugh-Nagumo model derived from the Hodgkin-Huxley model. The second part turns the spotlight onto the drawbacks of the Hodgkin-Huxley model and brings forth an alternative: the soliton model. The purpose is to comprehend the role of the membrane state in pulse propagation.
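A minimal version of the FitzHugh-Nagumo dynamics that the first part builds on can be sketched as follows, with standard textbook parameter values rather than the adapted neurochip model of the monograph:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, T=200.0):
    """Forward-Euler integration of the FitzHugh-Nagumo equations.
    v: fast membrane variable, w: slow recovery variable, I: applied current."""
    n = int(T / dt)
    v, w = -1.0, -0.5
    vs = np.empty(n)
    for i in range(n):
        dv = v - v ** 3 / 3 - w + I       # fast cubic nullcline dynamics
        dw = eps * (v + a - b * w)        # slow linear recovery
        v, w = v + dt * dv, w + dt * dw
        vs[i] = v
    return vs

trace = fitzhugh_nagumo(I=0.5)  # sustained drive -> repetitive spiking
```

Extending this to propagation, as in the monograph, amounts to adding a diffusive coupling term in v along a spatial grid, turning the ODE into a reaction-diffusion cable equation.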
Ion Channel Density Regulates Switches between Regular and Fast Spiking in Soma but Not in Axons
The threshold firing frequency of a neuron is a characterizing feature of its dynamical behaviour, in turn determining its role in the oscillatory activity of the brain. Two main types of dynamics have been identified in brain neurons. Type 1 dynamics (regular spiking) shows a continuous relationship between frequency and stimulation current (f-Istim) and, thus, an arbitrarily low frequency at threshold current; Type 2 (fast spiking) shows a discontinuous f-Istim relationship and a minimum threshold frequency. In a previous study of a hippocampal neuron model, we demonstrated that its dynamics could be of both Type 1 and Type 2, depending on ion channel density. In the present study we analyse the effect of varying channel density on threshold firing frequency in two well-studied axon membranes, namely the frog myelinated axon and the squid giant axon. Moreover, we analyse the hippocampal neuron model in more detail. The models are all based on voltage-clamp studies, thus comprising experimentally measurable parameters. The choice of analysing effects of channel density modifications is due to their physiological and pharmacological relevance. We show, using bifurcation analysis, that both axon models display exclusively Type 2 dynamics, independently of ion channel density. Nevertheless, both models have a region in the channel-density plane characterized by an N-shaped steady-state current-voltage relationship (a prerequisite for Type 1 dynamics, and associated with this type of dynamics in the hippocampal model). In summary, our results suggest that the hippocampal soma and the two axon membranes represent two distinct kinds of membranes: membranes with a channel-density-dependent switching between Type 1 and 2 dynamics, and membranes whose dynamics are independent of channel density. The difference between the two membrane types suggests functional differences, compatible with a more flexible role for the soma membrane than for the axon membrane.
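The Type 2 signature described here can be reproduced with the textbook Hodgkin-Huxley squid-axon model (standard parameters below, not the exact frog or hippocampal models analysed in the paper): the firing rate jumps discontinuously from near zero to tens of Hz as the stimulation current crosses threshold, rather than rising continuously from zero.

```python
import math

def hh_rate(I_stim, T=500.0, dt=0.01):
    """Firing rate (Hz) of the standard Hodgkin-Huxley squid-axon model under
    constant current I_stim (uA/cm^2); forward Euler, textbook parameters."""
    gNa, gK, gL = 120.0, 36.0, 0.3
    ENa, EK, EL, C = 50.0, -77.0, -54.4, 1.0
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    spikes, above = 0, False
    for _ in range(int(T / dt)):
        # voltage-dependent rate constants (ms^-1)
        am = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
        bm = 4.0 * math.exp(-(V + 65) / 18)
        ah = 0.07 * math.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + math.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
        bn = 0.125 * math.exp(-(V + 65) / 80)
        I_ion = (gNa * m ** 3 * h * (V - ENa)
                 + gK * n ** 4 * (V - EK) + gL * (V - EL))
        V += dt / C * (I_stim - I_ion)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        if V > 0 and not above:          # count upward threshold crossings
            spikes += 1
        above = V > 0
    return spikes / (T / 1000.0)

# Discontinuous f-I curve: subthreshold current gives ~0 Hz, suprathreshold tens of Hz.
rates = {I: hh_rate(I) for I in (2.0, 15.0)}
```

Scaling gNa and gK by a common channel-density factor and repeating this sweep is the kind of experiment the bifurcation analysis in the paper systematises.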
The Interplay of Architecture and Correlated Variability in Neuronal Networks
This much is certain: neurons are coupled, and they exhibit covariations in their output. The extent of each does not have a single answer. Moreover, the strength of neuronal correlations, in particular, has been a subject of heated debate within the neuroscience community over the past decade, as advancing recording techniques have made available many new, sometimes seemingly conflicting, datasets. The impact of connectivity and the resulting correlations on the ability of animals to perform necessary tasks is even less well understood. To answer the relevant questions in these areas, novel approaches must be developed.
This work focuses on three somewhat distinct, but inseparably coupled, crucial avenues of research within the broader field of computational neuroscience. First, there is a need for tools which can be applied, by both experimentalists and theorists, to understand how networks transform their inputs. In turn, these tools will allow neuroscientists to tease apart the structure which underlies network activity. The Generalized Thinning and Shift framework, presented in Chapter 4, addresses this need.
Next, taking for granted a general understanding of network architecture, as well as some grasp of the behavior of its individual units, we must be able to reverse the activity-to-structure relationship and understand instead how network structure determines dynamics. We achieve this in Chapters 5 through 7, where we present an application of linear response theory yielding an explicit approximation of correlations in integrate-and-fire neuronal networks. This approximation reveals the explicit relationship between correlations, structure, and marginal dynamics.
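The zero-frequency form of such a linear-response approximation can be sketched in a few lines: the full covariance matrix follows from the uncoupled single-neuron variances by dressing them with the network propagator. The coupling matrix, gain, and variances below are made-up illustrative numbers, not quantities computed in this work:

```python
import numpy as np

# Zero-frequency linear-response approximation of spike-count covariances:
#   C ~= (I - K)^-1 C0 (I - K)^-T,
# where K_ij = gain * W_ij is the effective interaction matrix and C0 holds
# the uncoupled single-neuron variances. All numbers here are illustrative.
rng = np.random.default_rng(1)
N = 50
W = (rng.random((N, N)) < 0.1) * 0.1        # sparse excitatory coupling
np.fill_diagonal(W, 0.0)
K = 0.8 * W                                  # common linear-response gain
C0 = np.diag(rng.uniform(0.5, 1.5, N))       # uncoupled variances

M = np.linalg.inv(np.eye(N) - K)             # network propagator: I + K + K^2 + ...
C = M @ C0 @ M.T                             # predicted covariance matrix
```

Because the propagator expands into a sum over synaptic paths, this form makes explicit how correlations between two cells accumulate contributions from direct connections, shared inputs, and longer chains through the network.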
Finally, we must strive to understand the functional impact of network dynamics and architecture on the tasks that a neural network performs. This need motivates our analysis of a biophysically detailed model of the blow fly visual system in Chapter 8. Our hope is that the work presented here represents significant advances in multiple directions within the field of computational neuroscience.