
    Wild oscillations in a nonlinear neuron model with resets: (II) Mixed-mode oscillations

    This work continues the analysis of complex dynamics in a class of bidimensional nonlinear hybrid dynamical systems with resets modeling neuronal voltage dynamics with adaptation and spike emission. We show that these models can generically display a form of mixed-mode oscillations (MMOs): trajectories featuring an alternation of small oscillations with spikes or bursts (multiple consecutive spikes). The mechanism generating them relies fundamentally on the hybrid structure of the flow: invariant manifolds of the continuous dynamics govern the small oscillations, while discrete resets govern the emission of spikes or bursts, in contrast with classical MMO mechanisms in ordinary differential equations, which involve more than three dimensions and generally rely on a timescale separation. This decomposition of mechanisms reveals the geometrical origin of MMOs and allows a relatively simple classification of points on the reset manifold associated with specific numbers of small oscillations. We show that the MMO pattern can be described through the study of orbits of a discrete adaptation map, which is singular in that it features discontinuities with unbounded left- and right-derivatives. We study orbits of the map via rotation theory for discontinuous circle maps and elucidate in detail the complex behaviors arising when MMOs display at most one small oscillation between each consecutive pair of spikes.
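The hybrid flow-plus-reset structure described in this abstract can be sketched in a few lines. The model below uses a generic quadratic voltage nonlinearity and illustrative parameter values (hypothetical, not those of the paper) purely to show the interplay of a continuous subthreshold flow and a discrete reset map:

```python
def simulate_adaptive_if(dt=1e-3, t_max=30.0):
    """Euler sketch of a planar adaptive integrate-and-fire model with a
    hybrid reset (illustrative parameters, not the model analyzed in the
    paper): dv/dt = v^2 - w + I,  dw/dt = a*(b*v - w)."""
    a, b, I = 1.0, 1.0, 1.0          # chosen so the flow has no fixed point
    v_spike, v_reset, w_jump = 2.0, -0.6, 0.3
    v, w, spikes = -0.5, 0.0, []
    for k in range(int(t_max / dt)):
        v += dt * (v * v - w + I)    # continuous subthreshold flow
        w += dt * a * (b * v - w)
        if v >= v_spike:             # discrete reset map: spike emission
            spikes.append(k * dt)
            v, w = v_reset, w + w_jump
    return spikes
```

Because the parameters admit no fixed point of the continuous flow, every trajectory eventually reaches the spiking threshold and is reinjected by the reset, producing sustained spiking.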

    On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses

    We present a mathematical analysis of networks of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale δ, where δ can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic-weight space traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, which provides a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions. Comment: 36 pages, 9 figures.
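As a toy illustration of spike times being effective only on a δ-grid, one can advance a single leaky Integrate-and-Fire neuron in steps of δ, so that every recorded spike time is a multiple of δ by construction (parameters are hypothetical; the paper analyzes full networks with adaptive conductances):

```python
def lif_spike_grid(I=1.5, tau=1.0, v_th=1.0, v_reset=0.0,
                   delta=1e-3, t_max=10.0):
    """Leaky integrate-and-fire neuron driven by constant input I; time is
    advanced in steps of the characteristic scale delta, so every recorded
    spike time lies on the delta-grid (toy sketch, illustrative parameters)."""
    v, spikes = 0.0, []
    for k in range(int(round(t_max / delta))):
        v += delta * (-v / tau + I)   # Euler step of dv/dt = -v/tau + I
        if v >= v_th:
            spikes.append(k * delta)  # spike effective on the delta-grid
            v = v_reset
    return spikes
```

With these values the neuron fires tonically (the continuous-time interspike interval is tau·ln(I/(I−v_th)) ≈ 1.1), and each recorded spike time is an integer multiple of δ.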

    Entrainment and chaos in a pulse-driven Hodgkin-Huxley oscillator

    The Hodgkin-Huxley model describes action potential generation in certain types of neurons and is a standard model for conductance-based, excitable cells. Following the early work of Winfree and Best, this paper explores the response of a spontaneously spiking Hodgkin-Huxley neuron model to a periodic pulsatile drive. The response as a function of drive period and amplitude is systematically characterized. A wide range of qualitatively distinct responses is found, including entrainment to the input pulse train and persistent chaos. These observations are consistent with the theory of kicked oscillators developed by Qiudong Wang and Lai-Sang Young. In addition to general features predicted by Wang-Young theory, it is found that most combinations of drive period and amplitude lead to entrainment rather than chaos. This preference for entrainment over chaos is explained by the structure of the Hodgkin-Huxley phase-resetting curve. Comment: Minor revisions; modified Fig. 3; added reference.
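The entrainment-versus-chaos question can be caricatured with a one-dimensional kicked phase map, in which a sinusoidal phase-resetting curve stands in for the Hodgkin-Huxley PRC (an assumption made purely for illustration):

```python
import math

def kicked_phase_map(theta, period, amplitude):
    """One step of a toy kicked-oscillator phase map: free rotation by the
    drive period plus a kick scaled by a sinusoidal phase-resetting curve.
    Z(theta) = sin(2*pi*theta) is an illustrative stand-in for the
    Hodgkin-Huxley PRC, not the curve computed in the paper."""
    prc = math.sin(2.0 * math.pi * theta)
    return (theta + period + amplitude * prc) % 1.0

def is_entrained(period, amplitude, n_transient=500, tol=1e-6):
    """Crude 1:1 entrainment test: iterate past transients, then check
    whether the phase has converged to a fixed point of the map."""
    theta = 0.1
    for _ in range(n_transient):
        theta = kicked_phase_map(theta, period, amplitude)
    theta_next = kicked_phase_map(theta, period, amplitude)
    diff = abs(theta_next - theta)
    return diff < tol or diff > 1.0 - tol
```

For a drive period at an integer multiple of the intrinsic period, a moderate kick amplitude creates a stable fixed point of the map (1:1 entrainment); detuned periods give quasiperiodic or higher-order locked behavior that this crude test reports as not 1:1 entrained.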

    Neuron as a reward-modulated combinatorial switch and a model of learning behavior

    This paper proposes a neuronal circuitry layout and synaptic plasticity principles that allow the (pyramidal) neuron to act as a "combinatorial switch". Namely, the neuron learns to be more prone to generate spikes for those combinations of firing input neurons for which a previous spiking of the neuron had been followed by a positive global reward signal. The reward signal may be mediated by certain modulatory hormones or neurotransmitters, e.g., dopamine. More generally, a trial-and-error learning paradigm is suggested in which a global reward signal triggers long-term enhancement or weakening of a neuron's spiking response to the preceding neuronal input firing pattern. Thus, rewards provide a feedback pathway that informs neurons whether their spiking was beneficial or detrimental for a particular input combination. The neuron's ability to discern specific combinations of firing input neurons is achieved through a random or predetermined spatial distribution of input synapses on dendrites that creates synaptic clusters representing various permutations of input neurons. The corresponding dendritic segments, or the enclosed individual spines, are capable of being particularly excited, due to local sigmoidal thresholding involving voltage-gated channel conductances, if the segment's excitatory inputs are temporally coincident and its inhibitory inputs are absent. Such nonlinear excitation corresponds to a particular firing combination of input neurons, and it is posited that the excitation strength encodes the combinatorial memory and is regulated by long-term plasticity mechanisms. It is also suggested that the spine calcium influx that may result from the spatiotemporal synaptic input coincidence may cause the spine head actin filaments to undergo mechanical (muscle-like) contraction, with the ensuing cytoskeletal deformation transmitted to the axon initial segment where it may... Comment: Version 5: added computer code in the ancillary files section.
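A minimal sketch of the three-factor idea described above, under the assumption of a simple multiplicative reward rule and a sigmoidal segment nonlinearity (both hypothetical simplifications of the circuitry proposed in the paper):

```python
import math

def reward_modulated_update(weights, inputs, spiked, reward, lr=0.1):
    """Three-factor learning sketch (hypothetical rule, not the paper's
    exact mechanism): if the neuron spiked for this input combination and
    a global reward followed, strengthen the synapses that were active;
    a negative reward weakens them. No spike means no update."""
    if not spiked:
        return weights
    return [w + lr * reward * x for w, x in zip(weights, inputs)]

def dendritic_segment_response(exc, inh, gain=10.0, threshold=0.5):
    """Local sigmoidal thresholding on a dendritic segment: a strong
    response requires coincident excitatory input and absent inhibition
    (illustrative gain and threshold values)."""
    drive = sum(exc) - sum(inh)
    return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))
```

The segment response is near 1 only for the "right" combination of active excitatory inputs, which is how a cluster can stand for one specific input permutation.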

    Optimal Population Codes for Space: Grid Cells Outperform Place Cells

    Rodents use two distinct neuronal coordinate systems to estimate their position: place fields in the hippocampus and grid fields in the entorhinal cortex. Whereas place cells spike at only one particular spatial location, grid cells fire at multiple sites that correspond to the points of an imaginary hexagonal lattice. We study how to best construct place and grid codes, taking the probabilistic nature of neural spiking into account. Which spatial encoding properties of individual neurons confer the highest resolution when decoding the animal’s position from the neuronal population response? A priori, estimating a spatial position from a grid code could be ambiguous, as regular periodic lattices possess translational symmetry. The solution to this problem requires lattices for grid cells with different spacings; the spatial resolution crucially depends on choosing the right ratios of these spacings across the population. We compute the expected error in estimating the position both in the asymptotic limit, using Fisher information, and for low spike counts, using maximum likelihood estimation. Achieving high spatial resolution and covering a large range of space in a grid code leads to a trade-off: the best grid code for spatial resolution is built of nested modules with different spatial periods, one inside the other, whereas maximizing the spatial range requires distinct spatial periods that are pairwise incommensurate. Optimizing the spatial resolution predicts two grid cell properties that have been experimentally observed. First, short lattice spacings should outnumber long lattice spacings. Second, the grid code should be self-similar across different lattice spacings, so that the grid field always covers a fixed fraction of the lattice period. If these conditions are satisfied and the spatial “tuning curves” for each neuron span the same range of firing rates, then the resolution of the grid code easily exceeds that of the best possible place code with the same number of neurons.
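The maximum-likelihood decoding step mentioned above can be sketched for a one-dimensional toy grid code with two modules under independent Poisson spiking (the cosine tuning curves and all parameter values below are illustrative assumptions, not the paper's):

```python
import math

def grid_rate(x, spacing, phase=0.0, r_max=10.0):
    """1D grid-cell tuning curve: a periodic bump train, self-similar
    across modules (field width a fixed fraction of the spacing)."""
    return r_max * 0.5 * (1.0 + math.cos(2.0 * math.pi * (x - phase) / spacing))

def ml_decode(counts, cells, duration, candidates):
    """Maximum-likelihood position estimate under independent Poisson
    spiking: pick the candidate x maximizing
    sum_i [k_i * log(r_i(x) * T) - r_i(x) * T]."""
    best_x, best_ll = None, -float("inf")
    for x in candidates:
        ll = 0.0
        for k, (spacing, phase) in zip(counts, cells):
            r = max(grid_rate(x, spacing, phase), 1e-9)  # avoid log(0)
            ll += k * math.log(r * duration) - r * duration
        if ll > best_ll:
            best_x, best_ll = x, ll
    return best_x
```

A single module with spacing 0.4 cannot distinguish positions 0.4 apart; adding a second module with a different spacing breaks the translational symmetry, so the joint likelihood has a unique maximum.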

    On the interaction of gamma-rhythmic neuronal populations

    Local gamma-band (~30-100 Hz) oscillations in the brain, produced by feedback inhibition on a characteristic timescale, appear in multiple areas of the brain and are associated with a wide range of cognitive functions. Some regions producing gamma also receive gamma-rhythmic input, and the interaction and coordination of these rhythms has been hypothesized to serve various functional roles. This thesis consists of three stand-alone chapters, each of which considers the response of a gamma-rhythmic neuronal circuit to input in an analytical framework. In the first, we demonstrate that several related models of a gamma-generating circuit under periodic forcing are asymptotically drawn onto an attracting invariant torus, due to the convergence of inhibition trajectories at spikes and the convergence of voltage trajectories during sustained inhibition, and therefore display a restricted range of dynamics. In the second, we show that a model of a gamma-generating circuit forced by square pulses cannot maintain multiple stably phase-locked solutions. In the third, we show that a separation of time scales between membrane potential dynamics and synaptic decay causes the gamma model to phase-align its spiking such that periodic forcing pulses arrive under minimal inhibition. When two of these models are mutually coupled, the same effect causes excitatory pulses from the faster oscillator to arrive at the slower under minimal inhibition, while pulses from the slower to the faster arrive under maximal inhibition. We also show that such a time scale separation allows the model to respond sensitively to input pulse coherence to an extent that is not possible for a simple one-dimensional oscillator. We draw on a wide range of mathematical tools and structures, including return maps, saltation matrices, contraction methods, phase response formalism, and singular perturbation theory, in order to show that the neuronal mechanism of gamma oscillations is uniquely suited to reliably phase lock across brain regions and facilitate the selective transmission of information.
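One of the contraction mechanisms invoked in the first chapter, the convergence of voltage trajectories during sustained inhibition, can be illustrated with a toy leaky membrane receiving exponentially decaying inhibition (hypothetical parameters, not the thesis's circuit models):

```python
def voltage_under_inhibition(v0, g0=5.0, t_end=2.0, dt=1e-4,
                             tau_v=0.1, tau_g=1.0, drive=2.0):
    """Euler sketch of a leaky membrane under decaying inhibition
    (toy parameters): dv/dt = (drive - v)/tau_v - g*v, dg/dt = -g/tau_g.
    The inhibitory conductance g multiplies v, so while g is large the
    voltage contracts strongly toward a common trajectory."""
    v, g = v0, g0
    for _ in range(int(t_end / dt)):
        v += dt * ((drive - v) / tau_v - g * v)
        g += dt * (-g / tau_g)
    return v
```

Two trajectories started a full unit apart in voltage, but sharing the same inhibition, become numerically indistinguishable well before the inhibition has decayed: the membrane "forgets" its initial condition, which is the ingredient that draws the forced circuit onto an attracting invariant torus.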

    The cerebellum could solve the motor error problem through error increase prediction

    We present a cerebellar architecture with two main characteristics. The first is that complex spikes respond to increases in sensory errors. The second is that cerebellar modules associate particular contexts where errors have increased in the past with corrective commands that stop the increase in error. We analyze our architecture formally and computationally for the case of reaching in a 3D environment. In the case of motor control, we show that there are synergies of this architecture with the Equilibrium-Point hypothesis, leading to novel ways to solve the motor error problem. In particular, the presence of desired equilibrium lengths for muscles provides a way to know when the error is increasing, and which corrections to apply. In the context of Threshold Control Theory and Perceptual Control Theory, we show how to extend our model so it implements anticipative corrections in cascade control systems that span from muscle contractions to cognitive operations. Comment: 34 pages (without bibliography), 13 figures.
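The central idea, a "complex spike" triggered by an error increase that updates a context-specific corrective command, can be sketched as follows (the function and its update rule are hypothetical simplifications of the proposed architecture):

```python
def cerebellar_step(context, error_prev, error_now, corrections, lr=0.5):
    """Toy sketch (hypothetical implementation, not the paper's full model):
    a complex-spike event fires when the sensory error increases, and the
    module then adjusts the corrective command stored for the current
    context so as to oppose the increase."""
    complex_spike = error_now > error_prev
    if complex_spike:
        old = corrections.get(context, 0.0)
        corrections[context] = old - lr * (error_now - error_prev)
    return corrections.get(context, 0.0), complex_spike
```

On later visits to the same context the learned correction is applied even when the error is not currently increasing, which is the associative aspect of the scheme.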