4 research outputs found

    Subtractive, divisive and non-monotonic gain control in feedforward nets linearized by noise and delays

    The control of input-to-output mappings, or gain control, is one of the main strategies used by neural networks for the processing and gating of information. Using a spiking neural network model, we studied the gain control induced by a form of inhibitory feedforward circuitry, also known as "open-loop feedback", which has been observed experimentally in a cerebellum-like structure in weakly electric fish. We found, both analytically and numerically, that this network displays three different regimes of gain control: subtractive, divisive, and non-monotonic. Subtractive gain control was obtained when noise in the network was very low. It was also possible to switch from divisive to non-monotonic gain control simply by modulating the strength of the feedforward inhibition, which may be achieved via long-term synaptic plasticity. The particular case of divisive gain control has previously been observed in vivo in weakly electric fish. These gain control regimes were robust to the presence of temporal delays in the inhibitory feedforward pathway, which were found to linearize the input-to-output mappings (or f-I curves) via a novel variability-increasing mechanism. Our findings highlight the feedforward-induced gain control analyzed here as a highly versatile mechanism of information gating in the brain.
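    The subtractive/divisive distinction described above can be made concrete on a toy f-I curve. The sketch below is purely illustrative and is not taken from the paper's spiking model; the threshold-linear transfer function and the inhibition parameters are assumptions chosen for clarity. Subtractive inhibition shifts the curve along the input axis, while divisive inhibition rescales its slope.

```python
import numpy as np

# Toy threshold-linear f-I curve (an assumption, not the paper's spiking model)
def f(I, threshold=1.0, gain=50.0):
    """Firing rate as a rectified-linear function of input drive I."""
    return gain * np.maximum(I - threshold, 0.0)

I = np.linspace(0.0, 3.0, 301)     # input drive
baseline    = f(I)                 # no inhibition
subtractive = f(I - 0.5)           # subtractive: the whole curve shifts rightward
divisive    = 0.5 * f(I)           # divisive: the slope (gain) is scaled down
```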

    Gain control with A-type potassium current: IA as a switch between divisive and subtractive inhibition

    Neurons process information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition suppresses the output firing activity of a neuron and is commonly classified as having a subtractive or divisive effect on that output. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two "modes" of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. We use simulations and mathematical analysis of a neuron model to find the specific conditions under which inhibitory inputs have subtractive or divisive effects. We identify a novel role for the A-type potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. If IA is strong (large maximal conductance) and fast (activates on a time scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can "self-regulate" the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract. (20 pages, 11 figures)
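    As a rough illustration of how the two inhibition "modes" discussed above could be told apart in simulation output, the sketch below fits an inhibited f-I curve either as a horizontal shift (subtractive) or as a multiplicative scaling (divisive) of a control curve and reports the better fit. The fitting procedure and function names are assumptions made for illustration, not the dynamical systems analysis used in the paper.

```python
import numpy as np

def classify_inhibition(I, rate_ctrl, rate_inh):
    """Label an inhibitory effect on an f-I curve as subtractive or divisive.

    Compares a multiplicative fit (rate_inh ~ g * rate_ctrl) against a
    horizontal-shift fit (rate_inh(I) ~ rate_ctrl(I - dI)) and returns
    whichever has the smaller squared error. Illustrative sketch only.
    """
    # Divisive fit: least-squares gain factor g
    g = np.dot(rate_inh, rate_ctrl) / np.dot(rate_ctrl, rate_ctrl)
    err_div = np.sum((rate_inh - g * rate_ctrl) ** 2)

    # Subtractive fit: brute-force search over rightward shifts dI
    shifts = np.linspace(0.0, I[-1] - I[0], 200)
    err_sub = min(
        np.sum((rate_inh - np.interp(I - dI, I, rate_ctrl, left=0.0)) ** 2)
        for dI in shifts
    )
    return "divisive" if err_div < err_sub else "subtractive"
```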

    Bifurcation Analysis of Large Networks of Neurons

    The human brain contains on the order of a hundred billion neurons, each with several thousand synaptic connections. Computational neuroscience has successfully modeled both the individual neurons, as various types of oscillators, and the synaptic coupling between them. However, simulating the individual neuronal models as a large coupled network on the scale of the human brain would require massive computational and financial resources, yet this is the current undertaking of several research groups. Even if one were to successfully model such a complicated system of coupled differential equations, little insight beyond brute-force numerical simulation may be gained into how the human brain solves problems or performs tasks. Here, we introduce a tool that reduces large networks of coupled neurons to a much smaller set of differential equations governing key statistics of the network as a whole, rather than tracking the individual dynamics of neurons and their connections. This approach is typically referred to as a mean-field system. Because the mean-field system is derived from the original network of neurons, it is predictive of the behavior of the network as a whole, and the parameters, or distributions of parameters, that appear in the mean-field system are identical to those of the original network. As such, bifurcation analysis of the mean-field system predicts where in parameter space the original network transitions from one behavior to another. Additionally, we show how networks of neurons can be constructed to exhibit a prescribed mean-field or macroscopic behavior. This is achieved through an analytic extension of the Neural Engineering Framework (NEF) and can be thought of as an inverse mean-field approach: the networks are constructed to obey prescribed dynamics, rather than the macroscopic dynamics being derived from an underlying network. Thus, the work done here analyzes neuronal networks through both top-down and bottom-up approaches.
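    To make the mean-field idea concrete, here is a minimal, generic rate model for a single recurrently coupled population. It is a textbook-style sketch under assumed parameters and an assumed sigmoidal transfer function, not the reduction derived in the thesis; the point is that one low-dimensional ODE tracks the population's mean activity, and its fixed points can then be followed under parameter changes for bifurcation analysis.

```python
import numpy as np

def simulate_mean_field(J=1.2, I_ext=0.5, tau=0.02, T=1.0, dt=1e-3):
    """Euler-integrate a one-population mean-field model:
        tau * dr/dt = -r + phi(J * r + I_ext)
    r is the population's mean firing rate; phi is an assumed sigmoidal
    population transfer function (not the one derived in the thesis).
    """
    phi = lambda x: 1.0 / (1.0 + np.exp(-x))
    r, rates = 0.0, []
    for _ in range(int(T / dt)):
        r += (dt / tau) * (-r + phi(J * r + I_ext))
        rates.append(r)
    return np.array(rates)

# Fixed points satisfy r = phi(J * r + I_ext); tracking how they appear,
# disappear, or change stability as J or I_ext varies is the kind of
# bifurcation analysis that the mean-field reduction makes tractable.
```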