
    Multi-almost periodicity and invariant basins of general neural networks under almost periodic stimuli

    In this paper, we investigate the convergence dynamics of $2^N$ almost periodic encoded patterns of general neural networks (GNNs) subjected to external almost periodic stimuli, including almost periodic delays. Invariant regions are established for the existence of $2^N$ almost periodic encoded patterns under two classes of activation functions. By employing the property of an $\mathscr{M}$-cone and inequality techniques, attracting basins are estimated and criteria are derived under which the networks converge exponentially toward the $2^N$ almost periodic encoded patterns. The results are new; they extend and generalize the corresponding results in the existing literature.
    Comment: 28 pages, 4 figures
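    For context, a typical general neural network model with almost periodic delays to which this kind of result applies (a sketch of the usual setting, not necessarily the exact system studied in the paper) is

```latex
\dot{x}_i(t) = -d_i(t)\,x_i(t)
             + \sum_{j=1}^{N} a_{ij}(t)\, f_j\bigl(x_j(t)\bigr)
             + \sum_{j=1}^{N} b_{ij}(t)\, g_j\bigl(x_j(t-\tau_{ij}(t))\bigr)
             + I_i(t), \qquad i = 1,\dots,N
```

    where the decay rates $d_i$, the weights $a_{ij}$, $b_{ij}$, the delays $\tau_{ij}$ and the inputs $I_i$ are almost periodic in $t$. Roughly speaking, with saturating activations each of the $2^N$ saturation regions of the state space can contain one invariant region and hence one almost periodic encoded pattern; the attracting-basin estimates quantify how large a perturbation each pattern can absorb.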

    Gain control with A-type potassium current: IA as a switch between divisive and subtractive inhibition

    Neurons process information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition suppresses the output firing activity of a neuron, and is commonly classified as having a subtractive or divisive effect on a neuron's output firing activity. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two "modes" of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. We use simulations and mathematical analysis of a neuron model to find the specific conditions for which inhibitory inputs have subtractive or divisive effects. We identify a novel role for the A-type Potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. If IA is strong (large maximal conductance) and fast (activates on a time-scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can "self-regulate" the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type Potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract.
    Comment: 20 pages, 11 figures

    Gain Control With A-Type Potassium Current: IA As A Switch Between Divisive And Subtractive Inhibition

    Neurons process and convey information by transforming barrages of synaptic inputs into spiking activity. Synaptic inhibition typically suppresses the output firing activity of a neuron, and is commonly classified as having a subtractive or divisive effect on a neuron’s output firing activity. Subtractive inhibition can narrow the range of inputs that evoke spiking activity by eliminating responses to non-preferred inputs. Divisive inhibition is a form of gain control: it modifies firing rates while preserving the range of inputs that evoke firing activity. Since these two “modes” of inhibition have distinct impacts on neural coding, it is important to understand the biophysical mechanisms that distinguish these response profiles. In this study, we use simulations and mathematical analysis of a neuron model to find the specific conditions (parameter sets) for which inhibitory inputs have subtractive or divisive effects. Significantly, we identify a novel role for the A-type Potassium current (IA). In our model, this fast-activating, slowly-inactivating outward current acts as a switch between subtractive and divisive inhibition. In particular, if IA is strong (large maximal conductance) and fast (activates on a time-scale similar to spike initiation), then inhibition has a subtractive effect on neural firing. In contrast, if IA is weak or insufficiently fast-activating, then inhibition has a divisive effect on neural firing. We explain these findings using dynamical systems methods (plane analysis and fast-slow dissection) to define how a spike threshold condition depends on synaptic inputs and IA. Our findings suggest that neurons can “self-regulate” the gain control effects of inhibition via combinations of synaptic plasticity and/or modulation of the conductance and kinetics of A-type Potassium channels. This novel role for IA would add flexibility to neurons and networks, and may relate to recent observations of divisive inhibitory effects on neurons in the nucleus of the solitary tract.
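    A rough illustration of the mechanism described above (not the authors' model: the integrate-and-fire spiking mechanism, the phenomenological IA kinetics, and all parameter values are assumptions) compares firing rates with and without an inhibitory conductance, for a weak and a strong A-type conductance:

```python
import numpy as np

def firing_rate(g_exc, g_inh=0.0, g_A=0.0, T=2000.0, dt=0.01):
    """Conductance-based integrate-and-fire neuron with a phenomenological
    A-type K+ current (fast activation a, slow inactivation b).
    Units: mV, ms, mS/cm^2, uF/cm^2; all parameter values are illustrative."""
    C, g_L = 1.0, 0.1
    E_L, E_exc, E_inh, E_K = -65.0, 0.0, -80.0, -90.0
    V_th, V_reset = -50.0, -65.0
    tau_a, tau_b = 2.0, 150.0              # fast activation, slow inactivation (ms)
    V, a, b, spikes = E_L, 0.0, 1.0, 0
    for _ in range(int(T / dt)):
        a_inf = 1.0 / (1.0 + np.exp(-(V + 60.0) / 8.0))   # activation curve (assumed)
        b_inf = 1.0 / (1.0 + np.exp((V + 70.0) / 6.0))    # inactivation curve (assumed)
        a += dt * (a_inf - a) / tau_a
        b += dt * (b_inf - b) / tau_b
        I_A = g_A * a * b * (V - E_K)      # outward current when V > E_K
        I_syn = g_exc * (V - E_exc) + g_inh * (V - E_inh)
        V += dt * (-g_L * (V - E_L) - I_A - I_syn) / C
        if V >= V_th:                      # threshold crossing: count spike, reset
            spikes += 1
            V = V_reset
    return spikes / (T / 1000.0)           # spikes per second

# Firing rate vs. excitatory drive, with and without inhibition,
# for a weak and a strong A-type conductance:
for g_A in (0.0, 2.0):
    for g_e in (0.1, 0.2, 0.3):
        print(g_A, g_e,
              firing_rate(g_e, g_inh=0.0, g_A=g_A),
              firing_rate(g_e, g_inh=0.05, g_A=g_A))
```

    The comparison is only meant to show where the two regimes could be probed numerically; the abstract's conclusions rest on a full dynamical-systems treatment of the spike threshold condition, which this sketch does not reproduce.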

    A Vector Matrix Real Time Backpropagation Algorithm for Recurrent Neural Networks That Approximate Multi-Valued Periodic Functions

    Unlike feedforward neural networks (FFNNs), which can act as universal function approximators, recursive, or recurrent, neural networks can act as universal approximators for multi-valued functions. In this paper, a real-time recursive backpropagation (RTRBP) algorithm in vector matrix form is developed for a two-layer globally recursive neural network (GRNN) that has multiple delays in its feedback path. The algorithm has been evaluated on two GRNNs that approximate an analytic and a nonanalytic periodic multi-valued function, neither of which a feedforward neural network is capable of approximating.
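    For orientation only, a minimal sketch of the kind of architecture the abstract names, a two-layer network whose output is fed back through several discrete delays, could look as follows; the weight shapes, tanh activations, and the way the delayed feedback is summed are assumptions, and the RTRBP training step itself is not shown:

```python
import numpy as np

def grnn_forward(u_seq, W_in, W_out, W_fb_list, delays):
    """One forward pass of a two-layer recurrent network whose output is fed
    back through several discrete delays (illustrative architecture only)."""
    n_out = W_out.shape[0]
    y_hist = [np.zeros(n_out) for _ in range(max(delays))]  # delayed-output buffer
    outputs = []
    for u in u_seq:
        # feedback: each delayed output passes through its own weight matrix
        fb = sum(W_fb @ y_hist[-d] for W_fb, d in zip(W_fb_list, delays))
        h = np.tanh(W_in @ u + fb)     # hidden layer
        y = np.tanh(W_out @ h)         # output layer
        y_hist.append(y)
        outputs.append(y)
    return np.array(outputs)

# Tiny usage example with random weights (shapes chosen arbitrarily):
rng = np.random.default_rng(0)
n_in, n_hid, n_out, delays = 1, 8, 1, [1, 2, 3]
W_in = rng.normal(size=(n_hid, n_in))
W_out = rng.normal(size=(n_out, n_hid))
W_fb_list = [rng.normal(size=(n_hid, n_out)) for _ in delays]
u_seq = np.sin(np.linspace(0.0, 6.28, 50)).reshape(-1, 1)
print(grnn_forward(u_seq, W_in, W_out, W_fb_list, delays).shape)
```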

    The Power of Linear Recurrent Neural Networks

    Recurrent neural networks are a powerful means to cope with time series. We show how a class of linearly activated recurrent neural networks, which we call predictive neural networks, can approximate any time-dependent function f(t) given by a number of function values. The approximation can effectively be learned by simply solving a linear equation system; no backpropagation or similar methods are needed. Furthermore, the network size can be reduced by keeping only the most relevant components. Thus, in contrast to other approaches, ours learns not only the network weights but also the network architecture. The networks have interesting properties: they end up in ellipse trajectories in the long run and allow the prediction of further values as well as compact representations of functions. We demonstrate this in several experiments, among them multiple superimposed oscillators (MSO), robotic soccer, and predicting stock prices. Predictive neural networks outperform the previous state of the art on the MSO task with a minimal number of units.
    Comment: 22 pages, 14 figures and tables, revised implementation
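    A simplified stand-in for this idea (not the paper's exact construction) is to fit a linear recurrence to the given function values by ordinary least squares and then iterate it to predict further values; the recurrence order and the MSO-style test signal below are arbitrary choices:

```python
import numpy as np

def fit_linear_predictor(f_vals, order=8):
    """Fit a linear recurrence f(t) ~ w . [f(t-1), ..., f(t-order)] by ordinary
    least squares, i.e. learn a linearly activated recurrent predictor by
    solving a linear equation system (the order is an arbitrary choice)."""
    X = np.array([f_vals[t - order:t][::-1] for t in range(order, len(f_vals))])
    y = np.array(f_vals[order:])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict(w, history, n_steps):
    """Iterate the learned recurrence to extrapolate beyond the training data."""
    buf = list(history[-len(w):])
    out = []
    for _ in range(n_steps):
        nxt = float(np.dot(w, buf[::-1][:len(w)]))  # newest value first
        buf.append(nxt)
        out.append(nxt)
    return out

# Example: two superimposed oscillators (an MSO-style signal).
t = np.arange(300)
signal = np.sin(0.2 * t) + 0.5 * np.sin(0.311 * t)
w = fit_linear_predictor(signal[:200], order=8)
pred = predict(w, signal[:200], n_steps=50)
print("max abs prediction error:", np.max(np.abs(pred - signal[200:250])))
```

    Because a sum of two sinusoids satisfies a low-order linear recurrence exactly, the least-squares fit extrapolates this toy signal essentially without error; the paper's contribution lies in making such linear learning work for general time-dependent functions while also pruning the network to its most relevant components.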