
    Difference in chaos suppression in a sparsely connected E-I network.

    λ1 as a function of I1 for common and independent inputs, showing a monotonic decrease with I1 and a larger zero-crossing for common input. This result is qualitatively similar to that obtained in the single-population network with negative mean coupling (Fig 2). Error bars indicate ±2 std; lines are a guide for the eye. Increasing the excitatory efficacy α increases λ1 for both common and independent input (α ∈ {0, 0.5, 0.7}). Model parameters (defined as in [16] for constant input; WE1 and WI1 are the modulation amplitudes of the input to the excitatory and inhibitory populations): NE = NI = 3500, K = 700, g = 1.6, WE1 = gαI1, WI1 = 0.44gI1, f = 0.2/τ.
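The caption above describes a sparse two-population network in which each neuron receives K excitatory and K inhibitory connections. As a hedged illustration (not the paper's code; the exact weight normalization is not recoverable from the caption, so this sketch uses the standard balanced 1/√K scaling and treats the weight magnitudes as assumptions), a connectivity matrix of this type can be built as:

```python
import numpy as np

def sparse_EI_weights(NE, NI, K, g, alpha, seed=0):
    """Sparse E-I connectivity: each neuron receives exactly K excitatory
    and K inhibitory inputs, with weights scaled by 1/sqrt(K) (balanced
    scaling). alpha sets the excitatory efficacy relative to inhibition."""
    rng = np.random.default_rng(seed)
    N = NE + NI
    J = np.zeros((N, N))
    wE = alpha * g / np.sqrt(K)   # excitatory weight (illustrative value)
    wI = -g / np.sqrt(K)          # inhibitory weight (illustrative value)
    for i in range(N):
        e_pre = rng.choice(NE, size=K, replace=False)        # E presynaptic partners
        i_pre = NE + rng.choice(NI, size=K, replace=False)   # I presynaptic partners
        J[i, e_pre] = wE
        J[i, i_pre] = wI
    return J

# Reduced sizes for illustration (the caption uses NE = NI = 3500, K = 700)
J = sparse_EI_weights(NE=300, NI=300, K=60, g=1.6, alpha=0.5)
```

Every row then has exactly 2K nonzero entries, non-negative in the excitatory block and non-positive in the inhibitory block.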

    Difference in chaos suppression increases with network size and tightness of balance, and grows near the transition to chaos.

    A) Dependence of the critical input amplitude on network size N. With common input, the critical amplitude grows for large N, but it is constant for independent input. Error bars indicate interquartile range around the median. B) Dependence of the critical input amplitude on the ‘tightness of balance’ parameter K, which scales both I0 and J0. Results for large K are the same as in A, but for small K the network is no longer in the balanced regime, and results for common and independent input become similar. Error bars indicate ±2 std. C) Dependence of the critical input amplitude on the gain parameter g for low input frequency f. Close to the transition to chaos, an arbitrarily small independent input can suppress chaos; this is not the case with common input. The quasi-static approximation (dotted) and DMFT (dashed) results coincide. Error bars indicate ±2 std. Model parameters: I0 = J0 = 1 in A and C; g = 2, f = 0.2/τ in A and B; f = 0.01/τ in C; N = 5000 in B and C.
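The critical input amplitude in these captions is defined as the zero-crossing of λ1 as a function of I1. A minimal sketch of how such a crossing is read off from sampled (I1, λ1) pairs, assuming λ1 decreases through zero in the sampled range (the function name and the linear interpolation are illustrative, not the paper's method):

```python
import numpy as np

def critical_amplitude(I1_values, lambda1_values):
    """Estimate the critical input amplitude as the zero-crossing of the
    largest Lyapunov exponent, by linear interpolation between the last
    sample with lambda1 > 0 and the first with lambda1 <= 0."""
    I1 = np.asarray(I1_values, dtype=float)
    lam = np.asarray(lambda1_values, dtype=float)
    crossings = np.nonzero((lam[:-1] > 0) & (lam[1:] <= 0))[0]
    if crossings.size == 0:
        raise ValueError("no zero-crossing in the sampled range")
    i = crossings[0]
    frac = lam[i] / (lam[i] - lam[i + 1])   # linear interpolation weight
    return I1[i] + frac * (I1[i + 1] - I1[i])

# Toy example: lambda1 decreasing monotonically with I1
I1_grid = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
lam_grid = np.array([0.3, 0.2, 0.05, -0.1, -0.2])
I1c = critical_amplitude(I1_grid, lam_grid)   # crossing between I1 = 4 and 6
```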

    Common input impedes learning in balanced networks.

    A) Schematic of the training setup. A ‘student network’ (S) is trained to autonomously generate the target output by matching its recurrent inputs to those of a driven ‘teacher network’, whose weights are not changed during training. B) λ1 in the teacher network as a function of I1. C) Test error in the student network as a function of I1. Critical input amplitudes are indicated by vertical dashed lines. Consistent with the difference in critical input amplitude, teacher networks driven with common input require a larger I1 to achieve small test errors in the student network. Error bars indicate interquartile range around the median. D) Top: Target output (green) and actual output z (dashed orange) for two input amplitudes I1 ∈ {5, 15}. Bottom: Firing rate ϕ(hi) for two example neurons in the teacher network with common input (green full line) and student network (orange dotted line) for two input amplitudes. E) Scatter plot of test error as a function of λ1 for each network realization in B and C, with both common and independent input. When chaos in the teacher network is not suppressed (λ1 > 0), test error is high. Training is successful (small test error) when targets are strong enough to suppress chaos in the teacher network. Training is terminated when the error falls below 10^−2. Model parameters: N = 500, g = 2, I0 = J0 = 1, ϕ(x) = max(x, 0) in both teacher and student networks; f = 0.2/τ in the teacher network inputs and target.
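The teacher-student scheme above trains a student to reproduce the recurrent inputs of a driven teacher. As a hedged sketch (not the paper's training procedure: a one-shot least-squares fit stands in for the recursive-least-squares updates typically used, the network is much smaller, and a tanh nonlinearity is assumed for simplicity), the core idea of matching recurrent inputs can be demonstrated as:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_steps, dt, g = 100, 400, 0.05, 2.0
J_teacher = g * rng.standard_normal((N, N)) / np.sqrt(N)

# Drive the teacher with a common sinusoid; record rates and recurrent inputs
h = rng.standard_normal(N)
rates, rec_inputs = [], []
for k in range(n_steps):
    r = np.tanh(h)
    rec = J_teacher @ r
    rates.append(r)
    rec_inputs.append(rec)
    I_ext = 5.0 * np.sin(2.0 * np.pi * 0.2 * k * dt)   # common drive
    h += dt * (-h + rec + I_ext)
R = np.array(rates)        # (n_steps, N) recorded rates
H = np.array(rec_inputs)   # (n_steps, N) recorded recurrent inputs

# Student: fit recurrent weights so that R @ J_student.T matches H
J_student, *_ = np.linalg.lstsq(R, H, rcond=None)
J_student = J_student.T
err = np.linalg.norm(R @ J_student.T - H) / np.linalg.norm(H)
```

Because the recorded recurrent inputs are by construction linear in the recorded rates, the fit error is essentially zero here; in the actual setup the student must generalize to its own autonomously generated activity, which is where chaos suppression in the teacher matters.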

    No qualitative difference in chaos suppression by common vs independent input in canonical random networks.

    A) Critical input amplitude as a function of input frequency f (g = 1.6 light color, g = 2 dark color). The critical amplitude has a minimum for both common and independent input. The independent input case is identical to the scenario studied in [3]. At high f, the low-pass filter effect of the leak term attenuates the external input in both cases, resulting in a linearly increasing critical amplitude. B) Dependence of the critical input amplitude on the gain parameter g for both low input frequency (f = 0.01/τ, dark color) and high input frequency (f = 0.2/τ, light color), showing a monotonic increase. Error bars indicate ±2 std. Model parameters: N = 5000, g ∈ {1.6, 2}, f ∈ {0.01, 0.2}/τ, I0 = J0 = 0.

    Activity, population firing rate and autocorrelations of balanced networks with independent input.

    A) Firing rates ϕi(t) = ϕ(hi(t)) of three example units. B) Mean population firing rate ν(t). C) Autocorrelation function with no external input (I1 = 0). D)-F) Same as A-C but for input amplitude of I1 = 0.8; activity remains chaotic. G)-I) Same as A-C but for stronger input (I1 = 10); activity is fully controlled by the external input and is no longer chaotic. Dashed lines (middle and right columns) are results of stationary DMFT; full lines are the median across 10 network realizations. Model parameters: N = 5000, g = 2, f = 0.05/τ, I0 = J0 = 1.
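The autocorrelation functions in panels C, F, and I are population- and time-averaged correlations of the mean-subtracted rates. A minimal sketch of that computation (the function name and array layout are illustrative assumptions):

```python
import numpy as np

def autocorrelation(rates, max_lag):
    """Population-averaged autocorrelation C(tau) = <dr_i(t) dr_i(t+tau)>
    of mean-subtracted rates, averaged over units and time.
    `rates` has shape (time, units)."""
    x = rates - rates.mean(axis=0, keepdims=True)   # subtract each unit's mean
    T = x.shape[0]
    return np.array([
        (x[: T - lag] * x[lag:]).mean() for lag in range(max_lag + 1)
    ])

# Sanity check on a common sinusoid: C(0) = variance = 1/2, and the
# correlation drops at a quarter-period lag
t = np.arange(2000) * 0.05
rates = np.sin(2.0 * np.pi * 0.05 * t)[:, None] * np.ones((1, 10))
C = autocorrelation(rates, max_lag=100)
```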

    Mechanism of chaos suppression with slowly varying common input.

    A) External input (dashed) and recurrent input (solid) for three example neurons. B) Synaptic currents hi for four example neurons. C) Local Lyapunov exponent from network simulation, which reflects the local exponential growth rate between nearby trajectories (solid), and Lyapunov exponent from stationary DMFT (dashed) used in the quasi-static approximation. When I1 > I0, the external input periodically becomes negative and silences the recurrent activity (gray bars). During these silent episodes, the network is no longer chaotic and the local Lyapunov exponent is negative. When the input is positive, the dynamics remains chaotic and the local Lyapunov exponent is positive on average. Model parameters: N = 5000, g = 2, f = 0.01/τ, I0 = J0 = 1.
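For an input of the form I0 + I1 sin(2πft), the silencing episodes in this mechanism occur on the fraction of the cycle where the input is negative, which requires I1 > I0. That fraction can be checked numerically against the closed form (π − 2 arcsin(I0/I1))/(2π); this is elementary trigonometry, not a result from the paper:

```python
import numpy as np

def silent_fraction(I0, I1, n=200000):
    """Fraction of the input cycle during which I0 + I1*sin(theta) < 0,
    i.e. the external input is negative and can silence recurrent
    activity, estimated on a uniform phase grid."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.mean(I0 + I1 * np.sin(theta) < 0)

# Parameters matching this figure's regime: I0 = 1 with a large modulation
I0, I1 = 1.0, 6.0
frac = silent_fraction(I0, I1)
analytic = (np.pi - 2.0 * np.arcsin(I0 / I1)) / (2.0 * np.pi)
```

For I1 ≤ I0 the input never goes negative and the fraction is zero, which is why sub-threshold common modulation cannot exploit this silencing mechanism.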

    Largest Lyapunov exponent shows different chaos suppression for common vs independent input.

    Largest Lyapunov exponent λ1 as a function of input modulation amplitude I1 for common (green) and independent (violet) input. The zero-crossings of λ1 define the critical input amplitudes, i.e., the minimum I1 required to suppress chaotic dynamics. With common input, λ1 crosses zero at a much larger I1. Dots with error bars are numerical simulations; dashed lines are largest Lyapunov exponents computed by dynamic mean-field theory (DMFT). Error bars indicate ±2 std across 10 network realizations. Model parameters: N = 5000, g = 2, f = 0.2/τ, I0 = J0 = 1.
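A hedged sketch of how λ1 is typically estimated from simulation (not the paper's code): a Benettin-style two-trajectory method for a small tanh rate network dh/dt = −h + Jϕ(h) + I(t), in which the separation between a reference and a perturbed trajectory is renormalized after every step. Network size, integration step, and the tanh nonlinearity are illustrative assumptions:

```python
import numpy as np

def largest_lyapunov(J, I_func, T=200.0, dt=0.05, d0=1e-8, seed=0):
    """Benettin-style estimate of the largest Lyapunov exponent: evolve a
    reference and a nearby perturbed trajectory with Euler steps,
    renormalize their separation to d0 after every step, and average the
    logarithmic growth rates."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    h = rng.standard_normal(N)
    pert = rng.standard_normal(N)
    h_pert = h + d0 * pert / np.linalg.norm(pert)
    n_steps = int(T / dt)
    log_growth = 0.0
    for k in range(n_steps):
        t = k * dt
        for state in (h, h_pert):
            state += dt * (-state + J @ np.tanh(state) + I_func(t))
        d = np.linalg.norm(h_pert - h)
        log_growth += np.log(d / d0)
        h_pert = h + (d0 / d) * (h_pert - h)   # renormalize the separation
    return log_growth / (n_steps * dt)

# Chaotic regime: g = 2 with no external input gives a positive exponent
N, g = 200, 2.0
rng = np.random.default_rng(1)
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
lam_chaotic = largest_lyapunov(J, lambda t: 0.0)
```

Sweeping I1 in `I_func` and locating the zero-crossing of the returned exponent yields the critical amplitude plotted in this figure.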

    Suppression of chaos in balanced networks with common vs independent input.

    A) Common input: External input consists of a positive static input and a sinusoidally time-varying input with identical phase across neurons. B) Independent input: External input consists of a positive static input and a sinusoidally time-varying input with a random phase for each neuron. C) External inputs (top), recurrent feedback and their population average (thick line) (middle), and synaptic currents (bottom) for three example neurons. Recurrent feedback has a strong time-varying component that is anti-correlated with the external input, resulting in cancellation. D) Same as C, but for independent input. Here, no cancellation occurs and the network is entrained into a forced limit cycle. Throughout this work, green (violet) refers to common (independent) input. Model parameters: N = 5000, g = 2, f = 0.01/τ, I0 = J0 = 1, I1 = 6.
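The two input conditions differ only in their phases, but this changes what survives population averaging: the common modulation is fully visible to the recurrent feedback, while random phases cancel across neurons. A short sketch using the caption's parameters (the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, I0, I1, f = 5000, 1.0, 6.0, 0.01
t = np.linspace(0.0, 2.0 / f, 400)   # two input periods

# Common input: the same sinusoid, identical phase for every neuron
common = I0 + I1 * np.sin(2.0 * np.pi * f * t)[:, None] * np.ones((1, N))

# Independent input: the same sinusoid with a random phase per neuron
phases = rng.uniform(0.0, 2.0 * np.pi, N)
independent = I0 + I1 * np.sin(2.0 * np.pi * f * t[:, None] + phases[None, :])

# Std over time of the population-averaged input: the common modulation
# survives averaging; the independent one cancels to O(1/sqrt(N))
common_mod = common.mean(axis=1).std()
indep_mod = independent.mean(axis=1).std()
```

It is this surviving population-mean component that the recurrent feedback can track and cancel in the common-input case, whereas with independent input there is (almost) nothing at the population level to cancel.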

    Activity, population firing rate and autocorrelations of balanced networks with common input.

    A) Firing rates ϕi(t) = ϕ(hi(t)) of three example units. B) Mean population firing rate ν(t). C) Time-averaged two-time autocorrelation function (Eq 5) as a function of time difference, with no external input (I1 = 0). D)-F) Same as A-C but for an intermediate input amplitude; activity remains chaotic. G)-I) Same as A-C but for stronger input; activity is entrained by the external input and is no longer chaotic. Dashed lines (middle and right columns) are results of non-stationary DMFT; full lines are the median across 10 network realizations. Model parameters: N = 5000, g = 2, f = 0.05/τ, I0 = J0 = 1.

    Dynamic mean-field theory captures frequency-dependent effects on the suppression of chaos.

    A) Critical input amplitude as a function of input frequency f (g = 1.6 light color, g = 2 dark color). It has a minimum that is captured by the non-stationary DMFT (dashed green line) but not by the quasi-static approximation (dotted green line), which does not depend on the frequency f. At high f, the low-pass filter effect of the leak term attenuates the external input modulation in both cases, resulting in a linearly increasing critical amplitude. B) Dependence of the critical input amplitude on the gain parameter g for high input frequency (f = 0.2/τ), showing a monotonic increase. The non-stationary DMFT results are in good agreement with numerical simulations. For comparison, we include the result of the quasi-static approximation (dotted green line), which shows a more gradual dependence on g and applies only at low frequencies (see Fig 3). Error bars indicate ±2 std. Model parameters: N = 5000, g = 2, f = 0.2/τ, I0 = J0 = 1.
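The low-pass argument invoked here is standard first-order filter theory: the leak equation τ dh/dt = −h + I(t) attenuates a sinusoid of frequency f by 1/√(1 + (2πfτ)²), so at high f roughly (2πfτ)-times more input amplitude is needed to produce the same current modulation, consistent with a linearly increasing critical amplitude. A quick numerical check of the gain formula (illustrative code, not from the paper):

```python
import numpy as np

def leak_attenuation(f, tau=1.0):
    """Amplitude gain of the leak term, a first-order low-pass filter
    tau*dh/dt = -h + I(t), for a sinusoidal input of frequency f."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

# Compare against direct Euler integration of tau*dh/dt = -h + sin(2*pi*f*t)
tau, f, dt = 1.0, 0.2, 1e-3
ts = np.arange(0.0, 100.0, dt)
h = 0.0
trace = np.empty_like(ts)
for k, tk in enumerate(ts):
    h += dt / tau * (-h + np.sin(2.0 * np.pi * f * tk))
    trace[k] = h
# Steady-state amplitude from the second half of the record (transient decayed):
# for a sinusoid of amplitude A, std = A/sqrt(2)
measured = trace[len(trace) // 2:].std() * np.sqrt(2.0)
```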