
    Neuroevolution on the Edge of Chaos

    Echo state networks are a special type of recurrent neural network. Recent papers have stated that echo state networks maximize their computational performance at the transition between order and chaos, the so-called edge of chaos. This work confirms that statement in a comprehensive set of experiments. Furthermore, the echo state networks are compared to networks evolved via neuroevolution. The evolved networks outperform the echo state networks; however, the evolution consumes significant computational resources. It is demonstrated that echo state networks with local connections combine the best of both worlds: the simplicity of random echo state networks and the performance of evolved networks. Finally, it is shown that evolution tends to stay close to the ordered side of the edge of chaos.

    Comment: To appear in Proceedings of the Genetic and Evolutionary Computation Conference 2017 (GECCO '17)
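
    As a concrete illustration of tuning a reservoir toward the edge of chaos, here is a minimal echo state network sketch in NumPy. The standard knob is the spectral radius of the recurrent weight matrix (values just below 1 sit on the ordered side); the reservoir size, input scaling and random input signal are illustrative assumptions, not taken from the paper.

```python
# Minimal echo state network sketch (NumPy only), assuming a 1-D input task.
# The spectral radius of the recurrent weights is the usual knob for moving
# the reservoir toward the edge of chaos; all sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 200                      # reservoir size (assumed)
spectral_radius = 0.95       # just below 1: ordered side of the edge of chaos

W = rng.normal(0.0, 1.0, (N, N))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))  # rescale radius
W_in = rng.uniform(-0.5, 0.5, N)                             # input weights

def run_reservoir(u):
    """Drive the reservoir with a 1-D input sequence u; return all states."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

u = rng.uniform(-1.0, 1.0, 500)   # random input signal (assumed task input)
X = run_reservoir(u)              # a linear readout on X would then be trained
```

    In a typical edge-of-chaos experiment, a linear readout trained on the collected states X would be evaluated on a benchmark task while the spectral radius is swept through 1.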

    On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses

    We present a mathematical analysis of a network of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that the spike time is only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale δ, where δ can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weights space, traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions.

    Comment: 36 pages, 9 figures
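
    The paper's key modelling choice, spikes that take effect only at integer multiples of a small time scale δ, can be sketched as follows. This is a loose illustration with a simple Euler update and assumed parameter values (network size, weights, constant input); the paper's conductance-based synapses are not reproduced here.

```python
# Loose sketch of spikes acting only on a delta-grid: the membrane potential
# is updated in steps of the characteristic time scale delta, and recurrent
# input arrives from spikes emitted at the previous multiple of delta.
# Parameters (sizes, weights, constant input) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, steps = 50, 2000              # neurons, number of delta-steps (assumed)
delta, tau = 0.1, 20.0           # spike-time resolution and leak (ms, assumed)
v_th, v_reset, I_ext = 1.0, 0.0, 1.05   # threshold, reset, constant input
W = rng.normal(0.0, 0.2, (N, N)) / np.sqrt(N)   # synaptic weights (assumed)

v = rng.uniform(0.0, 1.0, N)
spikes = np.zeros(N)
raster = []
for _ in range(steps):
    v += (delta / tau) * (-v + I_ext) + W @ spikes   # Euler step plus spikes
    spikes = (v >= v_th).astype(float)
    v[spikes > 0] = v_reset
    raster.append(spikes.copy())

# Away from the edge of chaos, the raster uniquely determines the membrane
# trajectories -- the neural code is "in the spikes".
print(f"mean activity: {np.mean(raster):.3f} spikes per neuron per step")
```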

    Intrinsic adaptation in autonomous recurrent neural networks

    A massively recurrent neural network responds to input stimuli on the one hand and, on the other hand, is autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting or chaotic activity patterns. We study the influence of non-synaptic plasticity on the default dynamical state of recurrent neural networks. The non-synaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.

    Comment: 24 pages, 8 figures
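
    A minimal sketch of intrinsic (non-synaptic) adaptation of gain and threshold driven by entropy optimization, here using the classic Triesch-style rule that pushes a sigmoid neuron's output toward a fixed-mean exponential (maximum-entropy) distribution. The exact update used in the paper may differ, and the learning rate, target mean and input statistics below are assumptions.

```python
# Sketch of an intrinsic plasticity rule (Triesch-2005 style): gain a and
# threshold b of a sigmoid neuron adapt so its output approaches an
# exponential distribution with mean mu -- the maximum-entropy distribution
# for a fixed mean. Learning rate, target mean and input are assumptions.
import numpy as np

rng = np.random.default_rng(2)
eta, mu = 0.01, 0.2          # learning rate, target mean activity (assumed)
a, b = 1.0, 0.0              # gain and threshold (bias)

for _ in range(20000):
    x = rng.normal(0.0, 1.0)                    # stand-in for recurrent input
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))      # sigmoid firing rate
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu)
    da = eta / a + x * db                       # entropy-maximizing gain update
    a, b = a + da, b + db

print(f"adapted gain a = {a:.2f}, threshold b = {b:.2f}")
```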

    Dynamic Adaptive Computation: Tuning network states to task requirements

    Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch between, or even combine, the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a "reverberating regime" because it is particularly favorable for ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state by small changes in effective synaptic strength or excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input, depending on the task. We argue that this task-dependent tuning, which we call "dynamic adaptive computation", presents a central organizing principle of cortical networks, and we discuss first experimental evidence.

    Comment: 6 pages + references, 2 figures
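
    The interpolation described here can be caricatured with a branching-process model of population activity, in which a single parameter m (the effective synaptic strength) moves the network from asynchronous-irregular-like (m ≪ 1) toward critical (m → 1) dynamics, with integration time τ = −Δt/ln m. The external drive, time step and values of m below are illustrative assumptions.

```python
# Branching-process caricature of the "reverberating regime": each active
# unit activates on average m units in the next time step, plus external
# drive h. m << 1 gives asynchronous-irregular-like dynamics, m -> 1 is
# critical; the integration time grows as tau = -dt / ln(m).
import numpy as np

rng = np.random.default_rng(3)

def simulate_activity(m, h=10.0, T=10000):
    """Return the activity time series of a driven branching process."""
    A, a = np.zeros(T), h
    for t in range(T):
        a = rng.poisson(m * a + h)   # recurrent activation plus external drive
        A[t] = a
    return A

dt = 1.0                             # time step, arbitrary units (assumed)
for m in (0.1, 0.9, 0.99):           # AI-like, reverberating, near-critical
    tau = -dt / np.log(m)
    print(f"m = {m}: integration time tau = {tau:.1f} steps, "
          f"mean activity = {simulate_activity(m).mean():.1f}")
```

    Small changes in m near 1 produce large changes in τ, which is the sense in which the reverberating regime allows ready, fine-grained adaptation of integration time.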

    Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation

    We investigate the performance of sparsely connected networks of integrate-and-fire neurons for ultra-short-term information processing. We exploit the fact that the population activity of networks with balanced excitation and inhibition can switch from an oscillatory firing regime to a state of asynchronous irregular firing or quiescence, depending on the rate of external background spikes. We find that, in terms of information buffering, the network performs best for a moderate, non-zero amount of noise. Analogous to the phenomenon of stochastic resonance, the performance decreases for higher and lower noise levels. The optimal amount of noise corresponds to the transition zone between a quiescent state and a regime of stochastic dynamics. This provides a potential explanation for the role of non-oscillatory population activity in a simplified model of cortical micro-circuits.

    Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
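
    A rough sketch of the swept control parameter in this setting: the rate of external background spikes delivered to a balanced excitatory-inhibitory integrate-and-fire network. Sweeping this rate moves the population between quiescence and irregular firing; the network size, connectivity, synaptic weights and rates below are illustrative assumptions, and the paper's information-buffering measure is not computed here.

```python
# Rough sketch of the control parameter: the external background spike rate
# nu_ext driving a small balanced excitatory/inhibitory LIF network.
# All sizes, weights and rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
N_e, N_i = 80, 20                     # excitatory/inhibitory neurons (assumed)
N = N_e + N_i
dt, tau, v_th = 0.1, 20.0, 1.0        # step (ms), membrane time (ms), threshold
J = 0.1                               # synaptic efficacy (assumed)
W = rng.binomial(1, 0.1, (N, N)).astype(float) * J   # sparse random coupling
W[:, N_e:] *= -4.0                    # inhibition balances excitation (g = 4)

def mean_rate(nu_ext, steps=5000):
    """Mean single-neuron rate (Hz) for a given background rate (per ms)."""
    v = rng.uniform(0.0, 1.0, N)
    spikes, n_spikes = np.zeros(N), 0.0
    for _ in range(steps):
        noise = rng.poisson(nu_ext * dt, N) * J      # external background input
        v += (dt / tau) * (-v) + W @ spikes + noise
        spikes = (v >= v_th).astype(float)
        v[spikes > 0] = 0.0
        n_spikes += spikes.sum()
    return n_spikes / (N * steps * dt * 1e-3)        # spikes/s per neuron

for nu in (0.2, 1.0, 4.0):            # quiescent, transition zone, driven
    print(f"nu_ext = {nu}/ms -> mean rate {mean_rate(nu):.1f} Hz")
```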