
    On Dynamics of Integrate-and-Fire Neural Networks with Conductance Based Synapses

    We present a mathematical analysis of networks of Integrate-and-Fire neurons with adaptive conductances. Taking into account the realistic fact that spike times are only known within some finite precision, we propose a model where spikes are effective at times that are multiples of a characteristic time scale δ, where δ can be arbitrarily small (in particular, well beyond the numerical precision). We give a complete mathematical characterization of the model dynamics and obtain the following results. The asymptotic dynamics is composed of finitely many stable periodic orbits, whose number and period can be arbitrarily large and can diverge in a region of the synaptic weights space traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the raster plot. This shows that, in this case, the neural code is entirely "in the spikes". As a key tool, we introduce an order parameter, easy to compute numerically and closely related to a natural notion of entropy, providing a relevant characterization of the computational capabilities of the network. This allows us to compare the computational capabilities of leaky Integrate-and-Fire models and conductance-based models. The present study considers networks with constant input and without time-dependent plasticity, but the framework has been designed for both extensions. Comment: 36 pages, 9 figures
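
    The following is a minimal sketch, not the paper's conductance-based model: a single leaky integrate-and-fire neuron integrated with a fixed step delta, so that spike times are by construction multiples of delta. All parameter values are illustrative.

    import numpy as np

    def lif_raster(delta=0.001, t_max=1.0, tau=0.02,
                   v_thresh=1.0, v_reset=0.0, i_ext=1.5):
        """Euler integration with step delta; spike times are multiples of delta."""
        n_steps = int(t_max / delta)
        v = v_reset
        spikes = []
        for k in range(n_steps):
            # dV/dt = (-V + I_ext) / tau, integrated with a fixed step delta
            v += delta * (-v + i_ext) / tau
            if v >= v_thresh:
                spikes.append(k * delta)   # an exact multiple of delta
                v = v_reset
        return np.array(spikes)

    print(lif_raster()[:5])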

    Controlling chaos in diluted networks with continuous neurons

    Diluted neural networks with continuous neurons and a nonmonotonic transfer function are studied, with both fixed and dynamic synapses. A noisy stimulus with periodic variance provides a mechanism for controlling chaos in neural systems with fixed synapses: a proper amount of external perturbation forces the system to behave periodically with the same period as the stimulus. Comment: 11 pages, 8 figures
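
    As an illustration of the control mechanism described above, here is a minimal sketch of a diluted network with a nonmonotonic transfer function driven by zero-mean noise whose variance oscillates periodically; the transfer function and all parameters are illustrative choices, not the paper's.

    import numpy as np

    rng = np.random.default_rng(0)
    N, dilution, T, period = 200, 0.9, 2000, 50

    # Diluted synaptic matrix: most entries are pruned (set to zero).
    J = rng.normal(0.0, 1.0 / np.sqrt(N * (1 - dilution)), size=(N, N))
    J *= rng.random((N, N)) > dilution

    def g(x):
        """A nonmonotonic transfer function (illustrative choice)."""
        return x * np.exp(-x**2)

    x = rng.normal(size=N)
    trajectory = np.empty((T, N))
    for t in range(T):
        sigma = 0.5 * (1.0 + np.sin(2 * np.pi * t / period))   # periodic variance
        noise = rng.normal(0.0, sigma, size=N)
        x = g(J @ x + noise)
        trajectory[t] = x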

    Analysis of various steady states and transient phenomena in digital maps: foundation for theory construction and engineering applications

    We have studied the analysis and implementation of digital maps (Dmaps). The major results are as follows. First, we developed an analysis method based on two feature quantities: the first characterizes the abundance of periodic orbits and the second characterizes their stability. Applying the method, typical Dmap examples are analyzed and the basic phenomena are classified. Second, we developed a simple evolutionary algorithm to realize a desired Dmap. Each individual of the algorithm corresponds to one Dmap, and the number of individuals can vary flexibly. Using typical example problems, the efficiency of the algorithm is confirmed. Third, we developed a method for realizing Dmaps by means of digital spiking neurons (DSNs). A DSN consists of two shift registers connected by a wiring circuit and can generate various periodic spike trains. A simple FPGA-based test circuit is presented and the DSN dynamics is confirmed.
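
    A minimal sketch of the kind of object studied here: a digital map acting on a finite lattice (obtained by quantizing the tent map), together with an illustrative measure of how many points lie on periodic orbits. The two feature quantities used in the paper are not reproduced exactly.

    import numpy as np

    N = 64
    grid = np.arange(N)
    # Quantized tent map on {0, ..., N-1}: an example digital map (Dmap)
    dmap = np.minimum(2 * grid, 2 * (N - 1 - grid)) % N

    def periodic_points(f):
        """Return the set of points lying on periodic orbits of the finite map f."""
        periodic = set()
        for x0 in range(len(f)):
            seen, x = set(), x0
            while x not in seen:
                seen.add(x)
                x = f[x]
            # x is the first revisited point: follow the cycle it belongs to
            start = x
            while True:
                periodic.add(x)
                x = f[x]
                if x == start:
                    break
        return periodic

    pts = periodic_points(dmap)
    print(f"fraction of points on periodic orbits: {len(pts) / N:.2f}")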

    To what extent is the "neural code" a metric?

    Here we review the different choices for structuring spike trains using deterministic metrics. Temporal constraints observed in biological or computational spike trains are first taken into account. The relation with existing neural codes (rate coding, rank coding, phase coding, etc.) is then discussed. To what extent the "neural code" contained in spike trains is related to a metric appears to be a key point, and a generalization of the Victor-Purpura metric family is proposed for temporally constrained causal spike trains. Comment: 5 pages, 5 figures. Proceedings of the conference NeuroComp200
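
    For reference, a minimal sketch of the standard Victor-Purpura spike-train metric (the generalization proposed in the paper for temporally constrained causal spike trains is not reproduced): shifting a spike by dt costs q*|dt|, while inserting or deleting a spike costs 1.

    import numpy as np

    def victor_purpura(t_a, t_b, q=1.0):
        """Edit-distance style dynamic programme between two sorted spike-time lists."""
        n, m = len(t_a), len(t_b)
        d = np.zeros((n + 1, m + 1))
        d[:, 0] = np.arange(n + 1)       # delete all spikes of train a
        d[0, :] = np.arange(m + 1)       # insert all spikes of train b
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d[i, j] = min(d[i - 1, j] + 1,                      # delete a spike
                              d[i, j - 1] + 1,                      # insert a spike
                              d[i - 1, j - 1] + q * abs(t_a[i - 1] - t_b[j - 1]))  # shift
        return d[n, m]

    print(victor_purpura([0.1, 0.5, 0.9], [0.12, 0.55], q=10.0))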

    How neural networks learn to classify chaotic time series

    Neural networks are increasingly employed to model, analyze and control non-linear dynamical systems ranging from physics to biology. Owing to their universal approximation capabilities, they regularly outperform state-of-the-art model-driven methods in terms of accuracy, computational speed, and/or control capabilities. On the other hand, neural networks are very often taken as black boxes whose explainability is challenged, among others, by the huge number of trainable parameters. In this paper, we tackle the outstanding issue of analyzing the inner workings of neural networks trained to classify regular-versus-chaotic time series. This setting, well-studied in dynamical systems, enables thorough formal analyses. We focus specifically on a family of networks dubbed Large Kernel Convolutional Neural Networks (LKCNN), recently introduced by Boullé et al. (2021). These non-recursive networks have been shown to outperform other established architectures (e.g. residual networks, shallow neural networks and fully convolutional networks) at this classification task. Furthermore, they outperform "manual" classification approaches based on direct reconstruction of the Lyapunov exponent. We find that LKCNNs use qualitative properties of the input sequence. In particular, we show that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models. Low-performing models show, in fact, periodic activations analogous to those of random untrained models. This could give very general criteria for identifying, a priori, trained models that have poor accuracy.
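
    A minimal sketch of the classification setting, not of the LKCNN architecture itself: logistic-map time series labeled regular versus chaotic from the map's Lyapunov exponent, the quantity the "manual" baseline reconstructs.

    import numpy as np

    def logistic_series(r, n=500, burn=100, x0=0.4):
        """Iterate x -> r*x*(1-x); also estimate the Lyapunov exponent along the orbit."""
        x = x0
        for _ in range(burn):
            x = r * x * (1 - x)
        out = np.empty(n)
        lyap = 0.0
        for i in range(n):
            x = r * x * (1 - x)
            out[i] = x
            lyap += np.log(abs(r * (1 - 2 * x)))   # log |f'(x)| for the logistic map
        return out, lyap / n                       # positive exponent -> chaotic

    rng = np.random.default_rng(1)
    series, labels = [], []
    for r in rng.uniform(3.5, 4.0, size=200):
        s, lam = logistic_series(r)
        series.append(s)
        labels.append(int(lam > 0))   # 1 = chaotic, 0 = regular
    X, y = np.stack(series), np.array(labels)
    print(X.shape, y.mean())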

    How neural networks learn to classify chaotic time series

    We tackle the outstanding issue of analyzing the inner workings of neural networks trained to classify regular-vs-chaotic time series. This setting, well-studied in dynamical systems, enables thorough formal analyses. We focus specifically on a family of networks dubbed large kernel convolutional neural networks (LKCNNs), recently introduced by Boullé et al. [403, 132261 (2021)]. These non-recursive networks have been shown to outperform other established architectures (e.g., residual networks, shallow neural networks, and fully convolutional networks) at this classification task. Furthermore, they outperform "manual" classification approaches based on direct reconstruction of the Lyapunov exponent. We find that LKCNNs use qualitative properties of the input sequence. We show that LKCNN models trained from random weight initializations fall into two main performance groups: one with relatively low performance (0.72 average classification accuracy) and one with high performance (0.94 average classification accuracy). Notably, the models in the low-performance group display periodic activations that are qualitatively similar to those exhibited by LKCNNs with random weights. This could give very general criteria for identifying, a priori, trained weights that yield poor accuracy.
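
    A minimal sketch of the kind of periodicity check alluded to above: estimating the dominant period of a one-dimensional signal (an input series or a convolutional feature map) from its autocorrelation, so that input and activation periods can be compared. Function and parameter names are illustrative.

    import numpy as np

    def dominant_period(signal, max_lag=100):
        """Return the lag of the highest autocorrelation peak (0 if none is found)."""
        s = signal - signal.mean()
        acf = np.correlate(s, s, mode="full")[len(s) - 1:]
        acf /= acf[0]
        best_lag, best_val = 0, 0.0
        for lag in range(2, min(max_lag, len(acf) - 1)):
            # keep the strongest local maximum of the autocorrelation
            if acf[lag] > acf[lag - 1] and acf[lag] > acf[lag + 1] and acf[lag] > best_val:
                best_lag, best_val = lag, acf[lag]
        return best_lag

    x = np.sin(2 * np.pi * np.arange(400) / 20)                         # period-20 input
    activation = np.tanh(np.convolve(x, np.ones(7) / 7, mode="same"))   # toy "feature map"
    print(dominant_period(x), dominant_period(activation))              # both report 20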
