
    Synchronization in model networks of class I neurons

    We study a modification of the Hoppensteadt-Izhikevich canonical model for networks of class I neurons, in which the 'pulse' emitted by a neuron is smooth rather than a delta-function. We prove two types of results about synchronization and desynchronization of such networks: the first pertains to 'pulse' functions which are symmetric, and the second to the regime in which each neuron is connected to many other neurons.
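
    The flavour of the model is easy to prototype. Below is a minimal sketch of a network of theta neurons (the class I canonical form) in which the delta-function pulse is replaced by a smooth, von Mises-shaped bump; the pulse shape, coupling strength, and all parameter values are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Network of theta (class I canonical) neurons with a smooth pulse.
# All parameters below are illustrative, not the paper's exact model.
rng = np.random.default_rng(0)
N = 50                    # number of neurons
g = 0.5 / N               # coupling strength, scaled by network size
eta = 0.1                 # common excitability (> 0: intrinsically spiking)
dt, T = 1e-3, 20.0

theta = rng.uniform(-np.pi, np.pi, N)   # phases on the circle

def smooth_pulse(th, kappa=20.0):
    """Smooth bump centred at the spike phase theta = pi (von Mises shape)."""
    return np.exp(kappa * (np.cos(th - np.pi) - 1.0))

for _ in range(int(T / dt)):
    s = g * smooth_pulse(theta).sum()                  # summed smooth pulses
    dtheta = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (eta + s)
    theta = (theta + dt * dtheta + np.pi) % (2 * np.pi) - np.pi

# Kuramoto order parameter near 1 indicates synchrony, near 0 desynchrony.
print(abs(np.exp(1j * theta).mean()))
```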

    An Adaptive Locally Connected Neuron Model: Focusing Neuron

    This paper presents a new artificial neuron model capable of learning its receptive field in the topological domain of its inputs. The model provides adaptive and differentiable local connectivity (plasticity) applicable to any domain. It requires no tool other than the backpropagation algorithm to learn its parameters, which control the receptive field locations and apertures. This research explores whether this ability makes the neuron focus on informative inputs and yields any advantage over fully connected neurons. The experiments include tests of focusing-neuron networks with one or two hidden layers on synthetic and well-known image recognition data sets. The results demonstrate that focusing neurons can move their receptive fields towards more informative inputs. In simple two-hidden-layer networks, the focusing layers outperformed the dense layers in the classification of the 2D spatial data sets. Moreover, the focusing networks performed better than the dense networks even when 70% of the weights were pruned. The tests on convolutional networks revealed that using focusing layers instead of dense layers for the classification of convolutional features may work better on some data sets.
    Comment: 45 pages; a national patent filed with the Turkish Patent Office, No: -2017/17601, Date: 09.11.201
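
    As a rough illustration, the sketch below implements a "focusing" layer whose units carry a learnable receptive-field centre and aperture over a 1-D input topology, realised as a differentiable Gaussian envelope on the weights, so that ordinary backpropagation trains the locations and apertures. This parameterisation (the names mu and log_sigma, and the Gaussian envelope itself) is a hypothetical stand-in and may differ from the paper's formulation.

```python
import torch

# Hypothetical "focusing" layer: each output unit has a learnable
# receptive-field centre (mu) and aperture (sigma) over a 1-D input axis,
# realised as a Gaussian envelope multiplying an ordinary weight matrix.
class FocusingLayer(torch.nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.pos = torch.linspace(0.0, 1.0, n_in)             # input coordinates
        self.mu = torch.nn.Parameter(torch.rand(n_out, 1))    # RF centres
        self.log_sigma = torch.nn.Parameter(torch.full((n_out, 1), -1.0))  # RF apertures
        self.weight = torch.nn.Parameter(torch.randn(n_out, n_in) * 0.05)
        self.bias = torch.nn.Parameter(torch.zeros(n_out))

    def forward(self, x):                                     # x: (batch, n_in)
        sigma = self.log_sigma.exp()
        env = torch.exp(-0.5 * ((self.pos - self.mu) / sigma) ** 2)
        return x @ (self.weight * env).t() + self.bias        # focused linear map

layer = FocusingLayer(64, 10)
out = layer(torch.randn(8, 64))
out.sum().backward()          # gradients reach mu and sigma, so the receptive
print(layer.mu.grad.shape)    # fields can move and resize during training
```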

    Leader neurons in leaky integrate and fire neural network simulations

    Several experimental studies show the existence of leader neurons in population bursts of 2D living neural networks. A leader neuron is, basically, a neuron which fires at the beginning of a burst (respectively, network spike) more often than one would expect from its overall mean neural activity. This means that leader neurons have some burst-triggering power beyond a simple statistical effect. In this study, we characterize the properties of these leader neurons, which naturally leads us to simulate 2D neural networks. For our simulations, we choose the leaky integrate-and-fire (LIF) neuron model. Our LIF model exhibits stable leader neurons in the population bursts that we simulate. These leader neurons are excitatory neurons and have a low membrane-potential firing threshold. Beyond these first two properties, the conditions required for a neuron to be a leader are difficult to identify and seem to depend on several parameters of the simulations themselves. However, a detailed linear analysis shows a trend in the properties required for a neuron to be a leader. Our main finding is that a leader neuron sends signals to many excitatory neurons and to only a few inhibitory neurons, while receiving only a few signals from other excitatory neurons. Our linear analysis exhibits five essential properties of leader neurons, ranked by relative importance. This means that, for a given neural network with a fixed mean number of connections per neuron, our analysis gives a way of predicting which neurons can be good leader neurons and which cannot. Our prediction formula gives a good statistical prediction even if, for a single given neuron, the success rate does not reach one hundred percent.
    Comment: 25 pages, 13 figures, 2 tables
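
    A toy version of such a simulation, with crude burst and leader bookkeeping, might look as follows; the connectivity statistics, thresholds, drive, and leader criterion are simplified assumptions rather than the paper's exact setup.

```python
import numpy as np

# Toy LIF network: sparse random connectivity, 80% excitatory neurons,
# heterogeneous thresholds. A crude "leader" score counts how often a
# neuron is the first to fire in a burst. All parameters are illustrative.
rng = np.random.default_rng(1)
N, p = 100, 0.1
exc = rng.random(N) < 0.8                      # excitatory mask
W = (rng.random((N, N)) < p) * rng.uniform(0.05, 0.15, (N, N))
W[:, ~exc] *= -1.0                             # inhibitory columns are negative
np.fill_diagonal(W, 0.0)

thresh = rng.uniform(0.8, 1.2, N)              # heterogeneous firing thresholds
v = np.zeros(N)
tau, dt = 20.0, 1.0
first_fire = np.zeros(N)
in_burst = False

for _ in range(5000):
    drive = rng.random(N) < 0.005              # sparse random external kicks
    spikes = v >= thresh
    if spikes.any() and not in_burst:
        first_fire[np.argmax(v - thresh)] += 1 # crude leader bookkeeping
        in_burst = True
    elif not spikes.any():
        in_burst = False
    v = np.where(spikes, 0.0, v)               # reset fired neurons
    v = v * (1 - dt / tau) + W @ spikes.astype(float) + 1.0 * drive

print("candidate leaders:", np.argsort(first_fire)[-5:])
```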

    Stochastic IMT (insulator-metal-transition) neurons: An interplay of thermal and threshold noise at bifurcation

    Artificial neural networks can harness stochasticity in multiple ways to enable a vast class of computationally powerful models. Electronic implementation of such stochastic networks is currently limited to the addition of algorithmic noise to digital machines, which is inherently inefficient, although recent efforts to harness physical noise in devices for stochasticity have shown promise. To succeed in fabricating electronic neuromorphic networks, we need experimental evidence of devices with measurable and controllable stochasticity, complemented by reliable statistical models of the observed stochasticity. The current research literature has sparse evidence of the former and a complete lack of the latter. This motivates the present article, in which we demonstrate a stochastic neuron using an insulator-metal-transition (IMT) device, based on an electrically induced phase transition, in series with a tunable resistance. We show that an IMT neuron has dynamics similar to a piecewise-linear FitzHugh-Nagumo (FHN) neuron and incorporates all the characteristics of a spiking neuron in the device phenomena. We experimentally demonstrate spontaneous stochastic spiking along with electrically controllable firing probabilities using vanadium dioxide (VO2) based IMT neurons, which show a sigmoid-like transfer function. The stochastic spiking is explained by two noise sources - thermal noise and threshold fluctuations - which act as precursors of bifurcation. As such, the IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating boundary, resulting in transfer curves that closely match experiments. As one of the first comprehensive studies of stochastic neuron hardware and its statistical properties, this article should enable efficient implementation of a large class of neuro-mimetic networks and algorithms.
    Comment: Added sectioning, Figure 6, Table 1, and Section II.E; updated abstract and discussion; corrected a typo
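
    The OU-with-fluctuating-boundary picture is straightforward to prototype numerically. In the sketch below, a membrane-like variable relaxes toward a drive level under thermal noise and fires when it crosses a jittering threshold; sweeping the drive traces out the sigmoid-like firing-probability curve described above. All constants are illustrative, not fitted to the VO2 devices.

```python
import numpy as np

# Ornstein-Uhlenbeck membrane variable with a fluctuating firing boundary.
# Thermal noise drives the OU process; threshold noise jitters the boundary.
# Parameter values are illustrative, not fitted to the paper's devices.
rng = np.random.default_rng(2)
dt, tau, T = 0.01, 1.0, 5.0
sigma_th = 0.4        # thermal (membrane) noise strength
sigma_b = 0.1         # threshold-fluctuation strength
theta0 = 1.0          # mean firing threshold

def firing_prob(drive, trials=500):
    """Fraction of trials in which the OU variable crosses the noisy boundary."""
    x = np.zeros(trials)
    fired = np.zeros(trials, dtype=bool)
    for _ in range(int(T / dt)):
        x += (drive - x) * dt / tau + sigma_th * np.sqrt(dt) * rng.standard_normal(trials)
        boundary = theta0 + sigma_b * rng.standard_normal(trials)
        fired |= x >= boundary
    return fired.mean()

for d in (0.4, 0.7, 1.0, 1.3):
    print(d, firing_prob(d))    # firing probability rises sigmoidally with drive
```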

    Similarity networks for classification: a case study in the Horse Colic problem

    This paper develops a two-layer neural network in which the neuron model computes a user-defined similarity function between inputs and weights. The neuron transfer function is formed by composing an adapted logistic function with the mean of the partial input-weight similarities. The resulting neuron model can deal directly with variables of potentially different natures (continuous, fuzzy, ordinal, categorical), and there is also provision for missing values. The network is trained using a two-stage procedure very similar to that used to train a radial basis function (RBF) neural network. The network is compared to two types of RBF networks on a non-trivial data set, the Horse Colic problem, taken as a case study and analyzed in detail.
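
    A bare-bones rendition of such a neuron is sketched below: each feature is compared with the corresponding prototype value using a similarity suited to its type (Gower-style for continuous, exact match for categorical), missing values are skipped, and an adapted logistic maps the mean partial similarity to the activation. The specific similarity functions and the logistic's steepness and offset are illustrative assumptions, not the paper's exact definitions.

```python
import math

# Heterogeneous similarity neuron: per-feature similarities are averaged and
# passed through an adapted logistic. Similarity choices are illustrative.
def similarity_neuron(x, proto, kinds, ranges, k=8.0):
    sims = []
    for xi, wi, kind, rng_ in zip(x, proto, kinds, ranges):
        if xi is None:                       # provision for missing values
            continue
        if kind == "cont":                   # Gower-style similarity in [0, 1]
            sims.append(1.0 - abs(xi - wi) / rng_)
        else:                                # categorical: exact-match overlap
            sims.append(1.0 if xi == wi else 0.0)
    s = sum(sims) / len(sims)                # mean of the partial similarities
    return 1.0 / (1.0 + math.exp(-k * (s - 0.5)))   # adapted logistic transfer

# One continuous feature (range 10), one categorical, one missing value.
x     = [37.5, "surgical", None]
proto = [38.0, "surgical", "yes"]
print(similarity_neuron(x, proto, ["cont", "cat", "cat"], [10.0, None, None]))
```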

    Switched-Current Chaotic Neurons

    The Letter presents two nonlinear CMOS current-mode circuits that implement neuron soma equations for chaotic neural networks. They have been fabricated in a double-metal, single-poly 1.6 µm CMOS technology. The neuron soma circuits use a novel, highly accurate CMOS circuit strategy to realise piecewise-linear characteristics in the current-mode domain. The prototypes achieve reduced area and a low supply voltage (down to 3 V) at a clock frequency of 500 kHz.
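
    Behaviourally, such a soma realises a clocked piecewise-linear map on the neuron's state current. The stand-in below iterates a tent-shaped PWL map (slope above one yields chaotic orbits); it illustrates only the discrete-time dynamics, not the Letter's measured circuit characteristic.

```python
# Behavioural stand-in for a switched-current PWL chaotic soma: a clocked,
# tent-shaped piecewise-linear map on a normalised current. The slope and
# breakpoint are illustrative, not the Letter's measured characteristic.
def pwl_soma(i, slope=1.9, i_peak=0.5):
    """Piecewise-linear (tent) map; slope > 1 gives chaotic dynamics."""
    return slope * i if i < i_peak else slope * (1.0 - i)

i, orbit = 0.2, []
for _ in range(1000):              # one iteration per 500 kHz clock cycle
    i = pwl_soma(i)
    orbit.append(i)

print(min(orbit), max(orbit))      # orbit stays bounded but never settles
```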