
    An Online Unsupervised Structural Plasticity Algorithm for Spiking Neural Networks

    In this article, we propose a novel Winner-Take-All (WTA) architecture employing neurons with nonlinear dendrites and an online unsupervised structural plasticity rule for training it. Further, to aid hardware implementations, our network employs only binary synapses. The proposed learning rule is inspired by spike-timing-dependent plasticity (STDP) but differs for each dendrite based on its activation level. It trains the WTA network through the formation and elimination of connections between inputs and synapses. To demonstrate the performance of the proposed network and learning rule, we employ it to solve two-, four- and six-class classification of random Poisson spike time inputs. The results indicate that by proper tuning of the inhibitory time constant of the WTA, a trade-off between specificity and sensitivity of the network can be achieved. We use the inhibitory time constant to set the number of subpatterns per pattern we want to detect. We show that while the percentage of successful trials is 92%, 88% and 82% for two-, four- and six-class classification when no pattern subdivisions are made, it increases to 100% when each pattern is subdivided into 5 or 10 subpatterns. However, the former scenario of no pattern subdivision is more jitter-resilient than the latter ones. Comment: 11 pages, 10 figures, journal
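    The abstract describes the rewiring rule only at a high level; the sketch below illustrates one plausible reading of connection-swapping structural plasticity with binary synapses. The network sizes, the squaring nonlinearity, and the `fitness` trace (standing in for the STDP-like, activation-dependent correlation signal) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_DEND, SYN_PER_DEND = 100, 4, 8   # assumed sizes, not from the paper

# Binary connectivity: conn[d] holds the input lines wired onto dendrite d.
conn = [rng.choice(N_IN, SYN_PER_DEND, replace=False) for _ in range(N_DEND)]

def dendrite_drive(x, idx):
    """Lumped dendritic nonlinearity applied to the summed binary-synapse
    input; a square is used here purely as a stand-in."""
    return float(np.sum(x[idx])) ** 2

def structural_step(x, fitness):
    """One rewiring step on the most strongly driven dendrite: eliminate the
    connected input with the lowest fitness and form a connection to the
    best unconnected input. Binary synapses mean a swap, not a weight update."""
    d = int(np.argmax([dendrite_drive(x, c) for c in conn]))
    worst = conn[d][np.argmin(fitness[conn[d]])]
    free = np.setdiff1d(np.arange(N_IN), conn[d])
    best = free[np.argmax(fitness[free])]
    conn[d][conn[d] == worst] = best
```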

    Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations

    In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in terms of readout-stage performance, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons with multiple dendrites, each with a lumped nonlinearity. The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons, and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network, similar to the structural plasticity observed in its biological counterparts. We show that compared to a single perceptron using analog weights, this architecture for the readout can attain, even using the same number of binary-valued synapses, up to 3.3 times lower error for a two-class spike train classification problem and 2.4 times lower error for an input rate approximation task. Even with 60 times as many synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations. Comment: 14 pages, 19 figures, Journal
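    As a rough picture of the architecture described above, the following sketch shows a readout neuron whose dendritic branches each tap a small subset of liquid neurons through binary synapses and apply a lumped nonlinearity. All sizes and the squaring nonlinearity are assumptions for illustration, not the paper's parameters; learning (network rewiring) would then search over which liquid neurons each branch taps, rather than adjusting analog weights.

```python
import numpy as np

rng = np.random.default_rng(1)

N_LIQUID, N_BRANCH, SYN_PER_BRANCH = 200, 10, 5   # illustrative sizes

# Each dendritic branch taps a small subset of liquid neurons (binary synapses).
branches = [rng.choice(N_LIQUID, SYN_PER_BRANCH, replace=False)
            for _ in range(N_BRANCH)]

def readout(liquid_activity):
    """Each branch applies a lumped nonlinearity (square as a stand-in) to
    its summed binary inputs; the soma sums the branch outputs."""
    return sum(float(np.sum(liquid_activity[b])) ** 2 for b in branches)
```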

    Magnetic fields in nearby normal galaxies: Energy equipartition

    We present maps of the total magnetic field using 'equipartition' assumptions for five nearby normal galaxies at sub-kpc spatial resolution. The mean magnetic field is found to be ~11 μG. The field is strongest near the central regions, where mean values are ~20–25 μG, and falls to ~15 μG in the disk and ~10 μG in the outer parts. There is little variation in field strength between arm and interarm regions: in the interarms, the field is less than 20 percent weaker than in the arms. After correcting for the radial variation of the magnetic field, there is no indication of variation in field strength as one moves along an arm or interarm region. We also studied the energy densities in the gaseous and ionized phases of the interstellar medium and compared them to the energy density in the magnetic field. The energy density in the magnetic field was found to be similar to that of the gas to within a factor of <2 at sub-kpc scales in the arms, and thus the magnetic field plays an important role in the pressure balance of the interstellar medium. The magnetic field energy density is seen to dominate over the kinetic energy density of the gas in the interarm regions and outer parts of the galaxies, thereby helping to maintain the large-scale ordered fields seen in those regions. Comment: 12 pages, 6 figures, accepted for publication in MNRAS
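    For orientation, the energy-density comparison above can be made concrete with the standard cgs expression; the numbers below are a back-of-the-envelope check using the quoted mean field, not values from the paper's maps.

```latex
% Magnetic energy density for the quoted mean field (cgs units):
u_B = \frac{B^2}{8\pi}
    = \frac{(11 \times 10^{-6}\,\mathrm{G})^2}{8\pi}
    \approx 4.8 \times 10^{-12}\ \mathrm{erg\,cm^{-3}},
% to be compared with the turbulent kinetic energy density of the gas,
u_{\mathrm{kin}} = \tfrac{1}{2}\,\rho\, v_{\mathrm{turb}}^{2}.
```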

    Magnus Force in High Temperature Superconductivity and Berry Phase

    Within the topological framework of high-temperature superconductivity, we discuss the Magnus force acting on its vortices.
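    For context, the textbook form of the Magnus force per unit length on a vortex line is quoted below; this is the standard expression, not the paper's topological derivation.

```latex
% Magnus force per unit length on a vortex line (standard form):
\mathbf{f}_M = \rho_s\,\boldsymbol{\kappa} \times \left(\mathbf{v}_L - \mathbf{v}_s\right),
\qquad |\boldsymbol{\kappa}| = \frac{h}{2m_e},
% where \rho_s is the superfluid density, \mathbf{v}_L the vortex-line
% velocity, \mathbf{v}_s the superfluid velocity, and the Cooper-pair
% mass 2m_e sets the circulation quantum.
```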

    An Online Structural Plasticity Rule for Generating Better Reservoirs

    In this article, a novel neuro-inspired low-resolution online unsupervised learning rule is proposed to train the reservoir or liquid of a Liquid State Machine. The liquid is a large, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid through the formation and elimination of synaptic connections. Hence, the learning involves rewiring of the reservoir connections, similar to the structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory using the Address Event Representation (AER) protocols generally employed in neuromorphic systems. On investigating the 'pairwise separation property', we find that trained liquids provide 1.36 ± 0.18 times more inter-class separation while retaining similar intra-class separation as compared to random liquids. Moreover, analysis of the 'linear separation property' reveals that trained liquids are 2.05 ± 0.27 times better than random liquids. Furthermore, we show that our liquids retain the 'generalization' ability and 'generality' of random liquids. A memory analysis shows that trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which show a 92.8 ± 5.03 ms fading memory for a particular type of spike train input. We also shed some light on the dynamics of the evolution of the recurrent connections within the liquid. Moreover, compared to 'Separation Driven Synaptic Modification', a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21% and 12.52% more liquid separation and 2.8%, 9.1% and 7.9% better classification accuracy for four-, eight- and twelve-class pattern recognition tasks, respectively. Comment: 45 pages, 13 figures, journal
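    A minimal sketch of what such reservoir rewiring could look like is given below, with the liquid stored AER-style as per-neuron lists of presynaptic sources. The sizes and the `fitness` signal (standing in for the paper's low-resolution unsupervised criterion) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N, FAN_IN = 50, 5   # assumed liquid size and incoming connections per neuron

# AER-style storage: pre[i] lists the presynaptic sources of liquid neuron i.
pre = [rng.choice(np.delete(np.arange(N), i), FAN_IN, replace=False)
       for i in range(N)]

def rewire(i, fitness):
    """Structural plasticity step for neuron i: eliminate the incoming
    connection with the lowest fitness and form one from the best
    unconnected candidate (self-connections excluded)."""
    candidates = np.setdiff1d(np.arange(N), np.append(pre[i], i))
    worst = pre[i][np.argmin(fitness[pre[i]])]
    best = candidates[np.argmax(fitness[candidates])]
    pre[i][pre[i] == worst] = best
```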