
    Asynchronous spiking neural P systems

    We consider here spiking neural P systems with a non-synchronized (i.e., asynchronous) use of rules: in any step, a neuron may or may not apply the rules that are enabled by the number of spikes it contains (further spikes can arrive, thus changing the rules enabled in the next step). Because the time between two firings of the output neuron is now irrelevant, the result of a computation is the number of spikes sent out by the system, not the distance between certain spikes leaving the system. The additional non-determinism that non-synchronization introduces into the functioning of the system is proved not to decrease the computing power when extended rules are used (a rule can produce several spikes). That is, we again obtain equivalence with Turing machines (interpreted as generators of sets of (vectors of) numbers). Whether the same holds for standard spiking neural P systems, whose rules can produce only one spike, remains open. On the other hand, we prove that asynchronous systems with extended rules, where each neuron is either bounded or unbounded, are not computationally complete. For these systems, the configuration reachability, membership (in terms of generated vectors), emptiness, infiniteness, and disjointness problems are shown to be decidable, while containment and equivalence are undecidable.
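
    A minimal Python sketch of one asynchronous computation step under the semantics described above. The Neuron class, the rule format (threshold, consumed, produced), and the exact-count firing condition are simplifying assumptions (actual SN P rules are enabled by regular expressions over the spike count); spikes emitted by the output neuron are accumulated as the result, as in the abstract.

        import random

        class Neuron:
            def __init__(self, spikes, rules):
                self.spikes = spikes      # number of spikes currently stored
                self.rules = rules        # list of (threshold, consumed, produced)

        def async_step(neurons, synapses, out_id, emitted):
            """One non-synchronized step: every enabled neuron may fire or stay idle.
            neurons: dict id -> Neuron; synapses: dict id -> list of target ids;
            emitted: one-element list accumulating spikes sent out by the output neuron."""
            incoming = {nid: 0 for nid in neurons}
            for nid, n in neurons.items():
                enabled = [r for r in n.rules if n.spikes == r[0]]
                if enabled and random.choice([True, False]):    # asynchronous choice
                    _, consumed, produced = random.choice(enabled)
                    n.spikes -= consumed
                    if nid == out_id:
                        emitted[0] += produced                  # result = spikes sent out
                    for tgt in synapses.get(nid, []):
                        incoming[tgt] += produced               # extended rules: several spikes
            for nid, k in incoming.items():
                neurons[nid].spikes += k                        # spikes available next step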

    Asynchronous Spiking Neural P Systems with Local Synchronization

    Spiking neural P systems (SN P systems, for short) are a class of distributed parallel computing devices inspired by the way neurons communicate by means of spikes. Asynchronous SN P systems are non-synchronized systems, where the use of spiking rules (even if they are enabled by the contents of neurons) is not obligatory. In this paper, with a biological inspiration (in order to achieve some specific biological functioning, neurons from the same functioning motif or community work synchronously to cooperate with each other), we introduce the notion of local synchronization into asynchronous SN P systems. The computational power of asynchronous SN P systems with local synchronization is investigated. Such systems consisting of general neurons (resp. unbounded neurons) and using standard spiking rules are proved to be universal. Asynchronous SN P systems with local synchronization consisting of bounded neurons and using standard spiking rules characterize the semilinear sets of natural numbers. These results show that local synchronization is useful: it provides some "programming capacity" that helps achieve a desired computational power.
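
    The sketch below illustrates one plausible reading of the local-synchronization constraint, assuming the semantics that the fire-or-wait choice is made once per locally synchronized set, so that enabled neurons in the same set either all fire or all stay idle in a given step; the names ls_sets, enabled, and fire are illustrative, not the paper's notation.

        import random

        def locally_synchronized_step(neuron_ids, ls_sets, enabled, fire):
            """neuron_ids: iterable of neuron ids; ls_sets: list of sets of ids;
            enabled(nid) -> bool; fire(nid) applies one enabled rule of that neuron."""
            covered = set().union(*ls_sets) if ls_sets else set()
            for group in ls_sets:
                # One asynchronous decision for the whole set ...
                if any(enabled(n) for n in group) and random.choice([True, False]):
                    for n in group:
                        if enabled(n):
                            fire(n)        # ... so enabled members fire together
            for n in neuron_ids:
                if n not in covered and enabled(n) and random.choice([True, False]):
                    fire(n)                # neurons outside every set stay fully asynchronous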

    Asynchronous Spiking Neural P Systems with Structural Plasticity

    Spiking neural P (in short, SNP) systems are computing devices inspired by biological spiking neurons. In this work we consider SNP systems with structural plasticity (in short, SNPSP systems) working in the asynchronous (in short, asyn) mode. SNPSP systems are a class of SNP systems that have dynamic synapses, i.e., neurons can use plasticity rules to create or remove synapses. We prove that in asyn mode, bounded SNPSP systems (where any neuron produces at most one spike each step) are not universal, while unbounded SNPSP systems with weighted synapses (a weight associated with each synapse allows a neuron to produce more than one spike each step) are universal. The latter systems are similar to SNP systems with extended rules in asyn mode (known to be universal), while the former are similar to SNP systems with standard rules only in asyn mode (conjectured not to be universal). Our results thus lend support to that conjecture on the still-open problem.
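
    As a rough illustration of what a plasticity rule does, the sketch below applies one such rule by rewiring the synapse graph instead of producing spikes; the tuple (consumed, op, k, candidates) is an assumed simplification of the usual rule notation, and details such as weighted synapses or sending spikes over newly created synapses are omitted.

        import random

        def apply_plasticity_rule(spikes, synapses, src, rule):
            """spikes: dict neuron id -> spike count; synapses: set of (src, tgt) pairs;
            rule: (consumed, op, k, candidates) with op in {'+', '-'}."""
            consumed, op, k, candidates = rule
            spikes[src] -= consumed                     # the rule still consumes spikes
            if op == '+':                               # create up to k missing synapses
                absent = [t for t in candidates if (src, t) not in synapses]
                for t in random.sample(absent, min(k, len(absent))):
                    synapses.add((src, t))
            elif op == '-':                             # delete up to k existing synapses
                present = [t for t in candidates if (src, t) in synapses]
                for t in random.sample(present, min(k, len(present))):
                    synapses.discard((src, t))
            return synapses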

    Asynchronous Spiking Neural P Systems with Multiple Channels and Symbols

    Spiking neural P systems (SNP systems, in short) are a class of distributed parallel computation systems inspired by the way neurons process and communicate information by means of spikes. A new variant of SNP systems that works in asynchronous mode, asynchronous spiking neural P systems with multiple channels and symbols (ASNP-MCS systems, in short), is investigated in this paper. ASNP-MCS systems have two interesting features: multiple channels and multiple symbols. That is, every neuron has more than one synaptic channel to connect to its subsequent neurons, and every neuron can deal with more than one type of spike. The variant works in asynchronous mode: in every step, each neuron is free to fire or not when its rules can be applied. The computational completeness of ASNP-MCS systems is investigated. It is proved that ASNP-MCS systems as number-generating and number-accepting devices are Turing universal. Moreover, we obtain a small universal function-computing device that is an ASNP-MCS system with 67 neurons. In particular, a new idea that can solve "block" problems is proposed in the INPUT modules.
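
    A small sketch of the two distinguishing features described above: per-neuron multisets over several spike symbols, and rules that emit along a chosen synaptic channel. The rule tuple (needed, consumed, produced, channel) and the dictionaries used here are illustrative assumptions about the rule format, not the paper's definitions.

        def fire_rule(state, rule, channels, incoming):
            """state: dict symbol -> count for one neuron;
            channels: dict channel id -> list of target neuron ids;
            incoming: dict target id -> dict symbol -> count (buffer for the next step)."""
            needed, consumed, produced, channel = rule
            if not all(state.get(sym, 0) == n for sym, n in needed.items()):
                return False                             # rule not enabled
            for sym, n in consumed.items():              # consume spikes of each symbol
                state[sym] -= n
            for tgt in channels[channel]:                # emit only on the chosen channel
                for sym, n in produced.items():
                    incoming[tgt][sym] = incoming[tgt].get(sym, 0) + n
            return True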

    Spiking Neural P Systems

    Spiking neural P systems are a class of distributed and parallel computing models inspired by the neurophysiological behavior of neurons sending electrical impulses (spikes) along axons to other neurons. In this thesis, we show that spiking neural P systems are universal even if the systems work in limited asynchronous mode. We also investigate different variants of spiking neural P systems with additional features, such as axon functioning, the growth of dendritic trees in neurons, positive or negative weights on synapses, and astrocytes having an excitatory or inhibitory influence on synapses.

    Asynchronous spiking neurons, the natural key to exploit temporal sparsity

    Inference of deep neural networks for stream signals (video/audio) in edge devices is still challenging. Unlike most state-of-the-art inference engines, which are efficient for static signals, our brain is optimized for real-time dynamic signal processing. We believe one important feature of the brain, asynchronous stateful processing, is the key to its excellence in this domain. In this work, we show how asynchronous processing with stateful neurons allows exploitation of the sparsity present in natural signals. This paper explains three different types of sparsity and proposes an inference algorithm that exploits all of them in the execution of already trained networks. Our experiments in three different applications (handwritten digit recognition, autonomous steering, and hand-gesture recognition) show that this model of inference reduces the number of required operations for sparse input data by one to two orders of magnitude. Additionally, because the processing is fully asynchronous, this type of inference can run on fully distributed and scalable neuromorphic hardware platforms.
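
    One common way to exploit the temporal sparsity described above is delta-based (event-driven) inference, where each unit keeps its accumulated pre-activation and only nonzero input changes are propagated. The sketch below shows this for a single dense layer; it illustrates the kind of sparsity being exploited, not the authors' exact inference algorithm.

        import numpy as np

        class DeltaLayer:
            def __init__(self, weights):
                self.w = weights                          # shape (n_in, n_out)
                self.acc = np.zeros(weights.shape[1])     # persistent (stateful) neuron state

            def update(self, delta_in):
                """delta_in: change of the input vector since the previous frame."""
                active = np.nonzero(delta_in)[0]          # unchanged inputs cost nothing
                for i in active:
                    self.acc += delta_in[i] * self.w[i]   # propagate only the deltas
                return np.maximum(self.acc, 0.0)          # ReLU of the accumulated state

    Feeding frame-to-frame differences instead of raw frames makes the number of multiply-accumulate operations proportional to the number of changed inputs, which is the kind of saving the abstract reports for sparse input data.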

    Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons

    We consider the task of measuring time with probabilistic threshold gates implemented by bio-inspired spiking neurons. In the model of spiking neural networks, the network evolves in discrete rounds, where in each round neurons fire in pulses in response to a sufficiently high membrane potential. This potential is induced by spikes from neighboring neurons that fired in the previous round, which can have either an excitatory or an inhibitory effect. Discovering the underlying mechanisms by which the brain perceives the duration of time is one of the largest open enigmas in computational neuroscience. To gain a better algorithmic understanding of these processes, we introduce the neural timer problem. In this problem, one is given a time parameter t, an input neuron x, and an output neuron y. It is then required to design a minimum-sized neural network (measured by the number of auxiliary neurons) in which every spike from x in a given round i makes the output y fire for the subsequent t consecutive rounds. We first consider a deterministic implementation of a neural timer and show that Theta(log t) (deterministic) threshold gates are both sufficient and necessary. This raises the question of whether randomness can be leveraged to reduce the number of neurons. We answer this question in the affirmative by considering neural timers with spiking neurons, where the neuron y is required to fire for t consecutive rounds with probability at least 1-delta and should stop firing after at most 2t rounds with probability 1-delta, for some input parameter delta in (0,1). Our key result is a construction of a neural timer with O(log log 1/delta) spiking neurons. Interestingly, this construction uses only one spiking neuron, while the remaining neurons can be deterministic threshold gates. We complement this construction with a matching lower bound of Omega(min{log log 1/delta, log t}) neurons. This provides the first separation between deterministic and randomized constructions in the setting of spiking neural networks. Finally, we demonstrate the usefulness of compressed counting networks for synchronizing neural networks. In the spirit of distributed synchronizers [Awerbuch-Peleg, FOCS'90], we provide a general transformation (or simulation) that can take any synchronized network solution and simulate it in an asynchronous setting (where edges have arbitrary response latencies) while incurring a small overhead w.r.t. the number of neurons and computation time.
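
    A toy simulation of the neural timer specification is sketched below, following the problem statement above: a spike on x loads a down-counter with t, and y fires while the counter is positive. The ceil(log2(t+1)) bits of counter state convey the intuition behind the Theta(log t) deterministic bound; this is not the paper's gate-level threshold-circuit construction, and the function name and signature are made up for illustration.

        from math import ceil, log2

        def neural_timer(t, x_spikes, rounds):
            """x_spikes: set of rounds in which the input neuron x spikes.
            Returns the bits of counter state used and the firing pattern of y."""
            bits = max(1, ceil(log2(t + 1)))   # state needed to count down from t
            counter = 0
            y_fires = []
            for r in range(rounds):
                y_fires.append(counter > 0)    # y fires while the timer is running
                if counter > 0:
                    counter -= 1
                if r in x_spikes:              # a spike from x (re)starts the timer
                    counter = t
            return bits, y_fires

    For example, neural_timer(5, {2}, 12) uses 3 bits of counter state and has y fire in rounds 3 through 7, i.e., for the t = 5 rounds following the spike of x in round 2.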

    Memory and information processing in neuromorphic systems

    A striking difference between brain-inspired neuromorphic processors and current von Neumann processor architectures is the way in which memory and processing are organized. As Information and Communication Technologies continue to address the need for increased computational power by increasing the number of cores within a digital processor, neuromorphic engineers and scientists can complement this approach by building processor architectures where memory is distributed with the processing. In this paper we present a survey of brain-inspired processor architectures that support models of cortical networks and deep neural networks. These architectures range from serial clocked implementations of multi-neuron systems to massively parallel asynchronous ones, and from purely digital systems to mixed analog/digital systems that implement more biologically realistic models of neurons and synapses together with a suite of adaptation and learning mechanisms analogous to those found in biological nervous systems. We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.