
    Spiking Neural P Systems: A Short Introduction and New Normal Forms

    Spiking neural P systems are a class of P systems inspired by the way neurons communicate with each other by means of electrical impulses (called "spikes"). In the few years since this model was introduced, many results related to the computing power and efficiency of these computing devices have been reported. The present paper quickly surveys the basic ideas of this research area and the basic results; then, as typical proofs about the universality of spiking neural P systems, we present some new normal forms for them. Specifically, we consider a natural restriction on the architecture of a spiking neural P system: having neurons of a small number of types (i.e., using a small number of sets of rules). We prove that three types of neurons are sufficient in order to generate each recursively enumerable set of numbers as the distance between the first two spikes emitted by the system; the problem remains open for accepting SN P systems. The paper ends with the complete bibliography of this domain, at the level of April 2009. Ministerio de Educación y Ciencia TIN2006-13452; Junta de Andalucía P08-TIC-0420
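
    To make the output convention concrete, here is a minimal toy sketch in Python, under strong simplifying assumptions that are not taken from the paper: firing conditions are exact spike counts rather than regular expressions over spikes, and rule delays are omitted. The two-neuron example only illustrates how the computed number is read off as the distance between the first two spikes of the output neuron.

    import random

    # A rule is a tuple (required_spikes, consumed_spikes, emitted_spikes);
    # emitted_spikes = 0 stands for a forgetting rule.

    class Neuron:
        def __init__(self, spikes, rules, targets):
            self.spikes = spikes      # current number of spikes in the neuron
            self.rules = rules        # list of (require, consume, emit) tuples
            self.targets = targets    # indices of neurons reached by outgoing synapses

    def step(neurons, out_idx, t, out_times):
        """Apply one applicable rule per neuron (chosen nondeterministically) in lockstep."""
        incoming = [0] * len(neurons)
        for i, n in enumerate(neurons):
            applicable = [r for r in n.rules if n.spikes == r[0]]
            if not applicable:
                continue
            _, consume, emit = random.choice(applicable)
            n.spikes -= consume
            if emit:
                for j in n.targets:
                    incoming[j] += emit
                if i == out_idx:
                    out_times.append(t)   # the output neuron spiked at time t
        for j, k in enumerate(incoming):
            neurons[j].spikes += k

    # Two-neuron ping-pong: the number computed is the distance between the
    # first two spikes of the output neuron (here 2).
    neurons = [
        Neuron(spikes=1, rules=[(1, 1, 1)], targets=[1]),   # output neuron
        Neuron(spikes=0, rules=[(1, 1, 1)], targets=[0]),   # relay neuron
    ]
    out_times, t = [], 0
    while len(out_times) < 2 and t < 100:
        step(neurons, out_idx=0, t=t, out_times=out_times)
        t += 1
    print("computed number:", out_times[1] - out_times[0])   # -> 2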

    Spiking Neural P Systems. Recent Results, Research Topics

    After a quick introduction to spiking neural P systems (a class of P systems inspired by the way neurons communicate by means of spikes, electrical impulses of identical shape) and a presentation of typical results (in general, equivalence with Turing machines as number computing devices, but also other issues, such as the possibility of handling strings or infinite sequences), we present a long list of open problems and research topics in this area, also mentioning recent attempts to address some of them. The bibliography completes the information offered to the reader interested in this research area. Ministerio de Educación y Ciencia TIN2006-13425; Junta de Andalucía TIC-58

    The Utility of Phase Models in Studying Neural Synchronization

    Synchronized neural spiking is associated with many cognitive functions and thus merits study for its own sake. The analysis of neural synchronization naturally leads to the study of repetitive spiking and consequently to the analysis of coupled neural oscillators. Coupled oscillator theory thus informs the synchronization of spiking neuronal networks. A crucial aspect of coupled oscillator theory is the phase response curve (PRC), which describes the impact of a perturbation on the phase of an oscillator. In neural terms, the perturbation represents an incoming synaptic potential, which may either advance or retard the timing of the next spike. The phase response curves and the form of coupling between reciprocally coupled oscillators define the phase interaction function, which in turn predicts the synchronization outcome (in-phase versus anti-phase) and the rate of convergence. We review the two classes of PRC and demonstrate the utility of the phase model in predicting synchronization in reciprocally coupled neural models. In addition, we compare the rate of convergence for all combinations of reciprocally coupled Class I and Class II oscillators. These findings predict the general synchronization outcomes of broad classes of neurons under both inhibitory and excitatory reciprocal coupling. Comment: 18 pages, 5 figures
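
    As a concrete illustration of the prediction step, the following sketch computes the phase interaction function numerically and checks the stability of the in-phase and anti-phase states for two identical, reciprocally coupled oscillators. The functional forms are generic assumptions, not those of the paper: a non-negative PRC 1 - cos(theta) stands in for Class I, a sign-changing -sin(theta) for Class II, and an exponentially decaying pulse for the synaptic waveform.

    import numpy as np

    T = 2 * np.pi
    theta = np.linspace(0, T, 2000, endpoint=False)

    def prc_class1(th):
        # non-negative PRC typical of Class I neurons (assumed form)
        return 1.0 - np.cos(th)

    def prc_class2(th):
        # sign-changing PRC typical of Class II neurons (assumed form)
        return -np.sin(th)

    def synapse(th, tau=0.5, sign=+1.0):
        # pulse arriving at phase 0 with an exponential decay; sign=-1 for inhibition
        return sign * np.exp(-th / tau)

    def interaction_function(prc, syn):
        # H(phi) = (1/T) * integral over one cycle of Z(t) * s(t + phi) dt
        Z = prc(theta)
        return np.array([np.mean(Z * syn((theta + phi) % T)) for phi in theta])

    def predict_synchronization(prc, syn):
        H = interaction_function(prc, syn)
        H_minus = np.roll(H[::-1], 1)          # H(-phi) sampled on the same grid
        G = H_minus - H                        # dphi/dt for the phase difference phi
        dG = np.gradient(G, theta)
        return {
            "in-phase stable": bool(dG[0] < 0),                 # phi* = 0
            "anti-phase stable": bool(dG[len(theta) // 2] < 0)  # phi* = pi
        }

    print("Class I,  excitatory:", predict_synchronization(prc_class1, synapse))
    print("Class II, inhibitory:", predict_synchronization(prc_class2,
                                                           lambda th: synapse(th, sign=-1.0)))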

    Topological exploration of artificial neuronal network dynamics

    One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
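
    A rough sketch of such a pipeline, with all specifics assumed rather than taken from the paper: synthetic Poisson spike trains stand in for simulated network activity, a simple van Rossum style dissimilarity replaces the three spike-train distances used by the authors, ripser computes the persistence diagrams, and a random forest plays the role of the classifier.

    import numpy as np
    from ripser import ripser                      # pip install ripser
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def van_rossum_distance(s1, s2, tau=0.05, t_max=1.0, dt=0.001):
        # convolve each spike train with an exponential kernel and take the L2 norm
        t = np.arange(0, t_max, dt)
        def filtered(spikes):
            return sum(np.exp(-(t - s) / tau) * (t >= s) for s in spikes)
        d = filtered(s1) - filtered(s2)
        return np.sqrt(np.sum(d ** 2) * dt / tau)

    def topological_features(spike_trains):
        # pairwise dissimilarity matrix -> persistence diagrams -> coarse summaries
        n = len(spike_trains)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = van_rossum_distance(spike_trains[i], spike_trains[j])
        dgms = ripser(D, distance_matrix=True, maxdim=1)["dgms"]
        feats = []
        for dgm in dgms:                           # H0 and H1 diagrams
            finite = dgm[np.isfinite(dgm[:, 1])]
            pers = finite[:, 1] - finite[:, 0]
            feats += [len(finite), pers.sum(), pers.max() if len(pers) else 0.0]
        return np.array(feats)

    def synthetic_network(rate, n_neurons=20, t_max=1.0, rng=None):
        # Poisson spike trains as a stand-in for one simulated network's activity
        rng = rng or np.random.default_rng()
        return [np.sort(rng.uniform(0, t_max, rng.poisson(rate * t_max)))
                for _ in range(n_neurons)]

    rng = np.random.default_rng(0)
    rates = [5] * 20 + [30] * 20                   # two toy "regimes" differing in firing rate
    X = [topological_features(synthetic_network(r, rng=rng)) for r in rates]
    y = [0] * 20 + [1] * 20
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))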

    Models wagging the dog: are circuits constructed with disparate parameters?

    In a recent article, Prinz, Bucher, and Marder (2004) addressed, using a database modeling approach, the fundamental question of whether neural systems are built with a fixed blueprint of tightly controlled parameters or in a way in which properties can vary widely from one individual to another. Here, we examine the main conclusion that neural circuits indeed are built with largely varying parameters in the light of our own experimental and modeling observations. We critically discuss the experimental and theoretical evidence, including the general adequacy of database approaches for questions of this kind, and come to the conclusion that the last word on this fundamental question has not yet been spoken.

    Dimensions of Neural-symbolic Integration - A Structured Survey

    Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities. Comment: 28 pages