
    Spiking Neural P Systems with Extended Rules

    We consider spiking neural P systems with spiking rules allowed to introduce zero, one, or more spikes at the same time. The computing power of the obtained systems is investigated, when considering them as number generating and as language generating devices. In the first case, a simpler proof of universality is obtained (universality is already known for the restricted rules), while in the latter case we find characterizations of finite and recursively enumerable languages (without using any squeezing mechanism, as was necessary in the case of restricted rules). The relationships with regular languages are also investigated. At the end of the paper, a tool-kit for computing (some) operations with languages is provided.
    Ministerio de Educación y Ciencia TIN2005-09345-C04-0
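    As a reading aid (not part of the paper), the following is a minimal sketch of the extended spiking rules described above: a rule E/a^c -> a^p may emit p = 0, 1, or more spikes at once. Delays, the output neuron, and nondeterministic rule choice are omitted, the regular expression E is matched directly against the neuron's spike content, and all names are illustrative.

```python
# A minimal sketch (not the paper's formal construction) of one synchronous step of a
# spiking neural P system with extended rules E / a^c -> a^p.  Delays and the output
# neuron's spike train are omitted, and in each neuron the first applicable rule is
# used instead of a nondeterministic choice.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    guard: str      # regular expression E over the alphabet {a}
    consumed: int   # c: spikes removed from the neuron when the rule fires
    produced: int   # p: spikes sent along every outgoing synapse (p >= 0)

def step(spikes, rules, synapses):
    incoming = {n: 0 for n in spikes}
    for neuron, count in spikes.items():
        for rule in rules.get(neuron, []):
            if count >= rule.consumed and re.fullmatch(rule.guard, "a" * count):
                spikes[neuron] -= rule.consumed
                for target in synapses.get(neuron, []):
                    incoming[target] += rule.produced   # extended rules: p may exceed 1
                break
    for neuron, extra in incoming.items():
        spikes[neuron] += extra
    return spikes

# Two-neuron example: neuron 1 consumes its three spikes and emits two at once.
print(step({1: 3, 2: 0}, {1: [Rule("a{3}", 3, 2)]}, {1: [2]}))   # {1: 0, 2: 2}
```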

    Computing with cells: membrane systems - some complexity issues.

    Membrane computing is a branch of natural computing which abstracts computing models from the structure and the functioning of the living cell. The main ingredients of membrane systems, called P systems, are (i) the membrane structure, which consists of a hierarchical arrangement of membranes delimiting compartments where (ii) multisets of symbols, called objects, evolve according to (iii) sets of rules which are localised and associated with compartments. By using the rules in a nondeterministic/deterministic maximally parallel manner, transitions between system configurations are obtained. A sequence of transitions constitutes a computation, showing how the system evolves. Various ways of controlling the transfer of objects from one membrane to another and of applying the rules, as well as possibilities to dissolve, divide or create membranes, have been studied. Membrane systems have a great potential for implementing massively concurrent systems in an efficient way that would allow us to solve currently intractable problems once future biotechnology gives way to a practical bio-realization. In this paper we survey some interesting and fundamental complexity issues such as universality vs. nonuniversality, determinism vs. nondeterminism, membrane and alphabet size hierarchies, characterizations of context-sensitive languages and other language classes, and various notions of parallelism.
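    To make the "maximally parallel" application of rules concrete, here is a toy sketch (simplified, assumed semantics, not taken from the paper) of one evolution step inside a single compartment: rule instances are assigned until no rule is applicable to the remaining objects. Rule choice is greedy, so nondeterminism and inter-membrane communication are not modeled.

```python
# One maximally parallel evolution step in a single compartment of a P system (toy
# illustration): multiset rewriting rules lhs -> rhs are applied until no rule can
# consume what is left; produced objects become available only after the step ends.
from collections import Counter

def applicable(lhs, objects):
    return all(objects[sym] >= n for sym, n in lhs.items())

def maximally_parallel_step(objects, rules):
    """objects: Counter of symbols; rules: list of (lhs, rhs) Counters."""
    objects = Counter(objects)
    produced = Counter()
    progress = True
    while progress:
        progress = False
        for lhs, rhs in rules:
            if applicable(lhs, objects):
                objects -= lhs
                produced += rhs
                progress = True
    return objects + produced

# Example: rules ab -> c and a -> bb compete for the same objects in the multiset aab.
rules = [(Counter("ab"), Counter("c")), (Counter("a"), Counter("bb"))]
print(maximally_parallel_step(Counter("aab"), rules))   # Counter({'b': 2, 'c': 1})
```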

    On spiking neural P systems

    This work deals with several aspects concerning the formal verification of SN P systems and the computing power of some variants. A methodology based on the information given by the transition diagram associated with an SN P system is presented. The analysis of the cycles of the diagram yields invariant formulae which enable us to establish the soundness and completeness of the system with respect to the problem it is intended to solve. We also study the universality of asynchronous and sequential SN P systems and the capability these models have to generate certain classes of languages. Further, by making a slight modification to the standard SN P systems, we introduce a new variant of SN P systems with a special I/O mode, called SN P modules, and study their computing power. It is demonstrated that, as string language acceptors and transducers, SN P modules can simulate several types of computing devices such as finite automata, a-finite transducers, and systolic trellis automata.
    Ministerio de Educación y Ciencia TIN2006-13425, Junta de Andalucía TIC-58

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships between the changes of the gain with respect to both mean and variance and the receptive fields obtained from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
    Comment: 24 pages, 4 figures, 1 supporting information
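    The empirical linear/nonlinear model "obtained by sampling" mentioned above can be illustrated with a short reverse-correlation sketch (the toy LN neuron and binning choices are assumptions, not the paper's protocol): the filter is estimated as the spike-triggered average of a white-noise stimulus, and the gain curve as a histogram ratio giving the firing probability as a function of the filtered stimulus.

```python
# Reverse correlation on a white-noise stimulus (toy example): recover the linear
# filter as the spike-triggered average and the gain curve as P(spike | filtered
# stimulus), estimated by a ratio of histograms.
import numpy as np

rng = np.random.default_rng(0)
T, D = 100_000, 20                         # time bins and filter length (in bins)
stimulus = rng.normal(0.0, 1.0, T)         # randomly varying (white-noise) drive

# Toy LN "neuron" with a known receptive field and a sigmoidal gain curve.
true_w = np.exp(-np.arange(D)[::-1] / 5.0)               # weights recent input most
X = np.stack([stimulus[t - D:t] for t in range(D, T)])   # stimulus history per bin
drive = X @ true_w
spike = rng.random(len(drive)) < 0.1 / (1.0 + np.exp(-3.0 * drive))

# Spike-triggered average recovers the filter up to scale.
sta = X[spike].mean(axis=0)
sta /= np.linalg.norm(sta)

# Empirical gain curve: firing probability as a function of the filtered stimulus.
proj = X @ sta
edges = np.linspace(proj.min(), proj.max(), 25)
gain = np.histogram(proj[spike], edges)[0] / np.maximum(np.histogram(proj, edges)[0], 1)

print("filter recovery (correlation with true filter):", np.corrcoef(sta, true_w)[0, 1])
print("estimated gain curve spans", gain.min(), "to", gain.max())
```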

    Spiking Neural P Systems. Recent Results, Research Topics

    After a quick introduction to spiking neural P systems (a class of P systems inspired by the way neurons communicate by means of spikes, electrical impulses of identical shape), and a presentation of typical results (in general, equivalence with Turing machines as number computing devices, but also other issues, such as the possibility of handling strings or infinite sequences), we present a long list of open problems and research topics in this area, also mentioning recent attempts to address some of them. The bibliography completes the information offered to the reader interested in this research area.
    Ministerio de Educación y Ciencia TIN2006-13425, Junta de Andalucía TIC-58

    Languages and P Systems: Recent Developments

    Languages have appeared in membrane computing from the very beginning, through their length sets or directly as sets of strings. We briefly recall this relationship here, with some details about certain recent developments. In particular, we discuss the possibility of associating a control word with a computation in a P system. An improvement of a result concerning the control words of spiking neural P systems is given: regular languages can be obtained as control words of such systems with only four neurons (and with usual extended rules: no more spikes are produced than consumed). Several research topics are pointed out.
    Junta de Andalucía P08 – TIC 0420

    Spiking neural P systems with extended rules: universality and languages

    We consider spiking neural P systems with rules allowed to introduce zero, one, or more spikes at the same time. The motivation comes both from constructing small universal systems and from generating strings; previous results from these areas are briefly recalled. Then, the computing power of the obtained systems is investigated, when considering them as number generating and as language generating devices. In the first case, a simpler proof of universality is obtained, while in the latter case we find characterizations of finite and recursively enumerable languages (without using any squeezing mechanism, as was necessary in the case of standard rules). The relationships with regular languages are also investigated.
    Ministerio de Educación y Ciencia TIN2005-09345-C03-01, Junta de Andalucía TIC-58

    The equivalence of information-theoretic and likelihood-based methods for neural dimensionality reduction

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron's probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as "single-spike information" to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex.
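    The stated equivalence can be checked numerically with a small sketch (the toy LNP data and histogram binning are assumptions): for any candidate filter, the empirical single-spike information computed from histograms equals the per-spike Poisson log-likelihood under a histogram-based nonlinearity, up to a constant that does not depend on the filter.

```python
# Numerical check of the MID / LNP maximum-likelihood equivalence on toy data:
# I_ss(k) and LL(k)/n_sp differ only by the filter-independent term log(n_sp/N) - 1.
import numpy as np

rng = np.random.default_rng(1)
N, D = 200_000, 12
stim = rng.normal(size=(N, D))                        # white-noise stimulus
k_true = rng.normal(size=D); k_true /= np.linalg.norm(k_true)
rate = 0.2 / (1.0 + np.exp(-4.0 * (stim @ k_true)))   # toy LNP neuron
y = rng.random(N) < rate                              # 0/1 spike count per time bin
n_sp = y.sum()

def info_and_loglik(k, n_bins=30):
    proj = stim @ k
    edges = np.linspace(proj.min(), proj.max(), n_bins + 1)
    p_raw = np.histogram(proj, edges)[0]
    p_spk = np.histogram(proj[y], edges)[0]
    keep = p_spk > 0
    # Empirical single-spike information (in nats, to match the log-likelihood).
    i_ss = np.sum((p_spk[keep] / n_sp) * np.log((p_spk[keep] / n_sp) / (p_raw[keep] / N)))
    # Poisson log-likelihood with the histogram (nonparametric) nonlinearity, per spike.
    ll_per_spike = (np.sum(p_spk[keep] * np.log(p_spk[keep] / p_raw[keep])) - n_sp) / n_sp
    return i_ss, ll_per_spike

k_rand = rng.normal(size=D); k_rand /= np.linalg.norm(k_rand)
for name, k in [("true filter", k_true), ("random filter", k_rand)]:
    i_ss, ll = info_and_loglik(k)
    print(f"{name}: I_ss = {i_ss:.4f}, LL/n_sp - (log(n_sp/N) - 1) = "
          f"{ll - (np.log(n_sp / N) - 1):.4f}")
```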

    Function-Theoretic Explanation and the Search for Neural Mechanisms

    A common kind of explanation in cognitive neuroscience might be called function-theoretic: with some target cognitive capacity in view, the theorist hypothesizes that the system computes a well-defined function (in the mathematical sense) and explains how computing this function constitutes (in the system’s normal environment) the exercise of the cognitive capacity. Recently, proponents of the so-called ‘new mechanist’ approach in philosophy of science have argued that a model of a cognitive capacity is explanatory only to the extent that it reveals the causal structure of the mechanism underlying the capacity. If they are right, then a cognitive model that resists a transparent mapping to known neural mechanisms fails to be explanatory. I argue that a function-theoretic characterization of a cognitive capacity can be genuinely explanatory even absent an account of how the capacity is realized in neural hardware.