Networks of polarized evolutionary processors are computationally complete
ABSTRACT
In this paper, we consider the computational power of a new variant of networks of evolutionary processors which seems more suitable for software and hardware implementation. Both the processors and the data navigating through the network are now considered to be polarized. While the polarization of every processor is predefined, the polarization of the data is computed dynamically by means of a valuation mapping; consequently, the protocol of communication is naturally defined in terms of this polarization. We show that tag systems can be simulated by these networks with a constant number of nodes, while Turing machines can be simulated, in a time-efficient way, by networks whose number of nodes depends linearly on the size of the tape alphabet of the Turing machine.
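The valuation-based polarization described above can be illustrated with a small sketch: each symbol is assigned an integer value, and a word's polarization is the sign of the sum of its symbols' values. All names and the toy valuation below are illustrative assumptions, not the paper's formal definitions.

```python
# Hypothetical sketch of valuation-based data polarization: a word's
# polarization is the sign (-1, 0, +1) of the summed symbol values,
# and a word only enters processors with matching polarization.

def polarization(word, valuation):
    """Sign of the summed symbol values of `word`."""
    total = sum(valuation[symbol] for symbol in word)
    return (total > 0) - (total < 0)

def admitted(word, node_polarization, valuation):
    """Communication protocol: data migrates only to matching nodes."""
    return polarization(word, valuation) == node_polarization

valuation = {"a": 1, "b": -1, "c": 0}
assert polarization("aab", valuation) == 1   # 1 + 1 - 1 > 0
assert polarization("abc", valuation) == 0   # 1 - 1 + 0 = 0
assert polarization("abb", valuation) == -1  # 1 - 1 - 1 < 0
assert admitted("aab", 1, valuation)
```

Note that, unlike a predefined processor polarization, this value changes dynamically as evolutionary operations rewrite the word.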
(Tissue) P Systems with Vesicles of Multisets
We consider tissue P systems working on vesicles of multisets with the very
simple operations of insertion, deletion, and substitution of single objects.
With the whole multiset being enclosed in a vesicle, sending it to a target
cell can be indicated in those simple rules working on the multiset. As
derivation modes we consider the sequential mode, where exactly one rule is
applied in a derivation step, and the set maximal mode, where in each
derivation step a non-extendable set of rules is applied. With the set maximal
mode, computational completeness can already be obtained with tissue P systems
having a tree structure, whereas tissue P systems even with an arbitrary
communication structure are not computationally complete when working in the
sequential mode. Adding polarizations (-1, 0, 1 are sufficient) allows for
obtaining computational completeness even for tissue P systems working in the
sequential mode.

Comment: In Proceedings AFL 2017, arXiv:1708.0622
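The two derivation modes contrasted above can be sketched on a toy multiset. This is an illustrative simplification under assumed semantics (greedy rule choice, each distinct rule firing at most once per set-maximal step), not the paper's formal definitions.

```python
# Toy sketch: insertion/deletion/substitution rules on a multiset in a
# vesicle, compared under the sequential and set-maximal derivation modes.
from collections import Counter

def apply_rule(multiset, rule):
    """Apply one rule to a copy of the multiset; None if inapplicable."""
    ms = Counter(multiset)
    kind = rule[0]
    if kind == "ins":                 # ins(a): insert one copy of a
        ms[rule[1]] += 1
    elif kind == "del":               # del(a): delete one copy of a
        if ms[rule[1]] == 0:
            return None
        ms[rule[1]] -= 1
    elif kind == "sub":               # sub(a, b): replace one a by b
        if ms[rule[1]] == 0:
            return None
        ms[rule[1]] -= 1
        ms[rule[2]] += 1
    return +ms  # drop zero counts

def sequential_step(multiset, rules):
    """Sequential mode: exactly one applicable rule fires."""
    for rule in rules:
        result = apply_rule(multiset, rule)
        if result is not None:
            return result
    return Counter(multiset)

def set_maximal_step(multiset, rules):
    """Set-maximal mode: a non-extendable set of rules fires, each
    distinct rule at most once (greedy choice for illustration)."""
    ms = Counter(multiset)
    for rule in rules:
        result = apply_rule(ms, rule)
        if result is not None:
            ms = result
    return ms

rules = [("sub", "a", "b"), ("del", "c"), ("ins", "d")]
start = Counter({"a": 2, "c": 1})
print(sequential_step(start, rules))   # one rule fires
print(set_maximal_step(start, rules))  # all three rules fire once
```

The gap between the two modes is exactly what the result above exploits: one step in the set-maximal mode can apply a whole non-extendable set of rules, which the sequential mode can only recover with additional control such as polarizations.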
An Architecture for Representing Biological Processes based on Networks of Bio-inspired Processors
In this paper we propose the use of Networks of Bio-inspired Processors (NBP) to model some biological phenomena within a computational framework. In particular, we propose the use of an extension of NBP named Network Evolutionary Processors Transducers to simulate chemical transformations of substances. Within a biological process, chemical transformations of substances are basic operations in the change of the state of the cell. Previously, it has been proved that NBP are computationally complete; moreover, they can solve NP-complete problems in linear time using massively parallel computation. In addition, we propose a multilayer architecture that will allow us to design models of biological processes related to cellular communication as well as their implications in the metabolic pathways. Subsequently, these models can be applied not only to biological-cellular instances but, possibly, also to configure instances of interactive processes in many other fields such as population interactions, ecological trophic networks, industrial ecosystems, etc.
Networks of picture processors
Abstract
The goal of this work is to survey in a systematic and uniform way the main results regarding different computational aspects of networks of picture processors viewed as rectangular picture accepting devices. We first consider networks with evolutionary picture processors only and discuss their computational power as well as a partial solution to the picture matching problem.
Two variants of these networks, which are differentiated by the protocol of communication, are also surveyed: networks with filtered connections and networks with polarized processors. Then we consider networks having both types of processors, i.e., evolutionary processors and hiding
processors, and provide a complete solution to the picture matching problem. Several results which follow from this solution are then presented. Finally, we discuss some possible directions for further research.
Generating networks of genetic processors
The Networks of Genetic Processors (NGPs) are non-conventional models of computation based on genetic operations over strings, namely the mutation and crossover operations established in genetic algorithms. Initially, they were proposed as acceptor machines, that is, decision-problem solvers; in that setting, it has been shown that they are universal computing models equivalent to Turing machines. In this work, we propose NGPs as enumeration devices and analyze their computational power. First, we define the model and propose its formulation as parallel genetic algorithms. Once the correspondence between the two formalisms has been established, we study the generative capacity of NGPs within the framework of formal language theory. We investigate the relationship between the number of processors of the model and its generative power. Our results show that the number of processors increases the generative capability of the model up to an upper bound, and that NGPs are universal models of computation when formulated as generation devices. This allows us to affirm that parallel genetic algorithms working under certain restrictions can be considered equivalent to Turing machines and, therefore, are universal models of computation.

This research was partially supported by TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No. 952215.

Campos Frances, M.; Sempere Luna, J. M. (2022). Generating networks of genetic processors. Genetic Programming and Evolvable Machines, 23(1):133-155. https://doi.org/10.1007/s10710-021-09423-7
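The two string operations the NGP model builds on can be sketched generically. This is a toy illustration of mutation and single-point crossover over strings, as used in genetic algorithms, not the formal NGP operation definitions from the paper.

```python
# Toy versions of the two genetic operations over strings: point
# mutation and crossover with independent cut points in each parent.
import random

def mutate(word, alphabet, rng):
    """Replace one randomly chosen symbol by a random alphabet symbol."""
    i = rng.randrange(len(word))
    return word[:i] + rng.choice(alphabet) + word[i + 1:]

def crossover(u, v, rng):
    """Exchange suffixes of u and v at randomly chosen cut points."""
    i, j = rng.randrange(len(u) + 1), rng.randrange(len(v) + 1)
    return u[:i] + v[j:], v[:j] + u[i:]

rng = random.Random(0)
print(mutate("abba", "ab", rng))
print(crossover("aaaa", "bbbb", rng))
```

In an NGP, processors repeatedly apply such operations to their local sets of strings and exchange the results over the network, which is what gives the generated languages their power.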
Toward Fast and Reliable Potential Energy Surfaces for Metallic Pt Clusters by Hierarchical Delta Neural Networks.
Data-driven machine learning force fields (MLFs) are increasingly popular in atomistic simulations; they exploit machine learning methods to predict energies and forces for unknown structures based on the knowledge learned from an existing reference database, usually built from density functional theory calculations. One main drawback of MLFs is that physical laws are not incorporated into the machine learning models; instead, MLFs are designed to be very flexible so as to simulate complex quantum-chemistry potential energy surfaces (PES). In general, MLFs have poor transferability, and hence a very large training set is required to span the entire target feature space and obtain a reliable MLF. This procedure becomes more troublesome when the PES is complicated, with a large number of degrees of freedom, in which case building a large database is inevitable and very expensive, especially when accurate but costly exchange-correlation functionals have to be used. In this manuscript, we apply a high-dimensional neural network potential (HDNNP) to Pt clusters of sizes from 6 to 20 atoms as one example. Our standard level of energy calculation is DFT GGA (PBE) using a plane-wave basis set. We introduce an approximate but fast level with the PBE functional and a minimal atomic orbital basis set; a more accurate but expensive level, using a hybrid functional or a nonlocal vdW functional and a plane-wave basis set, is then reliably predicted by learning the difference with an HDNNP. The results show that such a differential approach (named ΔHDNNP) can deliver very accurate predictions (error < 10 meV/atom) in reference to converged basis-set energies as well as more accurate but expensive xc functionals. The overall speedup can be as large as 900 for a 20-atom Pt cluster.
More importantly, ΔHDNNP shows much better transferability due to the intrinsic smoothness of the delta potential energy surface; accordingly, one can use a much smaller training set to obtain better accuracy than with the conventional HDNNP. A multilayer ΔHDNNP is thus proposed to obtain very accurate predictions relative to expensive nonlocal vdW functional calculations, with the required training set further reduced. The approach can easily be generalized to any other machine learning method and opens a path to studying the structure and dynamics of Pt clusters and nanoparticles.
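The Δ-learning idea underlying ΔHDNNP can be sketched with synthetic data: instead of fitting the expensive energy directly, one fits the smoother difference between the cheap and expensive levels and predicts E_expensive ≈ E_cheap + Δ_model(descriptors). The data and the linear least-squares "model" below are toy stand-ins for the DFT levels and the neural network.

```python
# Minimal numpy sketch of delta learning: fit the cheap-to-expensive
# energy difference, then add the learned correction to the cheap level.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # structural descriptors (toy)
E_cheap = X @ rng.normal(size=5)         # fast, approximate energy level
delta_true = 0.1 * X[:, 0]               # small, smooth correction
E_expensive = E_cheap + delta_true       # accurate, costly energy level

# Fit the delta with ordinary least squares; a real ΔHDNNP would train
# a high-dimensional neural network on this difference instead.
coef, *_ = np.linalg.lstsq(X, E_expensive - E_cheap, rcond=None)
E_pred = E_cheap + X @ coef

max_err = np.max(np.abs(E_pred - E_expensive))
print(f"max prediction error: {max_err:.2e}")
```

Because the correction is small and smooth, far fewer training points are needed to fit it than to fit the full expensive PES, which is the source of the data efficiency reported above.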