Computing with cells: membrane systems - some complexity issues.
Membrane computing is a branch of natural computing which abstracts computing models from the structure and the functioning of the living cell. The main ingredients of membrane systems, called P systems, are (i) the membrane structure, which consists of a hierarchical arrangement of membranes delimiting compartments, where (ii) multisets of symbols, called objects, evolve according to (iii) sets of rules which are localised and associated with compartments. By applying the rules in a nondeterministic/deterministic maximally parallel manner, transitions between system configurations are obtained; a sequence of such transitions constitutes a computation of the system. Various ways of controlling the transfer of objects between membranes and the application of rules, as well as possibilities to dissolve, divide or create membranes, have been studied. Membrane systems have great potential for implementing massively concurrent systems in an efficient way that would allow us to solve currently intractable problems, once future biotechnology enables a practical bio-realization. In this paper we survey some interesting and fundamental complexity issues such as universality vs. nonuniversality, determinism vs. nondeterminism, membrane and alphabet size hierarchies, characterizations of context-sensitive languages and other language classes, and various notions of parallelism.
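The evolution step described above can be made concrete with a small sketch. The helper below is a hypothetical single-compartment simplification (no membrane hierarchy, no inter-membrane communication), assuming rules of the form lhs -> rhs over multisets; the name `apply_max_parallel` and the example rule are illustrations, not constructions from the survey.

```python
import random
from collections import Counter

def apply_max_parallel(multiset, rules, rng):
    """One maximally parallel evolution step in a single compartment.

    multiset: Counter of objects currently present.
    rules: list of (lhs, rhs) Counter pairs, read as lhs -> rhs.
    Rules are chosen nondeterministically and applied until none is
    applicable (maximal parallelism); products only become available
    in the next step.
    """
    remaining = Counter(multiset)
    produced = Counter()
    while True:
        choices = [(lhs, rhs) for lhs, rhs in rules
                   if all(remaining[o] >= n for o, n in lhs.items())]
        if not choices:
            break
        lhs, rhs = rng.choice(choices)
        remaining -= lhs
        produced += rhs
    return remaining + produced

# Rule a -> bb applied to {a: 3}: maximal parallelism forces all three
# copies of a to be rewritten in the same step.
step = apply_max_parallel(Counter({"a": 3}),
                          [(Counter({"a": 1}), Counter({"b": 2}))],
                          random.Random(0))
```

The loop makes the nondeterminism explicit: which applicable rule fires next is a random choice, but the step only ends when no rule can fire, which is exactly the maximality condition.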
Computations on Nondeterministic Cellular Automata
The work is concerned with the trade-offs between the dimension and the time and space complexity of computations on nondeterministic cellular automata (NCA). It is proved that:
1) Every $r$-dimensional NCA $\mathcal{A}$ computing a predicate with time complexity T(n) and space complexity S(n) can be simulated by an $(r+1)$-dimensional NCA and by an $(r-1)$-dimensional NCA, with corresponding time and space complexity bounds.
2) For any predicate $P$ and integer $r$, if $\mathcal{A}$ is a fastest $r$-dimensional NCA computing $P$ with time complexity T(n) and space complexity S(n), then these complexities satisfy a corresponding lower bound.
3) If $T_{r,P}(n)$ is the time complexity of a fastest $r$-dimensional NCA computing the predicate $P$, then $T_{r+1,P} = O((T_{r,P})^{1-r/(r+1)^2})$ and $T_{r-1,P} = O((T_{r,P})^{1+2/r})$.
Similar problems for deterministic CA are discussed.
Comment: 18 pages in AmSTeX, 3 figures in PostScript
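The model underlying these results can be sketched in a few lines: in a nondeterministic CA, one step of a configuration yields a set of possible successor configurations rather than a single one. The function `nca_step`, the quiescent-boundary convention, and the example rule table below are hypothetical illustrations for the 1-D case, not constructions from the paper.

```python
from itertools import product

def nca_step(config, rule):
    """Set of all successor configurations of a 1-D nondeterministic CA.

    config: tuple of cell states; rule maps a (left, centre, right)
    neighbourhood to the set of allowed next states.  Neighbourhoods
    missing from `rule` keep their centre state; borders see quiescent 0s.
    """
    padded = (0,) + tuple(config) + (0,)
    options = [rule.get(padded[i - 1:i + 2], {padded[i]})
               for i in range(1, len(padded) - 1)]
    return set(product(*options))

# Hypothetical rule: a quiescent cell next to a 1 may stay 0 or flip to 1.
rule = {(0, 0, 1): {0, 1}, (1, 0, 0): {0, 1}}
successors = nca_step((0, 1, 0), rule)
```

With two independent binary choices, `(0, 1, 0)` has four successors; a deterministic CA would have exactly one, which is the distinction the dimension/time trade-offs above are about.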
Linear Bounded Composition of Tree-Walking Tree Transducers: Linear Size Increase and Complexity
Compositions of tree-walking tree transducers form a hierarchy with respect
to the number of transducers in the composition. As main technical result it is
proved that any such composition can be realized as a linear bounded
composition, which means that the sizes of the intermediate results can be
chosen to be at most linear in the size of the output tree. This has
consequences for the expressiveness and complexity of the translations in the
hierarchy. First, if the computed translation is a function of linear size
increase, i.e., the size of the output tree is at most linear in the size of
the input tree, then it can be realized by just one, deterministic,
tree-walking tree transducer. For compositions of deterministic transducers it
is decidable whether or not the translation is of linear size increase. Second,
every composition of deterministic transducers can be computed in deterministic
linear time on a RAM and in deterministic linear space on a Turing machine,
measured in the sum of the sizes of the input and output tree. Similarly, every
composition of nondeterministic transducers can be computed in simultaneous
polynomial time and linear space on a nondeterministic Turing machine. Their
output tree languages are deterministic context-sensitive, i.e., can be
recognized in deterministic linear space on a Turing machine. The membership
problem for compositions of nondeterministic translations is nondeterministic
polynomial time and deterministic linear space. The membership problem for the
composition of a nondeterministic and a deterministic tree-walking tree
translation (for a nondeterministic IO macro tree translation) is log-space
reducible to a context-free language, whereas the membership problem for the
composition of a deterministic and a nondeterministic tree-walking tree
translation (for a nondeterministic OI macro tree translation) is possibly
NP-complete.
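What "composition" and "intermediate results" mean here can be illustrated with toy stand-ins. The sketch below uses trivial deterministic top-down transducers over tuple-encoded trees, which are far weaker than the tree-walking model of the paper; `relabel`, `mirror`, and `compose` are hypothetical names, and the point is only that a naive composition materialises every intermediate tree, the objects whose sizes the linear-bounded composition result controls.

```python
def relabel(t):
    """Toy deterministic top-down transducer: primes every label."""
    label, *kids = t
    return (label + "'", *[relabel(k) for k in kids])

def mirror(t):
    """Toy deterministic top-down transducer: reverses child order."""
    label, *kids = t
    return (label, *[mirror(k) for k in reversed(kids)])

def compose(*transducers):
    """Run the transducers in sequence, materialising each intermediate tree."""
    def run(t):
        for tau in transducers:
            t = tau(t)
        return t
    return run

tree = ("f", ("a",), ("b",))
out = compose(mirror, relabel)(tree)   # ("f'", ("b'",), ("a'",))
```

For genuine tree-walking transducers the intermediate trees can a priori be much larger than the final output; the paper's result says they can always be kept at most linear in the output size.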
Epistemic virtues, metavirtues, and computational complexity
I argue that considerations about computational complexity show that all finite agents need characteristics like those that have been called epistemic virtues. The necessity of these virtues follows in part from the nonexistence of shortcuts, or efficient ways of finding shortcuts, to cognitively expensive routines. It follows that agents must possess the capacities (metavirtues) of developing in advance the cognitive virtues they will need when time and memory are at a premium.
Towards a complexity theory for the congested clique
The congested clique model of distributed computing has been receiving
attention as a model for densely connected distributed systems. While there has
been significant progress on the side of upper bounds, we have very little in
terms of lower bounds for the congested clique; indeed, it is now known that
proving explicit congested clique lower bounds is as difficult as proving
circuit lower bounds.
In this work, we use various more traditional complexity-theoretic tools to
build a clearer picture of the complexity landscape of the congested clique:
-- Nondeterminism and beyond: We introduce the nondeterministic congested
clique model (analogous to NP) and show that there is a natural canonical
problem family that captures all problems solvable in constant time with
nondeterministic algorithms. We further generalise these notions by introducing
the constant-round decision hierarchy (analogous to the polynomial hierarchy).
-- Non-constructive lower bounds: We lift the prior non-uniform counting
arguments to a general technique for proving non-constructive uniform lower
bounds for the congested clique. In particular, we prove a time hierarchy
theorem for the congested clique, showing that there are decision problems of
essentially all complexities, both in the deterministic and nondeterministic
settings.
-- Fine-grained complexity: We map out relationships between various natural
problems in the congested clique model, arguing that a reduction-based
complexity theory currently gives us a fairly good picture of the complexity
landscape of the congested clique.
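The communication structure of the model can be sketched directly: in each round, every ordered pair of nodes exchanges one message of O(log n) bits. The function `clique_round` and the one-round summation example below are hypothetical illustrations of the model, not algorithms from the paper.

```python
def clique_round(outboxes, bandwidth=32):
    """One congested clique round: every ordered pair (i, j) of the n
    nodes exchanges a single message of at most `bandwidth` bits.

    outboxes[i][j] is the message node i sends to node j; the returned
    inboxes satisfy inboxes[j][i] == outboxes[i][j].
    """
    n = len(outboxes)
    for row in outboxes:
        assert len(row) == n, "each node must address every node"
        assert all(m.bit_length() <= bandwidth for m in row), "message too large"
    return [[outboxes[i][j] for i in range(n)] for j in range(n)]

# Hypothetical constant-round use: each node i holds x_i and broadcasts
# it; after one round every node can compute the global sum locally.
xs = [3, 1, 4, 1]
inboxes = clique_round([[x] * len(xs) for x in xs])
sums = [sum(msgs) for msgs in inboxes]
```

The bandwidth check is what makes the clique "congested": unbounded messages would trivialise the model, and the lower-bound questions above concern what constant- or few-round algorithms can do under this restriction.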
Pseudorandomness for Approximate Counting and Sampling
We study computational procedures that use both randomness and nondeterminism. The goal of this paper is to derandomize such procedures under the weakest possible assumptions.
Our main technical contribution allows one to "boost" a given hardness assumption: We show that if there is a problem in EXP that cannot be computed by poly-size nondeterministic circuits, then there is one which cannot be computed by poly-size circuits that make non-adaptive NP oracle queries. This in particular shows that the various assumptions used over the last few years by several authors to derandomize Arthur-Merlin games (i.e., show AM = NP) are in fact all equivalent.
We also define two new primitives that we regard as the natural pseudorandom objects associated with approximate counting and sampling of NP-witnesses. We use the "boosting" theorem and hashing techniques to construct these primitives using an assumption that is no stronger than that used to derandomize AM.
We observe that Cai's proof that S_2^P ⊆ ZPP^NP and the learning algorithm of Bshouty et al. can be seen as reductions to sampling that are not probabilistic. As a consequence they can be derandomized under an assumption which is weaker than the assumption that was previously known to suffice.
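The hashing idea behind approximate counting of witnesses can be illustrated with a toy estimator: hash the witness set into k-bit buckets with random GF(2)-linear maps, and find the largest k at which some witness still typically lands in the all-zero bucket, which is roughly log2 of the set size. This is a crude randomized sketch of the classical technique, not the paper's (derandomized) construction; `gf2_hash` and `estimate_log_size` are hypothetical names.

```python
import random

def gf2_hash(x, rows):
    """Linear hash over GF(2): output bit i is the parity of x & rows[i]."""
    return tuple(bin(x & r).count("1") % 2 for r in rows)

def estimate_log_size(witnesses, n, trials=200, seed=1):
    """Crude hashing-based estimate of log2 of a witness set's size.

    For decreasing k, test how often a random linear hash into k bits
    sends some witness to the all-zero bucket; the first k where this
    happens in at least half the trials is roughly log2 |witnesses|.
    """
    rng = random.Random(seed)
    for k in range(n, 0, -1):
        hits = 0
        for _ in range(trials):
            rows = [rng.getrandbits(n) for _ in range(k)]
            if any(gf2_hash(w, rows) == (0,) * k for w in witnesses):
                hits += 1
        if 2 * hits >= trials:
            return k
    return 0

# 16 nonzero "witnesses" among 10-bit strings: the estimate lands
# near log2(16) = 4.
k = estimate_log_size(set(range(1, 17)), 10)
```

A nonzero witness hits the zero bucket of a random k-bit linear hash with probability 2^-k, so hits become common precisely when 2^k drops to about the size of the set; the pseudorandom primitives in the paper replace the random hash choices to remove the randomness.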
Multi-Head Finite Automata: Characterizations, Concepts and Open Problems
Multi-head finite automata were introduced in (Rabin, 1964) and (Rosenberg,
1966). Since that time, a vast literature on computational and descriptional
complexity issues on multi-head finite automata documenting the importance of
these devices has been developed. Although multi-head finite automata are a
simple concept, their computational behavior can already be very complex and
leads to undecidable or even non-semi-decidable problems on these devices such
as, for example, emptiness, finiteness, universality, equivalence, etc. These
strong negative results trigger the study of subclasses and alternative
characterizations of multi-head finite automata for a better understanding of
the nature of non-recursive trade-offs and, thus, the borderline between
decidable and undecidable problems. In the present paper, we tour a fragment of
this literature.
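A minimal example of the extra power that multiple heads give is the language { a^n b^n : n >= 0 }, which no one-head finite automaton accepts but a two-head one does. The procedure below is an illustrative sketch of such an automaton's head movements (the name `two_head_accepts` and the phase structure are assumptions for the example, not taken from the paper).

```python
def two_head_accepts(word):
    """Sketch of a one-way two-head automaton for { a^n b^n : n >= 0 }.

    A single head cannot count unboundedly, but two heads can: head 2
    first walks to the end of the a-block, then both heads advance in
    lockstep, pairing each 'a' under head 1 with a 'b' under head 2.
    """
    n = len(word)
    h2 = 0
    while h2 < n and word[h2] == "a":   # phase 1: find the a/b border
        h2 += 1
    a_block = h2
    h1 = 0
    while h1 < a_block and h2 < n and word[h2] == "b":   # phase 2: lockstep
        h1 += 1
        h2 += 1
    # Accept iff every 'a' was paired with a 'b' and the input is used up.
    return h1 == a_block and h2 == n
```

Only finitely many control states and two input positions are maintained, yet the language is non-regular; the undecidability results cited above show how quickly this extra power makes analysis questions intractable.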