On the possible Computational Power of the Human Mind
The aim of this paper is to address the question: Can an artificial neural
network (ANN) model be used as a possible characterization of the power of the
human mind? We discuss the possible relationship between such a model and its
natural counterpart. A characterization of the different power capabilities of
the mind is suggested in terms of the information contained in it (via its
computational complexity) or achievable by it. This characterization takes
advantage of recent results on natural neural networks (NNN) and on the
computational power of arbitrary artificial neural networks (ANN). The possible
acceptance of neural networks as the model of the human mind's operation makes
the aforementioned question quite relevant.
Comment: Complexity, Science and Society Conference, 2005, University of
Liverpool, UK. 23 pages.
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize the known results,
and point to relevant references in the literature.
Turing machines can be efficiently simulated by the General Purpose Analog Computer
The Church-Turing thesis states that any sufficiently powerful computational
model which captures the notion of algorithm is computationally equivalent to
the Turing machine. This equivalence usually holds both at a computability
level and at a computational complexity level modulo polynomial reductions.
However, the situation is less clear in what concerns models of computation
using real numbers, and no analog of the Church-Turing thesis exists for this
case. Recently it was shown that some models of computation with real numbers
were equivalent from a computability perspective. In particular, it was shown
that Shannon's General Purpose Analog Computer (GPAC) is equivalent to
Computable Analysis. However, little is known about what happens at a
computational complexity level. In this paper we shed some light on the
connections between these two models at the computational complexity level, by
showing that, modulo polynomial reductions, computations of Turing machines can
be simulated by GPACs without using more (space) resources than those used in
the original Turing computation, as long as we restrict ourselves to bounded
computations. In other words, computations done by the GPAC are as
space-efficient as computations done in the context of Computable Analysis.
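One standard characterization of the GPAC, due to Graça and Costa and the form
usually used in such equivalence results, is via polynomial initial value
problems: a function is GPAC-generable exactly when it appears as a component
y_i of the solution of a system

    y'(t) = p(t, y(t)),   y(t_0) = y_0,

where p is a vector of polynomials.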
Most Programs Stop Quickly or Never Halt
Since many real-world problems arising in the fields of compiler
optimisation, automated software engineering, formal proof systems, and so
forth are equivalent to the Halting Problem--the most notorious undecidable
problem--there is a growing interest, not only academically, in understanding
the problem better and in providing alternative solutions. Halting computations
can be recognised by simply running them; the main difficulty is to detect
non-halting programs. Our approach is to have the probability space extend over
both space and time and to consider the probability that a random N-bit
program has halted by a random time. We postulate an a priori computable
probability distribution on all possible runtimes and we prove that given an
integer k>0, we can effectively compute a time bound T such that the
probability that an N-bit program will eventually halt given that it has not
halted by T is smaller than 2^{-k}. We also show that the set of halting
programs (which is computably enumerable, but not computable) can be written as
a disjoint union of a computable set and a set of effectively vanishing
probability. Finally, we show that "long" runtimes are effectively rare. More
formally, the set of times at which an N-bit program can stop after time
2^{N+constant} has effectively zero density.
Comment: Shortened abstract and changed format of references to match the Adv.
Appl. Math guidelines.
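To make the flavor of this result concrete, here is a small empirical sketch.
The paper works with prefix-free binary programs for a universal Turing
machine; the toy one-counter machine below, with its instruction set, program
length, and step cutoff, is invented purely for illustration:

    import random

    # Toy stand-in for "random N-bit programs": a one-counter machine
    # with conditional and unconditional jumps.  This is NOT the
    # prefix-free Turing machine model used in the paper.
    def random_program(n, rng):
        prog = []
        for _ in range(n):
            op = rng.choice(("inc", "dec", "jz", "jmp"))
            arg = rng.randrange(n) if op in ("jz", "jmp") else None
            prog.append((op, arg))
        return prog

    def run(prog, max_steps):
        """Return the halting time, or None if still running at max_steps."""
        pc, c = 0, 0
        for t in range(max_steps):
            if pc >= len(prog):
                return t          # fell off the end of the program: halted
            op, arg = prog[pc]
            if op == "inc":
                c += 1
            elif op == "dec":
                c = max(0, c - 1)
            elif op == "jz" and c == 0:
                pc = arg
                continue
            elif op == "jmp":
                pc = arg
                continue
            pc += 1
        return None               # cutoff reached: treated as non-halting

    rng = random.Random(0)
    times = [run(random_program(16, rng), 2000) for _ in range(5000)]
    halting = [t for t in times if t is not None]
    print(f"halted within cutoff: {len(halting)}/{len(times)}")
    for bound in (10, 100, 1000):
        frac = sum(t <= bound for t in halting) / len(halting)
        print(f"fraction of halters stopped by step {bound}: {frac:.3f}")

In this toy model, almost every program that halts at all does so within the
first few dozen steps; the paper proves a precise, effective version of this
phenomenon for its actual program model.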
P Systems: from Anti-Matter to Anti-Rules
The concept of a matter object being annihilated when meeting its corresponding
anti-matter object is carried over to rule labels as objects and anti-rule
labels as the corresponding annihilation counterparts in P systems. In the
presence of a corresponding anti-rule object, annihilation of a rule object
happens before the rule that the rule object represents can be applied.
Applying a rule consumes the corresponding rule object, but may also produce
new rule objects as well as anti-rule objects. Computational completeness in
this setting can then be obtained in a one-membrane P system with
non-cooperative rules and rule / anti-rule annihilation rules, when using one
of the standard maximally parallel derivation modes as well as any of the
maximally parallel set derivation modes (i.e., non-extendable (multi)sets of
rules, (multi)sets with a maximal number of rules, or (multi)sets of rules
affecting the maximal number of objects). When using the sequential derivation
mode, at least the computational power of partially blind register machines is
obtained.
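As a toy illustration of the mechanism, restricted to the sequential
derivation mode (the rules, object names, and start configurations below are
invented for this example, not taken from the paper):

    from collections import Counter

    # rules: label -> (multiset consumed, multiset produced).
    # Rule labels ("r1") occur as ordinary objects; "~r1" denotes the
    # corresponding anti-rule object.
    rules = {
        "r1": (Counter({"a": 1, "r1": 1}), Counter({"b": 1, "r2": 1})),
        "r2": (Counter({"b": 1, "r2": 1}), Counter({"a": 1, "~r1": 1})),
    }

    def annihilate(conf):
        # Rule / anti-rule annihilation happens before any rule applies.
        for label in rules:
            k = min(conf[label], conf["~" + label])
            conf[label] -= k
            conf["~" + label] -= k
        return +conf              # drop zero counts

    def step(conf):
        conf = annihilate(Counter(conf))
        for label, (lhs, rhs) in rules.items():
            if all(conf[o] >= n for o, n in lhs.items()):
                conf -= lhs       # applying a rule consumes its rule object ...
                conf += rhs       # ... and may produce rule/anti-rule objects
                return +conf      # sequential mode: one rule per step
        return conf               # no rule applicable: configuration halts

    for start in (Counter({"a": 1, "r1": 1}),
                  Counter({"a": 1, "r1": 1, "~r1": 1})):
        conf = start
        trace = [dict(conf)]
        for _ in range(3):
            conf = step(conf)
            trace.append(dict(conf))
        print(trace)

In the second run, the anti-rule object ~r1 annihilates the rule object r1
before r1 can fire, so no rule is ever applicable: annihilation is accounted
for before rule application, exactly as in the definition above.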
The Busy Beaver Competition: a historical survey
Tibor Rado defined the Busy Beaver Competition in 1962. He used Turing
machines to give explicit definitions for some functions that are not
computable and grow faster than any computable function. He put forward the
problem of computing the values of these functions on the numbers 1, 2, 3, ...
More and more powerful computers have made it possible to compute lower bounds
for these values. In 1988, Brady extended the definitions to functions of two
variables. We give a historical survey of these works. The successive record
holders in the Busy Beaver Competition are displayed, with their discoverers,
the date they were found, and, for some of them, an analysis of their behavior.
Comment: 70 pages.
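For reference, Rado's two functions, for n-state, 2-symbol Turing machines
started on a blank tape, can be stated as

    Sigma(n) = max { number of 1s left on the tape by a halting n-state machine },
    S(n)     = max { number of steps taken by a halting n-state machine },

where each maximum ranges over machines that eventually halt. Both functions
grow faster than any computable function, which is why the competition
consists of establishing ever better lower bounds for particular values.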