Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability
Previously referred to as `miraculous' in the scientific literature because
of its powerful properties and its wide applicability as an optimal solution to
the problem of induction/inference, Algorithmic Probability (AP) and the
associated Universal Distribution (or approximations to them) are, or should
be, of the greatest importance in science. Here we investigate the emergence, the rates of
emergence and convergence, and the Coding-theorem like behaviour of AP in
Turing-subuniversal models of computation. We investigate empirical
distributions of computing models in the Chomsky hierarchy. We introduce
measures of algorithmic probability and algorithmic complexity based upon
resource-bounded computation, in contrast to the thoroughly investigated
distributions produced from the output of Turing machines. This
approach allows for numerical approximations to algorithmic
(Kolmogorov-Chaitin) complexity-based estimations at each of the levels of a
computational hierarchy. We demonstrate that all these estimations are
correlated in rank and that they converge both in rank and values as a function
of computational power, despite fundamental differences between computational
models. In the context of natural processes that operate below the Turing
universal level because of finite resources and physical degradation, the
investigation of natural biases stemming from algorithmic rules may shed light
on the distribution of outcomes. We show that up to 60% of the
simplicity/complexity bias in distributions produced even by the weakest of the
computational models can be accounted for by Algorithmic Probability in its
approximation to the Universal Distribution.
Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
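The resource-bounded estimation scheme the abstract describes can be sketched as follows. This is an illustrative toy, not the paper's actual models: in place of a Turing machine or a genuine Chomsky-hierarchy automaton, a hypothetical one-bit finite-state transducer stands in for a weak, resource-bounded model. All binary programs up to a fixed length are enumerated, the empirical output distribution D plays the role of an approximation to the Universal Distribution, and a Coding-theorem-like estimate assigns complexity K(s) ≈ -log2 D(s), so outputs produced by many programs (simple outputs) receive low complexity estimates.

```python
# Sketch: Coding-theorem-like complexity estimates from a resource-bounded
# model. The transducer below is a made-up stand-in for the weak models in
# the paper, chosen only so the enumeration terminates trivially.
from collections import Counter
from math import log2

def run_transducer(program: str) -> str:
    """Toy finite-state transducer: one internal bit of state, flipped on
    each '1' in the program; emits the current state at every step."""
    state, out = 0, []
    for bit in program:
        if bit == "1":
            state ^= 1
        out.append(str(state))
    return "".join(out)

def empirical_distribution(max_len: int) -> dict:
    """Frequency of each output over all programs of length 1..max_len;
    normalizing the counts gives the AP-style distribution D."""
    counts, total = Counter(), 0
    for length in range(1, max_len + 1):
        for n in range(2 ** length):
            program = format(n, f"0{length}b")
            counts[run_transducer(program)] += 1
            total += 1
    return {s: c / total for s, c in counts.items()}

D = empirical_distribution(8)
# Coding-theorem-like estimate: frequent outputs get low complexity.
K = {s: -log2(p) for s, p in D.items()}
```

The same recipe applies at every level of the hierarchy by swapping in a stronger model; the paper's point is that the resulting rankings correlate across models.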
On the Complexity of Value Iteration
Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation which is naturally associated with the MDP. At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this paper, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXPTIME-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. To obtain this main result, we develop several stepping stones that yield results of independent interest. For instance, we show that it is EXPTIME-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators. We also provide new complexity results for the bounded halting problem in linear-update counter machines.
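The recurrence the abstract refers to, V_{k+1}(s) = max_a [ r(s,a) + Σ_{s'} P(s'|s,a) V_k(s') ], can be sketched directly. The function and the two-state MDP below are illustrative assumptions, not from the paper; note that this naive loop runs n iterations, which is exponential in the size of a horizon n given in binary, which is exactly why the problem's complexity is nontrivial.

```python
# Sketch of n-step value iteration on a tiny, made-up MDP.
def value_iteration(states, actions, P, r, n):
    """Maximal n-step payoff V and a horizon-n greedy policy.
    P[s][a] is a list of (next_state, prob); r[s][a] is the reward."""
    V = {s: 0.0 for s in states}
    policy = {}
    for _ in range(n):
        newV = {}
        for s in states:
            best_a, best_v = None, float("-inf")
            for a in actions[s]:
                v = r[s][a] + sum(p * V[t] for t, p in P[s][a])
                if v > best_v:
                    best_a, best_v = a, v
            newV[s] = best_v
            policy[s] = best_a  # greedy action w.r.t. remaining horizon
        V = newV
    return V, policy

# Two-state example: in A, "stay" yields a small sure reward while "go"
# gambles on reaching the high-reward absorbing state B.
states = ["A", "B"]
actions = {"A": ["stay", "go"], "B": ["stay"]}
P = {"A": {"stay": [("A", 1.0)], "go": [("B", 0.5), ("A", 0.5)]},
     "B": {"stay": [("B", 1.0)]}}
r = {"A": {"stay": 1.0, "go": 0.0}, "B": {"stay": 3.0}}
V, policy = value_iteration(states, actions, P, r, 10)
```

For short horizons the greedy policy in A is "stay"; once the horizon is long enough for the gamble to pay off, it switches to "go", illustrating how the optimal policy depends on the horizon n.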
Depth, Highness and DNR degrees
We study Bennett deep sequences in the context of recursion theory; in
particular we investigate the notions of O(1)-deepK, O(1)-deepC, order-deepK
and order-deepC sequences. Our main results are that Martin-Löf random sets
are not order-deepC, that every many-one degree contains a set which is not
O(1)-deepC, that O(1)-deepC sets and order-deepK sets have high or DNR Turing
degree, and that no K-trivial set is O(1)-deepK.
Comment: journal version, dmtc
The quantum measurement problem and physical reality: a computation theoretic perspective
Is the universe computable? If yes, is it computationally a polynomial place?
In standard quantum mechanics, which permits infinite parallelism and the
infinitely precise specification of states, a negative answer to both questions
is not ruled out. On the other hand, empirical evidence suggests that
NP-complete problems are intractable in the physical world. Likewise,
computational problems known to be algorithmically uncomputable do not seem to
be computable by any physical means. We suggest that this close correspondence
between the efficiency and power of abstract algorithms on the one hand, and
physical computers on the other, finds a natural explanation if the universe is
assumed to be algorithmic; that is, that physical reality is the product of
discrete sub-physical information processing equivalent to the actions of a
probabilistic Turing machine. This assumption can be reconciled with the
observed exponentiality of quantum systems at microscopic scales, and the
consequent possibility of implementing Shor's quantum polynomial time algorithm
at that scale, provided the degree of superposition is intrinsically, finitely
upper-bounded. If this bound is associated with the quantum-classical divide
(the Heisenberg cut), a natural resolution to the quantum measurement problem
arises. From this viewpoint, macroscopic classicality is evidence that the
universe is in BPP, and both questions raised above receive affirmative
answers. A recently proposed computational model of quantum measurement, which
relates the Heisenberg cut to the discreteness of Hilbert space, is briefly
discussed. A connection to quantum gravity is noted. Our results are compatible
with the philosophy that mathematical truths are independent of the laws of
physics.
Comment: Talk presented at "Quantum Computing: Back Action 2006", IIT Kanpur, India, March 200
Finite state verifiers with constant randomness
We give a new characterization of NL as the class of languages
whose members have certificates that can be verified with small error in
polynomial time by finite state machines that use a constant number of random
bits, as opposed to its conventional description in terms of deterministic
logarithmic-space verifiers. It turns out that allowing two-way interaction
with the prover does not change the class of verifiable languages, and that no
polynomially bounded amount of randomness is useful for constant-memory
computers when used as language recognizers, or public-coin verifiers. A
corollary of our main result is that the class of outcome problems
corresponding to O(log n)-space bounded games of incomplete information where
the universal player is allowed a constant number of moves equals NL.
Comment: 17 pages. An improved version
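One concrete consequence of using only a constant number of random bits, as in the verifiers above, is that the randomness can be eliminated by brute force: with k = O(1) coins there are only 2^k coin sequences, so the exact acceptance probability is an average over a constant number of deterministic runs. The verifier below is a hypothetical toy built to illustrate this enumeration, not a construction from the paper: once its two coins are fixed, it makes a single deterministic streaming pass comparing certificate symbols against input positions selected by the coins.

```python
# Sketch: derandomizing a constant-randomness verifier by enumerating
# all coin sequences. The toy verifier and its coin semantics are
# illustrative assumptions.
from itertools import product

K_RANDOM_BITS = 2

def toy_verifier(input_str, certificate, coins):
    """Deterministic finite-state-style pass once the coins are fixed:
    the coins select a start offset and a stride, and the verifier
    checks successive certificate symbols at those input positions."""
    offset = coins[0]       # first coin: start at position 0 or 1
    stride = 2 + coins[1]   # second coin: check every 2nd or 3rd symbol
    j = 0
    for i in range(offset, len(input_str), stride):
        if j >= len(certificate) or certificate[j] != input_str[i]:
            return False
        j += 1
    return True

def acceptance_probability(input_str, certificate):
    """Exact acceptance probability: average over all 2^k coin outcomes."""
    runs = [toy_verifier(input_str, certificate, coins)
            for coins in product([0, 1], repeat=K_RANDOM_BITS)]
    return sum(runs) / len(runs)
```

Because the enumeration is over a constant-size set, exactness comes for free; this is the elementary reason constant randomness adds no power to deterministic language recognition by such machines, though, as the abstract shows, it does change what can be *verified*.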