Quantum Random Self-Modifiable Computation
Among the fundamental questions in computer science, at least two have a deep
impact on mathematics. What can computation compute? How many steps does a
computation require to solve an instance of the 3-SAT problem? Our work
addresses the first question, by introducing a new model called the ex-machine.
The ex-machine executes Turing machine instructions and two special types of
instructions. Quantum random instructions are physically realizable with a
quantum random number generator. Meta instructions can add new states and add
new instructions to the ex-machine. A countable set of ex-machines is
constructed, each with a finite number of states and instructions; each
ex-machine can compute a Turing incomputable language, whenever the quantum
randomness measurements behave like unbiased Bernoulli trials. In 1936, Alan
Turing posed the halting problem for Turing machines and proved that this
problem is unsolvable for Turing machines. Consider an enumeration E_a(i) =
(M_i, T_i) of all Turing machines M_i and initial tapes T_i. Does there exist
an ex-machine X that has at least one evolutionary path X --> X_1 --> X_2 -->
... --> X_m, so that at the mth stage ex-machine X_m can correctly determine,
for 0 <= i <= m, whether M_i's execution on tape T_i eventually halts? We demonstrate
an ex-machine Q(x) that has one such evolutionary path. The existence of this
evolutionary path suggests that David Hilbert was not misguided to propose in
1900 that mathematicians search for finite processes to help construct
mathematical proofs. Our refinement is that we cannot use a fixed computer
program that behaves according to a fixed set of mechanical rules. We must
pursue methods that exploit randomness and self-modification so that the
complexity of the program can increase as it computes.

Comment: 50 pages, 3 figures
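The abstract's ex-machine can be sketched very loosely as follows. This is an illustrative toy, not the paper's formalism: the instruction encoding, state numbering, and action names are my own, and a seeded pseudorandom bit stands in for the quantum random number generator.

```python
import random

class ExMachine:
    """Loose sketch of an ex-machine: ordinary Turing instructions plus
    two special kinds.  A 'rand' instruction writes a random bit (a
    pseudorandom bit here stands in for a quantum random number
    generator), and a 'meta' instruction appends a new state and a new
    instruction to the machine itself while it runs."""

    def __init__(self, instructions, max_state):
        self.instructions = dict(instructions)  # (state, symbol) -> action
        self.max_state = max_state              # highest state index so far

    def run(self, tape, state=0, head=0, max_steps=1000, rng=None):
        rng = rng or random.Random(0)
        cells = dict(enumerate(tape))           # sparse tape, blank = "0"
        for _ in range(max_steps):
            sym = cells.get(head, "0")
            action = self.instructions.get((state, sym))
            if action is None:
                break                           # no instruction: halt
            if action[0] == "std":              # write a fixed symbol
                _, write, move, nxt = action
                cells[head] = write
            elif action[0] == "rand":           # write a random bit
                _, move, nxt = action
                cells[head] = str(rng.randint(0, 1))
            elif action[0] == "meta":           # self-modification: grow
                _, move, nxt, new_action = action
                self.max_state += 1
                self.instructions[(self.max_state, sym)] = new_action
            head += move
            state = nxt
        return state, "".join(cells[i] for i in sorted(cells))
```

Running a machine whose program contains a `meta` instruction leaves it with more states and instructions than it started with, which is the sense in which "the complexity of the program can increase as it computes."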
Signatures of Infinity: Nonergodicity and Resource Scaling in Prediction, Complexity, and Learning
We introduce a simple analysis of the structural complexity of
infinite-memory processes built from random samples of stationary, ergodic
finite-memory component processes. Such processes are familiar from the
well-known multi-armed bandit problem. We contrast our analysis with
computation-theoretic and statistical inference approaches to understanding
their complexity. The result is an alternative view of the relationship between
predictability, complexity, and learning that highlights the distinct ways in
which informational and correlational divergences arise in complex ergodic and
nonergodic processes. We draw out consequences for the resource divergences
that delineate the structural hierarchy of ergodic processes and for processes
that are themselves hierarchical.

Comment: 8 pages, 1 figure; http://csc.ucdavis.edu/~cmg/compmech/pubs/soi.pd
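The construction the abstract describes, an infinite-memory process built from a random sample of ergodic components, can be illustrated with the simplest possible component: a biased coin. This is a minimal sketch under that assumption; the function name and parameters are my own, not the paper's.

```python
import random

def nonergodic_mixture(biases, n, rng=None):
    """Build a nonergodic process by sampling one stationary, ergodic
    component process (here a biased coin, as in the bandit setting)
    once at time zero, then emitting from that fixed component forever
    after.  The mixture is stationary, but each realization's time
    average converges to its component's bias, not the ensemble mean."""
    rng = rng or random.Random(0)
    p = rng.choice(biases)        # component chosen once, fixed for all time
    return p, [1 if rng.random() < p else 0 for _ in range(n)]
```

With biases [0.1, 0.9], long sample paths have empirical means near 0.1 or near 0.9, never near the ensemble mean 0.5; this gap between time averages and ensemble averages is the ergodic/nonergodic distinction the abstract draws.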