A Swiss Pocket Knife for Computability
This research is about operational- and complexity-oriented aspects of
classical foundations of computability theory. The approach is to re-examine
some classical theorems and constructions, but with new criteria for success
that are natural from a programming language perspective.
Three cornerstones of computability theory are the S-m-n theorem, Turing's
"universal machine", and Kleene's second recursion theorem. In today's
programming language parlance these are, respectively, partial evaluation,
self-interpretation, and reflection. In retrospect it is fascinating that
Kleene's 1938 proof is constructive and in essence builds a self-reproducing
program.
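To make this concrete: the self-reference at the heart of Kleene's construction is the same trick used in an ordinary quine, as in this minimal Python sketch (an illustration, not code from the paper):

    # A self-reproducing program: the text is split into a template s and
    # the data that fills it -- and the data is the template itself.
    s = 's = %r\nprint(s %% s)'
    print(s % s)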
Computability theory originated in the 1930s, long before the invention of
computers and programs. Its emphasis was on delimiting the boundaries of
computability. Some milestones include 1936 (Turing), 1938 (Kleene), 1967
(isomorphism of programming languages), 1985 (partial evaluation), 1989 (theory
implementation), 1993 (efficient self-interpretation) and 2006 (term register
machines).
The "Swiss pocket knife" of the title is a programming language that allows
efficient computer implementation of all three computability cornerstones,
emphasising the third: Kleene's second recursion theorem. We describe
experiments with a tree-based computational model aiming for both fast program
generation and fast execution of the generated programs.Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
Computability and Complexity from a Programming Perspective (MFPS Draft preview)
The author's forthcoming book proves central results in computability and
complexity theory from a programmer-oriented perspective. In addition to
giving more natural definitions, proofs and perspectives on classical theorems
by Cook, Hartmanis, Savitch, etc., some new results have come from the
alternative approach.
One: for a computation model more natural than the Turing machine, multiplying
the available problem-solving time provably increases problem-solving power
(in general not true for Turing machines). Another: the class of decision
problems solvable by Wadler's "treeless" programs [8], or by cons-free
programs on Lisp-like lists, is identical to the well-studied complexity class
LOGSPACE.
A third is that cons-free programs augmented with recursion can solve all and
only the PTIME problems. Paradoxically, these programs often run in exponential
time (not a contradiction, since they can be simulated in polynomial time by
memoization). This tradeoff indicates a tension between running time and
memory space which seems worth further investigation.
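As an illustration of that last point (a Python sketch, not code from the book): the procedure below decides bracket balance in cons-free style, passing only positions into its fixed input and never constructing new data. Naive evaluation of the interval recursion can take exponential time, yet only O(n^2) distinct calls (i, j) exist, so memoizing them (here with functools.lru_cache) yields a polynomial-time simulation.

    from functools import lru_cache

    def balanced(x: str) -> bool:
        # Decide whether x is a balanced bracket string. Cons-free style:
        # the recursion passes positions into the fixed input x and never
        # builds new data structures.
        @lru_cache(maxsize=None)
        def bal(i: int, j: int) -> bool:
            if i == j:               # the empty string is balanced
                return True
            if x[i] != '(':
                return False
            # Split as '(' A ')' B with A = x[i+1:k], B = x[k+1:j] balanced.
            return any(x[k] == ')' and bal(i + 1, k) and bal(k + 1, j)
                       for k in range(i + 1, j))
        return bal(0, len(x))

    assert balanced("(()())()") and not balanced("(()")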
A Survey on Continuous Time Computations
We provide an overview of theories of continuous time computation. These
theories allow us to understand both the hardness of questions related to
continuous time dynamical systems and the computational power of continuous
time analog models. We survey the existing models, summarize the known
results, and point to relevant references in the literature.
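For a flavour of such models (a sketch under standard assumptions, not drawn from this survey): on Shannon's General Purpose Analog Computer, wiring an integrator to its own output, i.e. x'(t) = x(t) with x(0) = 1, generates exp(t). The Python fragment below approximates that analog circuit by Euler integration.

    import math

    def integrate(f, x0, t_end, dt=1e-4):
        # Euler-integrate the continuous-time system x'(t) = f(x(t))
        # from x(0) = x0 up to time t_end.
        x, t = x0, 0.0
        while t < t_end:
            x += dt * f(x)
            t += dt
        return x

    # The circuit x' = x "computes" the exponential: x(1) is close to e.
    print(integrate(lambda x: x, 1.0, 1.0), math.e)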
The prospects for mathematical logic in the twenty-first century
The four authors present their speculations about the future developments of
mathematical logic in the twenty-first century. The areas of recursion theory,
proof theory and logic for computer science, model theory, and set theory are
discussed independently.
Comment: Association for Symbolic Logic
Computability and analysis: the legacy of Alan Turing
We discuss the legacy of Alan Turing and his impact on computability and
analysis.
Comment: 49 pages