
    Empirical Encounters with Computational Irreducibility and Unpredictability

    There are several forms of irreducibility in computing systems, ranging from undecidability to intractability to nonlinearity. This paper is an exploration of the conceptual issues that have arisen in the course of investigating speed-up and slowdown phenomena in small Turing machines. We present the results of a test that may spur experimental approaches to the notion of computational irreducibility. The test involves a systematic attempt to outrun the computation of a large number of small Turing machines (all 3- and 4-state, 2-symbol machines) by means of integer sequence prediction using a specialized function finder program. This massive experiment prompts an investigation into rates of convergence of decision procedures and the decidability of sets, in addition to a discussion of the (un)predictability of deterministic computing systems in practice. We think this investigation constitutes a novel approach to the discussion of an epistemological question in the context of a computer simulation, and thus represents an interesting exploration at the boundary between philosophical concerns and computational experiments. (Comment: 18 pages, 4 figures)
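
    As a toy illustration of the experimental setup, the sketch below simulates a small 2-symbol Turing machine and records an integer sequence that a function finder would then try to predict. It is written in Python; the specific transition table and the choice of extracted sequence (the number of 1s on the tape) are assumptions made for this sketch, not the paper's actual encoding.

```python
# Simulate a small 2-symbol Turing machine on a blank two-way tape and
# record an integer sequence derived from each step of the run.

def run_tm(transitions, steps):
    """transitions maps (state, symbol) -> (write, move, next_state),
    with move in {-1, +1}. State 0 is the start state."""
    tape = {}              # sparse tape, default symbol 0
    head, state = 0, 0
    sequence = []
    for _ in range(steps):
        symbol = tape.get(head, 0)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += move
        sequence.append(sum(tape.values()))  # number of 1s on the tape
    return sequence

# An arbitrary 2-state, 2-symbol machine, chosen only for illustration.
tm = {
    (0, 0): (1, +1, 1),
    (0, 1): (1, -1, 1),
    (1, 0): (1, -1, 0),
    (1, 1): (0, +1, 0),
}
print(run_tm(tm, 20))  # the sequence handed to the function finder
```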

    On Low for Speed Oracles

    Relativizing computations of Turing machines to an oracle is a central concept in the theory of computation, both in complexity theory and in computability theory. Inspired by lowness notions from computability theory, Allender introduced the concept of "low for speed" oracles. An oracle A is low for speed if relativizing to A has essentially no effect on computational complexity, meaning that if a decidable language can be decided in time f(n) with access to oracle A, then it can be decided in time poly(f(n)) without any oracle. The existence of non-computable oracles of this kind was later proven by Bayer and Slaman, who even constructed a computably enumerable one, and who exhibited a number of properties of these oracles as well as interesting connections with computability theory. In this paper, we pursue this line of research, answering the questions left open by Bayer and Slaman and giving further evidence that the class of low for speed oracles has a very rich structure.
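
    In symbols, the lowness property sketched above admits the following formalization (the notation is ours and may differ cosmetically from the paper's):

```latex
% A is low for speed if, for every decidable language L and every
% time bound f, deciding L in time f(n) with oracle A implies
% deciding L with only polynomial overhead and no oracle:
\[
  L \in \mathrm{DTIME}^{A}\!\bigl(f(n)\bigr)
  \;\Longrightarrow\;
  L \in \mathrm{DTIME}\!\bigl(\mathrm{poly}(f(n))\bigr)
\]
```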

    Ten years of speedup (prepublication)


    The complexity of the word problems for commutative semigroups and polynomial ideals

    Any decision procedure for the word problems for commutative semigroups and polynomial ideals inherently requires computational storage space growing exponentially with the size of the problem instance to which the procedure is applied. This bound is achieved by a simple procedure for the semigroup problem.
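
    For concreteness, the naive Python sketch below shows what a brute-force procedure for the commutative-semigroup word problem looks like: words are multisets of generators, and each relation rewrites a sub-multiset in place. Its breadth-first search can visit exponentially many words, which is consistent with the space bound stated above. The multiset encoding, the step budget, and the example relation are all assumptions of this illustration.

```python
# Brute-force search for the commutative-semigroup word problem:
# decide whether u = v under symmetric rewriting by the relations.
from collections import Counter, deque

def applies(lhs, word):
    return all(word[g] >= k for g, k in lhs.items())

def rewrite(word, lhs, rhs):
    out = word - lhs       # remove the left-hand side...
    out.update(rhs)        # ...and add the right-hand side
    return out

def equal_in_semigroup(u, v, relations, max_steps=10_000):
    start, goal = Counter(u), Counter(v)
    seen = {frozenset(start.items())}
    queue = deque([start])
    symmetric = relations + [(r, l) for l, r in relations]
    for _ in range(max_steps):
        if not queue:
            return False   # search space exhausted: words are not equal
        word = queue.popleft()
        if word == goal:
            return True
        for lhs, rhs in symmetric:
            if applies(lhs, word):
                nxt = rewrite(word, lhs, rhs)
                key = frozenset(nxt.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(nxt)
    return False           # step budget exceeded: undetermined

# Example: the relation aa -> b makes "aab" equal to "bb".
rels = [(Counter("aa"), Counter("b"))]
print(equal_in_semigroup("aab", "bb", rels))  # True
```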

    Development and Analysis of a Gravity-Simulated Particle-Packing Algorithm for Modeling Optimized Rocket Propellants

    The random heterogeneous morphology of modern solid rocket propellant formulations has traditionally been difficult to characterize and quantify. Current computational simulations of these formulations require an accurate description of the packing arrangement in order to correctly model the complex geometric effects that stem from the random morphology. A novel computational packing algorithm was invented, implemented, and analyzed using various particle starting arrangements. The algorithm was designed to be fast for use in combinatorial chemistry applications and to provide a numerical representation of the material for use with other computational tools, including codes that predict combustion behavior. The packing algorithms were evaluated using homogeneous distributions of spherical particles. Both the radial distribution function (RDF) and the packing fraction were used to evaluate the validity of the invented algorithm.
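
    A minimal drop-and-settle sketch in the spirit of a gravity-simulated packing algorithm is given below, in 2D with discs for brevity; the paper's algorithm is three-dimensional and considerably more sophisticated, and the box size, radii, and settling rule here are illustrative assumptions. The packing fraction reported at the end corresponds to one of the two validation measures mentioned above.

```python
# Drop discs one by one at random horizontal positions; each settles at
# the lowest height where it touches the floor or rests on earlier discs.
import math
import random

def settle(x, r, placed, box_h):
    """Lowest y at which a disc of radius r centered at x fits."""
    y = r  # floor contact
    for px, py, pr in placed:
        dx = abs(x - px)
        reach = r + pr
        if dx < reach:  # must come to rest on or above this disc
            y = max(y, py + math.sqrt(reach * reach - dx * dx))
    return y if y + r <= box_h else None  # None: this column is full

def drop_pack(n, r, box_w, box_h, seed=0):
    rng = random.Random(seed)
    placed = []
    for _ in range(n):
        x = rng.uniform(r, box_w - r)
        y = settle(x, r, placed, box_h)
        if y is not None:
            placed.append((x, y, r))
    area = sum(math.pi * pr * pr for _, _, pr in placed)
    return placed, area / (box_w * box_h)   # 2D packing fraction

discs, phi = drop_pack(n=200, r=1.0, box_w=40.0, box_h=40.0)
print(f"placed {len(discs)} discs, packing fraction ~ {phi:.3f}")
```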

    Ultimate Cognition à la Gödel

    "All life is problem solving,” said Popper. To deal with arbitrary problems in arbitrary environments, an ultimate cognitive agent should use its limited hardware in the "best” and "most efficient” possible way. Can we formally nail down this informal statement, and derive a mathematically rigorous blueprint of ultimate cognition? Yes, we can, using Kurt Gödel's celebrated self-reference trick of 1931 in a new way. Gödel exhibited the limits of mathematics and computation by creating a formula that speaks about itself, claiming to be unprovable by an algorithmic theorem prover: either the formula is true but unprovable, or math itself is flawed in an algorithmic sense. Here we describe an agent-controlling program that speaks about itself, ready to rewrite itself in arbitrary fashion once it has found a proof that the rewrite is useful according to a user-defined utility function. Any such a rewrite is necessarily globally optimal—no local maxima!—since this proof necessarily must have demonstrated the uselessness of continuing the proof search for even better rewrites. Our self-referential program will optimally speed up its proof searcher and other program parts, but only if the speed up's utility is indeed provable—even ultimate cognition has limits of the Gödelian kin

    The Speedup Theorem in a Primitive Recursive Framework

    Blum's speedup theorem is a major theorem in computational complexity, showing the existence of computable functions for which no optimal program can exist: for any speedup function r there exists a function f_r such that for any program computing f_r we can find an alternative program computing it with the desired speedup r. The main corollary is that algorithmic problems do not, in general, have an inherent complexity. Traditional proofs of the speedup theorem make essential use of Kleene's fixed-point theorem to close a suitable diagonal argument. As a consequence, very little is known about its validity in subrecursive settings, where there is no universal machine and no fixpoints. In this article we discuss an alternative, formal proof of the speedup theorem that allows us to spare the invocation of the fixed-point theorem and sheds more light on the actual complexity of the function f_r.
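
    For reference, the classical statement of the theorem reads as follows, in standard Blum-complexity notation where φ_i is the i-th program and Φ_i its running time (this is the textbook formulation, which may differ cosmetically from the article's):

```latex
% For every total computable speedup function r there is a total
% computable f_r such that every program for f_r can be sped up:
\[
  \forall i\,\Bigl(\varphi_i = f_r \;\Longrightarrow\;
  \exists j\,\bigl(\varphi_j = f_r \;\wedge\;
  r\bigl(\Phi_j(x)\bigr) \le \Phi_i(x) \ \text{for almost all } x\bigr)\Bigr)
\]
```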