
    The Turing Machine on the Dissecting Table

    Since the beginning of the twenty-first century there has been an increasing awareness that software represents a blind spot in new media theory. The growing interest in software also influences the argument in this paper, which sets out from the assumption that Alan M. Turing's concept of the universal machine, the first theoretical description of a computer program, is a kind of bachelor machine. Previous writings based on a similar hypothesis have focused either on a comparison of the universal machine and the bachelor machine in terms of the similarities of their structural features, or they have taken the bachelor machine as a metaphor for a man or a computer. Unlike them, this paper stresses the importance of the context as a key to interpreting the universal Turing machine as a bachelor machine and, potentially, as a self-portrait.

    A Universal Semi-totalistic Cellular Automaton on Kite and Dart Penrose Tilings

    In this paper we investigate certain properties of semi-totalistic cellular automata (CA) on the well-known quasi-periodic two-dimensional kite and dart tiling of the plane introduced by Roger Penrose. We show that, despite the irregularity of the underlying grid, it is possible to devise a semi-totalistic CA capable of simulating any Boolean circuit on this aperiodic tiling.
    Comment: In Proceedings AUTOMATA&JAC 2012, arXiv:1208.249
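
    The notion of a semi-totalistic (outer totalistic) CA, in which a cell's next state depends only on its own state and the number of live neighbours, carries over naturally to irregular grids such as the Penrose tiling. The sketch below only illustrates that update scheme; the Life-like rule and the tiny three-cell graph are hypothetical and are not the rule constructed in the paper.

        # Minimal sketch of one semi-totalistic CA step on an irregular grid given as an
        # adjacency list. The next state depends only on a cell's own state and the
        # count of live neighbours, never on which particular neighbours are live.
        def semi_totalistic_step(states, neighbours, rule):
            """states: dict cell -> 0/1; neighbours: dict cell -> list of adjacent cells;
            rule: function (own_state, live_neighbour_count) -> new state."""
            return {c: rule(states[c], sum(states[n] for n in neighbours[c]))
                    for c in states}

        # Hypothetical Life-like rule, chosen only for illustration.
        def example_rule(own, live):
            if own == 1:
                return 1 if live in (2, 3) else 0
            return 1 if live == 3 else 0

        # Toy irregular "grid": three mutually adjacent cells.
        adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
        print(semi_totalistic_step({"a": 1, "b": 1, "c": 0}, adj, example_rule))
        # -> {'a': 0, 'b': 0, 'c': 0}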

    Asymptotic Intrinsic Universality and Reprogrammability by Behavioural Emulation

    We advance a Bayesian concept of 'intrinsic asymptotic universality', taking to its final conclusions previous conceptual and numerical work based upon a reprogrammability test and an investigation of the complex qualitative behaviour of computer programs. Our method can quantify the trust and confidence we may place in the computing capabilities of natural and classical systems, and quantify the degree of reprogrammability of computers. We test the method to provide evidence in favour of a conjecture concerning the computing capabilities of Busy Beaver Turing machines as candidates for Turing universality. The method has recently been used to quantify the number of 'intrinsically universal' cellular automata, with results that point towards the pervasiveness of universality due to a widespread capacity for emulation. Our method represents an unconventional approach to the classical and seminal concept of Turing universality, and it may be extended and applied in a broader context to natural computation by (in something like the spirit of the Turing test) observing the behaviour of a system under circumstances where formal proofs of universality are difficult, if not impossible, to come by.
    Comment: 16 pages, 7 images. Invited contribution in Advances in Unconventional Computation, A. Adamatzky (ed), Springer Verlag

    Complexity of Small Universal Turing Machines: A Survey

    We survey some work concerned with small universal Turing machines, cellular automata, tag systems, and other simple models of computation. For example, it has been an open question for some time as to whether the smallest known universal Turing machines of Minsky, Rogozhin, Baiocchi and Kudlek are efficient (polynomial-time) simulators of Turing machines. These are some of the most intuitively simple computational devices, and previously the best known simulations were exponentially slow. We discuss recent work that shows that these machines are indeed efficient simulators. In addition, a related result shows that Rule 110, a well-known elementary cellular automaton, is efficiently universal. We also discuss some old and new universal program-size results, including the smallest known universal Turing machines. We finish the survey with results on generalised and restricted Turing machine models, including machines with a periodic background on the tape (instead of a blank symbol), multiple tapes, multiple dimensions, and machines that never write to their tape. We then discuss some ideas for future work.
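
    For readers unfamiliar with what "simulating a Turing machine" amounts to in such results, here is a generic single-tape simulator sketch; the bit-flipping example machine and all names are illustrative assumptions and are unrelated to the small universal machines in the survey.

        # Generic single-tape Turing machine simulator (an illustrative sketch only).
        # The transition table maps (state, symbol) -> (write_symbol, move, next_state);
        # the machine halts when no transition is defined for the current pair.
        def run_tm(delta, tape, state="q0", head=0, blank="_", max_steps=10_000):
            cells = dict(enumerate(tape))          # sparse tape; unwritten cells read as blank
            for step in range(max_steps):
                sym = cells.get(head, blank)
                if (state, sym) not in delta:
                    return state, cells, step      # halted after `step` steps
                write, move, state = delta[(state, sym)]
                cells[head] = write
                head += 1 if move == "R" else -1
            raise RuntimeError("step limit reached")

        # Hypothetical 1-state machine: flip every bit, halt at the first blank cell.
        delta = {("q0", "1"): ("0", "R", "q0"),
                 ("q0", "0"): ("1", "R", "q0")}
        print(run_tm(delta, "1011"))               # halts after 4 steps with tape 0100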

    Parallel Computation Is ESS

    There is an enormous number of examples of computation in nature, exemplified across multiple species in biology. One crucial aim of these computations across all life forms is the ability to learn and thereby increase the chance of survival. In the current paper a formal definition of autonomous learning is proposed. From that definition we establish a Turing machine model for learning, in which rule tables can be added or deleted but cannot be modified. Sequential and parallel implementations of this model are discussed. It is found that for general-purpose learning based on this model, the implementations capable of parallel execution would be evolutionarily stable. This is proposed to be one of the reasons why parallelism in computation is found in abundance in nature.
    Comment: Submitted to Theoretical Computer Science - Elsevier
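
    The constraint that distinguishes this learning model (rule tables may grow or shrink, but an existing rule is never rewritten in place) can be captured in a few lines. The sketch below uses invented names and is not the paper's formal construction.

        # Sketch of the stated learning constraint: rules may be added or deleted,
        # but never modified in place. Class and method names are illustrative.
        class LearningRuleTable:
            def __init__(self):
                self.rules = {}                  # (state, symbol) -> (write, move, next_state)

            def add_rule(self, key, action):
                if key in self.rules:
                    raise ValueError("existing rules cannot be modified, only added or deleted")
                self.rules[key] = action

            def delete_rule(self, key):
                self.rules.pop(key, None)

        table = LearningRuleTable()
        table.add_rule(("q0", "1"), ("0", "R", "q0"))   # learn a new rule
        table.delete_rule(("q0", "1"))                  # unlearn it; editing in place is never allowed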

    Busy Beaver Scores and Alphabet Size

    We investigate the Busy Beaver Game introduced by Rado (1962), generalized to non-binary alphabets. Harland (2016) conjectured that the activity (number of steps) and productivity (number of non-blank symbols) of candidate machines grow as the alphabet size increases. We prove this conjecture for any alphabet size under the condition that the number of states is sufficiently large. For the activity measure we show that increasing the alphabet size from two to three allows an increase. By a classical construction it is even possible to obtain a two-state machine that increases the activity and productivity of any given machine, if we allow an alphabet size depending on the number of states of the original machine. We also show that an increase of the alphabet size by a factor of three admits an increase of activity.
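
    The two measures named above are easy to state operationally for a machine that is known to halt: activity is the number of steps taken, and productivity is the number of non-blank cells left on the tape. Below is a small sketch computing both; the example machine is the well-known 2-state, 2-symbol Busy Beaver champion, with activity 6 and productivity 4.

        # Activity (steps) and productivity (non-blank cells at halt) of a halting
        # Turing machine. Transitions: (state, symbol) -> (write, move, next_state);
        # a missing entry means the machine halts.
        def busy_beaver_scores(delta, blank="0", max_steps=10**6):
            tape, head, state, steps = {}, 0, "A", 0
            while (state, tape.get(head, blank)) in delta:
                write, move, state = delta[(state, tape.get(head, blank))]
                tape[head] = write
                head += 1 if move == "R" else -1
                steps += 1
                if steps > max_steps:
                    raise RuntimeError("did not halt within the step limit")
            productivity = sum(1 for s in tape.values() if s != blank)
            return steps, productivity

        # Classical 2-state, 2-symbol champion; "H" has no rules, so reaching it halts.
        bb2 = {("A", "0"): ("1", "R", "B"), ("A", "1"): ("1", "L", "B"),
               ("B", "0"): ("1", "L", "A"), ("B", "1"): ("1", "R", "H")}
        print(busy_beaver_scores(bb2))   # -> (6, 4)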

    The Information-theoretic and Algorithmic Approach to Human, Animal and Artificial Cognition

    We survey concepts at the frontier of research connecting artificial, animal and human cognition to computation and information processing---from the Turing test to Searle's Chinese Room argument, from Integrated Information Theory to computational and algorithmic complexity. We start by arguing that passing the Turing test is a trivial computational problem and that its pragmatic difficulty sheds light on the computational nature of the human mind more than it does on the challenge of artificial intelligence. We then review our proposed algorithmic information-theoretic measures for quantifying and characterizing cognition in various forms. These are capable of accounting for known biases in human behavior, thus vindicating a computational algorithmic view of cognition as first suggested by Turing, but this time rooted in the concept of algorithmic probability, which in turn is based on computational universality while being independent of computational model, and which has the virtue of being predictive and testable as a model theory of cognitive behavior.
    Comment: 22 pages. Forthcoming in Gordana Dodig-Crnkovic and Raffaela Giovagnoli (eds), Representation and Reality: Humans, Animals and Machines, Springer Verlag
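
    As a rough illustration of the algorithmic-probability idea the abstract relies on (and not the authors' actual measures), the sketch below estimates the probability that a random program over a made-up three-instruction language produces a given string, and converts it into a complexity estimate via the coding-theorem relation K(x) ≈ -log2 m(x). Everything here, including the toy language, is an assumption for illustration.

        # Toy estimate of algorithmic probability: the frequency with which random
        # programs over an invented instruction set ('0' append 0, '1' append 1,
        # 'd' duplicate the output so far) produce each output string.
        import math, random
        from collections import Counter

        def run_toy_program(prog):
            out = ""
            for op in prog:
                if op == "0":
                    out += "0"
                elif op == "1":
                    out += "1"
                else:            # 'd': duplicate what has been written so far
                    out += out
            return out

        def estimate_algorithmic_probability(samples=100_000, max_len=6):
            counts = Counter()
            for _ in range(samples):
                prog = "".join(random.choice("01d") for _ in range(random.randint(1, max_len)))
                counts[run_toy_program(prog)] += 1
            return {s: c / samples for s, c in counts.items()}

        probs = estimate_algorithmic_probability()
        for s in ["0", "01", "0101"]:    # simple, repetitive strings come out more probable
            if s in probs:
                print(s, round(-math.log2(probs[s]), 2))   # coding-theorem complexity estimate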

    Computability Logic: a formal theory of interaction

    Computability logic is a formal theory of (interactive) computability in the same sense as classical logic is a formal theory of truth. This approach was initiated very recently in "Introduction to computability logic" (Annals of Pure and Applied Logic 123 (2003), pp.1-99). The present paper reintroduces computability logic in a more compact and less technical way. It is written in a semitutorial style with a general computer science, logic or mathematics audience in mind. An Internet source on the subject is available at http://www.cis.upenn.edu/~giorgi/cl.html, and additional material at http://www.csc.villanova.edu/~japaridz/CL/gsoll.html

    On the universality of cognitive tests

    The analysis of the adaptive behaviour of many different kinds of systems, such as humans, animals and machines, requires more general ways of assessing their cognitive abilities. This need is strengthened as ever more tasks are designed for, and completed by, a wider diversity of systems, including swarms and hybrids. The notion of a universal test has recently emerged in the context of machine intelligence evaluation as a way to define and use the same cognitive test for a variety of systems, using some principled tasks and adapting the interface to each particular subject. However, how far can universal tests be taken? This paper analyses this question in terms of subjects, environments, space-time resolution, rewards and interfaces. This leads to a number of findings, insights and caveats, according to several levels at which universal tests may be progressively more difficult to conceive, implement and administer. One of the most significant contributions is the realisation that more universal tests are defined as maximisations of less universal tests over a variety of configurations. This means that universal tests must necessarily be adaptive.
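
    The closing observation, that a more universal test is a maximisation of less universal tests over configurations, can be written down directly. The sketch below uses invented names and is only a schematic reading of that sentence, not the paper's formal definition.

        # Schematic reading of "more universal tests are maximisations of less
        # universal tests over configurations": score a subject by the best it can
        # achieve on a base test across interface/resolution/reward configurations.
        def universal_test(subject, base_test, configurations):
            return max(base_test(subject, config) for config in configurations)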

    Algorithmic Networks: central time to trigger expected emergent open-endedness

    This article investigates emergence and complexity in complex systems that can share information on a network. To this end, we use a theoretical approach drawing on information theory, computability theory, and complex networks. One key question studied is how much emergent complexity (or information) arises when a population of computable systems is networked, compared with when this population is isolated. First, we define a general model for networked theoretical machines, which we call algorithmic networks. Then, we narrow our scope to investigate algorithmic networks that optimize the average fitness of nodes in a scenario in which each node imitates the fittest neighbor and the randomly generated population is networked by a time-varying graph. We show that there are graph-topological conditions that cause these algorithmic networks to have the property of expected emergent open-endedness for large enough populations. In other words, the expected emergent algorithmic complexity of a node tends to infinity as the population size tends to infinity. Given a dynamic network, we show that these conditions imply the existence of a central time to trigger expected emergent open-endedness. Moreover, we show that networks with small diameter compared to the network size meet these conditions. We also discuss future research based on how our results are related to problems in network science, information theory, computability theory, distributed computing, game theory, evolutionary biology, and synergy in complex systems.
    Comment: This is a revised version of research report no. 4/2018 at the National Laboratory for Scientific Computing (LNCC), Brazil
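
    The imitation dynamics described above (each node copies the program of its fittest neighbor in the current snapshot of a time-varying graph) can be sketched as follows; node names, fitness values and the graph snapshot are placeholders, not the paper's construction.

        # One step of imitate-the-fittest-neighbour dynamics on a single snapshot of
        # a time-varying graph. programs: node -> program, fitness: node -> float,
        # edges: set of undirected (u, v) pairs for the current time step.
        def imitate_fittest_step(programs, fitness, edges):
            neighbours = {v: set() for v in programs}
            for u, v in edges:
                neighbours[u].add(v)
                neighbours[v].add(u)
            return {v: programs[max(neighbours[v] | {v}, key=lambda u: fitness[u])]
                    for v in programs}

        programs = {1: "p1", 2: "p2", 3: "p3"}
        fitness  = {1: 0.2, 2: 0.9, 3: 0.5}
        print(imitate_fittest_step(programs, fitness, {(1, 2), (2, 3)}))
        # -> {1: 'p2', 2: 'p2', 3: 'p2'}: the fittest program spreads along the graph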