
    Finite automata with advice tapes

    We define a model of advised computation by finite automata where the advice is provided on a separate tape. We consider several variants of the model where the advice is deterministic or randomized, the input tape head is allowed real-time, one-way, or two-way access, and the automaton is classical or quantum. We prove several separation results among these variants, demonstrate an infinite hierarchy of language classes recognized by automata with increasing advice lengths, and establish the relationships between this and the previously studied ways of providing advice to finite automata. Comment: corrected typo.
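
    As a rough illustration of the kind of power a separate advice tape adds (a sketch under our own assumptions, not a construction from the paper), the following toy simulation lets a real-time automaton compare the input against a length-dependent advice string and thereby recognize { a^k b^k }, which no finite automaton can do unaided:

```python
# Toy model: a real-time automaton reads the input tape and a separate advice
# tape in lockstep; the advice string may depend only on the input length.
# All names below are ours, chosen for illustration.

def advice_for_length(n: int) -> str:
    # Advice for even lengths spells out the unique candidate word in {a^k b^k};
    # '#' on odd lengths can never match an input symbol, forcing rejection.
    if n % 2 == 0:
        return "a" * (n // 2) + "b" * (n // 2)
    return "#" * n

def run_with_advice(inp: str) -> bool:
    advice = advice_for_length(len(inp))
    matching = True  # the whole finite control is this single bit
    for x, a in zip(inp, advice):
        matching = matching and (x == a)
    return matching

assert run_with_advice("aabb")
assert run_with_advice("")          # k = 0
assert not run_with_advice("abab")
assert not run_with_advice("aab")
```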

    Quantifying Resource Use in Computations

    It is currently not possible to quantify the resources needed to perform a computation. As a consequence, it is not possible to reliably evaluate the hardware resources needed for the application of algorithms or the running of programs. This is apparent in both computer science, for instance in cryptanalysis, and in neuroscience, for instance in comparative neuroanatomy. A System-versus-Environment game formalism based on Computability Logic is proposed that allows one to define a computational work function describing the theoretical and physical resources needed to perform any purely algorithmic computation. Within this formalism, the cost of a computation is defined as the sum of information storage over the steps of the computation. The size of the computational device, e.g., the action table of a Universal Turing Machine, the number of transistors in silicon, or the number and complexity of synapses in a neural net, is explicitly included in the computational cost. The proposed cost function leads in a natural way to known computational trade-offs and can be used to estimate the computational capacity of real silicon hardware and neural nets. The theory is applied to a historical case of 56-bit DES key recovery as an example of application to cryptanalysis. Furthermore, the relative computational capacities of human brain neurons and the C. elegans nervous system are estimated as an example of application to neural nets. Comment: 26 pages, no figures.
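
    The following is only an illustrative sketch of the kind of accounting the abstract describes, not the paper's actual work function: for a toy unary incrementer, charge at each step a device-size term (derived from the action table) plus the storage currently in use, and sum over the run. The machine, the encoding of the device-size term, and the constants are all hypothetical.

```python
# Hypothetical cost accounting for a toy unary incrementer: total cost is the
# sum, over all steps, of a device-size term plus the tape cells in use.

import math

# action table: (state, symbol) -> (new_state, symbol_to_write, head_move)
TABLE = {
    ("scan", "1"): ("scan", "1", 1),
    ("scan", "_"): ("halt", "1", 0),
}

def table_bits(table) -> float:
    # crude stand-in for "size of the computational device"
    return len(table) * math.log2(len(table) + 1)

def run_and_cost(tape):
    state, head, cost = "scan", 0, 0.0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        state, write, move = TABLE[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
        cost += table_bits(TABLE) + len(tape)  # device term + storage in use
    return tape, cost

tape, cost = run_and_cost(list("111"))
print("".join(tape), round(cost, 2))  # prints the output tape '1111' and the accumulated cost
```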

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Evolving MultiAlgebras unify all usual sequential computation models

    It is well known that Abstract State Machines (ASMs) can simulate "step-by-step" any type of machine (Turing machines, RAMs, etc.). We aim to overcome two limitations: 1) simulation is not identification, and 2) the ASMs simulating machines of a given type do not constitute a natural class among all ASMs. We modify Gurevich's notion of an ASM to that of an EMA ("Evolving MultiAlgebra") by replacing the program (a syntactic object) with a semantic object: a functional that has to be very simply definable over the static part of the ASM. We prove that very natural classes of EMAs correspond via "literal identifications" to slight extensions of the usual machine models and also to grammar models. Though we modify these models, we keep their computational approach: only some contingencies are modified. Thus, EMAs appear as the mathematical model unifying all kinds of sequential computation paradigms. Comment: 12 pages, Symposium on Theoretical Aspects of Computer Science.

    Quantum Branching Programs and Space-Bounded Nonuniform Quantum Complexity

    In this paper, the space complexity of nonuniform quantum computations is investigated. The model chosen for this is the quantum branching program, which provides a graphic description of sequential quantum algorithms. In the first part of the paper, simulations between quantum branching programs and nonuniform quantum Turing machines are presented which allow lower and upper bound results to be transferred between the two models. In the second part of the paper, different variants of quantum OBDDs are compared with their deterministic and randomized counterparts. In the third part, quantum branching programs are considered where the performed unitary operation may depend on the result of a previous measurement. For this model, a simulation of randomized OBDDs and exponential lower bounds are presented. Comment: 45 pages, 3 PostScript figures. Proofs rearranged, typos corrected.
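
    For readers unfamiliar with branching programs, here is a minimal classical baseline (nothing quantum, and not from the paper): a width-2 OBDD that computes the parity of its input bits, evaluated by following 0/1-successors level by level. The node encoding and sink convention below are our own.

```python
# Classical width-2 OBDD for the parity of n bits; a node (level, par) records
# the parity of the bits read so far, and "accept" is reached on even parity.

def parity_obdd(n: int):
    nodes = {}
    for level in range(n):
        for par in (0, 1):
            def successor(next_par, level=level):
                if level + 1 < n:
                    return (level + 1, next_par)
                return "accept" if next_par == 0 else "reject"
            # node tests variable `level`; the 0-edge keeps parity, the 1-edge flips it
            nodes[(level, par)] = (level, successor(par), successor(1 - par))
    return nodes

def evaluate(nodes, assignment) -> bool:
    node = (0, 0)  # start: first variable, parity 0 so far
    while node in nodes:
        var, succ0, succ1 = nodes[node]
        node = succ1 if assignment[var] else succ0
    return node == "accept"

obdd = parity_obdd(4)
print(evaluate(obdd, [1, 0, 1, 0]))  # even parity -> True
print(evaluate(obdd, [1, 0, 0, 0]))  # odd parity  -> False
```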

    A user's evaluation of a NASA Regional Dissemination Center

    Retrospective searches and information processing and retrieval in a NASA Regional Dissemination Center.

    The Computational Power of Beeps

    In this paper, we study the quantity of computational resources (state machine states and/or probabilistic transition precision) needed to solve specific problems in a single-hop network where nodes communicate using only beeps. We begin by focusing on randomized leader election. We prove a lower bound on the states required to solve this problem with a given error bound, probability precision, and (when relevant) network size lower bound. We then show that this bound is tight by giving a matching upper bound. Noting that our optimal upper bound is slow, we describe two faster algorithms that trade some state optimality to gain efficiency. We then turn our attention to more general classes of problems by proving that once you have enough states to solve leader election with a given error bound, you have (within constant factors) enough states to simulate correctly, with this same error bound, a logspace TM with a constant number of unary input tapes, allowing you to solve a large and expressive set of problems. These results identify a key simplicity threshold beyond which useful distributed computation is possible in the beeping model. Comment: Extended abstract to appear in the Proceedings of the International Symposium on Distributed Computing (DISC 2015).
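
    As a toy illustration of beep-based leader election in a single-hop network (the folklore "beep with probability 1/2, drop out if you stay silent and hear a beep" scheme, not the state-optimal algorithms analysed in the paper), consider the following centrally simulated sketch:

```python
# Single-hop beeping network, simulated centrally: in each round every active
# node beeps with probability 1/2; an active node that stays silent while the
# channel carries a beep becomes passive. With high probability a single
# active node remains after O(log n) rounds. Parameters are illustrative only.

import random

def elect_leader(n: int, seed: int = 0):
    rng = random.Random(seed)
    active = set(range(n))
    rounds = 0
    while len(active) > 1:
        rounds += 1
        beepers = {v for v in active if rng.random() < 0.5}
        if beepers:            # at least one beep was heard this round
            active = beepers   # silent listeners drop out
    return active.pop(), rounds

leader, rounds = elect_leader(100)
print(f"leader {leader} elected after {rounds} rounds")
```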

    Bounded Counter Languages

    We show that deterministic finite automata equipped with k two-way heads are equivalent to deterministic machines with a single two-way input head and k-1 linearly bounded counters if the accepted language is strictly bounded, i.e., a subset of a_1^* a_2^* ... a_m^* for a fixed sequence of symbols a_1, a_2, ..., a_m. Then we investigate linear speed-up for counter machines. Lower and upper time bounds for concrete recognition problems are shown, implying that in general linear speed-up does not hold for counter machines. For bounded languages we develop a technique for speeding up computations by any constant factor at the expense of adding a fixed number of counters.
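
    To ground the notion of a linearly bounded counter over a strictly bounded language (this is only an illustration, not the paper's head-to-counter simulation), here is a one-way machine with a single counter, bounded by the input length, recognizing { a^n b^n }, a subset of a* b*:

```python
# One-way machine with a single counter, never exceeding |word| (linearly
# bounded), for the strictly bounded language { a^n b^n }. The finite control
# has two phases: reading a's, then reading b's.

def accepts(word: str) -> bool:
    counter = 0
    phase = "a"
    for ch in word:
        if phase == "a" and ch == "a":
            counter += 1          # count the a's
        elif ch == "b":
            phase = "b"           # once a b is seen, only b's may follow
            if counter == 0:
                return False      # more b's than a's
            counter -= 1
        else:
            return False          # an a after a b, or a foreign symbol
    return counter == 0           # every a must be matched by a b

assert accepts("aaabbb") and accepts("")
assert not accepts("aabbb") and not accepts("abab")
```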

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communication links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log-Structured File System. Finally, we will construct a prototype high capacity file server suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.