38 research outputs found

    On efficient simulations of multicounter machines

    An oblivious 1-tape Turing machine can simulate a multicounter machine on-line in linear time and logarithmic space. This leads to a linear cost combinational logic network implementing the first n steps of a multicounter machine and also to a linear time/logarithmic space on-line simulation by an oblivious logarithmic cost RAM. An oblivious log*n-head tape unit can simulate the first n steps of a multicounter machine in real-time, which leads to a linear cost combinational logic network with a constant data rate
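
    To make the machine model in this abstract concrete, here is a minimal, non-oblivious sketch of an on-line multicounter machine simulator (a direct simulation only, not the paper's 1-tape or logarithmic-space construction). The transition-table format, the function name run_multicounter, and the acceptance convention (final state with all counters empty) are choices of this sketch, not taken from the paper.

```python
# Minimal sketch of an on-line multicounter machine simulator.
# Not the oblivious construction from the abstract; a direct simulation
# shown only to illustrate the model being simulated.

def run_multicounter(delta, q0, accept, k, word):
    """Simulate a k-counter machine on-line on `word`.

    delta maps (state, symbol, zero_flags) -> (new_state, counter_deltas),
    where zero_flags[i] is True iff counter i currently holds 0 and
    counter_deltas[i] is -1, 0, or +1.
    """
    state, counters = q0, [0] * k
    for symbol in word:                      # on-line: one input symbol per step
        flags = tuple(c == 0 for c in counters)
        move = delta.get((state, symbol, flags))
        if move is None:                     # undefined transition: reject
            return False
        state, deltas = move
        counters = [c + d for c, d in zip(counters, deltas)]
    # Hypothetical acceptance convention: final state with all counters empty.
    return state in accept and all(c == 0 for c in counters)

# Example: a 1-counter machine for { a^n b^n : n >= 0 } (hypothetical table).
delta = {
    ('qa', 'a', (True,)):  ('qa', (+1,)),
    ('qa', 'a', (False,)): ('qa', (+1,)),
    ('qa', 'b', (False,)): ('qb', (-1,)),
    ('qb', 'b', (False,)): ('qb', (-1,)),
}
print(run_multicounter(delta, 'qa', {'qa', 'qb'}, 1, 'aaabbb'))  # True
print(run_multicounter(delta, 'qa', {'qa', 'qb'}, 1, 'aabbb'))   # False
```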

    An optimal simulation of counter machines: the ACM case

    Cumulative subject index volumes 52-55

    An Optimal Simulation of Counter Machines: The ACM Case

    Bounded Counter Languages

    We show that deterministic finite automata equipped with k two-way heads are equivalent to deterministic machines with a single two-way input head and k−1 linearly bounded counters if the accepted language is strictly bounded, i.e., a subset of a_1^* a_2^* ... a_m^* for a fixed sequence of symbols a_1, a_2, ..., a_m. Then we investigate linear speed-up for counter machines. Lower and upper time bounds for concrete recognition problems are shown, implying that in general linear speed-up does not hold for counter machines. For bounded languages we develop a technique for speeding up computations by any constant factor at the expense of adding a fixed number of counters
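
    To make the notion of linearly bounded counters on a strictly bounded language concrete, here is a small sketch: a machine with a single one-way head and two counters, each bounded by the input length, recognizing { a^n b^n c^n }, a subset of a^* b^* c^*. This is only an illustration of the machine model, not the paper's construction relating k two-way heads to k−1 counters; the function name accepts_anbncn is a label chosen for this sketch.

```python
# Sketch: one input head plus two counters, each bounded by the input
# length (i.e., linearly bounded), recognizing the strictly bounded
# language { a^n b^n c^n } inside a^* b^* c^*.

def accepts_anbncn(word):
    c1 = c2 = 0              # both counters never exceed len(word)
    phase = 'a'
    for sym in word:
        if phase == 'a' and sym == 'a':
            c1 += 1; c2 += 1                 # count the a's on both counters
        elif phase in ('a', 'b') and sym == 'b':
            phase = 'b'
            if c1 == 0:                      # more b's than a's
                return False
            c1 -= 1                          # match b's against counter 1
        elif phase in ('b', 'c') and sym == 'c':
            phase = 'c'
            if c2 == 0:                      # more c's than a's
                return False
            c2 -= 1                          # match c's against counter 2
        else:
            return False                     # no transition for this (phase, symbol)
    return c1 == 0 and c2 == 0               # accept iff all three blocks are equal

print(accepts_anbncn('aaabbbccc'))  # True
print(accepts_anbncn('aabbbcc'))    # False
```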

    An Optimal Simulation of Counter Machines

    Counting is easy

    IST Austria Thesis

    The scalability of concurrent data structures and distributed algorithms strongly depends on reducing contention for shared resources and the costs of synchronization and communication. We show how such cost reductions can be attained by relaxing the strict consistency conditions required by sequential implementations. In the first part of the thesis, we consider relaxation in the context of concurrent data structures. Specifically, in data structures such as priority queues, imposing strong semantics renders scalability impossible, since a correct implementation of the remove operation should return only the element with the highest priority. Intuitively, concurrent remove operations then create a race condition. This bottleneck can be circumvented by relaxing the semantics of the affected data structure, allowing the remove operation to return elements that are no longer required to have the highest priority. We show that randomized implementations of relaxed data structures offer provable guarantees on the priority of the removed elements, even under concurrency. Additionally, we show that in some cases relaxed data structures can be used to scale classical algorithms that are usually implemented with exact ones. In the second part, we study parallel variants of the stochastic gradient descent (SGD) algorithm, which distribute the computation among multiple processors and thus reduce the running time. Unfortunately, for standard parallel SGD to succeed, each processor has to maintain a local copy of the model parameters that is identical to the local copies of the other processors; the communication and synchronization overheads of this perfect consistency can negate the speedup gained by distributing the computation. We show that the consistency conditions required by SGD can be relaxed, allowing the algorithm to tolerate quantized communication, asynchrony, or even crash faults, while its convergence remains asymptotically the same
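
    As an illustration of the kind of semantic relaxation described above, the sketch below implements a MultiQueue-style relaxed priority queue: several sequential heaps behind independent locks, insertion into a random heap, and removal from the better of two randomly sampled heaps. This pattern is well known in the literature on relaxed priority queues, but the class name, parameters, and details here are choices of this sketch, not the thesis's exact data structure or its analysis.

```python
# Sketch of a MultiQueue-style relaxed priority queue. Removals return an
# element that is only approximately of highest priority; in exchange, each
# operation touches at most two per-heap locks instead of one global lock.

import heapq
import random
import threading

class RelaxedPriorityQueue:
    def __init__(self, num_queues=8):
        assert num_queues >= 2
        self.heaps = [[] for _ in range(num_queues)]
        self.locks = [threading.Lock() for _ in range(num_queues)]

    def insert(self, priority, item):
        i = random.randrange(len(self.heaps))        # random heap spreads contention
        with self.locks[i]:
            heapq.heappush(self.heaps[i], (priority, item))

    def delete_min(self):
        """Pop from the smaller of two randomly chosen heaps; the result is
        only approximately the global minimum -- that is the relaxation."""
        i, j = sorted(random.sample(range(len(self.heaps)), 2))  # fixed lock order avoids deadlock
        with self.locks[i], self.locks[j]:
            candidates = [k for k in (i, j) if self.heaps[k]]
            if not candidates:
                return None                          # both sampled heaps happened to be empty
            k = min(candidates, key=lambda q: self.heaps[q][0][0])
            return heapq.heappop(self.heaps[k])

# Single-threaded usage example; the locks matter once threads share the queue.
pq = RelaxedPriorityQueue(num_queues=4)
for p in [5, 1, 3, 2, 4]:
    pq.insert(p, f"task-{p}")
print(pq.delete_min())   # often, but not necessarily, (1, 'task-1')
```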