
    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. Comment: 33 pages, 2 figures; full version of STOC 96 paper titled "Modular competitiveness for distributed algorithms".
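
    To make the collect primitive concrete, the following Python sketch shows the basic operation the abstract describes: each of n processes owns one register, and a collect reads all n registers. The class and method names are illustrative assumptions; the paper's algorithms are lock-free and competitive, which this toy version does not attempt to be.

        # A toy "collect" primitive: process i writes only its own register,
        # and any process may read back all n registers with collect().
        # This only illustrates the building block, not the paper's competitive
        # construction, and it uses a coarse lock purely for simplicity.
        import threading

        class CollectObject:
            def __init__(self, n):
                self.values = [None] * n          # one single-writer register per process
                self.lock = threading.Lock()

            def store(self, i, value):
                with self.lock:
                    self.values[i] = value        # process i updates its own register

            def collect(self):
                with self.lock:
                    return list(self.values)      # current values of all n registers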

    On-Line File Caching

    In the on-line file-caching problem, the input is a sequence of requests for files, given on-line (one at a time). Each file has a non-negative size and a non-negative retrieval cost. The problem is to decide which files to keep in a fixed-size cache so as to minimize the sum of the retrieval costs for files that are not in the cache when requested. The problem arises in web caching by browsers and by proxies. This paper describes a natural generalization of LRU called Landlord and gives an analysis showing that it has an optimal performance guarantee (among deterministic on-line algorithms). The paper also gives an analysis of the algorithm in a so-called "loosely" competitive model, showing that on a "typical" cache size, either the performance guarantee is O(1) or the total retrieval cost is insignificant. Comment: ACM-SIAM Symposium on Discrete Algorithms (1998).
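
    For intuition, here is a small Python sketch of the Landlord rule as it is usually stated: each cached file carries credit, a miss charges every cached file "rent" proportional to its size until zero-credit files free enough room, and a newly cached file starts with credit equal to its retrieval cost. The names and the credit-reset choice on hits are illustrative assumptions, and the paper's exact formulation may differ in details.

        # Sketch of the Landlord caching heuristic (not a verbatim transcription
        # of the paper). Assumes every requested file fits in the cache by itself.
        class LandlordCache:
            def __init__(self, capacity):
                self.capacity = capacity
                self.used = 0
                self.credit = {}   # file -> remaining credit
                self.size = {}     # file -> size

            def request(self, f, size, cost):
                if f in self.credit:
                    # On a hit, credit may be reset to anything up to the cost;
                    # resetting to the full cost mimics LRU when costs and sizes are 1.
                    self.credit[f] = cost
                    return 0.0                   # nothing retrieved
                # Miss: charge rent proportional to size until there is room.
                while self.used + size > self.capacity:
                    delta = min(self.credit[g] / self.size[g] for g in self.credit)
                    for g in self.credit:
                        self.credit[g] -= delta * self.size[g]
                    for g in [g for g in self.credit if self.credit[g] <= 1e-12]:
                        self.used -= self.size.pop(g)
                        del self.credit[g]
                self.credit[f] = cost
                self.size[f] = size
                self.used += size
                return cost                      # retrieval cost paid on the miss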

    Quantum Algorithms for Identifying Hidden Strings with Applications to Matroid Problems

    In this paper, we explore quantum speedups for the problem, inspired by matroid theory, of identifying a pair of $n$-bit binary strings that are promised to have the same number of 1s and differ in exactly two bits, by using the max inner product oracle and the subset oracle. More specifically, given two strings $s, s' \in \{0, 1\}^n$ satisfying the above constraints, for any $x \in \{0, 1\}^n$ the max inner product oracle $O_{max}(x)$ returns the maximum of $s \cdot x$ and $s' \cdot x$, and the subset oracle $O_{sub}(x)$ indicates whether the index set of the 1s in $x$ is a subset of that in $s$ or $s'$. We present a quantum algorithm consuming $O(1)$ queries to the max inner product oracle for identifying the pair $\{s, s'\}$, and prove that any classical algorithm requires $\Omega(n/\log_{2} n)$ queries. Also, we present a quantum algorithm consuming $\frac{n}{2} + O(\sqrt{n})$ queries to the subset oracle, and prove that any classical algorithm requires at least $n + \Omega(1)$ queries. Therefore, quantum speedups are revealed in the two oracle models. Furthermore, the above results are applied to the problem in matroid theory of finding all the bases of a 2-bases matroid, where a matroid is called $k$-bases if it has $k$ bases.
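
    The two oracles are easy to state classically; the short Python sketch below spells out what $O_{max}$ and $O_{sub}$ return on a single (non-superposed) query. The helper names and the example strings are assumptions for illustration, and the quantum query model that yields the speedup is of course not captured here.

        # Classical stand-ins for the two oracles over n-bit strings s, s' that
        # have the same number of 1s and differ in exactly two positions.
        def inner(a, b):
            return sum(x & y for x, y in zip(a, b))

        def make_max_oracle(s, s_prime):
            # O_max(x) = max(s . x, s' . x)
            return lambda x: max(inner(s, x), inner(s_prime, x))

        def make_subset_oracle(s, s_prime):
            # O_sub(x) is true iff the 1-positions of x lie inside those of s or of s'
            def ones(v):
                return {i for i, b in enumerate(v) if b}
            return lambda x: ones(x) <= ones(s) or ones(x) <= ones(s_prime)

        s, sp = (1, 0, 1, 0), (1, 0, 0, 1)        # same weight, differ in two bits
        O_max, O_sub = make_max_oracle(s, sp), make_subset_oracle(s, sp)
        print(O_max((1, 1, 0, 0)), O_sub((1, 0, 0, 0)))   # -> 1 True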

    School of Law Annual Report 1995-1996

    The annual report for the University of New Mexico School of Law for the period July 1995 through June 1996.

    Post-quantum cryptography

    Cryptography is essential for the security of online communication, cars and implanted medical devices. However, many commonly used cryptosystems will be completely broken once large quantum computers exist. Post-quantum cryptography is cryptography under the assumption that the attacker has a large quantum computer; post-quantum cryptosystems strive to remain secure even in this scenario. This relatively young research area has seen some successes in identifying mathematical operations for which quantum algorithms offer little advantage in speed, and then building cryptographic systems around those. The central challenge in post-quantum cryptography is to meet demands for cryptographic usability and flexibility without sacrificing confidence.

    On the role of entanglement and correlations in mixed-state quantum computation

    In a quantum computation with pure states, the generation of large amounts of entanglement is known to be necessary for a speedup with respect to classical computations. However, examples of quantum computations with mixed states are known, such as the deterministic quantum computation with one qubit (DQC1) model [Knill and Laflamme, Phys. Rev. Lett. 81, 5672 (1998)], in which entanglement is at most marginally present, and yet a computational speedup is believed to occur. Correlations, and not entanglement, have been identified as a necessary ingredient for mixed-state quantum computation speedups. Here we show that correlations, as measured through the operator Schmidt rank, are indeed present in large amounts in the DQC1 circuit. This provides evidence for the preclusion of efficient classical simulation of DQC1 by means of a whole class of classical simulation algorithms, thereby reinforcing the conjecture that DQC1 leads to a genuine quantum computational speedup.
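
    Since the abstract's measure of correlations is the operator Schmidt rank, a minimal NumPy sketch of how that rank can be computed for a small bipartite operator may be useful; the function below is a generic illustration of the measure (realign the matrix across the A|B cut and count nonzero singular values), not the paper's analysis of the DQC1 circuit.

        # Operator Schmidt rank of an operator rho on a dA x dB bipartite system:
        # group the A indices as rows and the B indices as columns, then count
        # the nonzero singular values of the realigned matrix.
        import numpy as np

        def operator_schmidt_rank(rho, dA, dB, tol=1e-10):
            R = rho.reshape(dA, dB, dA, dB).transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)
            s = np.linalg.svd(R, compute_uv=False)
            return int(np.sum(s > tol))

        # A product operator has operator Schmidt rank 1:
        rho = np.kron(np.diag([1.0, 0.0]), np.eye(2) / 2)
        print(operator_schmidt_rank(rho, 2, 2))   # -> 1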

    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets and to execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research. Comment: 29 pages, 15 figures.