
    On-Line Paging against Adversarially Biased Random Inputs

    In evaluating an algorithm, worst-case analysis can be overly pessimistic. Average-case analysis can be overly optimistic. An intermediate approach is to show that an algorithm does well on a broad class of input distributions. Koutsoupias and Papadimitriou recently analyzed the least-recently-used (LRU) paging strategy in this manner, analyzing its performance on an input sequence generated by a so-called diffuse adversary -- one that must choose each request probabilistically so that no page is chosen with probability more than some fixed epsilon>0. They showed that LRU achieves the optimal competitive ratio (for deterministic on-line algorithms), but they did not determine the actual ratio. In this paper we estimate the optimal ratios to within roughly a factor of two for both deterministic strategies (e.g. least-recently-used and first-in-first-out) and randomized strategies. Around the threshold epsilon ~ 1/k (where k is the cache size), the optimal ratios are both Theta(ln k). Below the threshold the ratios tend rapidly to O(1). Above the threshold the ratio is unchanged for randomized strategies but tends rapidly to Theta(k) for deterministic ones. We also give an alternate proof of the optimality of LRU.
    Comment: Conference version appeared in SODA '98 as "Bounding the Diffuse Adversary"
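    To make the model concrete, here is a toy sketch (not from the paper; the greedy adversary, the cache size k=10, the 100-page universe, and the epsilon values are our own illustrative choices): it plays LRU against a diffuse adversary that puts the maximum allowed probability epsilon on pages currently missing from LRU's cache, and compares LRU's fault count with Belady's offline optimum.

        import random
        from collections import OrderedDict

        def belady_faults(requests, k):
            """Offline optimum: on a fault, evict the page whose next use is furthest away."""
            nxt, future = {}, [0] * len(requests)
            for i in range(len(requests) - 1, -1, -1):
                future[i] = nxt.get(requests[i], float('inf'))
                nxt[requests[i]] = i
            cache, faults = {}, 0              # page -> index of its next use
            for i, p in enumerate(requests):
                if p in cache:
                    cache[p] = future[i]
                    continue
                faults += 1
                if len(cache) == k:
                    del cache[max(cache, key=cache.get)]
                cache[p] = future[i]
            return faults

        def simulate(k, n_pages, eps, steps, seed=0):
            """LRU versus a greedy diffuse adversary (each page gets probability <= eps)."""
            rng = random.Random(seed)
            cache = OrderedDict()              # LRU cache, least recently used first
            requests, lru = [], 0
            for _ in range(steps):
                # Adversary: load probability eps onto uncached pages first.
                order = [p for p in range(n_pages) if p not in cache] + list(cache)
                probs, mass = [], 1.0
                for _p in order:
                    w = min(eps, mass)
                    probs.append(w)
                    mass -= w
                p = rng.choices(order, weights=probs)[0]
                requests.append(p)
                if p in cache:                 # LRU bookkeeping
                    cache.move_to_end(p)
                else:
                    lru += 1
                    if len(cache) == k:
                        cache.popitem(last=False)
                    cache[p] = None
            return lru, belady_faults(requests, k)

        if __name__ == "__main__":
            for eps in (1.0, 0.2, 0.02):       # from unrestricted towards a weak adversary
                lru, opt = simulate(k=10, n_pages=100, eps=eps, steps=20000)
                print(f"eps={eps:5.2f}  LRU/OPT fault ratio ~ {lru / max(opt, 1):.2f}")

    Running this shows the fault ratio shrinking as epsilon decreases, the qualitative behavior the abstract describes.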

    Quantum Algorithms for the Triangle Problem

    We present two new quantum algorithms that either find a triangle (a copy of $K_3$) in an undirected graph $G$ on $n$ nodes, or reject if $G$ is triangle-free. The first algorithm uses combinatorial ideas with Grover search and makes $\tilde{O}(n^{10/7})$ queries. The second algorithm uses $\tilde{O}(n^{13/10})$ queries, and it is based on a design concept of Ambainis~\cite{amb04} that incorporates the benefits of quantum walks into Grover search~\cite{gro96}. The first algorithm uses only $O(\log n)$ qubits in its quantum subroutines, whereas the second one uses $O(n)$ qubits. The Triangle Problem was first treated in~\cite{bdhhmsw01}, where an algorithm with $O(n+\sqrt{nm})$ query complexity was presented, where $m$ is the number of edges of $G$.
    Comment: Several typos are fixed, and full proofs are included. Full version of the paper accepted to SODA'0
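    For contrast with these quantum query counts, the following sketch (our illustration, not part of the paper) implements the naive classical baseline in the same edge-query model: read all $n(n-1)/2$ adjacency-matrix entries, i.e. $\Theta(n^2)$ queries, then scan the triples.

        from itertools import combinations

        def classical_triangle(adj_query, n):
            """Naive classical baseline: query every pair once (n(n-1)/2 queries),
            then scan all triples for a triangle.  The quantum algorithms in the
            paper get by with roughly n^{1.3} queries."""
            edges = {(u, v): adj_query(u, v) for u, v in combinations(range(n), 2)}
            for u, v, w in combinations(range(n), 3):
                if edges[(u, v)] and edges[(u, w)] and edges[(v, w)]:
                    return (u, v, w)
            return None                        # graph is triangle-free

        # Example: a 5-cycle with one chord; the chord closes the triangle (0, 1, 2).
        edge_set = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (0, 2)}
        print(classical_triangle(lambda u, v: (u, v) in edge_set, 5))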

    Permissionless Clock Synchronization with Public Setup

    The permissionless clock synchronization problem asks how a population of parties can maintain a system-wide synchronized clock while their participation rate fluctuates -- possibly very widely -- over time. The underlying assumption is that parties experience the passage of time at roughly the same speed, but they may disengage from and re-engage with the protocol following arbitrary (and even adversarially chosen) participation patterns. This (classical) problem has received renewed attention due to the advent of blockchain protocols, and it has recently been solved in the proof-of-stake setting, i.e., when parties are assumed to have access to a trusted PKI setup [Badertscher et al., Eurocrypt '21]. In this work, we present the first proof-of-work (PoW)-based permissionless clock synchronization protocol. Our construction assumes a public setup (e.g., a CRS) and relies on an honest majority of computational power that, for the first time, is described in a fine-grained timing model that does not rely on a global clock exporting the current time to all parties. As a secondary result of independent interest, our protocol gives rise to the first PoW-based ledger consensus protocol that does not rely on an external clock for the time-stamping of transactions and the adjustment of the PoW difficulty.
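    The timing assumption can be pictured with a toy simulation (our illustration only; the median-resync rule below is a generic heuristic and is not the paper's PoW-based construction): parties' local clocks advance at slightly different but bounded rates, so without some form of resynchronization their reported times drift apart without bound.

        import random
        import statistics

        def clock_spread(resync_every=None, n=7, steps=50_000, max_skew=0.01, seed=1):
            """Parties' clocks advance at rates in [1-max_skew, 1+max_skew] per round.
            Optionally, every `resync_every` rounds everyone adopts the median clock
            (a generic heuristic, for illustration only).  Returns the final gap
            between the fastest and slowest clock."""
            rng = random.Random(seed)
            rates = [1 + rng.uniform(-max_skew, max_skew) for _ in range(n)]
            clocks = [0.0] * n
            for t in range(1, steps + 1):
                clocks = [c + r for c, r in zip(clocks, rates)]
                if resync_every and t % resync_every == 0:
                    clocks = [statistics.median(clocks)] * n
            return max(clocks) - min(clocks)

        print(f"no resync: spread {clock_spread():.1f} rounds")
        print(f"resync every 100 rounds: spread {clock_spread(resync_every=100):.1f} rounds")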

    Decoupling with unitary approximate two-designs

    Consider a bipartite system, of which one subsystem, A, undergoes a physical evolution separated from the other subsystem, R. One may ask under which conditions this evolution destroys all initial correlations between the subsystems A and R, i.e. decouples the subsystems. A quantitative answer to this question is provided by decoupling theorems, which have been developed recently in the area of quantum information theory. This paper builds on preceding work, which shows that decoupling is achieved if the evolution on A consists of a typical unitary, chosen with respect to the Haar measure, followed by a process that adds sufficient decoherence. Here, we prove a generalized decoupling theorem for the case where the unitary is chosen from an approximate two-design. A main implication of this result is that decoupling is physical, in the sense that it occurs already for short sequences of random two-body interactions, which can be modeled as efficient circuits. Our decoupling result is independent of the dimension of the R system, which shows that approximate 2-designs are appropriate for decoupling even if the dimension of this system is large.
    Comment: Published version
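    A small numerical sketch of the phenomenon (our illustration; it uses a fully Haar-random unitary rather than the approximate two-designs the paper is about, and the qubit counts are arbitrary choices): one qubit of A starts maximally entangled with the reference R, a random unitary is applied to A, most of A is then traced out, and we measure how close the kept qubit together with R is to the product of maximally mixed states.

        import numpy as np

        def haar_unitary(d, rng):
            """Haar-random d x d unitary via QR of a complex Ginibre matrix."""
            z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
            q, r = np.linalg.qr(z)
            return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the phases of Q's columns

        def trace_distance(x, y):
            """(1/2) ||x - y||_1 for Hermitian matrices."""
            return 0.5 * np.abs(np.linalg.eigvalsh(x - y)).sum()

        def decoupling_trial(n_discard, rng):
            """One qubit of A is maximally entangled with a reference qubit R.  Apply a
            Haar-random unitary to all of A, trace out n_discard qubits of A, and return
            the trace distance of the remaining (qubit, R) state from I/4 (fully decoupled)."""
            d_keep, d_disc, d_r = 2, 2 ** n_discard, 2
            d_a = d_keep * d_disc
            psi = np.zeros(d_a * d_r, dtype=complex)        # ordering: (kept qubit, discarded, R)
            psi[0] = psi[d_disc * d_r + 1] = 1 / np.sqrt(2)  # (|0,0..0,0> + |1,0..0,1>)/sqrt(2)
            psi = (haar_unitary(d_a, rng) @ psi.reshape(d_a, d_r)).reshape(-1)
            rho = np.outer(psi, psi.conj())
            rho = rho.reshape(d_keep, d_disc, d_r, d_keep, d_disc, d_r)
            sigma = np.einsum('iajkal->ijkl', rho).reshape(d_keep * d_r, d_keep * d_r)
            return trace_distance(sigma, np.eye(4) / 4)

        rng = np.random.default_rng(0)
        for n in (2, 5, 8):                                 # more discarded qubits -> better decoupling
            dists = [decoupling_trial(n, rng) for _ in range(20)]
            print(f"discard {n} qubits of A: mean trace distance {np.mean(dists):.3f}")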

    Cross-input Amortization Captures the Diffuse Adversary

    Koutsoupias and Papadimitriou recently raised the question of how well deterministic on-line paging algorithms can do against a certain class of adversarially biased random inputs. Such an input is given in an on-line fashion; the adversary determines the next request probabilistically, subject to the constraint that no page may be requested with probability more than a fixed $\epsilon > 0$. In this paper, we answer their question by estimating, within a factor of two, the optimal competitive ratio of any deterministic on-line strategy against this adversary. We further analyze randomized on-line strategies, obtaining upper and lower bounds within a factor of two. These estimates reveal the qualitative changes as $\epsilon$ ranges continuously from 1 (the standard model) towards 0 (a severely handicapped adversary). The key to our upper bounds is a novel charging scheme that is appropriate for adversarially biased random inputs. The scheme adjusts the costs of each input so that the expected cost of a random input is unchanged, but working with adjusted costs, we can obtain worst-case bounds on a per-input basis. This lets us use worst-case analysis techniques while still thinking of some of the costs as expected costs.
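    To make the charging idea concrete (our paraphrase, not notation from the paper): write $\mathrm{cost}(\sigma)$ for the algorithm's cost on request sequence $\sigma$ and $\widehat{\mathrm{cost}}(\sigma)$ for its adjusted cost. The scheme is arranged so that $\mathbb{E}_{\sigma}[\widehat{\mathrm{cost}}(\sigma)] = \mathbb{E}_{\sigma}[\mathrm{cost}(\sigma)]$ for every distribution the diffuse adversary may use, while the adjusted cost admits a per-sequence bound of the form $\widehat{\mathrm{cost}}(\sigma) \le c \cdot \mathrm{OPT}(\sigma)$; taking expectations then yields $\mathbb{E}[\mathrm{cost}] \le c \cdot \mathbb{E}[\mathrm{OPT}]$, i.e. a competitive ratio of $c$ against the diffuse adversary.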