
    The K-Server Dual and Loose Competitiveness for Paging

    This paper has two results. The first is based on the surprising observation that the well-known ``least-recently-used'' paging algorithm and the ``balance'' algorithm for weighted caching are linear-programming primal-dual algorithms. This observation leads to a strategy (called ``Greedy-Dual'') that generalizes them both and has an optimal performance guarantee for weighted caching. For the second result, the paper presents empirical studies of paging algorithms, documenting that in practice, on ``typical'' cache sizes and sequences, the performance of paging strategies is much better than their worst-case analyses in the standard model suggest. The paper then presents theoretical results that support and explain this. For example: on any input sequence, for almost all cache sizes, either the performance guarantee of least-recently-used is O(log k) or the fault rate (in an absolute sense) is insignificant. Both of these results are strengthened and generalized in ``On-line File Caching'' (1998). Comment: conference version: "On-Line Caching as Cache Size Varies", SODA (1991).
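    The abstract names the strategy but not its mechanics. As a rough illustration, here is a minimal Python sketch of a Greedy-Dual-style credit/eviction rule for weighted caching with unit-size pages; the function and parameter names (greedy_dual, requests, cost, k) are ours, and the actual algorithm allows more flexible credit updates than the fixed reset used here.

```python
def greedy_dual(requests, cost, k):
    """Illustrative sketch of a Greedy-Dual-style weighted-caching policy.

    requests: sequence of page ids; cost[p]: non-negative retrieval cost of page p;
    k: cache capacity in (unit-size) pages.  Returns the total retrieval cost paid.
    """
    credit = {}                      # credit maintained for each cached page
    total = 0
    for p in requests:
        if p in credit:
            credit[p] = cost[p]      # hit: refresh the page's credit (one valid choice)
            continue
        total += cost[p]             # miss: pay the retrieval cost
        if len(credit) >= k:
            victim = min(credit, key=credit.get)   # page whose credit runs out first
            m = credit.pop(victim)                 # evict it ...
            for q in credit:
                credit[q] -= m                     # ... and charge m against every other credit
        credit[p] = cost[p]          # cache the requested page with credit equal to its cost
    return total
```

    With all retrieval costs equal to 1, uniform subtraction preserves the order of credits and each request resets a credit to the maximum, so the minimum-credit page is the least recently used one and the rule specializes to LRU; that specialization is the connection the abstract draws.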

    On-Line File Caching

    In the on-line file-caching problem, the input is a sequence of requests for files, given on-line (one at a time). Each file has a non-negative size and a non-negative retrieval cost. The problem is to decide which files to keep in a fixed-size cache so as to minimize the sum of the retrieval costs for files that are not in the cache when requested. The problem arises in web caching by browsers and by proxies. This paper describes a natural generalization of LRU called Landlord and gives an analysis showing that it has an optimal performance guarantee (among deterministic on-line algorithms). The paper also gives an analysis of the algorithm in a so-called ``loosely'' competitive model, showing that on a ``typical'' cache size, either the performance guarantee is O(1) or the total retrieval cost is insignificant. Comment: ACM-SIAM Symposium on Discrete Algorithms (1998).
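    As with Greedy-Dual above, a minimal sketch may help make the credit ("rent") mechanism concrete. The version below follows the usual description of Landlord, but the parameter names are ours, and the real algorithm permits laxer credit resets on a hit than the fixed one used here.

```python
def landlord(requests, size, cost, capacity):
    """Illustrative sketch of a Landlord-style file-caching policy.

    requests: sequence of file ids; size[f], cost[f]: non-negative size and
    retrieval cost of file f; capacity: total cache size.  Returns the total
    retrieval cost paid for files not in the cache when requested.
    """
    credit, used, total = {}, 0, 0
    for g in requests:
        if g in credit:
            credit[g] = cost[g]                     # hit: refresh the file's credit
            continue
        total += cost[g]                            # miss: pay the retrieval cost
        if size[g] > capacity:
            continue                                # a file larger than the cache is never cached
        while used + size[g] > capacity:            # make room by charging "rent"
            f = min(credit, key=lambda x: credit[x] / size[x])
            rent = credit[f] / size[f]              # rent per unit of size until f's credit runs out
            used -= size[f]
            del credit[f]                           # evict the first file whose credit reaches zero
            for h in credit:
                credit[h] -= rent * size[h]         # charge every remaining file rent in proportion to its size
        credit[g] = cost[g]
        used += size[g]
    return total
```

    When every file has size 1, this reduces to the Greedy-Dual rule sketched earlier, which is the sense in which Landlord generalizes LRU.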

    On Dynamic Optimality for Binary Search Trees

    Does there exist an O(1)-competitive (self-adjusting) binary search tree (BST) algorithm? This is a well-studied problem. A simple offline BST algorithm, GreedyFuture, was proposed independently by Lucas and Munro, and they conjectured it to be O(1)-competitive. Recently, Demaine et al. gave a geometric view of the BST problem. This view allowed them to give an online algorithm, GreedyArb, with the same cost as GreedyFuture. However, no o(n) bound on the competitive ratio of GreedyArb was known. In this paper we make progress towards proving an O(1) competitive ratio for GreedyArb by showing that it is O(log n)-competitive.

    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction. Comment: 33 pages, 2 figures; full version of the STOC 1996 paper titled "Modular competitiveness for distributed algorithms".
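    For readers unfamiliar with the primitive, a collect is exactly the operation described above: one process gathering the current values of all n registers. Below is a toy, thread-safe sketch of that interface only (not a competitive implementation; the class and method names are ours).

```python
import threading

class SharedArray:
    """Toy model of n shared registers supporting write and collect (illustrative only)."""

    def __init__(self, n):
        self.values = [None] * n
        self.lock = threading.Lock()      # coarse lock keeps the toy example simple

    def write(self, i, v):
        with self.lock:
            self.values[i] = v            # process i stores its latest value in register i

    def collect(self):
        # A naive collect: read every register once.  Competitive collect algorithms
        # save work by letting concurrent collects share reads; this sketch only
        # pins down the operation's interface and semantics.
        with self.lock:
            return list(self.values)
```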

    Improved Online Algorithm for Weighted Flow Time

    We discuss one of the most fundamental scheduling problems: processing jobs on a single machine to minimize the weighted flow time (weighted response time). Our main result is an O(log P)-competitive algorithm, where P is the maximum-to-minimum processing time ratio, improving upon the O(log^2 P)-competitive algorithm of Chekuri, Khanna and Zhu (STOC 2001). We also design an O(log D)-competitive algorithm, where D is the maximum-to-minimum density ratio of jobs. Finally, we show how to combine these results with the result of Bansal and Dhamdhere (SODA 2003) to achieve an O(log(min(P, D, W)))-competitive algorithm (where W is the maximum-to-minimum weight ratio), without knowing P, D, W in advance. As shown by Bansal and Chan (SODA 2009), no constant-competitive algorithm is achievable for this problem. Comment: 20 pages, 4 figures.
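    For concreteness, a small helper makes the objective and the parameters P, D, W explicit. This is a sketch under the usual conventions (flow time is completion time minus release time; a job's density is its weight divided by its processing time), and the function names are ours.

```python
def weighted_flow_time(jobs, completion):
    """Weighted flow time of a schedule.

    jobs: list of (release, processing, weight) tuples; completion[j]: the time
    at which job j finishes under the schedule being evaluated.
    """
    return sum(w * (completion[j] - r) for j, (r, _, w) in enumerate(jobs))

def instance_ratios(jobs):
    """The ratios P (processing times), D (densities), W (weights) used above."""
    procs = [p for _, p, _ in jobs]
    weights = [w for _, _, w in jobs]
    densities = [w / p for _, p, w in jobs]
    return (max(procs) / min(procs),
            max(densities) / min(densities),
            max(weights) / min(weights))
```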

    New Bounds for Randomized List Update in the Paid Exchange Model

    We study the fundamental list update problem in the paid exchange model P^d. This cost model was introduced by Manasse, McGeoch and Sleator [M.S. Manasse et al., 1988] and Reingold, Westbrook and Sleator [N. Reingold et al., 1994]. Here the given list of items may only be rearranged using paid exchanges; each swap of two adjacent items in the list incurs a cost of d. Free exchanges of items are not allowed. The model is motivated by the fact that, when executing search operations on a data structure, key comparisons are less expensive than item swaps. We develop a new randomized online algorithm that achieves an improved competitive ratio against oblivious adversaries. For large d, the competitiveness tends to 2.2442. Technically, the analysis of the algorithm relies on a new approach of partitioning request sequences and charging expected cost. Furthermore, we devise lower bounds on the competitiveness of randomized algorithms against oblivious adversaries. No such lower bounds were known before. Specifically, we prove that no randomized online algorithm can achieve a competitive ratio smaller than 2 in the partial cost model, where an access to the i-th item in the current list incurs a cost of i-1 rather than i. All algorithms proposed in the literature attain their competitiveness in the partial cost model. Furthermore, we show that no randomized online algorithm can achieve a competitive ratio smaller than 1.8654 in the standard full cost model. Again, the lower bounds hold for large d.
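    The cost model is easy to pin down in code. The sketch below charges accesses and paid exchanges as described above, using Move-To-Front purely as a placeholder policy; it is not the randomized algorithm of the paper, and the function name is ours.

```python
def serve_with_mtf(requests, initial_list, d, partial_cost=False):
    """Cost accounting for list update in the paid-exchange model P^d.

    An access to the item at (1-based) position i costs i in the full cost model
    and i - 1 in the partial cost model; each swap of two adjacent items costs d.
    Move-To-Front is used only to have a concrete policy to charge.
    """
    lst = list(initial_list)
    total = 0
    for x in requests:
        i = lst.index(x)                          # 0-based position, so 1-based position is i + 1
        total += i if partial_cost else i + 1     # access cost
        total += d * i                            # i paid exchanges move x to the front
        lst.insert(0, lst.pop(i))                 # perform the move
    return total
```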

    Online Computation with Untrusted Advice

    The advice model of online computation captures the setting in which the online algorithm is given some partial information concerning the request sequence. This paradigm makes it possible to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike real life, in which advice is a recommendation that we can choose to follow or to ignore based on trustworthiness, in the current advice model the online algorithm treats it as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case tradeoffs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the tradeoff between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.
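    To make the robustness/consistency trade-off concrete, here is a sketch of a threshold strategy for ski rental with one bit of possibly malicious advice, in the spirit of the Purohit et al. [NeurIPS 2018] framework that the abstract improves upon. It is not the Pareto-optimal algorithm of the paper; the function names and the trust parameter lam are ours.

```python
import math

def ski_rental_threshold(buy_cost, advice_says_buy, lam):
    """Pick the day on which to stop renting (at 1 per day) and buy skis.

    lam in (0, 1] controls how much the single advice bit is trusted: small lam
    follows the advice aggressively, lam = 1 ignores it and recovers the classical
    break-even rule of buying on day buy_cost.
    """
    if advice_says_buy:
        return max(1, math.ceil(lam * buy_cost))   # buy early if told the season will be long
    return max(1, math.ceil(buy_cost / lam))       # otherwise keep renting much longer

def total_cost(buy_cost, threshold_day, actual_days):
    """Cost incurred: rent until threshold_day, then buy (if the season lasts that long)."""
    if actual_days < threshold_day:
        return actual_days                          # season ended before we bought
    return (threshold_day - 1) + buy_cost           # rented threshold_day - 1 days, then bought
```

    Trusting the advice more (smaller lam) lowers the cost when the advice is honest but raises the worst-case cost when it is adversarial, which is exactly the kind of trade-off the abstract's Pareto-optimal algorithms pin down.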