93 research outputs found

    On Randomized Memoryless Algorithms for the Weighted k-server Problem

    The weighted k-server problem is a generalization of the k-server problem in which the cost of moving a server of weight $\beta_i$ through a distance $d$ is $\beta_i \cdot d$. The weighted server problem on uniform spaces models caching where caches have different write costs. We prove tight bounds on the performance of randomized memoryless algorithms for this problem on uniform metric spaces. We prove that there is an $\alpha_k$-competitive memoryless algorithm for this problem, where $\alpha_k = \alpha_{k-1}^2 + 3\alpha_{k-1} + 1$ and $\alpha_1 = 1$. On the other hand, we also prove that no randomized memoryless algorithm can have a competitive ratio better than $\alpha_k$. To prove the upper bound of $\alpha_k$, we develop a framework to bound from above the competitive ratio of any randomized memoryless algorithm for this problem. The key technical contribution is a method for working with potential functions defined implicitly as the solution of a linear system. The result is robust in the sense that a small change in the probabilities used by the algorithm results in a small change in the upper bound on the competitive ratio. The above result has two important implications. Firstly, it yields an $\alpha_k$-competitive memoryless algorithm for the weighted k-server problem on uniform spaces; this is the first memoryless competitive algorithm for $k > 2$. Secondly, it helps us prove that the Harmonic algorithm, which chooses probabilities in inverse proportion to weights, has a competitive ratio of $k\alpha_k$.

    Comment: Published at the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013).
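    The recurrence for $\alpha_k$ is easy to evaluate directly; the sketch below (function names are ours, not the paper's) also illustrates the Harmonic rule of choosing servers with probability inversely proportional to weight:

```python
def alpha(k: int) -> int:
    """Evaluate alpha_k = alpha_{k-1}^2 + 3*alpha_{k-1} + 1 with alpha_1 = 1,
    the tight competitive ratio for randomized memoryless algorithms on
    uniform metrics stated in the abstract above."""
    a = 1  # alpha_1 = 1
    for _ in range(k - 1):
        a = a * a + 3 * a + 1
    return a

def harmonic_probabilities(weights):
    """Hypothetical helper: the Harmonic algorithm moves server i with
    probability inversely proportional to its weight beta_i."""
    inv = [1.0 / b for b in weights]
    total = sum(inv)
    return [x / total for x in inv]

print([alpha(k) for k in range(1, 5)])     # [1, 5, 41, 1805]
print(harmonic_probabilities([1.0, 4.0]))  # [0.8, 0.2]
```

    Note how quickly the bound grows: $\alpha_k$ is doubly exponential in $k$.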

    Any-Order Online Interval Selection

    We consider the problem of online interval scheduling on a single machine, where intervals arrive online in an order chosen by an adversary, and the algorithm must output a set of non-conflicting intervals. Traditionally in scheduling theory, it is assumed that intervals arrive in order of increasing start times. We drop that assumption and allow intervals to arrive in any possible order; we call this variant any-order interval selection (AOIS). We assume that some online acceptances can be revoked, but a feasible solution must always be maintained. For unweighted intervals and deterministic algorithms, this problem is unbounded. Under the assumption that there are at most $k$ different interval lengths, we give a simple algorithm that achieves a competitive ratio of $2k$ and show that it is optimal among deterministic algorithms and among a restricted class of randomized algorithms we call memoryless, contributing to an open question of Adler and Azar 2003, namely whether a randomized algorithm without access to history can achieve a constant competitive ratio. We connect our model to the problem of call control on the line and show how the algorithms of Garay et al. 1997 can be applied to our setting, resulting in an optimal algorithm for the case of proportional weights. We also discuss the case of intervals with arbitrary weights and show how to convert the single-length algorithm of Fung et al. 2014 into a classify-and-randomly-select algorithm that achieves a competitive ratio of $2k$. Finally, we consider the case of intervals arriving in a random order and show that for single-length instances, a one-directional algorithm (i.e., one that replaces intervals in one direction only) is the only deterministic memoryless algorithm that can possibly benefit from random arrivals.

    Comment: 19 pages, 11 figures
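    The classify-and-randomly-select reduction mentioned in the abstract has a simple shape. A hedged sketch, assuming a single-length algorithm given as a black box (names and interface are ours, for illustration only):

```python
import random

def classify_and_select(lengths, single_length_alg, rng=random):
    """Sketch of classify-and-randomly-select: with at most k distinct
    interval lengths, pick one length class uniformly at random and run the
    given single-length algorithm on that class only, ignoring the rest.
    If single_length_alg is c-competitive, the combined algorithm is
    (c * k)-competitive in expectation."""
    chosen = rng.choice(sorted(lengths))

    def online_step(interval):
        start, end = interval
        if end - start == chosen:   # interval belongs to the chosen class
            return single_length_alg(interval)
        return None                 # ignore all other length classes

    return online_step
```

    With a 2-competitive single-length algorithm as the black box, this yields the $2k$ bound stated above.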

    The Generalized Work Function Algorithm Is Competitive for the Generalized 2-Server Problem

    The generalized 2-server problem is an online optimization problem where a sequence of requests has to be served at minimal cost. Requests arrive one by one and need to be served instantly by at least one of two servers. We consider the general model where the cost function of the two servers may be different. Formally, each server moves in its own metric space and a request consists of one point in each metric space. It is served by moving one of the two servers to its request point. Requests have to be served without knowledge of future requests. The objective is to minimize the total traveled distance. The special case where both servers move on the real line is known as the CNN problem. We show that the generalized work function algorithm, $\mathrm{WFA}_{\lambda}$, is constant competitive for the generalized 2-server problem. Further, we give an outline for a possible extension to $k \geqslant 2$ servers and discuss the applicability of our techniques and of the work function algorithm in general. We conclude with a discussion of several open problems in online optimization.
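    On a finite metric, one step of a generalized work function algorithm can be sketched as follows (our own illustrative code, not the paper's construction; a configuration places one server in each space, and $\mathrm{WFA}_{\lambda}$ moves to a serving configuration minimizing $\lambda \cdot w(s)$ plus the movement cost):

```python
from itertools import product

def wfa_step(w, cur, request, points1, points2, d1, d2, lam=1.0):
    """One step of WFA_lambda for the generalized 2-server problem on finite
    metrics. A configuration (x, y) places server 1 at x in metric space M1
    and server 2 at y in M2; d1, d2 are distance tables (dicts of dicts).
    A request (r1, r2) is served by any configuration with x == r1 or y == r2.

    w maps configurations to work function values and is updated in place:
        w_t(s) = min over serving t of [ w_{t-1}(t) + d(t, s) ].
    Returns the configuration WFA_lambda moves to."""
    r1, r2 = request
    configs = list(product(points1, points2))
    serving = [s for s in configs if s[0] == r1 or s[1] == r2]
    # Update the work function against the new request.
    new_w = {s: min(w[t] + d1[t[0]][s[0]] + d2[t[1]][s[1]] for t in serving)
             for s in configs}
    w.update(new_w)
    # Move to a serving configuration minimizing lam * w(s) + movement cost.
    return min(serving,
               key=lambda s: lam * w[s] + d1[cur[0]][s[0]] + d2[cur[1]][s[1]])
```

    The work function table must be initialized to $w_0(s) = d(s_0, s)$ from the start configuration $s_0$; on finite spaces this step costs time quadratic in the number of configurations.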

    The randomized server problem

    In the k-server problem there are k ≥ 2 identical servers located at k points in a metric space M. If there is a request to a point r ∈ M, one of the servers must be moved to the request point in order to serve the request; the cost of this service is the distance between the positions of that server before and after the service. A k-server algorithm A must decide which server to move at each step, with the goal of minimizing the total service cost. Competitiveness makes sense as a concept when A lacks timely access to all input data: we consider the version of the problem where requests must be served online, i.e., the algorithm must decide which server to move without knowledge of future requests. Randomization is a strong tool for deriving algorithms with better competitiveness. The main contributions of this thesis are: (1) an explicit, detailed proof of the 2-competitiveness of the Random Slack Algorithm, which has never been given before; we note that Random Slack is a trackless algorithm. (2) An essay-style description of a new concept called the knowledge state approach, recently developed by Bein, Larmore, and Reischuk. (3) Optimally competitive randomized algorithms for paging with 2 and 3 caches and few bookmarks; we note that the paging problem is a special case of the server problem, and that it is desirable to minimize the number of bookmarks, as bookmarks pose a considerable challenge in real-world applications such as cache management of pages on the world wide web. Furthermore, the thesis summarizes a number of basic results for both the randomized and the deterministic server problem.

    On-line algorithms for the K-server problem and its variants.

    by Chi-ming Wat. Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 77-82).
    Chapter 1 --- Introduction --- p.1
    Chapter 1.1 --- Performance analysis of on-line algorithms --- p.2
    Chapter 1.2 --- Randomized algorithms --- p.4
    Chapter 1.3 --- Types of adversaries --- p.5
    Chapter 1.4 --- Overview of the results --- p.6
    Chapter 2 --- The k-server problem --- p.8
    Chapter 2.1 --- Introduction --- p.8
    Chapter 2.2 --- Related Work --- p.9
    Chapter 2.3 --- The Evolution of Work Function Algorithm --- p.12
    Chapter 2.4 --- Definitions --- p.16
    Chapter 2.5 --- The Work Function Algorithm --- p.18
    Chapter 2.6 --- The Competitive Analysis --- p.20
    Chapter 3 --- The weighted k-server problem --- p.27
    Chapter 3.1 --- Introduction --- p.27
    Chapter 3.2 --- Related Work --- p.29
    Chapter 3.3 --- Fiat and Ricklin's Algorithm --- p.29
    Chapter 3.4 --- The Work Function Algorithm --- p.32
    Chapter 3.5 --- The Competitive Analysis --- p.35
    Chapter 4 --- The Influence of Lookahead --- p.41
    Chapter 4.1 --- Introduction --- p.41
    Chapter 4.2 --- Related Work --- p.42
    Chapter 4.3 --- The Role of l-lookahead --- p.43
    Chapter 4.4 --- The LRU Algorithm with l-lookahead --- p.45
    Chapter 4.5 --- The Competitive Analysis --- p.45
    Chapter 5 --- Space Complexity --- p.57
    Chapter 5.1 --- Introduction --- p.57
    Chapter 5.2 --- Related Work --- p.59
    Chapter 5.3 --- Preliminaries --- p.59
    Chapter 5.4 --- The TWO Algorithm --- p.60
    Chapter 5.5 --- Competitive Analysis --- p.61
    Chapter 5.6 --- Remarks --- p.69
    Chapter 6 --- Conclusions --- p.70
    Chapter 6.1 --- Summary of Our Results --- p.70
    Chapter 6.2 --- Recent Results --- p.71
    Chapter 6.2.1 --- The Adversary Models --- p.71
    Chapter 6.2.2 --- On-line Performance-Improvement Algorithms --- p.73
    Chapter A --- Proof of Lemma 1 --- p.75
    Bibliography --- p.77

    Weighted k-Server Bounds via Combinatorial Dichotomies

    The weighted k-server problem is a natural generalization of the k-server problem where each server has a different weight. We consider the problem on uniform metrics, which corresponds to a natural generalization of paging. Our main result is a doubly exponential lower bound on the competitive ratio of any deterministic online algorithm, which essentially matches the known upper bounds for the problem and closes a large and long-standing gap. The lower bound is based on relating the weighted k-server problem to a certain combinatorial problem and proving a Ramsey-theoretic lower bound for it. This combinatorial connection also reveals several structural properties of low-cost feasible solutions that serve a sequence of requests. We use this to show that the generalized Work Function Algorithm achieves an almost optimal competitive ratio, and to obtain new refined upper bounds on the competitive ratio for the case of $d$ different weight classes.

    Comment: accepted to FOCS'17