52 research outputs found

    Derandomization of Online Assignment Algorithms for Dynamic Graphs

    This paper analyzes several online algorithms for the problem of assigning weights to edges in a fully connected bipartite graph so as to minimize the overall cost while satisfying constraints. Edges in this graph may disappear and reappear over time. The performance of these algorithms is measured using simulations. The paper also attempts to derandomize the randomized online algorithm for this problem.
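
    The setting above admits a simple deterministic baseline: match each arriving request to the cheapest producer whose edge currently exists. The sketch below is a minimal illustration of that greedy rule, not the paper's algorithm; the names greedy_assign, cost, and available are hypothetical stand-ins for the edge and node attributes the abstract refers to.

```python
# Minimal greedy sketch of online assignment in a dynamic complete
# bipartite graph. Edges may be temporarily unavailable; each arriving
# consumer request is matched to the cheapest producer reachable now.

def greedy_assign(requests, producers, cost, available):
    """requests: consumer ids arriving online.
    producers: list of producer ids.
    cost(p, c): weight of edge (p, c).
    available(p, c): whether edge (p, c) currently exists."""
    assignment, total = {}, 0.0
    for c in requests:
        candidates = [p for p in producers if available(p, c)]
        if not candidates:          # the edge set may have shrunk; skip
            continue
        best = min(candidates, key=lambda p: cost(p, c))
        assignment[c] = best
        total += cost(best, c)
    return assignment, total
```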

    Online Assignment Algorithms for Dynamic Bipartite Graphs

    This paper analyzes the problem of incrementally assigning weights to edges in a dynamic complete bipartite graph consisting of producer and consumer nodes. The objective is to minimize the overall cost while satisfying certain constraints, where the cost and constraints are functions of attributes of the edges, the nodes, and the online service requests. The novelty of this work is that it models real-time distributed resource allocation by way of this theoretical problem. The paper studies variants of the assignment problem in which the edges, producers, and consumers can disappear and reappear, or their attributes can change over time. Primal-dual algorithms are used to solve these problems, and their competitive ratios are evaluated.
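
    The abstract names primal-dual algorithms but gives no detail, so the following is only a generic sketch of the online primal-dual pattern (in the style of Buchbinder and Naor) for fractionally covering each arriving request; the function names, the coverage constraint, and the eps parameter are all illustrative assumptions, not the paper's formulation.

```python
# Generic online primal-dual sketch: when a request arrives, raise its
# dual variable and grow the fractional edge values multiplicatively
# until the request is covered (its edge values sum to 1).

def primal_dual_cover(requests, producers, cost, eps=0.1):
    x = {}                                   # primal: fractional edge values
    duals = []                               # one dual value per request
    for c in requests:
        y = 0.0
        edges = [(p, c) for p in producers]
        for e in edges:
            x.setdefault(e, 0.0)
        while sum(x[e] for e in edges) < 1.0:
            y += eps                         # raise the dual
            for p, _ in edges:
                w = max(cost(p, c), eps)     # guard against zero cost
                # multiplicative growth scaled by the edge's cost
                x[(p, c)] = x[(p, c)] * (1 + eps / w) + eps / (len(edges) * w)
        duals.append(y)
    return x, duals
```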

    The K-Server Dual and Loose Competitiveness for Paging

    This paper has two results. The first is based on the surprising observation that the well-known "least-recently-used" paging algorithm and the "balance" algorithm for weighted caching are linear-programming primal-dual algorithms. This observation leads to a strategy (called "Greedy-Dual") that generalizes them both and has an optimal performance guarantee for weighted caching. For the second result, the paper presents empirical studies of paging algorithms, documenting that in practice, on "typical" cache sizes and sequences, the performance of paging strategies is much better than their worst-case analyses in the standard model suggest. The paper then presents theoretical results that support and explain this. For example: on any input sequence, for almost all cache sizes, either the performance guarantee of least-recently-used is O(log k) or the fault rate (in an absolute sense) is insignificant. Both of these results are strengthened and generalized in "On-line File Caching" (1998). Comment: conference version "On-Line Caching as Cache Size Varies", SODA 1991.
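
    Greedy-Dual has a compact credit-based statement: every cached page carries a credit equal to its fetch cost, evictions subtract the minimum credit from all cached pages and remove a zero-credit page, and a hit refreshes the credit. The sketch below follows that formulation; depending on how ties among zero-credit pages are broken, it specializes to LRU-like or Balance-like behavior, which is the generalization the abstract describes. Treat it as an illustrative sketch rather than the paper's exact pseudocode.

```python
# Sketch of the Greedy-Dual strategy for weighted caching.

def greedy_dual(sequence, k, cost):
    """sequence: stream of page ids; k: cache size; cost(p): fetch cost."""
    credit = {}                      # cached page -> remaining credit
    total = 0.0
    for p in sequence:
        if p in credit:
            credit[p] = cost(p)      # hit: refresh the page's credit
            continue
        if len(credit) >= k:         # fault with a full cache: evict
            m = min(credit.values())
            for q in credit:
                credit[q] -= m       # the "dual" charge to every page
            victim = next(q for q, cr in credit.items() if cr == 0.0)
            del credit[victim]
        credit[p] = cost(p)          # load p and pay its cost
        total += cost(p)
    return total
```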

    Fully dynamic all-pairs shortest paths with worst-case update-time revisited

    We revisit the classic problem of dynamically maintaining shortest paths between all pairs of nodes of a directed weighted graph. The allowed updates are insertions and deletions of nodes and their incident edges. We give worst-case guarantees on the time needed to process a single update (in contrast to related results, the update time is not amortized over a sequence of updates). Our main result is a simple randomized algorithm that, for any parameter c > 1, has a worst-case update time of O(c n^{2+2/3} log^{4/3} n) and answers distance queries correctly with probability 1 - 1/n^c, against an adaptive online adversary, if the graph contains no negative cycle. The best deterministic algorithm, by Thorup [STOC 2005], has a worst-case update time of Õ(n^{2+3/4}) and assumes non-negative weights. Ours is the first improvement for this problem in more than a decade. Conceptually, our algorithm shows that randomization, along with a more direct approach, can provide better bounds. Comment: to be presented at the Symposium on Discrete Algorithms (SODA) 2017.
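
    A success probability of the form 1 - 1/n^c is the typical signature of hitting-set sampling, a standard randomized ingredient in dynamic shortest-path algorithms: include each node with probability about (c ln n)/h, so that every fixed path on at least h nodes contains a sampled node with high probability. The sketch below shows only this generic sampling step, with illustrative parameter names; it is not the paper's full algorithm.

```python
import math
import random

# Hitting-set sampling sketch: include each node independently with
# probability ~ (c * ln n) / h. A fixed path on h nodes then avoids the
# sample with probability at most (1 - p)^h <= exp(-c * ln n) = n^(-c).

def sample_hitting_set(nodes, h, c=2.0):
    n = len(nodes)
    p = min(1.0, c * math.log(max(n, 2)) / h)
    return {v for v in nodes if random.random() < p}
```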

    On Randomized Memoryless Algorithms for the Weighted k-server Problem

    The weighted k-server problem is a generalization of the k-server problem in which the cost of moving a server of weight β_i through a distance d is β_i · d. The weighted server problem on uniform spaces models caching where caches have different write costs. We prove tight bounds on the performance of randomized memoryless algorithms for this problem on uniform metric spaces. We prove that there is an α_k-competitive memoryless algorithm for this problem, where α_k = α_{k-1}^2 + 3α_{k-1} + 1 and α_1 = 1. On the other hand, we also prove that no randomized memoryless algorithm can have a competitive ratio better than α_k. To prove the upper bound of α_k, we develop a framework to bound from above the competitive ratio of any randomized memoryless algorithm for this problem. The key technical contribution is a method for working with potential functions defined implicitly as the solution of a linear system. The result is robust in the sense that a small change in the probabilities used by the algorithm results in a small change in the upper bound on the competitive ratio. The above result has two important implications. First, it yields an α_k-competitive memoryless algorithm for the weighted k-server problem on uniform spaces; this is the first memoryless competitive algorithm for k > 2. Second, it helps us prove that the Harmonic algorithm, which chooses probabilities in inverse proportion to weights, has a competitive ratio of k·α_k. Comment: published at the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013).
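
    Two pieces of this abstract are concrete enough to sketch: the recurrence for α_k and the Harmonic rule of moving a server with probability inversely proportional to its weight. The snippet below computes the recurrence and gives one hedged reading of that sampling rule; the function names and interface are illustrative, not the paper's.

```python
import random

# alpha_1 = 1; alpha_k = alpha_{k-1}^2 + 3*alpha_{k-1} + 1, as stated
# in the abstract; e.g. alpha(2) == 5 and alpha(3) == 41.
def alpha(k):
    a = 1.0
    for _ in range(k - 1):
        a = a * a + 3.0 * a + 1.0
    return a

# Harmonic rule sketch: on an uncovered request, move server i with
# probability proportional to 1 / weights[i].
def harmonic_choice(weights):
    inv = [1.0 / w for w in weights]
    r = random.uniform(0.0, sum(inv))
    acc = 0.0
    for i, v in enumerate(inv):
        acc += v
        if r <= acc:
            return i
    return len(weights) - 1          # guard against float round-off
```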