
    Randomization can be as helpful as a glimpse of the future in online computation

    We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. As a consequence, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, if $n$ is the number of requests, we show that: (1) A paging algorithm needs $\Omega(n)$ bits of advice to achieve a competitive ratio better than $H_k = \Omega(\log k)$, where $k$ is the cache size. Previously, it was only known that $\Omega(n)$ bits of advice were necessary to achieve a constant competitive ratio smaller than $5/4$. (2) Every $O(n^{1-\varepsilon})$-competitive vertex coloring algorithm must use $\Omega(n \log n)$ bits of advice. Previously, it was only known that $\Omega(n \log n)$ bits of advice were necessary to be optimal. For certain online problems, including the MTS, $k$-server, paging, list update, and dynamic binary search tree problems, our results imply that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic $O(\log k)$-competitive $k$-server algorithm with advice complexity $o(n)$ if and only if there exists a randomized $O(\log k)$-competitive $k$-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information-theoretic lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP'09].
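    The direction in which advice subsumes randomness can be illustrated with a short sketch; this is a hypothetical illustration under assumptions (the names randomized_alg, best_seed_advice, serve_with_advice, and the seed-length parameter are invented here), not the paper's direct product construction. The oracle, which sees the whole request sequence, writes down the seed on which the randomized algorithm performs best, and the deterministic advice algorithm simply replays that seed.

        import random
        from typing import Callable, Sequence

        def best_seed_advice(randomized_alg: Callable[[random.Random, Sequence], float],
                             requests: Sequence, seed_bits: int) -> int:
            # Oracle side: it knows the full request sequence, so it can try every
            # seed of `seed_bits` bits and return the cheapest one as the advice.
            return min(range(2 ** seed_bits),
                       key=lambda s: randomized_alg(random.Random(s), requests))

        def serve_with_advice(randomized_alg: Callable[[random.Random, Sequence], float],
                              requests: Sequence, advice_seed: int) -> float:
            # Online side: a deterministic algorithm that replays the randomized
            # algorithm with the advised seed and never flips a coin of its own.
            return randomized_alg(random.Random(advice_seed), requests)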

    On the Power of Advice and Randomization for Online Bipartite Matching

    While randomized online algorithms have access to a sequence of uniform random bits, deterministic online algorithms with advice have access to a sequence of advice bits, i.e., bits that are set by an all-powerful oracle prior to the processing of the request sequence. Advice bits are at least as helpful as random bits, but how helpful are they? In this work, we investigate the power of advice bits and random bits for online maximum bipartite matching (MBM). The well-known Karp-Vazirani-Vazirani algorithm is an optimal randomized $(1-\frac{1}{e})$-competitive algorithm for MBM that requires access to $\Theta(n \log n)$ uniform random bits. We show that $\Omega(\log(\frac{1}{\epsilon})\, n)$ advice bits are necessary and $O(\frac{1}{\epsilon^5}\, n)$ sufficient in order to obtain a $(1-\epsilon)$-competitive deterministic advice algorithm. Furthermore, for a large natural class of deterministic advice algorithms, we prove that $\Omega(\log \log \log n)$ advice bits are required in order to improve on the $\frac{1}{2}$-competitiveness of the best deterministic online algorithm, while it is known that $O(\log n)$ bits are sufficient. Last, we give a randomized online algorithm that uses $cn$ random bits, for integers $c \ge 1$, and achieves a competitive ratio that approaches $1-\frac{1}{e}$ very quickly as $c$ increases. For example, if $c = 10$, then the difference between $1-\frac{1}{e}$ and the achieved competitive ratio is less than $0.0002$.
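    As a point of reference, the Karp-Vazirani-Vazirani ("Ranking") algorithm mentioned above can be sketched as follows; this is a minimal sketch in which the graph representation, the function name ranking_mbm, and the string vertex labels are assumptions made for the illustration. The single random permutation it draws is where the $\Theta(n \log n)$ random bits are spent ($\log n! \approx n \log n$).

        import random
        from typing import Dict, List

        def ranking_mbm(offline: List[str], online_arrivals: List[List[str]]) -> Dict[str, int]:
            # Draw one uniformly random permutation ("ranking") of the offline side.
            rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
            matching: Dict[str, int] = {}  # offline vertex -> index of its online partner
            for i, neighbours in enumerate(online_arrivals):
                # online_arrivals[i] lists the offline neighbours of the i-th online vertex;
                # match it to its lowest-ranked still-unmatched neighbour, if any.
                free = [v for v in neighbours if v not in matching]
                if free:
                    matching[min(free, key=rank.__getitem__)] = i
            return matching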

    Online Bin Packing with Advice

    We consider the online bin packing problem under the advice complexity model, where the 'online constraint' is relaxed and an algorithm receives partial information about the future requests. We provide tight upper and lower bounds for the amount of advice an algorithm needs to achieve an optimal packing. We also introduce an algorithm that, when provided with $\log n + o(\log n)$ bits of advice, achieves a competitive ratio of $3/2$ for the general problem. This algorithm is simple and is expected to find real-world applications. We introduce another algorithm that receives $2n + o(n)$ bits of advice and achieves a competitive ratio of $4/3 + \epsilon$. Finally, we provide a lower bound argument that implies that advice of linear size is required for an algorithm to achieve a competitive ratio better than $9/8$.
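    For contrast with the advice algorithms above, the classical advice-free First Fit heuristic is sketched below; this is the textbook baseline, not the paper's $3/2$-competitive or $(4/3+\epsilon)$-competitive advice algorithm, and the function name and unit bin capacity are assumptions.

        from typing import List

        def first_fit(items: List[float], capacity: float = 1.0) -> List[List[float]]:
            # Place each arriving item into the first open bin with enough room,
            # opening a new bin only when no existing bin fits it.
            bins: List[List[float]] = []
            for item in items:
                for b in bins:
                    if sum(b) + item <= capacity:
                        b.append(item)
                        break
                else:
                    bins.append([item])
            return bins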

    On the List Update Problem with Advice

    We study the online list update problem under the advice model of computation. Under this model, an online algorithm receives partial information about the unknown parts of the input in the form of some bits of advice generated by a benevolent offline oracle. We show that advice of linear size is required and sufficient for a deterministic algorithm to achieve an optimal solution or even a competitive ratio better than $15/14$. On the other hand, we show that, surprisingly, two bits of advice are sufficient to break the lower bound of $2$ on the competitive ratio of deterministic online algorithms and achieve a deterministic algorithm with a competitive ratio of $5/3$. In this upper-bound argument, the bits of advice determine the algorithm with the smallest cost among three classical online algorithms: TIMESTAMP and two members of the MTF2 family of algorithms. We also show that MTF2 algorithms are $2.5$-competitive.
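    The structure of the two-bit upper bound can be sketched as follows, under assumptions: only the classical Move-To-Front rule is spelled out as an example candidate, TIMESTAMP and the MTF2 members from the paper are left as placeholders, and the simplified cost model counts only access positions. The idea is that the offline oracle runs all candidates and advises the index of the cheapest one.

        from typing import Callable, Dict, List, Sequence

        def move_to_front(initial: List[str], requests: Sequence[str]) -> int:
            # Serve each request by linear search, then move the item to the front;
            # returns the total access cost (1-based position of each requested item).
            lst, cost = list(initial), 0
            for x in requests:
                i = lst.index(x)
                cost += i + 1
                lst.insert(0, lst.pop(i))
            return cost

        def serve_with_advice(advice: int,
                              candidates: Dict[int, Callable[[List[str], Sequence[str]], int]],
                              initial: List[str], requests: Sequence[str]) -> int:
            # Two advice bits (advice in {0, 1, 2}) name which candidate algorithm
            # the offline oracle found to be cheapest on the whole request sequence.
            return candidates[advice](initial, requests)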

    Randomized online computation with high probability guarantees

    We study the relationship between the competitive ratio and the tail distribution of randomized online minimization problems. To this end, we define a broad class of online problems that includes some of the well-studied problems like paging, $k$-server, and metrical task systems on finite metrics, and show that for these problems it is possible to obtain, given an algorithm with a constant expected competitive ratio, another algorithm that achieves the same solution quality up to an arbitrarily small constant error $a$ with high probability; the "high probability" statement is in terms of the optimal cost. Furthermore, we show that our assumptions are tight in the sense that removing any of them allows for a counterexample to the theorem. In addition, there are examples of other problems not covered by our definition where similar high-probability results can be obtained.
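    The two guarantees being related above can be written side by side as follows; the notation and the exact form of the failure probability are illustrative assumptions, not statements taken from the paper.

        \mathbb{E}\big[\mathrm{ALG}(\sigma)\big] \le c \cdot \mathrm{OPT}(\sigma) + \alpha
        \qquad \text{versus} \qquad
        \Pr\Big[\mathrm{ALG}(\sigma) \le (c + a)\,\mathrm{OPT}(\sigma) + \alpha\Big] \ge 1 - \delta\big(\mathrm{OPT}(\sigma)\big)

    Here $a > 0$ is the arbitrarily small constant error and $\delta(t) \to 0$ as $t \to \infty$, which is one way to read the abstract's statement that the high-probability guarantee is "in terms of the optimal cost".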

    Advice Complexity of the Online Induced Subgraph Problem

    Several well-studied graph problems aim to select a largest (or smallest) induced subgraph of the input graph with a given property. Examples of such problems include maximum independent set, maximum planar graph, and many others. We consider these problems, where the vertices are presented online. With each vertex, the online algorithm must decide whether to include it into the constructed subgraph, based only on the subgraph induced by the vertices presented so far. We study the properties that are common to all these problems by investigating the generalized problem: for a hereditary property $P$, find some maximal induced subgraph having $P$. We study this problem from the point of view of advice complexity. Using a result from Boyar et al. [STACS 2015], we give a tight trade-off relationship stating that, for inputs of length $n$, roughly $n/c$ bits of advice are both needed and sufficient to obtain a solution with competitive ratio $c$, regardless of the choice of $P$, for any $c$ (possibly a function of $n$). Surprisingly, a similar result cannot be obtained for the symmetric problem: for a given cohereditary property $P$, find a minimum subgraph having $P$. We show that the advice complexity of this problem varies significantly with the choice of $P$. We also consider the preemptive online model, where the decision of the algorithm is not completely irreversible. In particular, the algorithm may discard some vertices previously assigned to the constructed set, but discarded vertices cannot be reinserted into the set. We show that, for the maximum induced subgraph problem, preemption cannot help much, giving a lower bound of $\Omega(n/(c^2 \log c))$ bits of advice needed to obtain competitive ratio $c$, where $c$ is any increasing function bounded by $\sqrt{n/\log n}$. We also give a linear lower bound for $c$ close to $1$.
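    The non-preemptive model described above admits a natural greedy strategy, sketched here as a generic illustration (the callable has_property, the adjacency representation, and the function name are assumptions, and no competitiveness is claimed): accept an arriving vertex exactly when the chosen set would still induce a subgraph with the hereditary property.

        from typing import Callable, Dict, List, Set

        def greedy_induced_subgraph(arrivals: List[str],
                                    earlier_neighbours: Dict[str, Set[str]],
                                    has_property: Callable[[Set[str], Dict[str, Set[str]]], bool]) -> Set[str]:
            # earlier_neighbours[v] holds v's neighbours among previously presented
            # vertices, i.e. exactly what the online algorithm is allowed to see.
            chosen: Set[str] = set()
            for v in arrivals:
                if has_property(chosen | {v}, earlier_neighbours):
                    chosen.add(v)  # irrevocable in the non-preemptive model
            return chosen

    Because the property is hereditary, the set produced this way is maximal: any rejected vertex would still violate the property if it were added to the final set.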

    Online Multi-Coloring with Advice

    We consider the problem of online graph multi-coloring with advice. Multi-coloring is often used to model frequency allocation in cellular networks. We give several nearly tight upper and lower bounds for the most standard topologies of cellular networks: paths and hexagonal graphs. For the path, negative results trivially carry over to bipartite graphs, and our positive results are also valid for bipartite graphs. The advice given represents information that is likely to be available in practice, for instance from data of earlier, similar periods of time.
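    As background for the bounds above, the advice-free greedy baseline for multi-coloring can be sketched as follows; the graph representation, demand map, and function name are assumptions made for the sketch, and the paper's advice algorithms for paths and hexagonal graphs are not reproduced here. Each arriving vertex requests some number of colors and takes the lowest colors not already used by its previously colored neighbors, so that adjacent vertices receive disjoint color sets.

        from typing import Dict, List, Set

        def greedy_multicoloring(arrivals: List[str],
                                 demand: Dict[str, int],
                                 earlier_neighbors: Dict[str, List[str]]) -> Dict[str, Set[int]]:
            # earlier_neighbors[v] lists v's neighbors among already-presented vertices.
            coloring: Dict[str, Set[int]] = {}
            for v in arrivals:
                forbidden = set().union(*(coloring[u] for u in earlier_neighbors[v]))
                chosen: Set[int] = set()
                c = 1
                while len(chosen) < demand[v]:
                    if c not in forbidden:
                        chosen.add(c)
                    c += 1
                coloring[v] = chosen
            return coloring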