
    Randomization can be as helpful as a glimpse of the future in online computation

    We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. As a consequence, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, if $n$ is the number of requests, we show that: (1) A paging algorithm needs $\Omega(n)$ bits of advice to achieve a competitive ratio better than $H_k=\Omega(\log k)$, where $k$ is the cache size. Previously, it was only known that $\Omega(n)$ bits of advice were necessary to achieve a constant competitive ratio smaller than $5/4$. (2) Every $O(n^{1-\varepsilon})$-competitive vertex coloring algorithm must use $\Omega(n\log n)$ bits of advice. Previously, it was only known that $\Omega(n\log n)$ bits of advice were necessary to be optimal. For certain online problems, including the MTS, $k$-server, paging, list update, and dynamic binary search tree problems, our results imply that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic $O(\log k)$-competitive $k$-server algorithm with advice complexity $o(n)$ if and only if there exists a randomized $O(\log k)$-competitive $k$-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information-theoretic lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP'09].
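
    As a point of reference for the advice budgets discussed above, the following Python sketch (an illustration, not the paper's construction) shows the trivial end of the trade-off: an offline oracle encodes Belady's furthest-in-future evictions as advice, which makes a deterministic paging algorithm optimal but costs roughly $n \log k$ advice bits, exactly the regime the $\Omega(n)$ lower bounds address.

        # Illustrative sketch of the advice model for paging (not from the paper).
        # The oracle sees the whole request sequence, simulates Belady's
        # furthest-in-future rule, and records each eviction slot as an advice
        # word of ceil(log2 k) bits; the online algorithm simply replays it.
        from math import ceil, log2

        def oracle_advice(requests, k):
            """Offline: list of cache slots Belady's rule evicts on each fault."""
            cache, advice = [], []
            for i, r in enumerate(requests):
                if r in cache:
                    continue
                if len(cache) < k:
                    cache.append(r)
                    continue
                # Evict the cached page whose next use lies furthest in the future.
                def next_use(p):
                    rest = requests[i + 1:]
                    return rest.index(p) if p in rest else float("inf")
                victim = max(range(k), key=lambda slot: next_use(cache[slot]))
                advice.append(victim)  # ceil(log2 k) bits per fault
                cache[victim] = r
            return advice

        def online_with_advice(requests, k, advice):
            """Online: serve requests, consuming one advice word per eviction."""
            cache, faults, ptr = [], 0, 0
            for r in requests:
                if r in cache:
                    continue
                faults += 1
                if len(cache) < k:
                    cache.append(r)
                else:
                    cache[advice[ptr]] = r
                    ptr += 1
            return faults

        if __name__ == "__main__":
            reqs, k = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3], 3
            adv = oracle_advice(reqs, k)
            print(len(adv) * ceil(log2(k)), "advice bits,",
                  online_with_advice(reqs, k, adv), "faults")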

    Online Bin Packing with Advice

    We consider the online bin packing problem under the advice complexity model, where the 'online constraint' is relaxed and an algorithm receives partial information about the future requests. We provide tight upper and lower bounds for the amount of advice an algorithm needs to achieve an optimal packing. We also introduce an algorithm that, when provided with $\log n + o(\log n)$ bits of advice, achieves a competitive ratio of $3/2$ for the general problem. This algorithm is simple and is expected to find real-world applications. We introduce another algorithm that receives $2n + o(n)$ bits of advice and achieves a competitive ratio of $4/3 + \epsilon$. Finally, we provide a lower bound argument that implies that advice of linear size is required for an algorithm to achieve a competitive ratio better than $9/8$.
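
    As a rough illustration of the advice model used above (an assumed interface, not the paper's $3/2$-competitive algorithm), the sketch below has an offline oracle fix a packing of the whole input, here first-fit decreasing as a stand-in for an optimal packing, and hand the online algorithm one bin index per item. This trivial baseline needs on the order of $n \log n$ advice bits, the budget against which the $\log n + o(\log n)$ and $2n + o(n)$ results are measured.

        # Illustrative "full advice" baseline for online bin packing (assumptions,
        # not the paper's algorithm). Items have sizes in (0, 1]; bins have capacity 1.
        def oracle_bin_advice(items):
            """Offline: assign each item (in arrival order) a bin index via first-fit decreasing."""
            order = sorted(range(len(items)), key=lambda i: -items[i])
            bins, assignment = [], [0] * len(items)
            for i in order:
                for b, load in enumerate(bins):
                    if load + items[i] <= 1.0:
                        bins[b] += items[i]
                        assignment[i] = b
                        break
                else:
                    assignment[i] = len(bins)
                    bins.append(items[i])
            return assignment  # ceil(log2 #bins) bits per item

        def online_with_advice(items, advice):
            """Online: place each arriving item into the bin named by its advice word."""
            bins = {}
            for item, b in zip(items, advice):
                bins[b] = bins.get(b, 0.0) + item
            return len(bins)

        if __name__ == "__main__":
            items = [0.6, 0.5, 0.4, 0.5, 0.3, 0.7]
            adv = oracle_bin_advice(items)
            print("bins used:", online_with_advice(items, adv))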

    On the Power of Advice and Randomization for Online Bipartite Matching

    While randomized online algorithms have access to a sequence of uniform random bits, deterministic online algorithms with advice have access to a sequence of advice bits, i.e., bits that are set by an all-powerful oracle prior to the processing of the request sequence. Advice bits are at least as helpful as random bits, but how helpful are they? In this work, we investigate the power of advice bits and random bits for online maximum bipartite matching (MBM). The well-known Karp-Vazirani-Vazirani algorithm is an optimal randomized $(1-\frac{1}{e})$-competitive algorithm for MBM that requires access to $\Theta(n \log n)$ uniform random bits. We show that $\Omega(\log(\frac{1}{\epsilon})\, n)$ advice bits are necessary and $O(\frac{1}{\epsilon^5}\, n)$ sufficient in order to obtain a $(1-\epsilon)$-competitive deterministic advice algorithm. Furthermore, for a large natural class of deterministic advice algorithms, we prove that $\Omega(\log \log \log n)$ advice bits are required in order to improve on the $\frac{1}{2}$-competitiveness of the best deterministic online algorithm, while it is known that $O(\log n)$ bits are sufficient. Last, we give a randomized online algorithm that uses $cn$ random bits, for integers $c \ge 1$, and achieves a competitive ratio that approaches $1-\frac{1}{e}$ very quickly as $c$ increases. For example, if $c = 10$, then the difference between $1-\frac{1}{e}$ and the achieved competitive ratio is less than $0.0002$.
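
    The Karp-Vazirani-Vazirani RANKING algorithm mentioned above is simple to state: draw one uniform random permutation of the offline vertices before any request arrives, then match each arriving online vertex to its best-ranked free neighbor. A minimal Python sketch follows (the example graph and function names are illustrative):

        # RANKING for online bipartite matching: the only randomness is one
        # uniform permutation of the offline side, drawn up front
        # (Theta(n log n) random bits).
        import random

        def ranking_matching(offline, online_neighbors, rng=random):
            """online_neighbors: list, in arrival order, of iterables of offline vertices."""
            rank = {v: r for r, v in enumerate(rng.sample(offline, len(offline)))}
            matched, matching = set(), []
            for u, neighbors in enumerate(online_neighbors):
                free = [v for v in neighbors if v not in matched]
                if free:
                    v = min(free, key=rank.get)  # best-ranked free neighbor
                    matched.add(v)
                    matching.append((u, v))
            return matching

        if __name__ == "__main__":
            offline = ["a", "b", "c"]
            arrivals = [["a", "b"], ["a"], ["b", "c"]]
            print(ranking_matching(offline, arrivals))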

    On the List Update Problem with Advice

    We study the online list update problem under the advice model of computation. Under this model, an online algorithm receives partial information about the unknown parts of the input in the form of some bits of advice generated by a benevolent offline oracle. We show that advice of linear size is required and sufficient for a deterministic algorithm to achieve an optimal solution or even a competitive ratio better than $15/14$. On the other hand, we show that, surprisingly, two bits of advice are sufficient to break the lower bound of $2$ on the competitive ratio of deterministic online algorithms and achieve a deterministic algorithm with a competitive ratio of $5/3$. In this upper-bound argument, the bits of advice determine the algorithm with the smaller cost among three classical online algorithms: TIMESTAMP and two members of the MTF2 family of algorithms. We also show that MTF2 algorithms are $2.5$-competitive.
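
    The two-bits-of-advice upper bound described above follows a generic pattern: the oracle simulates a small fixed set of deterministic online algorithms and the advice simply names the cheapest one, so $\lceil \log_2 m \rceil$ bits suffice for $m$ candidates. The sketch below illustrates that pattern with placeholder candidates (plain MTF and a static list), not the paper's TIMESTAMP and MTF2 algorithms.

        # Illustrative sketch: constant advice selects the cheapest of a fixed set
        # of deterministic list-update algorithms (placeholders, not the paper's).
        from math import ceil, log2

        def serve_list(initial, requests, move_to_front):
            """Standard list-update cost model: accessing position i (1-based) costs i;
            move_to_front decides whether the accessed item is moved to the front."""
            lst, cost = list(initial), 0
            for r in requests:
                i = lst.index(r)
                cost += i + 1
                if move_to_front:
                    lst.insert(0, lst.pop(i))
            return cost

        CANDIDATES = [
            lambda init, reqs: serve_list(init, reqs, move_to_front=True),   # MTF
            lambda init, reqs: serve_list(init, reqs, move_to_front=False),  # static list
        ]

        def oracle_select(initial, requests):
            """Offline: simulate every candidate and encode the index of the cheapest."""
            costs = [alg(initial, requests) for alg in CANDIDATES]
            best = min(range(len(CANDIDATES)), key=costs.__getitem__)
            return best, ceil(log2(len(CANDIDATES)))  # advice index and its bit length

        if __name__ == "__main__":
            init, reqs = ["a", "b", "c", "d"], ["d", "d", "d", "a", "b", "d"]
            idx, bits = oracle_select(init, reqs)
            print(f"advice = algorithm #{idx} ({bits} bit(s)), cost = {CANDIDATES[idx](init, reqs)}")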