
    Randomization can be as helpful as a glimpse of the future in online computation

    Get PDF
    We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. As a consequence, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, if n is the number of requests, we show that: (1) A paging algorithm needs Ω(n) bits of advice to achieve a competitive ratio better than H_k = Ω(log k), where k is the cache size. Previously, it was only known that Ω(n) bits of advice were necessary to achieve a constant competitive ratio smaller than 5/4. (2) Every O(n^{1−Δ})-competitive vertex coloring algorithm must use Ω(n log n) bits of advice. Previously, it was only known that Ω(n log n) bits of advice were necessary to be optimal. For certain online problems, including the MTS, k-server, paging, list update, and dynamic binary search tree problems, our results imply that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic O(log k)-competitive k-server algorithm with advice complexity o(n) if and only if there exists a randomized O(log k)-competitive k-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information-theoretic lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP 2009].
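
    As a concrete illustration of the two extremes that advice about the future interpolates between, the sketch below contrasts LRU (an online paging rule with no advice) against Belady's offline eviction rule (full knowledge of the future) on a toy request sequence. The request sequence and function names are illustrative and not taken from the paper.

```python
# Illustrative only: paging with no knowledge of the future (LRU) versus
# paging with a full glimpse of the future (Belady's offline rule).

def lru_faults(requests, k):
    """Count page faults of LRU with cache size k."""
    cache, faults = [], 0
    for p in requests:
        if p in cache:
            cache.remove(p)          # refresh recency
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)         # evict least recently used
        cache.append(p)
    return faults

def belady_faults(requests, k):
    """Count page faults when evicting the page reused furthest in the future."""
    cache, faults = set(), 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) == k:
            def next_use(q):
                # index of q's next request; pages never requested again sort last
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults

if __name__ == "__main__":
    reqs = [1, 2, 3, 1, 4, 2, 5, 1, 2, 3, 4, 5]
    print("LRU faults:   ", lru_faults(reqs, k=3))
    print("Belady faults:", belady_faults(reqs, k=3))
```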

    Online Computation with Untrusted Advice

    Get PDF
    The advice model of online computation captures the setting in which the online algorithm is given some partial information concerning the request sequence. This paradigm allows one to establish trade-offs between the amount of this additional information and the performance of the online algorithm. However, unlike real life, in which advice is a recommendation that we can choose to follow or to ignore based on its trustworthiness, in the current advice model the online algorithm treats the advice as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case trade-offs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the trade-off between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.
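
    To make the trusted-versus-untrusted trade-off concrete, here is a minimal sketch of ski rental with a possibly wrong advice bit, in the spirit of the Purohit et al. [NeurIPS 2018] scheme that the paper improves upon; the threshold rule, the parameter lam, and all names are illustrative, and this is not the paper's Pareto-optimal algorithm.

```python
# Sketch only: ski rental where a single advice bit ("the season will be long,
# so buy") may be wrong. lam in (0, 1] trades consistency (good advice)
# against robustness (bad advice): smaller lam trusts the advice more.

import math

def ski_rental_cost(buy_cost, num_ski_days, advice_says_buy, lam):
    """Rent 1/day until a buy threshold is reached; return total cost paid."""
    # Buy early if the advice predicts a long season, late otherwise.
    threshold = math.ceil(lam * buy_cost) if advice_says_buy else math.ceil(buy_cost / lam)
    if num_ski_days < threshold:
        return num_ski_days              # rented every day, never bought
    return (threshold - 1) + buy_cost    # rented until buying on day `threshold`

if __name__ == "__main__":
    b = 10
    for actual_days in (30, 3):          # advice predicts a long season either way
        cost = ski_rental_cost(b, actual_days, advice_says_buy=True, lam=0.5)
        opt = min(actual_days, b)        # offline optimum: buy iff season >= b
        print(f"season={actual_days:2d}  ALG={cost:2d}  OPT={opt:2d}  ratio={cost/opt:.2f}")
```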

    On the Power of Advice and Randomization for Online Bipartite Matching

    Get PDF
    While randomized online algorithms have access to a sequence of uniform random bits, deterministic online algorithms with advice have access to a sequence of advice bits, i.e., bits that are set by an all-powerful oracle prior to the processing of the request sequence. Advice bits are at least as helpful as random bits, but how helpful are they? In this work, we investigate the power of advice bits and random bits for online maximum bipartite matching (MBM). The well-known Karp-Vazirani-Vazirani algorithm is an optimal randomized (1 − 1/e)-competitive algorithm for MBM that requires access to Θ(n log n) uniform random bits. We show that Ω(n log(1/Δ)) advice bits are necessary and O(n/Δ^5) sufficient in order to obtain a (1 − Δ)-competitive deterministic advice algorithm. Furthermore, for a large natural class of deterministic advice algorithms, we prove that Ω(log log log n) advice bits are required in order to improve on the 1/2-competitiveness of the best deterministic online algorithm, while it is known that O(log n) bits are sufficient. Last, we give a randomized online algorithm that uses cn random bits, for integers c ≄ 1, and achieves a competitive ratio that approaches 1 − 1/e very quickly as c increases. For example, if c = 10, then the difference between 1 − 1/e and the achieved competitive ratio is less than 0.0002.
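
    For reference, below is a minimal sketch of the Karp-Vazirani-Vazirani RANKING algorithm mentioned above: the uniformly random permutation of the offline side accounts for the Θ(n log n) random bits, and each arriving online vertex is matched to its highest-ranked free neighbor. The graph representation and names are illustrative.

```python
# Sketch of RANKING for online maximum bipartite matching (MBM).

import random

def ranking_matching(offline_vertices, online_arrivals):
    """online_arrivals: one set of offline neighbors per arriving online vertex."""
    # A uniformly random permutation of the offline side fixes the ranks.
    order = random.sample(offline_vertices, len(offline_vertices))
    rank = {v: r for r, v in enumerate(order)}
    matched = {}                                   # offline vertex -> online index
    for i, neighbors in enumerate(online_arrivals):
        free = [v for v in neighbors if v not in matched]
        if free:
            matched[min(free, key=rank.get)] = i   # smallest rank value = highest priority
    return matched

if __name__ == "__main__":
    offline = ["a", "b", "c"]
    arrivals = [{"a", "b"}, {"a"}, {"b", "c"}]
    print(ranking_matching(offline, arrivals))
```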

    Advice Complexity of the Online Induced Subgraph Problem

    Get PDF
    Several well-studied graph problems aim to select a largest (or smallest) induced subgraph of the input graph with a given property. Examples of such problems include maximum independent set, maximum planar graph, and many others. We consider these problems in the setting where the vertices are presented online. With each vertex, the online algorithm must decide whether to include it in the constructed subgraph, based only on the subgraph induced by the vertices presented so far. We study the properties that are common to all these problems by investigating the generalized problem: for a hereditary property π, find some maximal induced subgraph having π. We study this problem from the point of view of advice complexity. Using a result from Boyar et al. [STACS 2015], we give a tight trade-off relationship stating that, for inputs of length n, roughly n/c bits of advice are both needed and sufficient to obtain a solution with competitive ratio c, regardless of the choice of π, for any c (possibly a function of n). Surprisingly, a similar result cannot be obtained for the symmetric problem: for a given cohereditary property π, find a minimum subgraph having π. We show that the advice complexity of this problem varies significantly with the choice of π. We also consider a preemptive online model, where the decision of the algorithm is not completely irreversible. In particular, the algorithm may discard some vertices previously assigned to the constructed set, but discarded vertices cannot be reinserted into the set. We show that, for the maximum induced subgraph problem, preemption cannot help much, giving a lower bound of Ω(n/(c^2 log c)) bits of advice needed to obtain competitive ratio c, where c is any increasing function bounded from above by √(n/log n). We also give a linear lower bound for c close to 1.
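
    A minimal, advice-free sketch of the generalized problem described above: vertices arrive together with their edges to already revealed vertices, and a greedy rule keeps a vertex whenever the selected set still satisfies the hereditary property. The concrete property (independent set) and all names are illustrative placeholders for π.

```python
# Sketch only: online greedy selection of a maximal induced subgraph with a
# hereditary property, without any advice.

def greedy_online_subgraph(arrivals, has_property):
    """arrivals: list of (vertex, set of earlier neighbors); returns selected vertices."""
    selected, selected_edges = set(), set()
    for v, earlier_neighbors in arrivals:
        new_edges = {(u, v) for u in earlier_neighbors if u in selected}
        # keep v only if the induced subgraph on selected + {v} still has the property
        if has_property(selected | {v}, selected_edges | new_edges):
            selected.add(v)
            selected_edges |= new_edges
    return selected

def is_independent(vertices, edges):
    # hereditary property used as the example: no edge joins two selected vertices
    return not edges

if __name__ == "__main__":
    # vertex 3 is adjacent to 1 and 2; vertex 4 is adjacent to 3 only
    arrivals = [(1, set()), (2, set()), (3, {1, 2}), (4, {3})]
    print(greedy_online_subgraph(arrivals, is_independent))   # {1, 2, 4}
```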

    Online Algorithms with Advice for the k-search Problem

    Get PDF
    In the online search problem, a seller seeks to find the maximum price in a sequence of prices p_1, p_2, 
, p_n that is revealed in a piecewise manner. Bounds on all prices are known in advance, with m ≀ p_i ≀ M. In the online k-search problem, the seller seeks to find the k maximum prices out of the n prices. In this paper, we present a tight bound of [Formula Presented] on the advice complexity of optimal online algorithms for online k-search. We also provide online algorithms with advice that use fewer than the required number of bits and compute their performance guarantees. Although it is natural to expect improvement due to the additional power of advice, we are interested in identifying how the amount of additional information relates to the improvement. We show that with 1 bit of advice, we can already surpass the quality of the best possible deterministic algorithm for online 2-search. We also provide a set of online algorithms, ALG_i, that utilizes [Formula Presented] advice bits with a competitive ratio of (formula presented). We show that increasing the amount of advice improves the solution quality of the algorithm. Moreover, we compare the power of advice and randomization. We show that, for some identified minimum number of advice bits, the lower bound on the competitive ratio of online algorithms with advice is better than that of any deterministic or randomized algorithm for online k-search.
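
    For context, the sketch below shows the classical reservation-price rule for online 1-search with prices in [m, M], whose competitive ratio is √(M/m); it is the advice-free baseline, not one of the paper's advice algorithms, and the variable names are illustrative.

```python
# Sketch only: online 1-search (k = 1) with the reservation price sqrt(m * M).

import math

def one_search(prices, m, M):
    """Accept the first price >= sqrt(m*M); otherwise forced to take the last price."""
    reservation = math.sqrt(m * M)
    for p in prices[:-1]:
        if p >= reservation:
            return p
    return prices[-1]

if __name__ == "__main__":
    m, M = 1, 100                       # reservation price = 10
    prices = [4, 7, 12, 90, 5]
    accepted = one_search(prices, m, M)
    print(f"accepted {accepted}, best price was {max(prices)}")
```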

    Any-Order Online Interval Selection

    Full text link
    We consider the problem of online interval scheduling on a single machine, where intervals arrive online in an order chosen by an adversary, and the algorithm must output a set of non-conflicting intervals. Traditionally in scheduling theory, it is assumed that intervals arrive in order of increasing start times. We drop that assumption and allow intervals to arrive in any possible order. We call this variant any-order interval selection (AOIS). We assume that some online acceptances can be revoked, but a feasible solution must always be maintained. For unweighted intervals and deterministic algorithms, this problem is unbounded. Under the assumption that there are at most k different interval lengths, we give a simple algorithm that achieves a competitive ratio of 2k and show that it is optimal among deterministic algorithms and a restricted class of randomized algorithms we call memoryless, contributing to an open question of Adler and Azar (2003), namely whether a randomized algorithm without access to history can achieve a constant competitive ratio. We connect our model to the problem of call control on the line and show how the algorithms of Garay et al. (1997) can be applied to our setting, resulting in an optimal algorithm for the case of proportional weights. We also discuss the case of intervals with arbitrary weights and show how to convert the single-length algorithm of Fung et al. (2014) into a classify-and-randomly-select algorithm that achieves a competitive ratio of 2k. Finally, we consider the case of intervals arriving in a random order and show that, for single-length instances, a one-directional algorithm (i.e., one that replaces intervals in one direction only) is the only deterministic memoryless algorithm that can possibly benefit from random arrivals. We close with a brief discussion of intervals with arbitrary weights.
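
    A hypothetical sketch of what revocable acceptance looks like in this model: an arriving interval may displace the accepted intervals it overlaps if it is much shorter, so a feasible (pairwise-disjoint) set is maintained at all times. The displacement rule and names are illustrative and this is not the paper's 2k-competitive algorithm.

```python
# Sketch only: any-order interval selection with revocable acceptances.
# An arriving interval replaces its conflicting accepted intervals only if it
# is at most half the length of each of them.

def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def any_order_selection(intervals):
    accepted = []                                   # invariant: pairwise disjoint
    for iv in intervals:
        conflicting = [a for a in accepted if overlaps(a, iv)]
        length = iv[1] - iv[0]
        if all(2 * length <= a[1] - a[0] for a in conflicting):
            accepted = [a for a in accepted if a not in conflicting]  # revoke
            accepted.append(iv)
    return accepted

if __name__ == "__main__":
    # arrival order chosen adversarially, not by start time
    arrivals = [(0, 8), (10, 18), (2, 5), (12, 14), (4, 6)]
    print(any_order_selection(arrivals))            # [(2, 5), (12, 14)]
```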

    Online Computation with Untrusted Advice

    Full text link
    The advice model of online computation captures a setting in which the algorithm is given some partial information concerning the request sequence. This paradigm allows one to establish trade-offs between the amount of this additional information and the performance of the online algorithm. However, if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case trade-offs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the trade-off between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.

    Advice Complexity of the Online Induced Subgraph Problem

    Get PDF
    Several well-studied graph problems aim to select a largest (or smallest) induced subgraph of the input graph with a given property. Examples include maximum independent set, maximum planar graph, maximum clique, minimum feedback vertex set, and many others. In online versions of these problems, the vertices of the graph are presented in an adversarial order, and with each vertex, the online algorithm must irreversibly decide whether to include it in the constructed subgraph, based only on the subgraph induced by the vertices presented so far. We study the properties that are common to all these problems by investigating a generalized problem: for an arbitrary but fixed hereditary property π, find some maximal induced subgraph having π. We investigate this problem from the point of view of advice complexity, i.e., we ask how some additional information about the yet unrevealed parts of the input can influence the solution quality. We evaluate the information in a quantitative way by considering the best possible advice of a given size that describes the unknown input. Using a result from Boyar et al. [STACS 2015, LIPIcs 30], we give a tight trade-off relationship stating that, for inputs of length n, roughly n/c bits of advice are both needed and sufficient to obtain a solution with competitive ratio c, regardless of the choice of π, for any c (possibly a function of n). This complements the result of Bartal et al. [SIAM Journal on Computing 36(2), 2006] stating that, without any advice, even a randomized algorithm cannot achieve a competitive ratio better than Ω(n^{1 − log_4 3 − o(1)}). Surprisingly, for a given cohereditary property π and the objective of finding a minimum subgraph having π, the advice complexity varies significantly with the choice of π. We also consider a preemptive online model, inspired by applications mainly in networking and scheduling, where the decision of the algorithm is not completely irreversible. In particular, the algorithm may discard some vertices previously assigned to the constructed set, but discarded vertices cannot be reinserted into the set. We show that, for the maximum induced subgraph problem, preemption does not significantly help: we give a lower bound of Ω(n/(c^2 log c)) on the number of advice bits needed to obtain competitive ratio c, where c is any increasing function bounded from above by √(n/log n). We also give a linear lower bound for c close to 1.
    • 

    corecore