    Optimal Online Edge Coloring of Planar Graphs with Advice

    Full text link
    Using the framework of advice complexity, we study the amount of knowledge about the future that an online algorithm needs to color the edges of a graph optimally, i.e., using as few colors as possible. For graphs of maximum degree Δ, it follows from Vizing's Theorem that O(m log Δ) bits of advice suffice to achieve optimality, where m is the number of edges. We show that for graphs of bounded degeneracy (a class of graphs including e.g. trees and planar graphs), only O(m) bits of advice are needed to compute an optimal solution online, independently of how large Δ is. On the other hand, we show that Ω(m) bits of advice are necessary just to achieve a competitive ratio better than that of the best deterministic online algorithm without advice. Furthermore, we consider algorithms which use a fixed number of advice bits per edge (our algorithm for graphs of bounded degeneracy belongs to this class of algorithms). We show that for bipartite graphs, any such algorithm must use at least Ω(m log Δ) bits of advice to achieve optimality. Comment: CIAC 201
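
    To make the O(m log Δ) upper bound concrete: by Vizing's Theorem an optimal coloring uses at most Δ + 1 colors, so an oracle can spell out each edge's color with ⌈log₂(Δ + 1)⌉ bits and the online algorithm simply replays them. The sketch below is a minimal illustration of this encoding argument, not the paper's bounded-degeneracy algorithm; the representation of edges and advice is our assumption.

        import math

        def advice_budget(m, max_degree):
            """Total advice bits if the oracle writes down, for each of the m
            edges, its color in a fixed optimal coloring; by Vizing's Theorem
            at most max_degree + 1 colors are needed, hence O(m log Delta)."""
            bits_per_edge = math.ceil(math.log2(max_degree + 1))
            return m * bits_per_edge

        def replay_coloring(edge_stream, advice_colors):
            """Online 'algorithm' that just follows the advice: color each
            arriving edge with the color the oracle prescribed for it."""
            return {edge: color for edge, color in zip(edge_stream, advice_colors)}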

    On the Power of Advice and Randomization for Online Bipartite Matching

    Get PDF
    While randomized online algorithms have access to a sequence of uniform random bits, deterministic online algorithms with advice have access to a sequence of advice bits, i.e., bits that are set by an all-powerful oracle prior to the processing of the request sequence. Advice bits are at least as helpful as random bits, but how helpful are they? In this work, we investigate the power of advice bits and random bits for online maximum bipartite matching (MBM). The well-known Karp-Vazirani-Vazirani algorithm is an optimal randomized (1 - 1/e)-competitive algorithm for MBM that requires access to Θ(n log n) uniform random bits. We show that Ω(n log(1/ε)) advice bits are necessary and O(n/ε^5) sufficient in order to obtain a (1 - ε)-competitive deterministic advice algorithm. Furthermore, for a large natural class of deterministic advice algorithms, we prove that Ω(log log log n) advice bits are required in order to improve on the 1/2-competitiveness of the best deterministic online algorithm, while it is known that O(log n) bits are sufficient. Last, we give a randomized online algorithm that uses cn random bits, for integers c ≥ 1, and achieves a competitive ratio that approaches 1 - 1/e very quickly as c increases. For example, if c = 10, then the difference between 1 - 1/e and the achieved competitive ratio is less than 0.0002.
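
    For reference, the Karp-Vazirani-Vazirani Ranking algorithm mentioned above is short enough to sketch. The graph interface below (offline vertices known up front, each online vertex arriving together with its neighbor list) is our assumption; drawing the single uniformly random permutation is what accounts for the Θ(n log n) random bits.

        import random

        def ranking_matching(offline_vertices, online_arrivals):
            """KVV Ranking: draw one uniformly random permutation (ranking) of
            the offline side; match each arriving online vertex to its
            highest-ranked unmatched neighbor, if any."""
            permutation = random.sample(offline_vertices, len(offline_vertices))
            rank = {v: r for r, v in enumerate(permutation)}
            matched, matching = set(), {}
            for u, neighbors in online_arrivals:   # neighbors revealed on arrival
                free = [v for v in neighbors if v not in matched]
                if free:
                    v = min(free, key=lambda x: rank[x])
                    matching[u] = v
                    matched.add(v)
            return matching

        # Example: ranking_matching(["a", "b"], [("u1", ["a", "b"]), ("u2", ["a"])])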

    Online Computation with Untrusted Advice

    Full text link
    The advice model of online computation captures a setting in which the algorithm is given some partial information concerning the request sequence. This paradigm allows one to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case tradeoffs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which, however, the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the tradeoff between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.
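
    As a point of comparison for the Pareto-optimality discussion above, here is a short sketch of the deterministic ski rental rule from the Purohit et al. [NeurIPS 2018] framework that the paper improves upon, not the paper's own Pareto-optimal algorithm; the parameter names and the unit rental cost per day are our assumptions.

        import math

        def ski_rental_buy_day(buy_cost, predicted_days, trust):
            """Deterministic rule of Purohit et al.: with trust parameter
            lambda = trust in (0, 1), buy on day ceil(lambda * b) if the
            prediction says the season is long, and on day ceil(b / lambda)
            otherwise. This is (1 + trust)-competitive when the prediction is
            correct and (1 + 1/trust)-competitive no matter how wrong it is."""
            if predicted_days >= buy_cost:
                return math.ceil(trust * buy_cost)
            return math.ceil(buy_cost / trust)

        def total_cost(buy_day, actual_days, buy_cost):
            """Rent (1 per day) before buy_day, then buy, over a season that
            actually lasts actual_days days."""
            if actual_days < buy_day:
                return actual_days            # season ended before we bought
            return (buy_day - 1) + buy_cost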

    Incorporating capacitative constraint to the preference-based conference scheduling via domain transformation approach

    Get PDF
    No abstract. Keywords: conference scheduling; domain transformation approach; capacity optimization

    The Advice Complexity of a Class of Hard Online Problems

    Get PDF
    The advice complexity of an online problem is a measure of how much knowledge of the future an online algorithm needs in order to achieve a certain competitive ratio. Using advice complexity, we define the first online complexity class, AOC. The class includes independent set, vertex cover, dominating set, and several others as complete problems. AOC-complete problems are hard, since a single wrong answer by the online algorithm can have devastating consequences. For each of these problems, we show that log(1 + (c-1)^(c-1)/c^c) · n = Θ(n/c) bits of advice are necessary and sufficient (up to an additive term of O(log n)) to achieve a competitive ratio of c. The results are obtained by introducing a new string guessing problem related to those of Emek et al. (TCS 2011) and Böckenhauer et al. (TCS 2014). It turns out that this gives a powerful but easy-to-use method for providing both upper and lower bounds on the advice complexity of an entire class of online problems, the AOC-complete problems. Previous results of Halldórsson et al. (TCS 2002) on online independent set, in a related model, imply that the advice complexity of the problem is Θ(n/c). Our results improve on this by providing an exact formula for the higher-order term. For online disjoint path allocation, Böckenhauer et al. (ISAAC 2009) gave a lower bound of Ω(n/c) and an upper bound of O((n log c)/c) on the advice complexity. We improve on the upper bound by a factor of log c. For the remaining problems, no bounds on their advice complexity were previously known. Comment: Full paper to appear in Theory of Computing Systems. A preliminary version appeared in STACS 201
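
    The exact bound above is easy to evaluate numerically; the small sketch below tabulates log(1 + (c-1)^(c-1)/c^c) · n for a few ratios c next to the simpler Θ(n/c) form. Base-2 logarithms (matching "bits") and the sample values of n and c are our choices.

        import math

        def aoc_advice_bits(n, c):
            """Advice bits needed (up to an additive O(log n) term) to be
            c-competitive for an AOC-complete problem:
            log2(1 + (c-1)^(c-1) / c^c) * n, which is Theta(n / c)."""
            return math.log2(1.0 + (c - 1) ** (c - 1) / c ** c) * n

        if __name__ == "__main__":
            n = 1_000
            for c in (1.5, 2, 4, 8):
                print(f"c = {c}: ~{aoc_advice_bits(n, c):.1f} bits  (n/c = {n / c:.1f})")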

    A Randomness Threshold for Online Bipartite Matching, via Lossless Online Rounding

    Full text link
    Over three decades ago, Karp, Vazirani and Vazirani (STOC'90) introduced the online bipartite matching problem. They observed that deterministic algorithms' competitive ratio for this problem is no greater than 1/2, and proved that randomized algorithms can do better. A natural question thus arises: how random is random, i.e., how much randomness is needed to outperform deterministic algorithms? The Ranking algorithm of Karp et al. requires Õ(n) random bits, which, ignoring polylog terms, remained unimproved. On the other hand, Pena and Borodin (TCS'19) established a lower bound of (1 - o(1)) log log n random bits for any 1/2 + Ω(1) competitive ratio. We close this doubly-exponential gap, proving that, surprisingly, the lower bound is tight. In fact, we prove a sharp threshold of (1 ± o(1)) log log n random bits for the randomness necessary and sufficient to outperform deterministic algorithms for this problem, as well as its vertex-weighted generalization. This implies the same threshold for the advice complexity (nondeterminism) of these problems. Similar to recent breakthroughs in the online matching literature, for edge-weighted matching (Fahrbach et al., FOCS'20) and adwords (Huang et al., FOCS'20), our algorithms break the barrier of 1/2 by randomizing matching choices over two neighbors. Unlike these works, our approach does not rely on the recently introduced OCS machinery, nor on the more established randomized primal-dual method. Instead, our work revisits a highly successful online design technique which was nonetheless under-utilized in the area of online matching, namely (lossless) online rounding of fractional algorithms. While this technique is known to be hopeless for online matching in general, we show that it is nonetheless applicable to carefully designed fractional algorithms with additional (non-convex) constraints.
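
    For context, the deterministic 1/2 barrier that both the randomness lower bound and the new algorithm are measured against is witnessed by plain greedy matching; the sketch below is our own illustration of that baseline, not the paper's rounding-based algorithm.

        def greedy_online_matching(online_arrivals):
            """Deterministic baseline: match each arriving online vertex to an
            arbitrary (here: first) unmatched neighbor. The resulting matching
            is maximal and hence 1/2-competitive, and no deterministic online
            algorithm does better in the worst case."""
            matched, matching = set(), {}
            for u, neighbors in online_arrivals:
                for v in neighbors:
                    if v not in matched:
                        matching[u] = v
                        matched.add(v)
                        break
            return matching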

    Online algorithms with advice for bin packing and scheduling problems

    No full text
    We consider the setting of online computation with advice and study the bin packing problem and a number of scheduling problems. We show that it is possible, for any of these problems, to arbitrarily approach a competitive ratio of 1 with only a constant number of bits of advice per request. For the bin packing problem, we give an online algorithm with advice that is (1 + ε)-competitive and uses O((1/ε) log(1/ε)) bits of advice per request. For scheduling on m identical machines, with the objective function being any of makespan, machine covering, or the minimization of the ℓ_p norm, p > 1, we give similar results: online algorithms with advice which are (1 + ε)-competitive ((1/(1 - ε))-competitive for machine covering) and also use O((1/ε) log(1/ε)) bits of advice per request. We complement our results by giving a lower bound showing that, for any online algorithm with advice to be optimal for any of the above scheduling problems, a non-constant number of bits of advice per request is needed (namely, at least (1 - 2m/n) log m, where n is the number of jobs and m is the number of machines).
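
    To get a feel for the "constant number of advice bits per request" regime, the sketch below evaluates the O((1/ε) log(1/ε)) per-request advice bound for a few values of ε; the hidden constant is not stated in the abstract, so the factor of 1 used here is purely illustrative.

        import math

        def advice_bits_per_request(eps, constant=1.0):
            """Illustrative count of advice bits per request for a
            (1 + eps)-competitive algorithm, using the O((1/eps) * log(1/eps))
            form from the abstract; 'constant' is a placeholder for the
            unspecified hidden constant."""
            return math.ceil(constant * (1.0 / eps) * math.log2(1.0 / eps))

        if __name__ == "__main__":
            for eps in (0.5, 0.25, 0.1, 0.01):
                print(f"eps = {eps}: ~{advice_bits_per_request(eps)} bits per request")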