
    Online Matching with Set and Concave Delays

    We initiate the study of online problems with set delay, where the delay cost at any given time is an arbitrary function of the set of pending requests. In particular, we study the online min-cost perfect matching with set delay (MPMD-Set) problem, which generalises the online min-cost perfect matching with delay (MPMD) problem introduced by Emek et al. (STOC 2016). In MPMD, m requests arrive over time in a metric space of n points. When a request arrives, the algorithm must choose to either match or delay the request. The goal is to create a perfect matching of all requests while minimising the sum of distances between matched requests and the total delay costs incurred by each of the requests. In contrast to previous work, we study MPMD-Set in the non-clairvoyant setting, where the algorithm does not know the future delay costs. We first show that no algorithm is competitive in terms of n or m. We then study the natural special case of size-based delay, where the delay is a non-decreasing function of the number of unmatched requests. Our main result is the first non-clairvoyant algorithms for online min-cost perfect matching with size-based delay that are competitive in terms of m. In fact, these are the first non-clairvoyant algorithms for any variant of MPMD. A key technical ingredient is an analog of the symmetric difference of matchings that may be useful for other special classes of set delay. Furthermore, we prove a lower bound of Ω(n) for any deterministic algorithm and Ω(log n) for any randomised algorithm. These lower bounds also hold for clairvoyant algorithms. Finally, we give an m-competitive deterministic algorithm for uniform concave delays in the clairvoyant setting.
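
    The MPMD cost model (connection cost plus accrued delay) can be illustrated with a toy greedy policy for linear delays: match a pair of pending requests as soon as their combined waiting time covers their distance. This is only a sketch of the cost accounting, not one of the algorithms analysed above; the `arrivals`/`dist`/`horizon` interface is made up for illustration.

```python
def mpmd_greedy(arrivals, dist, horizon):
    """Toy greedy policy for min-cost perfect matching with linear delays.

    arrivals: dict mapping integer time -> list of request points
    dist:     metric on points
    Each pending request accrues one unit of delay per time step; two
    pending requests are matched once their combined accrued delay is at
    least their distance, paying distance + both delays.
    """
    pending = []                 # list of (point, arrival_time)
    matching, cost = [], 0.0
    for t in range(horizon + 1):
        pending += [(p, t) for p in arrivals.get(t, [])]
        merged = True
        while merged:            # keep matching while some pair qualifies
            merged = False
            for i in range(len(pending)):
                for j in range(i + 1, len(pending)):
                    (p, tp), (q, tq) = pending[i], pending[j]
                    if (t - tp) + (t - tq) >= dist(p, q):
                        cost += dist(p, q) + (t - tp) + (t - tq)
                        matching.append((p, q))
                        del pending[j], pending[i]   # j first: keeps i valid
                        merged = True
                        break
                if merged:
                    break
    return matching, cost
```

    For example, two co-arriving requests at distance 10 wait until their combined delay reaches 10, then pay 10 (distance) plus 5 + 5 (delays).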

    Efficient Algorithms and Hardness Results for the Weighted k-Server Problem

    In this paper, we study the weighted k-server problem on the uniform metric in both the offline and online settings. We start with the offline setting. In contrast to the (unweighted) k-server problem, which has a polynomial-time solution using min-cost flows, there are strong computational lower bounds for the weighted k-server problem, even on the uniform metric. Specifically, we show that assuming the unique games conjecture, there are no polynomial-time algorithms with a sub-polynomial approximation factor, even if we use c-resource augmentation for c < 2. Furthermore, if we consider the natural LP relaxation of the problem, then obtaining a bounded integrality gap requires us to use at least ℓ resource augmentation, where ℓ is the number of distinct server weights. We complement these results by obtaining a constant-approximation algorithm via LP rounding, with a resource augmentation of (2+ε)ℓ for any constant ε > 0. In the online setting, an exp(k) lower bound is known for the competitive ratio of any randomized algorithm for the weighted k-server problem on the uniform metric. In contrast, we show that 2ℓ-resource augmentation can bring the competitive ratio down by an exponential factor to only O(ℓ² log ℓ). Our online algorithm uses the two-stage approach of first obtaining a fractional solution using the online primal-dual framework, and then rounding it online. Comment: This paper will appear in the proceedings of APPROX 202
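
    To make the problem concrete, here is a brute-force dynamic program for offline weighted k-server on the uniform metric: moving server i anywhere costs weights[i], and each request must end the step covered by some server. This only illustrates the cost model on tiny instances (the state space is exponential); it is not an algorithm from the paper, which shows that efficient approximation requires resource augmentation.

```python
def offline_weighted_kserver(weights, start, requests):
    """Brute-force DP for offline weighted k-server on the uniform metric.

    weights[i] is the cost of moving server i to any other point; a
    request at point r must be covered by some server at r.  `states`
    maps server configurations (tuples of positions) to the cheapest
    cost of reaching them while serving all requests so far.
    """
    states = {tuple(start): 0.0}
    for r in requests:
        new_states = {}
        for conf, cost in states.items():
            for i, w in enumerate(weights):      # let server i serve r
                nxt = list(conf)
                move = 0.0 if nxt[i] == r else w
                nxt[i] = r
                key, val = tuple(nxt), cost + move
                if val < new_states.get(key, float("inf")):
                    new_states[key] = val
        states = new_states
    return min(states.values())
```

    With weights [1, 10], the DP correctly prefers shuttling the light server and leaving the heavy one parked, which is exactly the tension that makes the weighted problem hard.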

    Online Metric Matching with Delay

    Traditionally, an online algorithm must service a request upon its arrival. In many practical situations, one can delay the service of a request in the hope of servicing it more efficiently in the near future. As a result, the study of online algorithms with delay has recently gained considerable traction. For most online problems with delay, competitive algorithms have been developed that are independent of properties of the delay functions associated with each request. Interestingly, this is not the case for the online min-cost perfect matching with delays (MPMD) problem, introduced by Emek et al. (STOC 2016). In this thesis we show that some techniques can be modified to extend to larger classes of delay functions, without affecting the competitive ratio. In the interest of designing competitive solutions for the problem in a more general setting, we introduce the study of online problems with set delay. Here, the delay cost at any time is given by an arbitrary function of the set of pending requests, rather than the sum over individual delay functions associated with each request. In particular, we study the online min-cost perfect matching with set delay (MPMD-Set) problem, which provides a generalisation of MPMD. In contrast to previous work, the new model allows us to study the problem in the non-clairvoyant setting, i.e. where the future delay costs are unknown to the algorithm. We prove that for MPMD-Set in the most general non-clairvoyant setting, there exists no competitive algorithm. Motivated by this impossibility, we introduce a new class of delay functions called size-based and prove that for this version of the problem, there exist both non-clairvoyant deterministic and randomised algorithms that are competitive in the number of requests. Our results reveal that the quality of an online matching depends both on the algorithm's access to information about future delay costs, and on the properties of the delay function.

    LIPIcs, Volume 274, ESA 2023, Complete Volume


    LIPIcs, Volume 244, ESA 2022, Complete Volume


    Online Metric Allocation and Time-Varying Regularization

    We introduce a general online allocation problem that connects several of the most fundamental problems in online optimization. Let M be an n-point metric space. Consider a resource that can be allocated in arbitrary fractions to the points of M. At each time t, a convex monotone cost function c_t: [0, 1] → ℝ+ appears at some point r_t ∈ M. In response, an algorithm may change the allocation of the resource, paying movement cost as determined by the metric and service cost c_t(x_t), where x_t is the fraction of the resource at r_t at the end of time t. For example, when the cost functions are c_t(x) = x, this is equivalent to randomized MTS, and when the cost functions are c_t(x) = ∞·𝟙[x < 1/k], this is equivalent to fractional k-server. Because of an inherent scale-freeness property of the problem, existing techniques for MTS and k-server fail to achieve similar guarantees for metric allocation. To handle this, we consider a generalization of the online multiplicative update method where we decouple the rate at which a variable is updated from its value, resulting in interesting new dynamics. We use this to give an O(log n)-competitive algorithm for weighted star metrics. We then show how this corresponds to an extension of the online mirror descent framework to a setting where the regularizer is time-varying. Using this perspective, we further refine the guarantees of our algorithm. We also consider the case of non-convex cost functions. Using a simple ℓ₂²-regularizer, we give tight bounds of Θ(n) on tree metrics, which imply deterministic and randomized competitive ratios of O(n²) and O(n log n) respectively on arbitrary metrics.
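
    For context, the classical multiplicative-update dynamics that this work generalizes can be sketched for the fractional problem on a uniform metric. Note the actual algorithm above decouples the update rate from the variable's value and uses a time-varying regularizer; the baseline below does neither. The learning rate eta and the half-L1 movement accounting are standard modeling assumptions, not taken from the paper.

```python
import math

def mts_mwu(cost_rounds, n, eta=0.5):
    """Classical multiplicative-weights baseline for fractional MTS on a
    uniform n-point metric.  x is the allocation (a distribution over
    points); each round we pay service cost <x, c>, then update
    x[i] *= exp(-eta * c[i]) and renormalize, paying half the L1 change
    of x as movement cost.
    """
    x = [1.0 / n] * n
    service = movement = 0.0
    for c in cost_rounds:
        service += sum(xi * ci for xi, ci in zip(x, c))
        scaled = [xi * math.exp(-eta * ci) for xi, ci in zip(x, c)]
        total = sum(scaled)
        new_x = [v / total for v in scaled]
        movement += 0.5 * sum(abs(a - b) for a, b in zip(new_x, x))
        x = new_x
    return x, service, movement
```

    Here the update rate of x[i] is proportional to its current value, which is exactly the coupling the paper's generalization removes.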

    Multiscale Entropic Regularization for MTS on General Metric Spaces

    We present an O((log n)²)-competitive algorithm for metrical task systems (MTS) on any n-point metric space that is also 1-competitive for service costs. This matches the competitive ratio achieved by Bubeck, Cohen, Lee, and Lee (2019) and the refined competitive ratios obtained by Coester and Lee (2019). Those algorithms work by first randomly embedding the metric space into an ultrametric and then solving MTS there. In contrast, our algorithm is cast as regularized gradient descent where the regularizer is a multiscale metric entropy defined directly on the metric space. This answers an open question of Bubeck (Highlights of Algorithms, 2019). Comment: 23 pages, 1 figure, to appear in ITCS '2

    Towards the k-server conjecture: A unifying potential, pushing the frontier to the circle

    The k-server conjecture, first posed by Manasse, McGeoch and Sleator in 1988, states that a k-competitive deterministic algorithm for the k-server problem exists. It is conjectured that the work function algorithm (WFA), a multi-purpose algorithm with applications to various online problems, achieves this guarantee. This has been shown for several special cases: k = 2, (k + 1)-point metrics, (k + 2)-point metrics, the line metric, weighted star metrics, and k = 3 in the Manhattan plane. The known proofs of these results are based on potential functions tied to each particular special case, thus requiring six different potential functions for the six cases. We present a single potential function proving k-competitiveness of WFA for all these cases. We also use this potential to show k-competitiveness of WFA on multiray spaces and for k = 3 on trees. While the Double Coverage algorithm was known to be k-competitive for these latter cases, this had remained open for WFA. Our potential captures a type of lazy adversary and thus shows that in all settled cases, the worst-case adversary is lazy. Chrobak and Larmore conjectured in 1992 that a potential capturing the lazy adversary would resolve the k-server conjecture. To our major surprise, this is not the case, as we show (using connections to the k-taxi problem) that our potential fails for three servers on the circle. Thus, our potential highlights laziness of the adversary as a fundamental property that is shared by all settled cases but violated in general. On the one hand, this weakens our confidence in the validity of the k-server conjecture. On the other hand, if the k-server conjecture holds, then we believe it can be proved by a variant of our potential.
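
    The work function itself admits a direct brute-force implementation on small finite metrics via the standard textbook recurrence w_t(X) = min over z in X of w_{t-1}(X − z + r_t) + d(z, r_t); on a request, WFA moves the server z minimizing w_t(A − z + r_t) + d(z, r_t) from the current configuration A. The sketch below (exponential in k and the number of points) follows that standard definition and has nothing to do with the potential function discussed above.

```python
from itertools import combinations_with_replacement, permutations

def wfa(points, d, start, requests):
    """Work Function Algorithm for k-server on a small finite metric.

    Configurations are multisets of k server positions, stored as
    sorted tuples.  Brute force, for illustration only.
    """
    k = len(start)
    configs = list(combinations_with_replacement(sorted(points), k))

    def replace(X, z, r):           # swap one copy of z for r in config X
        Y = list(X)
        Y.remove(z)
        return tuple(sorted(Y + [r]))

    def match_cost(A, B):           # min-cost matching between two configs
        return min(sum(d(a, b) for a, b in zip(A, P))
                   for P in permutations(B))

    w = {X: match_cost(tuple(sorted(start)), X) for X in configs}
    A = tuple(sorted(start))
    moved = 0.0
    for r in requests:
        # update: w_t(X) = min_{z in X} w_{t-1}(X - z + r) + d(z, r)
        w = {X: min(w[replace(X, z, r)] + d(z, r) for z in set(X))
             for X in configs}
        # WFA rule: move the server minimizing w_t(A - z + r) + d(z, r)
        z = min(set(A), key=lambda s: w[replace(A, s, r)] + d(s, r))
        moved += d(z, r)
        A = replace(A, z, r)
    return A, moved
```

    On the line with servers at 0 and 3 and a request at 1, the work-function values steer WFA to move the server at 0, as a lazy greedy would.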

    A Randomness Threshold for Online Bipartite Matching, via Lossless Online Rounding

    Over three decades ago, Karp, Vazirani and Vazirani (STOC'90) introduced the online bipartite matching problem. They observed that deterministic algorithms' competitive ratio for this problem is no greater than 1/2, and proved that randomized algorithms can do better. A natural question thus arises: how random is random? That is, how much randomness is needed to outperform deterministic algorithms? The RANKING algorithm of Karp et al. requires Õ(n) random bits, which, ignoring polylog terms, remained unimproved. On the other hand, Pena and Borodin (TCS'19) established a lower bound of (1−o(1)) log log n random bits for any 1/2 + Ω(1) competitive ratio. We close this doubly-exponential gap, proving that, surprisingly, the lower bound is tight. In fact, we prove a sharp threshold of (1±o(1)) log log n random bits for the randomness necessary and sufficient to outperform deterministic algorithms for this problem, as well as its vertex-weighted generalization. This implies the same threshold for the advice complexity (nondeterminism) of these problems. Similar to recent breakthroughs in the online matching literature, for edge-weighted matching (Fahrbach et al., FOCS'20) and adwords (Huang et al., FOCS'20), our algorithms break the barrier of 1/2 by randomizing matching choices over two neighbors. Unlike these works, our approach does not rely on the recently-introduced OCS machinery, nor the more established randomized primal-dual method. Instead, our work revisits a highly-successful online design technique, which was nonetheless under-utilized in the area of online matching, namely (lossless) online rounding of fractional algorithms. While this technique is known to be hopeless for online matching in general, we show that it is nonetheless applicable to carefully designed fractional algorithms with additional (non-convex) constraints.
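
    For reference, the RANKING algorithm mentioned above is simple to state: draw one uniformly random permutation (ranking) of the offline vertices up front, which is the source of its Õ(n) random bits, then match each arriving online vertex to its free neighbor of lowest rank. A minimal sketch with a made-up adjacency-dict interface:

```python
import random

def ranking(offline, neighbors, online_order, rng=random):
    """KVV RANKING for online bipartite matching: fix one random
    permutation of the offline vertices, then match each arriving
    online vertex to its lowest-ranked still-free neighbor."""
    perm = list(offline)
    rng.shuffle(perm)                       # the algorithm's only randomness
    rank = {v: i for i, v in enumerate(perm)}
    free = set(offline)
    matching = {}
    for u in online_order:
        candidates = [v for v in neighbors.get(u, ()) if v in free]
        if candidates:
            v = min(candidates, key=rank.__getitem__)
            matching[u] = v
            free.discard(v)
    return matching
```

    Passing a seeded `random.Random` instance as `rng` makes runs reproducible; the result whose gap the paper closes is that far fewer than these ~n log n random bits suffice to beat 1/2.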