
    Online Metric Allocation and Time-Varying Regularization

    We introduce a general online allocation problem that connects several of the most fundamental problems in online optimization. Let M be an n-point metric space. Consider a resource that can be allocated in arbitrary fractions to the points of M. At each time t, a convex monotone cost function c_t: [0, 1] → ℝ+ appears at some point r_t ∈ M. In response, an algorithm may change the allocation of the resource, paying movement cost as determined by the metric and service cost c_t(x_{r_t}), where x_{r_t} is the fraction of the resource at r_t at the end of time t. For example, when the cost functions are c_t(x) = α_t x, this is equivalent to randomized MTS, and when the cost functions are c_t(x) = ∞·𝟙_{x<1/k}, this is equivalent to fractional k-server. Because of an inherent scale-freeness property of the problem, existing techniques for MTS and k-server fail to achieve similar guarantees for metric allocation. To handle this, we consider a generalization of the online multiplicative update method where we decouple the rate at which a variable is updated from its value, resulting in interesting new dynamics. We use this to give an O(log n)-competitive algorithm for weighted star metrics. We then show how this corresponds to an extension of the online mirror descent framework to a setting where the regularizer is time-varying. Using this perspective, we further refine the guarantees of our algorithm. We also consider the case of non-convex cost functions. Using a simple ℓ₂²-regularizer, we give tight bounds of Θ(n) on tree metrics, which imply deterministic and randomized competitive ratios of O(n²) and O(n log n) respectively on arbitrary metrics.
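    To make the allocation model concrete, here is a minimal Python sketch (our own illustration, not the paper's O(log n) algorithm): it simulates the setup on a weighted star with a naive multiplicative update. The names simulate_allocation, weights, requests, and the step size eta are assumptions chosen for this example.

    import numpy as np

    def simulate_allocation(weights, requests, eta=0.5):
        # Toy simulation of the fractional allocation setup on a weighted star.
        # Not the paper's algorithm: a naive multiplicative update, for illustration only.
        #   weights  : edge weight w_i of leaf i (moving mass through leaf i costs w_i per unit)
        #   requests : list of (r_t, c_t), with r_t a leaf index and c_t a cost function on [0, 1]
        n = len(weights)
        x = np.full(n, 1.0 / n)                    # resource starts spread uniformly
        service = movement = 0.0
        for r, c in requests:
            grad = (c(min(x[r] + 1e-6, 1.0)) - c(x[r])) / 1e-6  # finite-difference marginal cost
            shrunk = x[r] * np.exp(-eta * grad)    # multiplicative shrink at the requested leaf
            x_new = x + (x[r] - shrunk) / (n - 1)  # spread the removed mass over the other leaves
            x_new[r] = shrunk
            movement += float(np.dot(np.asarray(weights), np.abs(x_new - x)))  # star-metric movement
            x = x_new
            service += c(x[r])                     # service cost on the fraction at r_t at the end of time t
        return service, movement

    # Linear costs c_t(x) = a_t * x give the MTS-like special case mentioned in the abstract.
    reqs = [(0, lambda x: 3 * x), (2, lambda x: 1.5 * x), (0, lambda x: 2 * x)]
    print(simulate_allocation([1.0, 1.0, 2.0, 0.5], reqs))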

    Parametrized Metrical Task Systems

    We consider parametrized versions of metrical task systems and metrical service systems, two fundamental models of online computing, where the constrained parameter is the number of possible distinct requests m. Such parametrization occurs naturally in a wide range of applications. Striking examples are certain power management problems, which are modeled as metrical task systems with m = 2. We characterize the competitive ratio in terms of the parameter m for both deterministic and randomized algorithms on hierarchically separated trees. Our findings uncover a rich and unexpected picture that differs substantially from what is known or conjectured about the unparametrized versions of these problems. For metrical task systems, we show that deterministic algorithms do not exhibit any asymptotic gain beyond one-level trees (namely, uniform metric spaces), whereas randomized algorithms do not exhibit any asymptotic gain even for one-level trees. In contrast, the special case of metrical service systems (subset chasing) behaves very differently. Both deterministic and randomized algorithms exhibit gain, for m sufficiently small compared to n, for any number of levels. Most significantly, they exhibit a large gain for uniform metric spaces and a smaller gain for two-level trees. Moreover, it turns out that in these cases (as well as in the case of metrical task systems for uniform metric spaces with m being an absolute constant), deterministic algorithms are essentially as powerful as randomized algorithms. This is surprising and runs counter to the ubiquitous intuition/conjecture that, for most problems that can be modeled as metrical task systems, the randomized competitive ratio is polylogarithmic in the deterministic competitive ratio.
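    As a purely illustrative rendering of the model, the sketch below computes the offline optimum of a small metrical task system by dynamic programming, using only m = 2 distinct request vectors in the spirit of the power-management remark. The names offline_opt, dist, tasks and the specific cost vectors are made up for this example.

    def offline_opt(dist, tasks):
        # Offline optimum of a metrical task system via dynamic programming.
        #   dist  : n x n metric, dist[i][j] = cost of moving from state i to state j
        #   tasks : request sequence; in the parametrized setting it uses only m distinct cost vectors
        n = len(dist)
        best = [0.0 if j == 0 else float("inf") for j in range(n)]   # start in state 0
        for task in tasks:
            best = [min(best[i] + dist[i][j] for i in range(n)) + task[j] for j in range(n)]
        return min(best)

    # Uniform metric on 3 states and m = 2 distinct requests, loosely modelling power management:
    # "busy" makes the sleep states expensive, "idle" makes the active state burn power.
    uniform = [[0.0 if i == j else 1.0 for j in range(3)] for i in range(3)]
    busy, idle = [0.0, 5.0, 5.0], [1.0, 0.2, 0.0]
    print(offline_opt(uniform, [busy, busy, idle, idle, idle, busy, idle]))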

    Multiscale Entropic Regularization for MTS on General Metric Spaces

    We present an O((log n)²)-competitive algorithm for metrical task systems (MTS) on any n-point metric space that is also 1-competitive for service costs. This matches the competitive ratio achieved by Bubeck, Cohen, Lee, and Lee (2019) and the refined competitive ratios obtained by Coester and Lee (2019). Those algorithms work by first randomly embedding the metric space into an ultrametric and then solving MTS there. In contrast, our algorithm is cast as regularized gradient descent where the regularizer is a multiscale metric entropy defined directly on the metric space. This answers an open question of Bubeck (Highlights of Algorithms, 2019). Comment: 23 pages, 1 figure, to appear in ITCS '22.
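    For intuition only, here is a single-scale analogue in Python: mirror descent on the probability simplex with the plain negative-entropy regularizer, which yields the familiar multiplicative-weights update for fractional MTS on a uniform metric. The paper's regularizer is instead a multiscale metric entropy on a general metric space; the function and parameter names below (entropic_step, eta) are ours.

    import numpy as np

    def entropic_step(x, cost, eta):
        # One mirror-descent step with the negative-entropy mirror map sum_i x_i log x_i.
        # With this regularizer the update is multiplicative, followed by renormalization
        # (the Bregman projection back onto the probability simplex).
        y = x * np.exp(-eta * cost)
        return y / y.sum()

    # Fractional MTS on a uniform 4-point metric: maintain a distribution over the states.
    rng = np.random.default_rng(0)
    x, eta = np.full(4, 0.25), 0.5
    for _ in range(5):
        cost = rng.uniform(0.0, 1.0, size=4)              # cost vector of the current task
        x_new = entropic_step(x, cost, eta)
        service = float(np.dot(x_new, cost))              # expected service cost after moving
        movement = 0.5 * float(np.abs(x_new - x).sum())   # uniform-metric movement = total variation
        x = x_new
        print(f"service={service:.3f}  movement={movement:.3f}")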

    Ramsey-type theorems for metric spaces with applications to online problems

    A nearly logarithmic lower bound on the randomized competitive ratio for the metrical task systems problem is presented. This implies a similar lower bound for the extensively studied k-server problem. The proof is based on Ramsey-type theorems for metric spaces, which state that every metric space contains a large subspace which is approximately a hierarchically well-separated tree (and in particular an ultrametric). These Ramsey-type theorems may be of independent interest. Comment: Fix an error in the metadata. 31 pages, 0 figures. Preliminary version in FOCS '01. To be published in J. Comput. System Sci.
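    To illustrate the ultrametric / HST objects the theorem produces, here is a small self-contained sketch (our own example, not from the paper): leaf-to-leaf distances in a hierarchically well-separated tree are determined by the diameter of the lowest common ancestor, and those diameters shrink geometrically down the tree.

    class HSTNode:
        # Node of a k-hierarchically well-separated tree (k-HST): each child's diameter is at
        # most a 1/k fraction of its parent's, so leaf distances form an ultrametric.
        def __init__(self, label, children=(), diameter=0.0):
            self.label, self.children, self.diameter = label, list(children), diameter

    def leaves(node):
        # Set of leaf labels under a node.
        return {node.label} if not node.children else set().union(*(leaves(c) for c in node.children))

    def hst_distance(root, a, b):
        # Distance between leaves a and b: the diameter of their lowest common ancestor.
        for child in root.children:
            under = leaves(child)
            if a in under and b in under:
                return hst_distance(child, a, b)
        return root.diameter if a != b else 0.0

    # A 2-HST on four points: diameters halve at each level (8 at the root, 4 below).
    tree = HSTNode("root",
                   [HSTNode("u", [HSTNode("a"), HSTNode("b")], diameter=4.0),
                    HSTNode("v", [HSTNode("c"), HSTNode("d")], diameter=4.0)],
                   diameter=8.0)
    print(hst_distance(tree, "a", "b"), hst_distance(tree, "a", "c"))   # 4.0 8.0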

    LIPIcs, Volume 244, ESA 2022, Complete Volume

    LIPIcs, Volume 244, ESA 2022, Complete Volume