    Ramsey-type theorems for metric spaces with applications to online problems

    A nearly logarithmic lower bound on the randomized competitive ratio for the metrical task systems problem is presented. This implies a similar lower bound for the extensively studied k-server problem. The proof is based on Ramsey-type theorems for metric spaces, which state that every metric space contains a large subspace that is approximately a hierarchically well-separated tree (and in particular an ultrametric). These Ramsey-type theorems may be of independent interest. (Preliminary version in FOCS '01; to appear in J. Comput. System Sci.)
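
    To make the target structure concrete, here is a minimal Python sketch (not from the paper) that checks the defining ultrametric inequality d(x, z) <= max(d(x, y), d(y, z)) on a distance matrix; the subspaces produced by the Ramsey-type theorems satisfy it approximately, up to the stated distortion.

        from itertools import permutations

        def is_ultrametric(d, tol=1e-9):
            """Check d(x, z) <= max(d(x, y), d(y, z)) over all triples
            of an n x n symmetric distance matrix d."""
            n = len(d)
            return all(d[x][z] <= max(d[x][y], d[y][z]) + tol
                       for x, y, z in permutations(range(n), 3))

        # A 2-level hierarchically well-separated tree metric: two clusters
        # of diameter 1 at mutual distance 4.
        d = [[0, 1, 4, 4],
             [1, 0, 4, 4],
             [4, 4, 0, 1],
             [4, 4, 1, 0]]
        assert is_ultrametric(d)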

    Multi-Embedding of Metric Spaces

    Metric embedding has become a common technique in the design of algorithms. Its applicability often depends on how large the embedding's distortion is. For example, embedding a finite metric space into trees may require distortion linear in its size. Using probabilistic metric embeddings, the bound on the distortion drops to logarithmic in the size. We take a step towards bypassing the lower bound on the distortion in terms of the size of the metric. We define "multi-embeddings" of metric spaces, in which a point is mapped onto a set of points, while keeping the target metric of polynomial size and preserving the distortion of paths. The distortion obtained with such multi-embeddings into ultrametrics is at most O(log Delta loglog Delta), where Delta is the aspect ratio of the metric. In particular, for expander graphs we obtain constant-distortion embeddings into trees, in contrast with the Omega(log n) lower bound for all previous notions of embedding. We demonstrate the algorithmic application of the new embeddings to two optimization problems: group Steiner tree and metrical task systems.
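
    Since the distortion bound is stated in terms of the aspect ratio, the following minimal sketch (the helper name is mine, not the paper's) computes Delta for a finite metric given as a distance matrix.

        def aspect_ratio(d):
            """Delta = largest pairwise distance divided by the smallest
            nonzero pairwise distance of the n x n distance matrix d."""
            n = len(d)
            dists = [d[i][j] for i in range(n) for j in range(i + 1, n)]
            return max(dists) / min(x for x in dists if x > 0)

        # Two points at distance 1, both at distance 8 from a third: Delta = 8.
        print(aspect_ratio([[0, 1, 8], [1, 0, 8], [8, 8, 0]]))  # 8.0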

    Parametrized Metrical Task Systems

    We consider parametrized versions of metrical task systems and metrical service systems, two fundamental models of online computing, where the constrained parameter is the number of possible distinct requests m. Such parametrization occurs naturally in a wide range of applications. Striking examples are certain power management problems, which are modeled as metrical task systems with m = 2. We characterize the competitive ratio in terms of the parameter m for both deterministic and randomized algorithms on hierarchically separated trees. Our findings uncover a rich and unexpected picture that differs substantially from what is known or conjectured about the unparametrized versions of these problems. For metrical task systems, we show that deterministic algorithms do not exhibit any asymptotic gain beyond one-level trees (namely, uniform metric spaces), whereas randomized algorithms do not exhibit any asymptotic gain even for one-level trees. In contrast, the special case of metrical service systems (subset chasing) behaves very differently. Both deterministic and randomized algorithms exhibit gain, for m sufficiently small compared to n, for any number of levels. Most significantly, they exhibit a large gain for uniform metric spaces and a smaller gain for two-level trees. Moreover, it turns out that in these cases (as well as in the case of metrical task systems for uniform metric spaces with m being an absolute constant), deterministic algorithms are essentially as powerful as randomized algorithms. This is surprising and runs counter to the ubiquitous intuition/conjecture that, for most problems that can be modeled as metrical task systems, the randomized competitive ratio is polylogarithmic in the deterministic competitive ratio.
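
    For reference, the underlying cost model: a metrical task system serves a sequence of task vectors over the states of a metric space, paying movement plus service costs. Below is a minimal sketch of the standard offline dynamic program for the optimal cost (a textbook construction, not this paper's contribution); the number of distinct request vectors plays the role of the parameter m.

        def offline_mts_opt(d, tasks, start=0):
            """Optimal offline cost of a metrical task system: d is an
            n x n metric, tasks a list of length-n service-cost vectors;
            the server may move before serving each task."""
            n = len(d)
            best = [0.0 if s == start else float("inf") for s in range(n)]
            for task in tasks:
                best = [min(best[p] + d[p][s] for p in range(n)) + task[s]
                        for s in range(n)]
            return min(best)

        # Uniform metric on 3 states with m = 3 distinct request vectors;
        # following the zero-cost state pays 1 per move and no service cost.
        d = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
        tasks = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]
        print(offline_mts_opt(d, tasks))  # 2.0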

    Mixing predictions for online metric algorithms

    A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of ℓ predictors, we obtain a competitive ratio of O(ℓ²), and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a (1+ϵ)-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the k-server problem.
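
    As a point of reference, here is a minimal sketch of a generic exponential-weights combiner over predictors (not the paper's algorithm, which must additionally charge for movement in the metric when switching between predictors): follow the currently heaviest predictor and decay each weight by its observed loss.

        import math

        def follow_weighted_predictors(losses, eta=0.5):
            """losses[t][i] = loss of predictor i at step t; returns the
            index of the predictor followed at each step."""
            k = len(losses[0])
            weights = [1.0] * k
            choices = []
            for round_losses in losses:
                choices.append(max(range(k), key=lambda i: weights[i]))
                weights = [w * math.exp(-eta * l)
                           for w, l in zip(weights, round_losses)]
            return choices

        # Predictor 1 becomes better after the first step; the combiner
        # switches once its cumulative-loss advantage shows in the weights.
        print(follow_weighted_predictors([[0, 1], [1, 0], [1, 0], [1, 0]]))
        # [0, 0, 0, 1]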

    Towards a style-specific basis for computational beat tracking

    Outlined in this paper are a number of sources of evidence, from psychological, ethnomusicological and engineering grounds, to suggest that current approaches to computational beat tracking are incomplete. It is contended that the degree to which cultural knowledge, that is, the specifics of style and associated learnt representational schema, underlies the human faculty of beat tracking has been severely underestimated. Difficulties in building general beat tracking solutions, which can provide both period and phase locking across a large corpus of styles, are highlighted. It is probable that no universal beat tracking model exists which does not utilise a switching model to recognise style and context prior to application.
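
    The switching architecture argued for above can be summarised in a few lines (all names here are hypothetical placeholders, not an API from the paper): recognise the style first, then dispatch to a style-specific tracker rather than applying one universal model.

        def track_beats(audio, classify_style, trackers, fallback):
            """classify_style maps audio to a style label; trackers maps
            each label to a style-specific beat tracker."""
            style = classify_style(audio)
            tracker = trackers.get(style, fallback)
            return tracker(audio)  # beat times, fixing both period and phase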

    Pure entropic regularization for metrical task systems

    We show that on every n-point HST metric, there is a randomized online algorithm for metrical task systems (MTS) that is 1-competitive for service costs and O(log n)-competitive for movement costs. In general, these refined guarantees are optimal up to the implicit constant. While an O(log n)-competitive algorithm for MTS on HST metrics was developed by Bubeck et al. (SODA '19), that approach could only establish an O((log n)²)-competitive ratio when the service costs are required to be O(1)-competitive. Our algorithm can be viewed as an instantiation of online mirror descent with the regularizer derived from a multiscale conditional entropy. In fact, our algorithm satisfies a set of even more refined guarantees; we are able to exploit this property to combine it with known random embedding theorems and obtain, for any n-point metric space, a randomized algorithm that is 1-competitive for service costs and O((log n)²)-competitive for movement costs.
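
    For intuition, a minimal sketch of the classical one-level case (negative entropy on the probability simplex, not the paper's multiscale conditional entropy): one mirror-descent step reduces to a multiplicative-weights update followed by renormalisation.

        import math

        def entropic_md_step(p, costs, eta=0.1):
            """One mirror-descent step with the negative-entropy regularizer:
            p is a distribution over states, costs the per-state cost vector."""
            q = [pi * math.exp(-eta * c) for pi, c in zip(p, costs)]
            total = sum(q)
            return [qi / total for qi in q]

        p = [0.25, 0.25, 0.25, 0.25]
        print(entropic_md_step(p, costs=[1.0, 0.0, 0.0, 0.0]))
        # mass shifts away from the costly first state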
