
    Learning Augmented Online Facility Location

    Following the research agenda initiated by Munoz & Vassilvitskii [1] and Lykouris & Vassilvitskii [2] on learning-augmented online algorithms for classical online optimization problems, in this work we consider the Online Facility Location problem under this framework. In Online Facility Location (OFL), demands arrive one by one in a metric space and must be (irrevocably) assigned to an open facility upon arrival, without any knowledge about future demands. We present an online algorithm for OFL that exploits potentially imperfect predictions on the locations of the optimal facilities. We prove that the competitive ratio decreases smoothly from sublogarithmic in the number of demands to constant as the error, i.e., the total distance of the predicted locations to the optimal facility locations, decreases towards zero. We complement our analysis with a matching lower bound establishing that the dependence of the algorithm's competitive ratio on the error is optimal, up to constant factors. Finally, we evaluate our algorithm on real-world data and compare our learning-augmented approach with the current best online algorithm for the problem.
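
    As a rough illustration of how such location predictions might be used, the sketch below biases a Meyerson-style randomized online facility location rule towards the predicted sites. It assumes a uniform facility cost and Euclidean distances, and the opening rule is a hypothetical heuristic, not the algorithm analysed in the paper.

```python
import math
import random

def augmented_ofl(demands, predictions, facility_cost, rng=random.Random(0)):
    """Meyerson-style online facility location, biased towards predicted sites.

    demands       : points (tuples) arriving one by one
    predictions   : predicted locations of the optimal facilities (possibly noisy)
    facility_cost : uniform cost of opening a facility

    Illustrative heuristic only; not the algorithm analysed in the paper.
    """
    open_facilities = []
    total_cost = 0.0
    for x in demands:
        d_open = min((math.dist(x, f) for f in open_facilities), default=math.inf)
        # Open a new facility with probability proportional to the assignment cost.
        if rng.random() < min(d_open / facility_cost, 1.0):
            # Hypothetical rule: place the facility at the nearest predicted
            # location if that is no farther away than the nearest open facility.
            p = min(predictions, key=lambda q: math.dist(x, q), default=x)
            site = p if math.dist(x, p) <= d_open else x
            open_facilities.append(site)
            total_cost += facility_cost + math.dist(x, site)
        else:
            total_cost += d_open
    return open_facilities, total_cost

# Example: demands on a line, two predicted facility locations.
facilities, cost = augmented_ofl(
    demands=[(0.1,), (0.2,), (5.0,), (5.3,)],
    predictions=[(0.0,), (5.0,)],
    facility_cost=1.0,
)
```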

    A Novel Prediction Setup for Online Speed-Scaling

    Given the rapid rise in energy demand by data centers and computing systems in general, it is fundamental to incorporate energy considerations when designing (scheduling) algorithms. Machine learning can be a useful approach in practice by predicting the future load of the system based on, for example, historical data. However, the effectiveness of such an approach highly depends on the quality of the predictions and can be quite far from optimal when predictions are sub-par. On the other hand, while providing a worst-case guarantee, classical online algorithms can be pessimistic for large classes of inputs arising in practice. This paper, in the spirit of the new area of machine-learning-augmented algorithms, attempts to obtain the best of both worlds for the classical, deadline-based, online speed-scaling problem: based on the introduction of a novel prediction setup, we develop algorithms that (i) obtain provably low energy consumption in the presence of adequate predictions, (ii) are robust against inadequate predictions, and (iii) are smooth, i.e., their performance gradually degrades as the prediction error increases.
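
    For context, the sketch below evaluates the underlying deadline-based speed-scaling model with the classical Average Rate (AVR) heuristic, assuming the standard power function P(s) = s^α. The paper's prediction setup and prediction-augmented algorithms are not reproduced here; this only shows how energy is accounted for in the model.

```python
def avr_energy(jobs, alpha=3.0):
    """Energy used by the classical Average Rate (AVR) heuristic, assuming the
    standard power model P(s) = s**alpha.

    jobs: list of (release, deadline, work) triples. Each job runs at density
    work / (deadline - release) throughout its window; the processor speed at
    any time is the sum of the densities of the jobs active at that time.
    """
    events = sorted({t for r, d, _ in jobs for t in (r, d)})
    energy = 0.0
    for t0, t1 in zip(events, events[1:]):
        speed = sum(w / (d - r) for r, d, w in jobs if r <= t0 and d >= t1)
        energy += (speed ** alpha) * (t1 - t0)
    return energy

# Two overlapping jobs: speeds 1, 1.5 and 0.5 on the three intervals.
print(avr_energy([(0, 4, 4), (2, 6, 2)]))  # 1**3*2 + 1.5**3*2 + 0.5**3*2 = 9.0
```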

    Contract Scheduling With Predictions

    Contract scheduling is a general technique for designing a system with interruptible capabilities, given an algorithm that is not necessarily interruptible. Previous work on this topic has largely assumed that the interruption is a worst-case deadline that is unknown to the scheduler. In this work, we study the setting in which there is a potentially erroneous prediction concerning the interruption. Specifically, we consider the setting in which the prediction describes the time at which the interruption occurs, as well as the setting in which the prediction is obtained as a response to a single binary query or to multiple binary queries. For both settings, we investigate tradeoffs between the robustness (i.e., the worst-case performance assuming an adversarial prediction) and the consistency (i.e., the performance assuming that the prediction is error-free), in terms of both positive and negative results.
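
    To make the performance measure concrete, the sketch below computes the longest contract completed by a given interruption time for a given schedule, and contrasts the classical doubling schedule with a hypothetical prediction-aware schedule tuned to a predicted interruption time tau. The schedule construction is purely illustrative and is not the query-based scheme studied in the paper.

```python
from itertools import accumulate

def best_completed(schedule, interruption):
    """Length of the longest contract fully completed by `interruption`, when
    contracts of the given lengths are run back to back on one machine."""
    best = 0.0
    for length, finish in zip(schedule, accumulate(schedule)):
        if finish <= interruption:
            best = max(best, length)
    return best

# Classical doubling schedule: robust against any interruption time.
doubling = [2 ** i for i in range(12)]

# Hypothetical prediction-aware schedule: run the doubling schedule for a while,
# then one long contract that finishes exactly at the predicted time tau.
tau = 100.0
prefix = [2 ** i for i in range(6)]        # total length 63
consistent = prefix + [tau - sum(prefix)]  # final contract of length 37

for T in (tau, 0.9 * tau):  # accurate prediction vs. slightly early interruption
    print(T, best_completed(doubling, T), best_completed(consistent, T))
```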

    Mixing Predictions for Online Metric Algorithms

    A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of ℓ predictors, we obtain a competitive ratio of O(ℓ²), and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a (1 + ε)-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the k-server problem.
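
    A generic way to track a dynamic combination of predictors is the classical "fixed share" experts scheme: maintain multiplicative weights over the ℓ predictors and mix in a small uniform share so that the combiner can follow a benchmark that switches predictors over time. The sketch below implements only that generic idea; it is not the metrical-task-system algorithm or the bandit-access mechanism analysed in the paper.

```python
import math
import random

def fixed_share_follow(costs, eta=0.5, alpha=0.05, rng=random.Random(0)):
    """Generic 'fixed share' combiner over L predictors.

    costs[t][i] is the cost predictor i would have incurred at step t. At each
    step we follow one predictor sampled from the current weights, then update
    the weights multiplicatively and mix in a small uniform share so that the
    combiner can track a benchmark that switches predictors over time.
    """
    L = len(costs[0])
    weights = [1.0 / L] * L
    followed, total = [], 0.0
    for round_costs in costs:
        i = rng.choices(range(L), weights=weights)[0]
        followed.append(i)
        total += round_costs[i]
        weights = [w * math.exp(-eta * c) for w, c in zip(weights, round_costs)]
        z = sum(weights)
        weights = [(1 - alpha) * w / z + alpha / L for w in weights]
    return followed, total

# Predictor 0 is good early, predictor 1 is good late; the combiner switches.
steps = [[0.0, 1.0]] * 20 + [[1.0, 0.0]] * 20
print(fixed_share_follow(steps))
```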
