
    Improved Learning-Augmented Algorithms for the Multi-Option Ski Rental Problem via Best-Possible Competitive Analysis

    In this paper, we present improved learning-augmented algorithms for the multi-option ski rental problem. Learning-augmented algorithms take ML predictions as an added part of the input and incorporate these predictions in solving the given problem. Owing to their unique strength of combining the power of ML predictions with rigorous performance guarantees, they have been extensively studied in the context of online optimization problems. Even though ski rental problems are among the canonical problems in the field of online optimization, only deterministic algorithms were previously known for multi-option ski rental, with or without learning augmentation. We present the first randomized learning-augmented algorithm for this problem, surpassing the previous performance guarantees given by deterministic algorithms. Our learning-augmented algorithm is based on a new, provably best-possible randomized competitive algorithm for the problem. Our results are further complemented by lower bounds for deterministic and randomized algorithms, and by computational experiments evaluating our algorithms' performance improvements.
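
    As background for how learning augmentation typically works in ski rental, here is a minimal Python sketch of the classical deterministic algorithm for the single-option problem, in the style of Purohit et al. (NeurIPS 2018). It is not the randomized multi-option algorithm of the paper above; the function name and the trust parameter are our own illustrative choices.

        import math

        def buy_day(buy_cost, predicted_days, trust):
            # Learning-augmented single-option ski rental (deterministic).
            # trust in (0, 1]: smaller values follow the prediction more
            # aggressively, giving (1 + trust)-consistency at the price of
            # (1 + 1/trust)-robustness.
            if predicted_days >= buy_cost:
                # Prediction says the season is long: buy early.
                return math.ceil(trust * buy_cost)
            # Prediction says the season is short: delay buying.
            return math.ceil(buy_cost / trust)

        # Example: buying costs 10 rent-days; a prediction of 15 ski days
        # with trust 0.5 means we buy on day 5.
        print(buy_day(10, 15, 0.5))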

    Learning Augmented Online Facility Location

    Following the research agenda initiated by Munoz & Vassilvitskii [1] and Lykouris & Vassilvitskii [2] on learning-augmented online algorithms for classical online optimization problems, in this work we consider the Online Facility Location problem under this framework. In Online Facility Location (OFL), demands arrive one-by-one in a metric space and must be (irrevocably) assigned to an open facility upon arrival, without any knowledge about future demands. We present an online algorithm for OFL that exploits potentially imperfect predictions of the locations of the optimal facilities. We prove that the competitive ratio decreases smoothly from sublogarithmic in the number of demands to constant as the error, i.e., the total distance of the predicted locations to the optimal facility locations, decreases towards zero. We complement our analysis with a matching lower bound establishing that the dependence of the algorithm's competitive ratio on the error is optimal, up to constant factors. Finally, we evaluate our algorithm on real-world data and compare our learning-augmented approach with the current best online algorithm for the problem.
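
    For contrast with the prediction-based approach, the classical randomized baseline for OFL is Meyerson's algorithm, sketched below in Python for points in the plane with a uniform facility cost f (variable names are ours): each arriving demand opens a new facility with probability proportional to its assignment distance.

        import math
        import random

        def meyerson(demands, f):
            # Meyerson's online facility location: open a facility at the
            # arriving demand with probability min(dist/f, 1), where dist
            # is the distance to the nearest already-open facility.
            facilities, cost = [], 0.0
            for p in demands:
                dist = min((math.dist(p, q) for q in facilities),
                           default=math.inf)
                if random.random() < min(dist / f, 1.0):
                    facilities.append(p)   # open here
                    cost += f
                else:
                    cost += dist           # assign to nearest facility
            return facilities, cost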

    A Novel Prediction Setup for Online Speed-Scaling

    Given the rapid rise in energy demand by data centers and computing systems in general, it is fundamental to incorporate energy considerations when designing (scheduling) algorithms. Machine learning can be a useful approach in practice by predicting the future load of the system based on, for example, historical data. However, the effectiveness of such an approach highly depends on the quality of the predictions and can be quite far from optimal when predictions are sub-par. On the other hand, while providing a worst-case guarantee, classical online algorithms can be pessimistic for large classes of inputs arising in practice. This paper, in the spirit of the new area of machine-learning-augmented algorithms, attempts to obtain the best of both worlds for the classical, deadline-based, online speed-scaling problem: based on the introduction of a novel prediction setup, we develop algorithms that (i) obtain provably low energy consumption in the presence of adequate predictions, (ii) are robust against inadequate predictions, and (iii) are smooth, i.e., their performance degrades gracefully as the prediction error increases.
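
    As a concrete reference point for deadline-based speed-scaling, the sketch below implements the classical Average Rate (AVR) rule of Yao, Demers, and Shenker, under which each job contributes a constant density for its whole window. It is a well-known prediction-free baseline, not the algorithm of the paper above; the function name is ours.

        def avr_speed(jobs, t):
            # jobs: list of (release, deadline, work) triples.
            # AVR runs each job at constant density work/(deadline-release)
            # throughout its window, so the processor speed at time t is
            # the sum of densities of the jobs alive at t; the energy used
            # is the integral of speed(t)**alpha over time.
            return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)

        # Two overlapping jobs with density 1.0 each: speed 2.0 during
        # the overlap, 1.0 elsewhere in [0, 4).
        print(avr_speed([(0, 4, 4), (1, 4, 3)], 2))  # -> 2.0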

    Canadian Traveller Problem with Predictions

    In this work, we consider the $k$-Canadian Traveller Problem ($k$-CTP) under the learning-augmented framework proposed by Lykouris & Vassilvitskii. $k$-CTP is a generalization of the shortest path problem, and involves a traveller who knows the entire graph in advance and wishes to find the shortest route from a source vertex $s$ to a destination vertex $t$, but discovers online that some edges (up to $k$) are blocked once reaching them. A potentially imperfect predictor gives us the number and the locations of the blocked edges. We present a deterministic and a randomized online algorithm for learning-augmented $k$-CTP that achieve a tradeoff between consistency (quality of the solution when the prediction is correct) and robustness (quality of the solution when there are errors in the prediction). Moreover, we prove a matching lower bound for the deterministic case, establishing that this tradeoff between consistency and robustness is optimal, and show a lower bound for the randomized algorithm. Finally, we prove several deterministic and randomized lower bounds on the competitive ratio of $k$-CTP depending on the prediction error, and complement them, in most cases, with matching upper bounds.
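
    To make the online model concrete, here is a simple greedy replanning strategy for $k$-CTP in Python (using networkx): walk the currently shortest path, and whenever a blocked edge is discovered, record it and replan from the current vertex. This is a natural classical baseline, not the paper's learning-augmented algorithm, and the simulation of edge discovery is our own simplification.

        import networkx as nx

        def greedy_ctp(G, s, t, blocked):
            # G: weighted nx.Graph; blocked: hidden set of blocked edges,
            # discovered only when the traveller is about to cross one.
            known, pos, route = set(), s, [s]
            while pos != t:
                H = G.copy()
                H.remove_edges_from(known)
                path = nx.shortest_path(H, pos, t, weight="weight")
                for u, v in zip(path, path[1:]):
                    if (u, v) in blocked or (v, u) in blocked:
                        known.add((u, v))  # discovered: replan from u
                        break
                    pos = v
                    route.append(v)
            return route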

    Double Coverage with Machine-Learned Advice

    We study the fundamental online k-server problem in a learning-augmented setting. While in the traditional online model an algorithm has no information about the request sequence, we assume that the algorithm is given some advice (e.g., machine-learned predictions) on its decisions. There is, however, no guarantee on the quality of the prediction, and it might be far from correct. Our main result is a learning-augmented variation of the well-known Double Coverage algorithm for k-server on the line (Chrobak et al., SIDMA 1991), into which we integrate the predictions as well as our trust in their quality. We give an error-dependent competitive ratio, which is a function of a user-defined confidence parameter and which interpolates smoothly between an optimal consistency, the performance in case all predictions are correct, and the best-possible robustness regardless of the prediction quality. When given good predictions, we improve upon known lower bounds for online algorithms without advice. We further show that, for any k, our algorithm achieves an almost optimal consistency-robustness tradeoff within a class of deterministic algorithms respecting local and memoryless properties. Our algorithm outperforms a previously proposed (more general) learning-augmented algorithm. It is remarkable that the previous algorithm crucially exploits memory, whereas our algorithm is memoryless. Finally, we demonstrate in experiments the practicability and the superior performance of our algorithm on real-world data.
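
    For reference, one step of the classical (prediction-free) Double Coverage rule on the line can be written in a few lines of Python; the learning-augmented variant in the paper additionally biases these moves using the predictions and the confidence parameter. Variable names are ours.

        def double_coverage_step(servers, r):
            # servers: sorted list of server positions on the line
            # (mutated in place); r: position of the current request.
            if r <= servers[0]:        # request left of every server
                servers[0] = r
            elif r >= servers[-1]:     # request right of every server
                servers[-1] = r
            else:
                # The two servers flanking r move toward it at equal
                # speed until the nearer one reaches the request.
                i = max(j for j, s in enumerate(servers) if s <= r)
                d = min(r - servers[i], servers[i + 1] - r)
                servers[i] += d
                servers[i + 1] -= d
            return servers

        print(double_coverage_step([0.0, 10.0], 4.0))  # -> [4.0, 6.0]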

    Learning-Augmented Online TSP on Rings, Trees, Flowers and (Almost) Everywhere Else

    We study the Online Traveling Salesperson Problem (OLTSP) with predictions. In OLTSP, a sequence of initially unknown requests arrives over time at points (locations) of a metric space. The goal is, starting from a particular point of the metric space (the origin), to serve all these requests while minimizing the total time spent. The server moves with unit speed or is "waiting" (zero speed) at some location. We consider two variants: in the open variant, the goal is achieved when the last request is served; in the closed one, the server additionally has to return to the origin. We adopt a prediction model, introduced for OLTSP on the line [Gouleakis et al., 2023], in which the predictions correspond to the locations of the requests, and extend it to more general metric spaces. We first propose an oracle-based algorithmic framework, inspired by previous work [Bampis et al., 2023]. This framework allows us to design online algorithms for general metric spaces that provide competitive ratio guarantees which, given perfect predictions, beat the best possible classical guarantee (consistency). Moreover, they degrade gracefully as the error increases (smoothness), but always remain within a constant factor of the best known competitive ratio in the classical case (robustness). Having reduced the problem to designing suitable efficient oracles, we describe how to achieve this for general metric spaces as well as for specific metric spaces (rings, trees and flowers), the resulting algorithms being tractable in the latter case. The consistency guarantees of our algorithms are tight in almost all cases, and their smoothness guarantees suffer only a linear dependency on the error, which we show is necessary. Finally, we provide robustness guarantees improving previous results.

    Smoothed Online Optimization with Unreliable Predictions

    We examine the problem of smoothed online optimization, where a decision maker must sequentially choose points in a normed vector space to minimize the sum of per-round, non-convex hitting costs and the costs of switching decisions between rounds. The decision maker has access to a black-box oracle, such as a machine learning model, that provides untrusted and potentially inaccurate predictions of the optimal decision in each round. The goal of the decision maker is to exploit the predictions if they are accurate, while guaranteeing performance that is not much worse than the hindsight-optimal sequence of decisions, even when predictions are inaccurate. We impose the standard assumption that hitting costs are globally $\alpha$-polyhedral. We propose a novel algorithm, Adaptive Online Switching (AOS), and prove that, for a large set of feasible $\delta > 0$, it is $(1+\delta)$-competitive if predictions are perfect, while also maintaining a uniformly bounded competitive ratio of $2^{\tilde{\mathcal{O}}(1/(\alpha \delta))}$ even when predictions are adversarial. Further, we prove that this trade-off is necessary and nearly optimal, in the sense that any deterministic algorithm which is $(1+\delta)$-competitive if predictions are perfect must be at least $2^{\tilde{\Omega}(1/(\alpha \delta))}$-competitive when predictions are inaccurate. In fact, we observe a unique threshold-type behavior in this trade-off: if $\delta$ is not in the set of feasible options, then no algorithm is simultaneously $(1+\delta)$-competitive if predictions are perfect and $\zeta$-competitive when predictions are inaccurate for any $\zeta < \infty$. Furthermore, we show that memory is crucial in AOS by proving that any algorithm that does not use memory cannot benefit from predictions. We complement our theoretical results with a numerical study on a microgrid application.
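
    For readers new to this setting, the objective can be stated compactly (our notation, consistent with the standard smoothed online optimization literature): over rounds $t = 1, \dots, T$, the decision maker picks points $x_t$ and pays

        $\mathrm{cost}(\mathrm{ALG}) = \sum_{t=1}^{T} \big( f_t(x_t) + \|x_t - x_{t-1}\| \big),$

    where $f_t$ is the hitting cost revealed in round $t$ and the norm term is the switching cost. The $\alpha$-polyhedral assumption says each $f_t$ grows at least linearly away from its minimizer $v_t$: $f_t(x) \ge f_t(v_t) + \alpha \|x - v_t\|$ for all $x$.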

    Learning-Augmented Algorithms for Online TSP on the Line

    We study the online Traveling Salesman Problem (TSP) on the line augmented with machine-learned predictions. In the classical problem, there is a stream of requests released over time along the real line. The goal is to minimize the makespan of the algorithm. We distinguish between the open variant and the closed one, in which we additionally require the algorithm to return to the origin after serving all requests. The state of the art is a 1.64-competitive algorithm and a 2.04-competitive algorithm for the closed and open variants, respectively [Bjelde et al.]. In both cases, a tight lower bound is known [Ausiello et al.; Bjelde et al.]. In both variants, our primary prediction model involves predicted positions of the requests. We introduce algorithms that (i) obtain a tight 1.5-competitive ratio for the closed variant and a 1.66-competitive ratio for the open variant in the case of perfect predictions, (ii) are robust against unbounded prediction error, and (iii) are smooth, i.e., their performance degrades gracefully as the prediction error increases. Moreover, we further investigate the learning-augmented setting in the open variant by additionally considering a prediction for the last request served by the optimal offline algorithm. Our algorithm for this enhanced setting obtains a 1.33-competitive ratio with perfect predictions while also being smooth and robust, beating the lower bound of 1.44 we show for our original prediction setting for the open variant. We also provide a lower bound of 1.25 for this enhanced setting.
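
    To illustrate the prediction model, the sketch below (Python; entirely our own illustrative code) plans the natural closed tour from the predicted request positions: sweep to one extreme of the predictions, then to the other, then return to the origin. The paper's algorithms build waiting, smoothness, and robustness on top of such a plan.

        def predicted_closed_tour(predicted_positions):
            # Visit every predicted position on the line and return to
            # the origin. Any sweep covering both extremes has length
            # 2 * (right - left), which is optimal when all requests are
            # released at time 0 and predictions are perfect.
            left = min(min(predicted_positions), 0.0)
            right = max(max(predicted_positions), 0.0)
            return [left, right, 0.0], 2 * (right - left)

        print(predicted_closed_tour([-2.0, 5.0, 1.0]))
        # -> ([-2.0, 5.0, 0.0], 14.0)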

    Proportionally Fair Online Allocation of Public Goods with Predictions

    We design online algorithms for the fair allocation of public goods to a set of $N$ agents over a sequence of $T$ rounds, and focus on improving their performance using predictions. In the basic model, a public good arrives in each round, the algorithm learns every agent's value for the good, and must irrevocably decide the amount of investment in the good without exceeding a total budget of $B$ across all rounds. The algorithm can utilize (potentially inaccurate) predictions of each agent's total value for all the goods to arrive. We measure the performance of the algorithm using a proportional fairness objective, which informally demands that every group of agents be rewarded in proportion to its size and the cohesiveness of its preferences. In the special case of binary agent preferences and a unit budget, we show that $O(\log N)$ proportional fairness can be achieved without using any predictions, and that this is optimal even if perfectly accurate predictions were available. However, for general preferences and budget, no algorithm can achieve better than $\Theta(T/B)$ proportional fairness without predictions. We show that algorithms with (reasonably accurate) predictions can do much better, achieving $\Theta(\log(T/B))$ proportional fairness. We also extend this result to a general model in which a batch of $L$ public goods arrives in each round, and achieve $O(\log(\min(N,L) \cdot T/B))$ proportional fairness. Our exact bounds are parametrized as a function of the error in the predictions, and performance degrades gracefully with increasing errors.
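
    For context, one standard way to make the proportional fairness objective precise (our notation, not quoted from the paper): an outcome $x$ is $\gamma$-proportionally fair if, for every feasible alternative $y$,

        $\frac{1}{N} \sum_{i=1}^{N} \frac{u_i(y)}{u_i(x)} \le \gamma,$

    so that no alternative improves the agents' utilities by more than a factor of $\gamma$ on average. Under this reading, bounds such as $O(\log N)$ above are guarantees on $\gamma$.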