Learning-Augmented Weighted Paging

Abstract

We consider a natural semi-online model for weighted paging, where at any time the algorithm is given predictions, possibly with errors, about the next arrival of each page. The model is inspired by Belady's classic optimal offline algorithm for unweighted paging, and extends the recently studied model for learning-augmented paging (Lykouris and Vassilvitskii, 2018) to the weighted setting. For the case of perfect predictions, we provide an ℓ-competitive deterministic and an O(log ℓ)-competitive randomized algorithm, where ℓ is the number of distinct weight classes. Both these bounds are tight, and imply an O(log W)- and an O(log log W)-competitive ratio, respectively, when the page weights lie between 1 and W. Previously, it was not known how to use these predictions in the weighted setting, and only bounds of k and O(log k) were known, where k is the cache size. Our results also generalize to the interleaved paging setting and to the case of imperfect predictions, with the competitive ratios degrading smoothly from O(ℓ) and O(log ℓ) to O(k) and O(log k), respectively, as the prediction error increases. Our results are based on several insights into structural properties of Belady's algorithm and the sequence of page arrival predictions, and on novel potential functions that incorporate these predictions. For the case of unweighted paging, the results imply a very simple potential-function-based proof of the optimality of Belady's algorithm, which may be of independent interest.
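As background for the prediction model, the following is a minimal sketch of Belady's offline rule for unweighted paging: on a miss with a full cache, evict the page whose next request lies farthest in the future. The function name and the choice to return the miss count are illustrative, not from the paper.

```python
def belady_misses(requests, k):
    """Simulate Belady's farthest-in-future rule on an unweighted
    request sequence with cache size k; return the number of misses."""
    # Precompute, for each position i, the next time requests[i] appears again
    # (infinity if it never does), by scanning the sequence backwards.
    next_use = [0] * len(requests)
    last_seen = {}
    for i in range(len(requests) - 1, -1, -1):
        next_use[i] = last_seen.get(requests[i], float('inf'))
        last_seen[requests[i]] = i

    cache = {}  # page -> time of its next request
    misses = 0
    for i, page in enumerate(requests):
        if page in cache:
            cache[page] = next_use[i]
            continue
        misses += 1
        if len(cache) >= k:
            # Evict the page whose next request is farthest in the future.
            victim = max(cache, key=cache.get)
            del cache[victim]
        cache[page] = next_use[i]
    return misses
```

In the semi-online model studied here, the algorithm sees predictions of these next-arrival times rather than the true values, and the weighted setting additionally charges each eviction by the page's weight.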