In evaluating an algorithm, worst-case analysis can be overly pessimistic.
Average-case analysis can be overly optimistic. An intermediate approach is to
show that an algorithm does well on a broad class of input distributions.
Koutsoupias and Papadimitriou recently analyzed the least-recently-used (LRU)
paging strategy in this manner, analyzing its performance on an input sequence
generated by a so-called diffuse adversary -- one that must choose each request
probabilistically so that no page is chosen with probability more than some
fixed epsilon > 0. They showed that LRU achieves the optimal competitive ratio
(for deterministic on-line algorithms), but did not determine the actual
ratio.
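For intuition only (not part of the paper), here is a minimal Python sketch of the setup: an LRU cache of size k serving requests from one simple diffuse adversary, namely a uniform distribution over enough pages that no single page is requested with probability more than epsilon. The names diffuse_request and LRUCache are illustrative, not from the paper.

    import random
    from collections import OrderedDict

    def diffuse_request(pages, epsilon):
        """One request drawn uniformly from `pages`.  When len(pages) >= 1/epsilon,
        each page has probability 1/len(pages) <= epsilon, so the draw satisfies
        the diffuse-adversary constraint."""
        assert len(pages) * epsilon >= 1, "need at least 1/epsilon candidate pages"
        return random.choice(pages)

    class LRUCache:
        """A cache holding k pages; on a fault with a full cache, evict the
        least-recently-used page."""
        def __init__(self, k):
            self.k = k
            self.pages = OrderedDict()   # iteration order tracks recency
            self.faults = 0

        def request(self, page):
            if page in self.pages:
                self.pages.move_to_end(page)          # hit: now most recently used
            else:
                self.faults += 1                      # fault: load the page
                if len(self.pages) >= self.k:
                    self.pages.popitem(last=False)    # evict least recently used
                self.pages[page] = True

    if __name__ == "__main__":
        k, epsilon = 4, 0.25                # epsilon ~ 1/k, near the threshold
        universe = list(range(10))          # 10 pages, each with probability 0.1 <= epsilon
        cache = LRUCache(k)
        for _ in range(10_000):
            cache.request(diffuse_request(universe, epsilon))
        print(f"LRU faults on 10,000 diffuse requests: {cache.faults}")

Note that this sketch only counts LRU's faults; computing a competitive ratio would also require the offline optimum (Belady's algorithm), which is omitted here.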
In this paper we estimate the optimal ratios to within roughly a factor of two,
for both deterministic strategies (e.g., least-recently-used and
first-in-first-out) and randomized strategies. Around the threshold epsilon ~
1/k (where k is the cache size), the optimal ratios are both Theta(ln k). Below
the threshold the ratios tend rapidly to O(1). Above the threshold the ratio is
unchanged for randomized strategies but tends rapidly to Theta(k) for
deterministic ones.
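A rough piecewise restatement of these bounds, writing c(epsilon) for the optimal competitive ratio against a diffuse adversary with parameter epsilon:

    c(epsilon) ~  O(1)                                        for epsilon well below 1/k,
                  Theta(ln k)  (deterministic and randomized)  for epsilon around 1/k,
                  Theta(k) deterministic, Theta(ln k) randomized  for epsilon well above 1/k.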
We also give an alternate proof of the optimality of LRU.
Comment: A conference version appeared in SODA '98 as "Bounding the Diffuse Adversary".