
    Simple optimality proofs for Least Recently Used in the presence of locality of reference

    It is well known that competitive analysis yields results that do not reflect the observed performance of online paging algorithms. Many deterministic paging algorithms achieve the same competitive ratio, ranging from inefficient strategies such as flush-when-full to the well-performing least-recently-used (LRU). In this paper, we study this fundamental online problem from the viewpoint of stochastic dominance. We give simple proofs that when sequences are drawn from distributions modelling locality of reference, LRU stochastically dominates any other online paging algorithm. As a byproduct, we obtain simple proofs of some earlier results.
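    As a rough illustration of the comparison the paper formalizes, the sketch below (a toy simulation, not the paper's proof technique) draws request sequences from a simple locality-of-reference model and compares the empirical page-fault distributions of LRU and FIFO; stochastic dominance of LRU would mean its fault-count distribution lies at or below FIFO's at every quantile. The locality model, cache size, and sequence length are assumptions chosen purely for illustration.

```python
import random
from collections import deque

def lru_faults(requests, k):
    """Page faults of LRU with cache size k (most recently used kept at the end)."""
    cache, faults = [], 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)          # evict the least recently used page
        cache.append(p)
    return faults

def fifo_faults(requests, k):
    """Page faults of FIFO with cache size k."""
    cache, faults = deque(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.popleft()       # evict the oldest page
            cache.append(p)
    return faults

def locality_sequence(n_pages, length, rng, p_repeat=0.8, window=3):
    """Toy locality model: with probability p_repeat request one of the `window`
    most recently used pages, otherwise a uniformly random page."""
    seq, recent = [], deque(maxlen=window)
    for _ in range(length):
        if recent and rng.random() < p_repeat:
            p = rng.choice(list(recent))
        else:
            p = rng.randrange(n_pages)
        seq.append(p)
        if p in recent:
            recent.remove(p)
        recent.append(p)
    return seq

if __name__ == "__main__":
    rng = random.Random(0)
    k, trials = 4, 2000
    pairs = [(lru_faults(s, k), fifo_faults(s, k))
             for s in (locality_sequence(20, 500, rng) for _ in range(trials))]
    lru_sorted = sorted(p[0] for p in pairs)
    fifo_sorted = sorted(p[1] for p in pairs)
    # Dominance would show LRU at or below FIFO at every empirical quantile.
    for q in (0.25, 0.50, 0.75, 0.95):
        i = int(q * trials)
        print(f"q={q:.2f}: LRU faults {lru_sorted[i]:4d}   FIFO faults {fifo_sorted[i]:4d}")
```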

    Probabilistic alternatives for competitive analysis

    In the last 20 years competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Therefore, there have been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.
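    One concrete instance of this criticism: LRU and flush-when-full (FWF) are both k-competitive, which is optimal for deterministic paging, yet FWF typically incurs far more faults in practice. The sketch below is a toy comparison under an assumed Zipf-like request distribution; it only illustrates the gap that the worst-case ratio alone does not reveal.

```python
import random

def lru_faults(requests, k):
    """Standard LRU paging: on a fault with a full cache, evict the least recently used page."""
    cache, faults = [], 0
    for p in requests:
        if p in cache:
            cache.remove(p)
        else:
            faults += 1
            if len(cache) == k:
                cache.pop(0)
        cache.append(p)
    return faults

def fwf_faults(requests, k):
    """Flush-when-full: on a fault with a full cache, empty the entire cache."""
    cache, faults = set(), 0
    for p in requests:
        if p not in cache:
            faults += 1
            if len(cache) == k:
                cache.clear()
            cache.add(p)
    return faults

def skewed_sequence(n_pages, length, rng):
    """Zipf-like request distribution as a stand-in for 'typical' inputs."""
    weights = [1.0 / (i + 1) for i in range(n_pages)]
    return rng.choices(range(n_pages), weights=weights, k=length)

if __name__ == "__main__":
    rng = random.Random(1)
    k, reps = 5, 200
    lru_avg = fwf_avg = 0.0
    for _ in range(reps):
        seq = skewed_sequence(50, 1000, rng)
        lru_avg += lru_faults(seq, k) / reps
        fwf_avg += fwf_faults(seq, k) / reps
    print(f"average faults over {reps} runs  LRU: {lru_avg:.1f}   FWF: {fwf_avg:.1f}")
```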

    On the Incomparability of Cache Algorithms in Terms of Timing Leakage

    Modern computer architectures rely on caches to reduce the latency gap between the CPU and main memory. While indispensable for performance, caches pose a serious threat to security because they leak information about memory access patterns of programs via execution time. In this paper, we present a novel approach for reasoning about the security of cache algorithms with respect to timing leaks. The basis of our approach is the notion of leak competitiveness, which compares the leakage of two cache algorithms on every possible program. Based on this notion, we prove the following two results: First, we show that leak competitiveness is symmetric in the cache algorithms. This implies that no cache algorithm dominates another in terms of leakage via a program's total execution time. This is in contrast to performance, where it is known that such dominance relationships exist. Second, when restricted to caches with finite control, the leak-competitiveness relationship between two cache algorithms is either asymptotically linear or constant. No other shapes are possible.
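    As a very rough illustration of the quantity at stake (a toy proxy, not the paper's formal definition of leak competitiveness), the sketch below counts how many distinct total miss counts a secret-dependent toy program can produce under LRU and FIFO caches; more distinguishable miss counts correspond to more information an attacker could extract from total execution time. The program, cache size, and counting measure are all assumptions made for illustration.

```python
from itertools import product

def miss_count(policy, accesses, k):
    """Total misses of a cache policy ('LRU' or 'FIFO') with k lines on an access trace."""
    cache, misses = [], 0
    for a in accesses:
        if a in cache:
            if policy == "LRU":
                cache.remove(a)
                cache.append(a)       # hit: move to most recently used position
        else:
            misses += 1
            if len(cache) == k:
                cache.pop(0)          # LRU: least recently used; FIFO: oldest entry
            cache.append(a)
    return misses

def timing_classes(policy, program, secrets, k):
    """Counting-style leakage proxy: the number of distinct total miss counts
    (hence distinguishable execution times) over all secret inputs."""
    return len({miss_count(policy, program(s), k) for s in secrets})

def toy_program(secret_bits):
    """A toy program whose memory trace depends on a 4-bit secret."""
    trace = [0, 1, 2]                          # secret-independent prefix
    for i, bit in enumerate(secret_bits):
        trace.append(3 + i if bit else 0)      # secret-dependent accesses
    return trace

if __name__ == "__main__":
    secrets = list(product([0, 1], repeat=4))
    for policy in ("LRU", "FIFO"):
        n = timing_classes(policy, toy_program, secrets, k=2)
        print(f"{policy}: {n} distinguishable timing classes on this program")
```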

    Probabilistic analysis of Online Bin Coloring algorithms via Stochastic Comparison

    This paper proposes a new method for probabilistic analysis of online algorithms that is based on the notion of stochastic dominance. We develop the method for the Online Bin Coloring problem introduced by Krumke et al. Using methods for the stochastic comparison of Markov chains, we establish the strong result that the performance of the online algorithm GreedyFit is stochastically dominated by the performance of the algorithm OneBin for any number of items processed. This result gives a more realistic picture than competitive analysis and explains the behavior observed in simulations.
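    For intuition about the GreedyFit versus OneBin comparison, the following toy simulation assumes the usual bin coloring setup (m simultaneously open bins of capacity B, cost equal to the maximum number of distinct colors ever packed into a bin) and uniformly random colors; it only illustrates the behavior observed in simulations, not the stochastic-dominance proof.

```python
import random

def one_bin(colors, B):
    """OneBin: use a single open bin; when it is full, close it and open a new one."""
    bin_colors, load, worst = set(), 0, 0
    for c in colors:
        bin_colors.add(c)
        load += 1
        worst = max(worst, len(bin_colors))
        if load == B:
            bin_colors, load = set(), 0
    return worst

def greedy_fit(colors, B, m):
    """GreedyFit: prefer an open bin already holding the item's color,
    otherwise the open bin with the fewest distinct colors."""
    bins = [{"colors": set(), "load": 0} for _ in range(m)]
    worst = 0
    for c in colors:
        candidates = [b for b in bins if c in b["colors"]] or bins
        b = min(candidates, key=lambda x: len(x["colors"]))
        b["colors"].add(c)
        b["load"] += 1
        worst = max(worst, len(b["colors"]))
        if b["load"] == B:                      # close the full bin, open a fresh one
            b["colors"], b["load"] = set(), 0
    return worst

if __name__ == "__main__":
    rng = random.Random(7)
    B, m, n_colors, n_items, trials = 6, 3, 10, 600, 500
    wins = 0
    for _ in range(trials):
        seq = [rng.randrange(n_colors) for _ in range(n_items)]
        if greedy_fit(seq, B, m) <= one_bin(seq, B):
            wins += 1
    print(f"GreedyFit no worse than OneBin on {wins}/{trials} random sequences")
```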

    The Frequent Items Problem in Online Streaming under Various Performance Measures

    In this paper, we strengthen the competitive analysis results obtained for a fundamental online streaming problem, the Frequent Items Problem. Additionally, we contribute a more detailed analysis of this problem, using alternative performance measures, supplementing the insight gained from competitive analysis. The results also contribute to the general study of performance measures for online algorithms. It has long been known that competitive analysis suffers from drawbacks in certain situations, and many alternative measures have been proposed. However, more systematic comparative studies of performance measures have been initiated recently, and we continue this work, using competitive analysis, relative interval analysis, and relative worst order analysis on the Frequent Items Problem.
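    For readers unfamiliar with the underlying problem, the sketch below shows the classic Misra-Gries summary, a standard deterministic streaming algorithm for frequent items; note that the online model analyzed in the paper (which items to keep in a bounded memory, judged under various performance measures) is not necessarily identical to this textbook variant.

```python
def misra_gries(stream, k):
    """Misra-Gries summary with k counters: any item whose true frequency exceeds
    n/(k+1) over a stream of length n is guaranteed to remain in the summary."""
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # Decrement every counter and drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

if __name__ == "__main__":
    stream = list("abracadabra_alabama")
    print(misra_gries(stream, k=2))   # 'a' dominates the stream and survives
```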

    Exact distributional analysis of online algorithms with lookahead

    In online optimization, input data is revealed sequentially. Optimization problems in practice often exhibit this type of information disclosure, as opposed to standard offline optimization where all information is known in advance. We analyze the performance of algorithms for online optimization with lookahead using a holistic distributional approach. To this end, we first introduce the performance measurement method of counting distribution functions. Then, we derive analytical expressions for the counting distribution functions of the objective value and the performance ratio in elementary cases of the online bin packing and the online traveling salesman problem. For bin packing, we also establish a relation between algorithm processing and the Catalan numbers. The paper shows that an exact analysis is strongly interconnected with the combinatorial structure of the problem and algorithm under consideration. Results further indicate that the value of lookahead depends heavily on the problem itself. The analysis also shows that exact distributional analysis can be used to discover key effects and identify related root causes in relatively simple problem settings. These insights can then be transferred to the analysis of more complex settings where the introduced performance measurement approach has to be used on an approximative basis (e.g., in simulation-based optimization).
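    As a concrete (and deliberately tiny) example of a counting distribution function, the sketch below enumerates every arrival order of a hand-picked six-item bin packing instance, runs the online First Fit algorithm on each order, and tabulates the cumulative distribution of the performance ratio against the exact offline optimum; the capacity, the items, and the choice of First Fit are assumptions for illustration, not the paper's instances.

```python
from itertools import permutations
from collections import Counter
from math import ceil

CAP = 10  # assumed bin capacity

def first_fit(items):
    """Online First Fit: place each arriving item into the first open bin it fits in."""
    bins = []
    for it in items:
        for i, load in enumerate(bins):
            if load + it <= CAP:
                bins[i] += it
                break
        else:
            bins.append(it)
    return len(bins)

def opt_bins(items):
    """Exact offline optimum by brute-force search (fine for tiny instances)."""
    items = sorted(items, reverse=True)

    def feasible(i, loads):
        if i == len(items):
            return True
        tried = set()
        for j in range(len(loads)):
            if loads[j] + items[i] <= CAP and loads[j] not in tried:
                tried.add(loads[j])         # symmetry breaking: skip equal loads
                loads[j] += items[i]
                if feasible(i + 1, loads):
                    return True
                loads[j] -= items[i]
        return False

    b = ceil(sum(items) / CAP)
    while not feasible(0, [0] * b):
        b += 1
    return b

if __name__ == "__main__":
    items = (6, 6, 6, 4, 4, 4)
    opt = opt_bins(list(items))
    ratios = Counter(first_fit(order) / opt for order in permutations(items))
    total = sum(ratios.values())
    # Counting distribution function of the performance ratio over all request orders.
    cumulative = 0
    for ratio in sorted(ratios):
        cumulative += ratios[ratio]
        print(f"P(ratio <= {ratio:.3f}) = {cumulative / total:.3f}")
```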

    An Associativity Threshold Phenomenon in Set-Associative Caches

    In an α-way set-associative cache, the cache is partitioned into disjoint sets of size α, and each item can only be cached in one set, typically selected via a hash function. Set-associative caches are widely used and have many benefits, e.g., in terms of latency or concurrency, over fully associative caches, but they often incur more cache misses. As the set size α decreases, the benefits increase, but the paging costs worsen. In this paper we characterize the performance of an α-way set-associative LRU cache of total size k, as a function of α = α(k). We prove the following, assuming that sets are selected using a fully random hash function:
    - For α = ω(log k), the paging cost of an α-way set-associative LRU cache is within additive O(1) of that of a fully associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for all request sequences of length poly(k).
    - For α = o(log k), and for all c = O(1) and r = O(1), the paging cost of an α-way set-associative LRU cache is not within a factor c of that of a fully associative LRU cache of size k/r, for some request sequence of length O(k^1.01).
    - For α = ω(log k), if the hash function can be occasionally changed, the paging cost of an α-way set-associative LRU cache is within a factor 1 + o(1) of that of a fully associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for request sequences of arbitrary (e.g., super-polynomial) length.
    Some of our results generalize to other paging algorithms besides LRU, such as least-frequently used (LFU).
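    The trade-off can also be explored experimentally; the sketch below simulates an α-way set-associative LRU cache against a fully associative one (α = k) and reports miss counts as α varies. The cache size, the request distribution, and the use of Python's built-in hash as a stand-in for a fully random hash function are assumptions made purely for illustration, not a reproduction of the paper's bounds.

```python
import random
from collections import OrderedDict

class SetAssociativeLRU:
    """A cache of k lines split into k/alpha sets of size alpha, with LRU eviction
    inside each set; alpha = k recovers a fully associative LRU cache."""
    def __init__(self, k, alpha, seed=0):
        assert k % alpha == 0
        self.alpha = alpha
        self.sets = [OrderedDict() for _ in range(k // alpha)]
        self.salt = random.Random(seed).getrandbits(64)   # stand-in for a random hash function
        self.misses = 0

    def access(self, page):
        s = self.sets[hash((page, self.salt)) % len(self.sets)]
        if page in s:
            s.move_to_end(page)            # hit: mark as most recently used
        else:
            self.misses += 1
            if len(s) == self.alpha:
                s.popitem(last=False)      # evict the set's least recently used page
            s[page] = True

if __name__ == "__main__":
    rng = random.Random(3)
    k = 64
    # A working set slightly larger than the cache, so every configuration incurs misses.
    requests = [rng.randrange(80) for _ in range(100_000)]
    for alpha in (1, 2, 4, 8, 16, 64):
        cache = SetAssociativeLRU(k, alpha)
        for p in requests:
            cache.access(p)
        print(f"alpha={alpha:2d}  misses={cache.misses}")
```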