
    LRU is better than FIFO under the independent reference model


    FIFO anomaly is unbounded

    Virtual memory in computers is usually implemented by demand paging. For some page replacement algorithms, the number of page faults may increase as the number of page frames increases. Belady, Nelson and Shedler constructed reference strings for which the FIFO page replacement algorithm produces nearly twice as many page faults in a larger memory than in a smaller one, and they conjectured that 2 is a general bound. We prove that this ratio can be arbitrarily large.
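
    A minimal Python sketch (not from the paper) reproduces the anomaly itself with the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5: FIFO incurs 9 faults with 3 frames but 10 faults with 4. The paper's result is that this fault ratio is not bounded by 2, which the toy example does not show.

    from collections import deque

    def fifo_faults(refs, frames):
        # Count page faults under FIFO replacement with the given frame count.
        memory = deque()              # oldest resident page at the left
        faults = 0
        for page in refs:
            if page not in memory:
                faults += 1
                if len(memory) == frames:
                    memory.popleft()  # evict the longest-resident page
                memory.append(page)
        return faults

    refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
    print(fifo_faults(refs, 3))       # 9 faults with 3 frames
    print(fifo_faults(refs, 4))       # 10 faults with 4 frames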

    Stationary Distribution of a Generalized LRU-MRU Content Cache

    Many different caching mechanisms have been proposed previously, exploring different insertion and eviction policies and their performance both individually and as part of caching networks. We obtain a novel closed-form stationary (invariant) distribution for a generalization of LRU and MRU caching nodes under a reference Markov model. Numerical comparisons are made with the "Incremental Rank Progress" (IRP, a.k.a. CLIMB) and random-eviction (a.k.a. random replacement) methods under a steady-state Zipf popularity distribution. The range of cache hit probabilities is smaller under MRU and larger under IRP than under LRU. We conclude with the invariant distribution for a special case of a random-eviction caching tree-network and an associated discussion.
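
    The paper's closed-form results are not reproduced here, but the following Python sketch estimates steady-state hit probabilities for LRU and MRU under independent Zipf-distributed requests; the catalogue size, cache size, and Zipf exponent are illustrative assumptions.

    import random

    def hit_rate(policy, n_items=100, cache_size=10, n_requests=200_000, alpha=0.8):
        weights = [1.0 / (i + 1) ** alpha for i in range(n_items)]  # Zipf popularity
        cache = []                    # most recently used item at the end
        hits = 0
        for _ in range(n_requests):
            item = random.choices(range(n_items), weights=weights)[0]
            if item in cache:
                hits += 1
                cache.remove(item)
                cache.append(item)    # refresh recency on a hit
            else:
                if len(cache) == cache_size:
                    # LRU evicts the least recently used item, MRU the most recent
                    cache.pop(0 if policy == "LRU" else -1)
                cache.append(item)
        return hits / n_requests

    for policy in ("LRU", "MRU"):
        print(policy, round(hit_rate(policy), 3))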

    Relative Interval Analysis of Paging Algorithms on Access Graphs

    Access graphs, which have previously been used with competitive analysis and relative worst-order analysis to model locality of reference in paging, are considered here in connection with relative interval analysis. The algorithms LRU, FIFO, FWF, and FAR are compared on path, star, and cycle access graphs. In this model, some of the expected results are obtained; however, although LRU is strictly better than FIFO on paths, it performs worse on stars, cycles, and complete graphs. We solve an open question from [Dorrigiv, Lopez-Ortiz, Munro, 2009], obtaining tight bounds on the relationship between LRU and FIFO under relative interval analysis.
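
    Relative interval analysis itself is not reproduced here, but a toy fault-counting experiment in Python hints at the setting: generate a random walk on a path access graph (one of the graph classes the paper studies) and compare LRU and FIFO fault counts. All parameters are illustrative.

    import random

    def walk_on_path(n_pages, length, seed=0):
        # Random walk on the path access graph 0 - 1 - ... - (n_pages - 1).
        rng = random.Random(seed)
        v, seq = 0, [0]
        for _ in range(length - 1):
            nbrs = [u for u in (v - 1, v + 1) if 0 <= u < n_pages]
            v = rng.choice(nbrs)
            seq.append(v)
        return seq

    def faults(refs, k, policy):
        cache, n = [], 0              # front of the list is evicted first
        for p in refs:
            if p in cache:
                if policy == "LRU":   # FIFO leaves the order unchanged on a hit
                    cache.remove(p)
                    cache.append(p)
            else:
                n += 1
                if len(cache) == k:
                    cache.pop(0)
                cache.append(p)
        return n

    refs = walk_on_path(n_pages=8, length=10_000)
    print("LRU:", faults(refs, 4, "LRU"), "FIFO:", faults(refs, 4, "FIFO"))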

    Truly Online Paging with Locality of Reference

    Competitive analysis fails to model locality of reference in the online paging problem. To address this, Borodin et al. introduced the access graph model, which attempts to capture locality of reference. However, the access graph model has a number of troubling aspects: the access graph has to be known in advance by the paging algorithm, and the memory required to represent the access graph itself may be very large. In this paper we present truly online, strongly competitive paging algorithms in the access graph model that have no prior information about the access sequence. We present both deterministic and randomized algorithms. The algorithms need only O(k log n) bits of memory, where k is the number of page slots available and n is the size of the virtual address space; that is, asymptotically no more memory than is needed to store the virtual address translation table. We also observe that our algorithms adapt to temporal changes in the locality of reference. We model such temporal changes by extending the access graph model to the so-called extended access graph model, in which many vertices of the graph can correspond to the same virtual page. We define a measure, Delta(G), for the rate of change in the locality of reference in G, and show that our algorithms remain strongly competitive as long as Delta(G) >= (1 + epsilon)k, while no truly online algorithm can be strongly competitive on a class of extended access graphs that includes all graphs G with Delta(G) >= k - o(k).
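
    As a quick sanity check of the O(k log n) memory claim (with illustrative sizes, not the paper's construction): merely recording which of the n virtual pages occupies each of the k page slots already takes about k * ceil(log2 n) bits, so the algorithms use asymptotically no more memory than that table.

    import math

    k, n = 1024, 2 ** 32              # 1024 page slots, 32-bit virtual address space
    bits_per_slot = math.ceil(math.log2(n))
    print(k * bits_per_slot, "bits")  # 32768 bits, i.e. 4 KiB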

    Online Coded Caching

    We consider a basic content distribution scenario consisting of a single origin server connected through a shared bottleneck link to a number of users, each equipped with a cache of finite memory. The users issue a sequence of content requests from a set of popular files, and the goal is to operate the caches as well as the server so that these requests are satisfied with the minimum number of bits sent over the shared link. Assuming a basic Markov model for renewing the set of popular files, we approximately characterize the optimal long-term average rate of the shared link. We further prove that the optimal online scheme has approximately the same performance as the optimal offline scheme, in which the cache contents can be updated based on the entire set of popular files before each new request. To support these theoretical results, we propose an online coded caching scheme termed coded least-recently sent (LRS) and simulate it on a demand time series derived from the dataset made available by Netflix for the Netflix Prize. For this time series, we show that the proposed coded LRS algorithm significantly outperforms the popular least-recently used (LRU) caching algorithm.
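
    The coded LRS scheme itself is not reproduced here, but the LRU baseline it is compared against can be sketched in a few lines of Python; the trace and capacity below are illustrative.

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = OrderedDict()        # insertion order tracks recency

        def request(self, key):
            # Return True on a hit; update recency, or evict and insert on a miss.
            if key in self.store:
                self.store.move_to_end(key)   # most recent goes to the end
                return True
            if len(self.store) == self.capacity:
                self.store.popitem(last=False)  # evict the least recently used
            self.store[key] = None
            return False

    cache = LRUCache(capacity=3)
    trace = ["a", "b", "c", "a", "d", "b", "a"]
    hits = sum(cache.request(f) for f in trace)
    print(f"hit rate: {hits}/{len(trace)}")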