
    Optimal Eviction Policies for Stochastic Address Traces

    The eviction problem for memory hierarchies is studied under the Hidden Markov Reference Model (HMRM) of the memory trace, showing how miss minimization can be naturally formulated in the optimal control setting. In addition to the traditional version assuming a buffer of fixed capacity, a relaxed version is considered in which buffer occupancy can vary and only its average is constrained. Resorting to multiobjective optimization, viewing occupancy as a cost rather than as a constraint, the optimal eviction policy is obtained by composing solutions for the individual addressable items. This approach is then specialized to the Least Recently Used Stack Model (LRUSM), a type of HMRM often considered for traces, which has V-1 parameters, where V is the size of the virtual space. A gain-optimal policy for any target average occupancy is obtained which (i) is computable in time O(V) from the model parameters, (ii) is also optimal for the fixed-capacity case, and (iii) is characterized in terms of priorities, hence named the Least Profit Rate (LPR) policy. An O(log C) upper bound (where C is the buffer capacity) is derived for the ratio between the expected miss rate of LPR and that of OPT, the optimal off-line policy; the bound tightens to O(1) under reasonable constraints on the LRUSM parameters. Using the stack-distance framework, an algorithm is developed that computes the number of misses incurred by LPR on a given input trace, simultaneously for all buffer capacities, in time O(log V) per access. Finally, some results are provided for miss minimization over a finite horizon and over an infinite horizon under bias optimality, a criterion more stringent than gain optimality.
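
    As a point of reference for the stack-distance framework mentioned above, the sketch below runs the classic one-pass stack simulation for LRU and returns miss counts for every buffer capacity simultaneously. This is a plain O(V)-per-access illustration, not the paper's algorithm: LPR orders the stack by model-derived priorities and reaches O(log V) per access with a more elaborate representation.

```python
from collections import defaultdict

def lru_miss_counts(trace):
    """One-pass (Mattson-style) stack-distance simulation for LRU.

    Returns misses[c] = number of misses with a buffer of capacity c,
    for all capacities at once. A minimal O(V)-per-access sketch of
    the stack-distance framework; the paper's LPR variant instead
    keeps the stack ordered by model-derived priorities.
    """
    stack = []                    # most recently used item at index 0
    dist_hist = defaultdict(int)  # histogram of 0-based stack distances
    cold = 0                      # first references: misses at any capacity
    for x in trace:
        if x in stack:
            d = stack.index(x)    # stack distance of this reference
            dist_hist[d] += 1
            stack.pop(d)
        else:
            cold += 1
        stack.insert(0, x)
    # A capacity-c buffer hits iff the stack distance is < c, so it
    # misses on cold references plus hits with distance >= c.
    total_hits = sum(dist_hist.values())
    misses, acc = {}, 0
    for c in range(1, len(stack) + 1):
        acc += dist_hist[c - 1]            # references with distance <= c-1
        misses[c] = cold + total_hits - acc
    return misses

# Example: lru_miss_counts("abcab") -> {1: 5, 2: 5, 3: 3}
```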

    GreedyDual-Join: Locality-Aware Buffer Management for Approximate Join Processing Over Data Streams

    We investigate adaptive buffer management techniques for the approximate evaluation of sliding-window joins over multiple data streams. In many applications, data stream processing systems have limited memory or must handle very high-speed streams. In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins hold far fewer tuples than the sliding windows contain; a stream buffer management policy is therefore needed. We show that the buffer replacement policy is an important determinant of the quality of the produced results. To that end, we propose GreedyDual-Join (GDJ), an adaptive and locality-aware buffering technique for managing these buffers. GDJ exploits the temporal correlations (at both long and short time scales) that we found to be prevalent in many real data streams. Our algorithm is readily applicable to multiple data streams and multiple joins, and requires almost no additional system resources. We report results of an experimental study using both synthetic and real-world data sets. Our results demonstrate the superiority and flexibility of our approach when contrasted with other recently proposed techniques.
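
    The mechanism behind GreedyDual-family policies is easy to sketch. The class below is a minimal, hypothetical rendering of that priority scheme applied to a join buffer; the benefit function GDJ actually uses (driven by the observed temporal correlations) is a stand-in parameter here.

```python
import heapq

class GreedyDualBuffer:
    """Sketch of a GreedyDual-style join buffer.

    Each resident tuple gets priority H = L + benefit, where L is an
    inflation value raised to the priority of the last victim, so
    tuples that stop being useful age out gracefully. benefit() is a
    hypothetical stand-in: GDJ derives it from a tuple's expected
    contribution to the join output, which is beyond this sketch.
    """

    def __init__(self, capacity, benefit):
        self.capacity = capacity
        self.benefit = benefit    # callable: key -> estimated utility
        self.L = 0.0              # inflation value
        self.heap = []            # (priority, seq, key), lazily purged
        self.live = {}            # key -> current priority
        self.seq = 0              # tie-breaker for equal priorities

    def touch(self, key):
        """Admit a new tuple, or refresh one that just produced a match."""
        if key not in self.live and len(self.live) >= self.capacity:
            self._evict()
        pri = self.L + self.benefit(key)
        self.live[key] = pri
        heapq.heappush(self.heap, (pri, self.seq, key))
        self.seq += 1

    def _evict(self):
        """Drop the lowest-priority tuple and inflate L to its priority."""
        while self.heap:
            pri, _, key = heapq.heappop(self.heap)
            if self.live.get(key) == pri:   # skip stale heap entries
                del self.live[key]
                self.L = pri
                return
```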

    An analytical model for Loc/ID mappings caches

    Concerns regarding the scalability of interdomain routing have encouraged researchers to start elaborating a more robust Internet architecture. While consensus on the exact form of the solution is yet to be found, the need for a semantic decoupling of a node's location and identity is generally accepted as a promising way forward. However, this typically requires caches that store temporal bindings between the two namespaces, to avoid hampering router packet-forwarding speeds. In this article, we propose a methodology for the analysis of cache performance that relies on working-set theory. We first identify the conditions that network traffic must satisfy for the theory to be applicable, and then develop a model that predicts average cache miss rates from easily measurable traffic parameters. We validate the result by emulation, using real packet traces collected at the egress points of a campus network and an academic network. To prove its versatility, we extend the model to account for cache-polluting user traffic and observe that simple, low-intensity attacks drastically reduce performance, whereby manufacturers should either overprovision router memory or implement more complex cache eviction policies.
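
    For intuition about the working-set machinery such a model relies on, the sketch below computes the two basic quantities of Denning's theory empirically from a trace; the paper's contribution is to predict the miss rate analytically from traffic parameters instead.

```python
from collections import Counter

def working_set_stats(trace, max_T):
    """Empirical working-set miss rate m(T) and mean size s(T).

    A minimal sketch of classical working-set theory (Denning): a
    reference misses under window size T iff its previous occurrence
    lies more than T requests back, so m(T) is the tail of the
    inter-reference-gap distribution, and the mean working-set size
    obeys s(T + 1) = s(T) + m(T) with s(0) = 0.
    """
    n = len(trace)
    last, gaps = {}, Counter()
    for t, x in enumerate(trace):
        if x in last:
            gaps[t - last[x]] += 1   # distance to previous occurrence
        last[x] = t                  # first references have no gap entry
    m, covered = [], 0
    for T in range(max_T + 1):
        covered += gaps.get(T, 0)    # references reused within T requests
        m.append((n - covered) / n)  # gap > T, or a first reference
    s = [0.0]
    for mt in m:
        s.append(s[-1] + mt)         # s(T + 1) = s(T) + m(T)
    return m, s
```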

    Unravelling the Impact of Temporal and Geographical Locality in Content Caching Systems

    To assess the performance of caching systems, a proper process describing the content requests generated by users must be defined. Starting from the analysis of traces of YouTube video requests collected inside operational networks, we identify the characteristics of real traffic that need to be represented and those that can instead be safely neglected. Based on our observations, we introduce a simple, parsimonious traffic model, named the Shot Noise Model (SNM), that allows us to capture the temporal and geographical locality of content popularity. The SNM is sufficiently simple to be effectively employed in both analytical and scalable simulative studies of caching systems. We demonstrate this by analytically characterizing the performance of the LRU caching policy under the SNM, for both a single cache and a network of caches. With respect to the standard Independent Reference Model (IRM), some paradigmatic shifts, concerning the impact of various traffic characteristics on cache performance, clearly emerge from our results.
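
    A minimal generator for SNM-like traffic helps make the model concrete. The sketch below assumes an exponential popularity pulse and illustrative parameter values; the paper fits the shot shape and view-count distribution to measured YouTube traces.

```python
import math
import random

def snm_trace(n_contents, horizon, mean_views=50.0, lifetime=1.0, seed=0):
    """Generate a request trace from a simple Shot Noise Model.

    Each content c goes live at a uniform random time tau_c and then
    receives Poisson requests with decaying intensity
    lambda_c(t) = (V_c / L) * exp(-(t - tau_c) / L), so its expected
    total request count is V_c and its popularity lives about L time
    units. The exponential pulse and all parameter values here are
    illustrative assumptions, not the paper's fitted choices.
    """
    rng = random.Random(seed)
    events = []
    for c in range(n_contents):
        tau = rng.uniform(0, horizon)
        views = rng.expovariate(1.0 / mean_views)  # heterogeneous popularity
        peak = views / lifetime                    # intensity at time tau
        # Thin a homogeneous Poisson process at rate `peak` to realize
        # the inhomogeneous request process of this shot.
        t = tau
        while True:
            t += rng.expovariate(peak)
            if t >= horizon:
                break
            if rng.random() < math.exp(-(t - tau) / lifetime):
                events.append((t, c))
    events.sort()
    return [c for _, c in events]
```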

    MUSTACHE: Multi-Step-Ahead Predictions for Cache Eviction

    In this work, we propose MUSTACHE, a new page cache replacement algorithm whose logic is learned from observed memory access requests rather than fixed like existing policies. We formulate page request prediction as a categorical time series forecasting task. Our method then queries the learned page request forecaster to obtain the next k predicted page memory references, so as to better approximate the optimal Bélády replacement algorithm. We implement several forecasting techniques using advanced deep learning architectures and integrate the best-performing one into an existing open-source cache simulator. Experiments run on benchmark datasets show that MUSTACHE outperforms the best page replacement heuristic (i.e., exact LRU), improving the cache hit ratio by 1.9% and reducing the number of reads/writes required to handle cache misses by 18.4% and 10.3%, respectively.
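
    The eviction step that consumes such a forecast is simple to sketch. The function below is a hypothetical rendering of the Bélády-style rule: given the next k predicted references, evict the resident page whose first predicted use is furthest away, treating unpredicted pages as never reused. The forecaster itself (a deep sequence model in MUSTACHE) is out of scope; `cache` and `predicted_next_k` are illustrative names.

```python
def evict_with_forecast(cache, predicted_next_k):
    """Pick an eviction victim from `cache` (a set of resident pages)
    using `predicted_next_k` (an ordered list of predicted page ids).

    Belady's optimal rule evicts the page reused furthest in the
    future; with only a k-step forecast, we approximate it by the
    first *predicted* use, and prefer pages not predicted at all.
    """
    # Index of each page's first predicted reference (earliest wins).
    first_use = {p: i for i, p in reversed(list(enumerate(predicted_next_k)))}
    # Unpredicted pages get distance infinity and are evicted first.
    return max(cache, key=lambda p: first_use.get(p, float("inf")))

# Example: evict_with_forecast({1, 2, 3}, [2, 1, 2, 4]) -> 3
```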

    Cache Miss Estimation for Non-Stationary Request Processes

    The aim of this paper is to evaluate the miss probability of a Least Recently Used (LRU) cache when it is offered a non-stationary request process given by a Poisson cluster point process. First, we construct a probability space using Palm theory, describing how to consider a tagged document with respect to the rest of the request process. This framework allows us to derive a general integral formula for the expected number of misses of the tagged document. Then, we consider the limit when the cache size and the arrival rate go to infinity proportionally, and use the integral formula to derive an asymptotic expansion of the miss probability in powers of the inverse of the cache size. This enables us to quantify and improve the accuracy of the so-called Che approximation.
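
    The classical, stationary form of the Che approximation that the paper refines is standard: posit a characteristic time t_C at which the expected number of distinct contents requested equals the cache size, then read off per-content hit probabilities. A bisection-based sketch:

```python
import math

def che_approximation(rates, C, tol=1e-9):
    """Per-content LRU hit probabilities via the Che approximation.

    Under stationary (IRM) Poisson requests with per-content rates
    lambda_i, the approximation solves
        sum_i (1 - exp(-lambda_i * t_C)) = C
    for the characteristic time t_C, and estimates the hit probability
    of content i as 1 - exp(-lambda_i * t_C). This is the classical
    form that the paper extends to non-stationary cluster traffic.
    """
    assert 0 < C < len(rates), "cache must be smaller than the catalogue"

    def filled(t):
        # Expected number of distinct contents requested within time t.
        return sum(1.0 - math.exp(-r * t) for r in rates)

    lo, hi = 0.0, 1.0
    while filled(hi) < C:          # bracket the root
        hi *= 2.0
    while hi - lo > tol * hi:      # bisect to the characteristic time
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if filled(mid) < C else (lo, mid)
    t_C = (lo + hi) / 2.0
    return [1.0 - math.exp(-r * t_C) for r in rates]
```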

    DR-Cache: Distributed Resilient Caching with Latency Guarantees

    The dominant application in today's Internet is content streaming, which increasingly relies on caches to meet stringent latency requirements between content servers and end users. These systems routinely face the challenges of limited bandwidth capacities and network server failures, which degrade caching performance. In this paper, we study the problem of optimally allocating content over a resilient caching network in which each cache may fail. Given content request rates and multiple routing paths, we formulate an optimization problem to maximize the expected caching gain, i.e., the reduction in latency due to intermediate caching. The offline version of this problem is NP-hard. We first propose a centralized, offline algorithm and show that a solution with a (1-1/e) approximation ratio to the optimal can be constructed. We then propose a distributed ascent algorithm based on a concave relaxation of the expected gain. Informed by our analysis, we finally propose a distributed resilient caching algorithm (DR-Cache) that is simple and adaptive to network failures. We show numerically that DR-Cache significantly outperforms other candidate algorithms under both synthetic requests and real-world traces, over a class of network topologies.
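
    As a rough illustration of the offline placement problem, the sketch below runs the standard greedy for a monotone submodular objective under per-node capacity constraints. Plain greedy only guarantees a 1/2 approximation in this setting; the paper's (1-1/e) construction goes through a concave relaxation, which this sketch does not attempt. The `gain` callable is a hypothetical stand-in for the expected caching gain.

```python
def greedy_cache_allocation(nodes, contents, capacity, gain):
    """Greedy placement for a monotone submodular caching objective.

    Repeatedly add the (node, content) copy with the largest marginal
    gain until no cache slot helps. `gain(placement)` is assumed to
    return the expected latency reduction of a placement, given as a
    set of (node, content) pairs; `capacity[v]` is node v's slots.
    """
    placement = set()
    load = {v: 0 for v in nodes}
    base = gain(placement)
    while any(load[v] < capacity[v] for v in nodes):
        best, best_delta = None, 0.0
        for v in nodes:
            if load[v] >= capacity[v]:
                continue
            for c in contents:
                if (v, c) in placement:
                    continue
                delta = gain(placement | {(v, c)}) - base
                if delta > best_delta:
                    best, best_delta = (v, c), delta
        if best is None:           # no remaining copy improves the gain
            break
        placement.add(best)
        load[best[0]] += 1
        base += best_delta
    return placement
```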