

    Evolution of Cache Replacement Policies to Track Heavy-hitter Flows

    Several important network applications cannot easily scale to higher data rates without focusing only on the largest traffic flows. Recent works have discussed algorithmic solutions that trade off accuracy for efficiency when filtering and tracking the so-called "heavy hitters". However, a major limitation is that flows must initially pass through a filtering process, making it impossible to track state associated with the first few packets of a flow. In this paper, we propose a different paradigm for tracking large flows that overcomes this limit. We view the problem as one of managing a small flow cache with a finely tuned replacement policy that strives to avoid evicting the heavy hitters. Our scheme starts from recorded traffic traces and uses Genetic Algorithms to evolve a replacement policy tailored for supporting seamless, stateful traffic processing. We evaluate our scheme in terms of missed heavy hitters: it performs close to the optimal, oracle-based policy, and it consistently outperforms other standard policies, in most cases by a factor of two. © 2011 Springer-Verlag
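    The flow-cache paradigm described in this abstract can be made concrete with a small sketch. The paper evolves its replacement policy with Genetic Algorithms; the hand-written heuristic below (evict the flow with the fewest packets seen, oldest first) is only a hypothetical stand-in for such an evolved policy, and all names are illustrative:

```python
class FlowCache:
    """Sketch of a small flow cache: every packet updates per-flow state,
    and on overflow the replacement policy evicts the flow judged least
    likely to be a heavy hitter. Here the policy is a simple heuristic:
    evict the flow with the fewest packets, oldest insertion first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = {}      # flow_id -> packets seen so far
        self.arrival = {}    # flow_id -> logical insertion time
        self.clock = 0

    def packet(self, flow_id):
        self.clock += 1
        if flow_id in self.flows:
            self.flows[flow_id] += 1      # state kept from the first packet
            return
        if len(self.flows) >= self.capacity:
            # Replacement decision: smallest flow, ties broken by age.
            victim = min(self.flows,
                         key=lambda f: (self.flows[f], self.arrival[f]))
            del self.flows[victim]
            del self.arrival[victim]
        self.flows[flow_id] = 1
        self.arrival[flow_id] = self.clock
```

    Because a flow's state starts at its first packet and only eviction can lose it, a good policy keeps heavy hitters resident end to end, which is the property the evolved policy optimizes.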

    Cut-and-paste file-systems: integrating simulators and file-systems

    We have implemented an integrated and configurable file system called PFS and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms; PFS is used for on-line file-system data storage. Algorithms are first analyzed in Patsy and, when we are satisfied with the performance results, migrated into PFS for on-line use. Since Patsy and PFS are derived from a common cut-and-paste file-system framework, this migration proceeds smoothly. We have found this integration quite useful: algorithm bottlenecks have been found through Patsy that could have led to performance degradation in PFS. Off-line simulators are simpler to analyze than on-line file systems because a workload can be replayed repeatedly on the same off-line simulator. This is almost impossible in on-line file systems, since it is hard to provide similar conditions for each experiment run. Since the simulator and the file system are integrated (hence use the same code), experiment results from the simulator have relevance in the real system. This paper describes the cut-and-paste framework, the instantiation of the framework into PFS and Patsy and, finally, some of the experiments we conducted in Patsy.

    Impact of Segmentation and Popularity-based Cache Replacement Policies on Named Data Networking

    The data distribution mechanism of internet protocol (IP) technology is inefficient because it requires the user to await a response from the server. Named data networking (NDN) is a cutting-edge technology being assessed for enhancing IP networks, primarily because it incorporates a data-packet caching technique on every router. However, the effectiveness of this approach depends heavily on the router's content capacity, requiring a data replacement mechanism when that capacity is full. The least recently used (LRU) method is commonly employed as the cache replacement policy, yet it is considered ineffective because it neglects content popularity: LRU evicts the least recently requested data regardless of its overall popularity, which can lead to inefficient caching of popular data that multiple users constantly request. To address this problem, we propose a segmented LRU (SLRU) replacement strategy that considers content popularity. SLRU evaluates both currently popular content and content that has previously been popular using two segment categories, namely the probationary and protected segments. The Icarus simulator was used to evaluate multiple comprehensive scenarios. Our experimental results show that SLRU obtains a better cache hit ratio (CHR) and is able to reduce latency and link load compared to existing cache replacement policies such as First In, First Out (FIFO), LRU, and Climb.
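    The probationary/protected mechanism described above is well defined, so a minimal sketch can make it concrete. Segment sizes and method names here are illustrative choices, not taken from the paper:

```python
from collections import OrderedDict

class SLRU:
    """Segmented LRU sketch: new items enter a probationary segment; a
    second hit promotes an item to the protected segment. Evictions come
    from the probationary segment, so one-hit wonders cannot displace
    items that have already proven popular."""

    def __init__(self, probation_size, protected_size):
        self.probation = OrderedDict()   # most recently used at the end
        self.protected = OrderedDict()
        self.probation_size = probation_size
        self.protected_size = protected_size

    def access(self, key, value=None):
        """Return True on a cache hit, False on a miss (then insert)."""
        if key in self.protected:
            self.protected.move_to_end(key)      # refresh recency
            return True
        if key in self.probation:
            value = self.probation.pop(key)      # promote on second hit
            self.protected[key] = value
            if len(self.protected) > self.protected_size:
                # Demote the protected segment's LRU item to probation.
                old_key, old_val = self.protected.popitem(last=False)
                self._insert_probation(old_key, old_val)
            return True
        self._insert_probation(key, value)       # miss: enter probation
        return False

    def _insert_probation(self, key, value):
        self.probation[key] = value
        if len(self.probation) > self.probation_size:
            self.probation.popitem(last=False)   # evict probation LRU
```

    The design choice is the key point of the paper: an item must demonstrate repeated demand before it can occupy the protected segment, which approximates popularity awareness while keeping LRU's O(1) bookkeeping.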

    Predictive caching and prefetching of query results in search engines


    Universiteit Leiden Opleiding Informatica

    S.M.A.C.K. is a small and simple operating system developed at LIACS to run on the BeagleBoard. Its simple design allows students to learn how operating systems work. To keep it simple, only the most essential features of an operating system are implemented. One essential feature that is missing is support for storage devices. The BeagleBoard does have an SD-card slot, but there are no drivers to use it. The only way to boot is with a RAM disk that is loaded by the boot loader, which is not very convenient. It would make S.M.A.C.K. more user friendly if it were possible to access data on the SD-card and boot directly from it. This thesis describes the implementation of an SD-card driver for S.M.A.C.K. that adds support for a permanent storage device, along with the implementation of two methods to improve performance: DMA and a buffer cache. Several experiments were then performed to test the effectiveness of DMA and the buffer cache on the BeagleBoard. The results show that a buffer cache and DMA transfers can improve performance in specific situations.
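    As a rough illustration of why a buffer cache helps with a slow device like an SD-card, here is a minimal LRU buffer-cache sketch. The `device_read` callback stands in for the driver's block-read routine; it and all other names are hypothetical, not taken from the thesis:

```python
from collections import OrderedDict

class BufferCache:
    """Minimal read-side buffer cache: block reads are served from memory
    when possible, so the slow device is touched only on a miss. Buffers
    are recycled in LRU order when the pool is full."""

    def __init__(self, device_read, num_buffers):
        self.device_read = device_read   # callback: block number -> bytes
        self.buffers = OrderedDict()     # block number -> data, LRU order
        self.num_buffers = num_buffers
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.buffers:
            self.hits += 1
            self.buffers.move_to_end(block)      # refresh recency
            return self.buffers[block]
        self.misses += 1
        data = self.device_read(block)           # slow path: go to device
        self.buffers[block] = data
        if len(self.buffers) > self.num_buffers:
            self.buffers.popitem(last=False)     # recycle the LRU buffer
        return data
```

    This also hints at why the thesis finds the benefit workload-dependent: only access patterns that re-read recently used blocks see hits, while purely sequential one-pass reads pay the full device cost either way.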

    Replacement policies for a proxy cache


    Evaluation of Cache Inclusion Policies in Cache Management

    Processor speed has been increasing at a higher rate than memory speed in recent years. Caches were designed to mitigate this gap and, ever since, several cache management techniques have been designed to further improve performance. Most techniques have been designed and evaluated on non-inclusive caches, even though many modern processors implement either inclusive or exclusive policies. Exclusive caches benefit from a larger effective capacity, so they might become more popular as the number of cores per last-level cache increases. This thesis aims to demonstrate that the best cache management techniques for exclusive caches are not necessarily the same as for non-inclusive or inclusive caches. To assess this statement we evaluated several cache management techniques with different inclusion policies, numbers of cores, and cache sizes. We found that the configurations for inclusive and non-inclusive policies usually performed similarly, but for exclusive caches the best configurations were indeed different. Prefetchers impacted performance more than replacement policies and determined which configurations were best. Exclusive caches also showed a higher speedup on multi-core systems. The least recently used (LRU) replacement policy is among the best policies for any prefetcher combination in exclusive caches, yet it is the policy used as a baseline in most cache replacement research. We therefore conclude that the results in this thesis motivate further research on prefetchers and replacement policies targeted at exclusive caches.
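    The inclusion-policy distinction above can be made concrete with a toy two-level hierarchy: in an exclusive hierarchy a block lives in exactly one level, so the last-level cache acts as a victim cache and the effective capacity is the sum of the levels. This is an illustrative sketch with made-up names, not a model of any particular processor:

```python
from collections import OrderedDict

class ExclusiveHierarchy:
    """Toy two-level exclusive cache: a block is never in both levels.
    L1 victims drop into L2; an L2 hit moves the block back into L1."""

    def __init__(self, l1_size, l2_size):
        self.l1 = OrderedDict()   # most recently used at the end
        self.l2 = OrderedDict()
        self.l1_size, self.l2_size = l1_size, l2_size

    def access(self, addr):
        """Return 'L1', 'L2', or 'miss'; maintain exclusivity."""
        if addr in self.l1:
            self.l1.move_to_end(addr)
            return "L1"
        if addr in self.l2:
            del self.l2[addr]      # exclusive: leave L2 when...
            self._fill_l1(addr)    # ...promoting the block into L1
            return "L2"
        self._fill_l1(addr)        # miss: new blocks enter L1 only
        return "miss"

    def _fill_l1(self, addr):
        self.l1[addr] = True
        if len(self.l1) > self.l1_size:
            victim, _ = self.l1.popitem(last=False)
            self.l2[victim] = True             # L1 victim drops into L2
            if len(self.l2) > self.l2_size:
                self.l2.popitem(last=False)    # evict L2's LRU block
```

    An inclusive hierarchy would instead insert each block into both levels on a miss, so L2's contents constrain L1's; this difference in fill and eviction paths is why the thesis argues replacement and prefetching techniques need to be re-evaluated per inclusion policy.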