    Improving Cache Hits On Replacement Blocks Using Weighted LRU-LFU Combinations

    Block replacement refers to the process of selecting a block of data, or a cache line, to be evicted when a new block needs to be brought into a cache or a memory hierarchy. In computer systems, block replacement policies are used in caching mechanisms, such as CPU caches or disk caches, to determine which blocks are evicted when the cache is full and new data needs to be fetched. The weighted combination of LRU (Least Recently Used) and LFU (Least Frequently Used) is known as the "LFU2" algorithm. LFU2 is an enhanced caching algorithm that aims to leverage the benefits of both LRU and LFU by considering both the recency and the frequency of item access. In LFU2, each item in the cache is associated with two counters: a usage counter that tracks how frequently the item is accessed, and a recency counter that tracks how recently it was accessed. These counters are used to calculate a combined weight for each item in the cache. Based on the experimental results, the weighted LRU-LFU combination method succeeded in increasing cache hits from 94.8% with LRU and 95.5% with LFU to 96.6%.
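
    To make the weighted combination concrete, here is a minimal sketch in Python of a cache that scores each entry by a weighted sum of its frequency counter and a recency term derived from a logical clock, and evicts the lowest-scored entry. The class name, the weights w_freq and w_rec, and the exact scoring formula are illustrative assumptions, not taken from the paper.

    import itertools

    class WeightedLruLfuCache:
        def __init__(self, capacity, w_freq=0.5, w_rec=0.5):
            self.capacity = capacity
            self.w_freq = w_freq            # weight on the usage (frequency) counter
            self.w_rec = w_rec              # weight on the recency counter
            self.clock = itertools.count()  # logical time for recency tracking
            self.store = {}                 # key -> (value, freq, last_access)

        def _score(self, freq, last_access, now):
            # Higher frequency and more recent access both raise the score;
            # the lowest-scored entry is the eviction victim.
            recency = 1.0 / (1 + now - last_access)
            return self.w_freq * freq + self.w_rec * recency

        def get(self, key):
            if key not in self.store:
                return None  # cache miss
            value, freq, _ = self.store[key]
            self.store[key] = (value, freq + 1, next(self.clock))
            return value

        def put(self, key, value):
            now = next(self.clock)
            if key in self.store:
                _, freq, _ = self.store[key]
                self.store[key] = (value, freq + 1, now)
                return
            if len(self.store) >= self.capacity:
                # Evict the entry with the lowest combined LRU-LFU weight.
                victim = min(self.store, key=lambda k: self._score(
                    self.store[k][1], self.store[k][2], now))
                del self.store[victim]
            self.store[key] = (value, 1, now)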

    Intelligent Cooperative Adaptive Weight Ranking Policy via dynamic aging based on NB and J48 classifiers

    The increased usage of the World Wide Web leads to increased network traffic and creates a bottleneck in internet performance. For most users, access speed, or response time, is the most critical factor when using the internet. Response time is reduced by web proxy caching, which stores copies of pages between the client and server sides. If a requested page is cached in the proxy, there is no need to access the origin server. But the cache size is limited, so cache replacement algorithms are used to remove pages from the cache when it is full. Conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and Randomised Policy may discard essential pages just before they are used, and they cannot be well optimized because evicting a page intelligently requires a prediction before it is replaced. Hence, this paper proposes an integration of the Adaptive Weight Ranking Policy (AWRP) with intelligent classifiers (NB-AWRP-DA and J48-AWRP-DA) via a dynamic aging factor. To strengthen the classifiers' predictive power before integrating them with AWRP, particle swarm optimization (PSO) automated wrapper feature selection is used to choose the subset of features that is most relevant and most influences classifier prediction accuracy. Experimental results show that NB-AWRP-DA improves the hit ratio (HR) of the web proxy cache across multiple proxy datasets by 4.008%, 4.087% and 14.022% over LRU, LFU, and FIFO respectively, while J48-AWRP-DA increases HR by 0.483%, 0.563% and 10.497%. Meanwhile, the byte hit ratio (BHR) of NB-AWRP-DA rises by 0.9911%, 1.008% and 11.5842% over LRU, LFU, and FIFO respectively, and by 0.0204%, 0.0379% and 10.6136% using J48-AWRP-DA.
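
    The abstract does not give the AWRP weight formula, so the sketch below only illustrates the general idea: each cached page is ranked by a weight that grows with access frequency, decays with elapsed time through a dynamic aging factor, and is biased by a classifier's estimate of whether the page will be requested again. The predict_revisit stub stands in for the trained NB or J48 model; its interface and the weight expression are assumptions.

    import time

    def predict_revisit(page):
        # Stand-in for an NB or J48 classifier that returns the estimated
        # probability that the page will be requested again (hypothetical).
        return 0.5

    def rank_weight(page, aging_factor, now):
        # Frequency pushes the weight up; time since the last hit decays it,
        # scaled by the dynamic aging factor; the classifier's re-visit
        # probability biases the ranking toward pages it expects to be reused.
        age = now - page["last_access"]
        return predict_revisit(page) * page["frequency"] / (1.0 + aging_factor * age)

    def choose_victim(cache, aging_factor):
        # Evict the page with the lowest rank weight.
        now = time.time()
        return min(cache, key=lambda p: rank_weight(p, aging_factor, now))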

    V-Cache: Towards Flexible Resource Provisioning for Multi-tier Applications in IaaS Clouds

    Although the resource elasticity offered by Infrastructure-as-a-Service (IaaS) clouds opens up opportunities for elastic application performance, it also poses challenges to application management. Cluster applications, such as multi-tier websites, further complicate management, requiring not only accurate capacity planning but also proper partitioning of the resources into a number of virtual machines. Instead of burdening cloud users with complex management, we move the task of determining the optimal resource configuration for cluster applications to cloud providers. We find that a structural reorganization of multi-tier websites, by adding a caching tier which runs on resources debited from the original resource budget, significantly boosts application performance and reduces resource usage. We propose V-Cache, a machine learning based approach to flexible provisioning of resources for multi-tier applications in clouds. V-Cache transparently places a caching proxy in front of the application. It uses a genetic algorithm to identify the incoming requests that benefit most from caching and dynamically resizes the cache space to accommodate these requests. We develop a reinforcement learning algorithm to optimally allocate the remaining capacity to other tiers. We have implemented V-Cache on a VMware-based cloud testbed. Experiment results with the RUBiS and WikiBench benchmarks show that V-Cache outperforms a representative capacity management scheme and a cloud-cache based resource provisioning approach by at least 15% in performance, and achieves at least 11% and 21% savings on CPU and memory resources, respectively.
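
    As a rough illustration of the request-selection step, the sketch below encodes each request class as one gene (cache it or not) and scores a chromosome by the estimated hit benefit of the selected classes under a fixed cache-space budget. The encoding, the fitness shape, and the field names are assumptions; the paper's actual GA design is not described in the abstract.

    import random

    def fitness(chromosome, classes, budget):
        # Total cache footprint of the selected request classes.
        size = sum(c["size"] for bit, c in zip(chromosome, classes) if bit)
        if size > budget:
            return 0.0  # infeasible: selection exceeds the cache space
        # Benefit of caching a class ~ request rate x latency saved per hit.
        return sum(c["rate"] * c["latency_saved"]
                   for bit, c in zip(chromosome, classes) if bit)

    def evolve(classes, budget, pop_size=30, generations=50):
        n = len(classes)
        population = [[random.randint(0, 1) for _ in range(n)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=lambda ch: fitness(ch, classes, budget),
                            reverse=True)
            parents = population[:pop_size // 2]   # keep the fittest half
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n)       # one-point crossover
                child = a[:cut] + b[cut:]
                child[random.randrange(n)] ^= 1    # point mutation
                children.append(child)
            population = parents + children
        return max(population, key=lambda ch: fitness(ch, classes, budget))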

    LogBase: A Scalable Log-structured Database System in the Cloud

    Numerous applications such as financial transactions (e.g., stock trading) are write-heavy in nature. The shift from reads to writes in web applications has also been accelerating in recent years. Write-ahead logging is a common approach for providing recovery capability while improving performance in most storage systems. However, the separation of log and application data incurs write overheads in write-heavy environments and hence adversely affects the write throughput and recovery time of the system. In this paper, we introduce LogBase, a scalable log-structured database system that adopts log-only storage to remove the write bottleneck and support fast system recovery. LogBase is designed to be dynamically deployed on commodity clusters to take advantage of the elastic scaling property of cloud environments. LogBase provides in-memory multiversion indexes for efficient access to the data maintained in the log. LogBase also supports transactions that bundle read and write operations spanning multiple records. We implemented the proposed system and compared it with HBase and a disk-based log-structured record-oriented system modeled after RAMCloud. The experimental results show that LogBase is able to provide sustained write throughput, efficient data access out of the cache, and effective system recovery. Comment: VLDB201
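
    A minimal sketch of the log-only idea under an assumed record layout: every write appends a record to the log, which is also the only copy of the data, and an in-memory index maps each key to its list of (timestamp, offset) versions, so reads, including snapshot reads at an earlier timestamp, can locate the right record in the log. All names and the on-disk format are illustrative, not LogBase's actual design.

    import os, struct, time
    from collections import defaultdict

    class LogOnlyStore:
        HEADER = struct.Struct(">dII")  # timestamp, key length, value length

        def __init__(self, path):
            self.log = open(path, "ab+")    # the log is the data store itself
            self.index = defaultdict(list)  # key -> [(ts, offset), ...] versions

        def put(self, key, value):
            k, v = key.encode(), value.encode()
            offset = self.log.seek(0, os.SEEK_END)
            ts = time.time()
            self.log.write(self.HEADER.pack(ts, len(k), len(v)) + k + v)
            self.log.flush()                # the log record is the durable copy
            self.index[key].append((ts, offset))

        def get(self, key, as_of=None):
            versions = self.index.get(key)
            if not versions:
                return None
            if as_of is None:
                _, offset = versions[-1]    # latest version
            else:
                older = [v for v in versions if v[0] <= as_of]
                if not older:
                    return None
                _, offset = older[-1]       # snapshot read at timestamp as_of
            self.log.seek(offset)
            _, klen, vlen = self.HEADER.unpack(self.log.read(self.HEADER.size))
            self.log.seek(klen, os.SEEK_CUR)  # skip the key bytes
            return self.log.read(vlen).decode()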