4 research outputs found

    ABM: Looping Reference-Aware Cache Management Scheme for Media-on-Demand Server

    Abstract. Legacy buffer cache management schemes for multimedia servers are grounded in the assumption that applications access multimedia files sequentially. However, the user access pattern may not be sequential in some circumstances; in a distance learning application, for example, the user may exploit the system's VCR-like functions (rewind and play) and access particular segments of a video repeatedly in the middle of sequential playback. Such a looping reference can cause significant performance degradation in interval-based caching algorithms, so an appropriate buffer cache management scheme is required to deliver desirable performance even under workloads that exhibit looping reference behavior. We propose the Adaptive Buffer cache Management (ABM) scheme, which intelligently adapts to the file access characteristics. For each opened file, ABM applies either LRU replacement or interval-based caching depending on the Looping Reference Indicator, which indicates how strongly temporally localized the access pattern is. According to our experiments, ABM exhibits a better buffer cache miss ratio than interval-based caching or LRU, especially when the workload exhibits not only sequential but also looping reference properties.
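
    To make the mechanism concrete, here is a minimal sketch of ABM's per-file policy switch. The abstract does not define the Looping Reference Indicator precisely, so the revisit-ratio proxy, window size, and threshold below are illustrative assumptions, not the authors' definitions.

```python
from collections import deque

class ABMSketch:
    """Per-file policy switch in the spirit of ABM: route a file to LRU
    replacement when its recent accesses look like a loop, and to
    interval-based caching when they look sequential."""

    def __init__(self, window=100, threshold=0.3):
        self.window = window        # accesses remembered per file (assumption)
        self.threshold = threshold  # hypothetical switch point
        self.history = {}           # file_id -> deque of recent block ids

    def record_access(self, file_id, block_id):
        h = self.history.setdefault(file_id, deque(maxlen=self.window))
        h.append(block_id)

    def looping_reference_indicator(self, file_id):
        # Proxy indicator: fraction of recent accesses that revisit an
        # earlier block; near 1.0 means strong temporal locality (looping).
        h = self.history.get(file_id, ())
        seen, revisits = set(), 0
        for b in h:
            if b in seen:
                revisits += 1
            seen.add(b)
        return revisits / len(h) if h else 0.0

    def policy_for(self, file_id):
        if self.looping_reference_indicator(file_id) >= self.threshold:
            return "LRU"                 # looping segment: keep hot blocks
        return "interval-based caching"  # sequential playback

abm = ABMSketch()
for _ in range(3):                       # a viewer replaying blocks 0..9
    for blk in range(10):
        abm.record_access("lecture.mp4", blk)
print(abm.policy_for("lecture.mp4"))     # -> LRU
```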

    Compiler-Directed File Layout Optimization for Hierarchical Storage Systems

    On I/O Performance and Cost Efficiency of Cloud Storage: A Client's Perspective

    Cloud storage has gained increasing popularity in the past few years. In cloud storage, data are stored in the service provider’s data centers; users access data via the network and pay fees based on service usage. For such a new storage model, our prior wisdom and optimization schemes for conventional storage may not remain valid or applicable to the emerging cloud storage. In this dissertation, we focus on understanding and optimizing the I/O performance and cost efficiency of cloud storage from a client’s perspective. We first conduct a comprehensive study to gain insight into the I/O performance behaviors of cloud storage from the client side. Through extensive experiments, we have obtained several critical findings and useful implications for system optimization. We then design a client cache framework, called Pacaca, to further improve the end-to-end performance of cloud storage. Pacaca seamlessly integrates parallelized prefetching and cost-aware caching by utilizing the parallelism potential and object correlations of cloud storage. In addition to improving system performance, we have also made efforts to reduce the monetary cost of using cloud storage services by proposing a latency- and cost-aware client caching scheme, called GDS-LC, which pursues two optimization goals: low access latency and low monetary cost. Our experimental results show that our proposed client-side solutions significantly outperform traditional methods. Our study contributes to inspiring the community to reconsider system optimization methods in the cloud environment, especially for the purpose of integrating cloud storage into the current storage stack as a primary storage layer.
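
    As a concrete illustration of cost-aware caching in this vein, the sketch below implements GreedyDual-Size, the classic algorithm that GDS-LC's name suggests as its starting point; the specific cost function combining access latency with a monetary charge is an illustrative assumption, not the dissertation's actual GDS-LC cost model.

```python
import heapq

class GreedyDualSizeCache:
    """GreedyDual-Size sketch: each object gets priority H = L + cost/size,
    where L is an inflation value raised to each evicted priority so that
    long-idle objects age out even if they were once expensive to fetch."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.L = 0.0        # inflation value
        self.entries = {}   # key -> (priority, size)
        self.heap = []      # (priority, key); may contain stale entries

    def _cost(self, latency_s, dollars):
        # Hypothetical weighting of access latency vs. monetary cost.
        return latency_s + 1000.0 * dollars

    def access(self, key, size, latency_s, dollars):
        """Returns True on a hit; on a miss, inserts the object after
        evicting the lowest-priority objects until it fits."""
        h = self.L + self._cost(latency_s, dollars) / size
        if key in self.entries:
            self.entries[key] = (h, size)          # hit: refresh priority
            heapq.heappush(self.heap, (h, key))
            return True
        while self.used + size > self.capacity and self.entries:
            self._evict()
        if size <= self.capacity:
            self.entries[key] = (h, size)
            self.used += size
            heapq.heappush(self.heap, (h, key))
        return False

    def _evict(self):
        while self.heap:
            h, key = heapq.heappop(self.heap)
            if self.entries.get(key, (None,))[0] == h:  # skip stale entries
                self.L = h           # inflate: ages the remaining objects
                self.used -= self.entries.pop(key)[1]
                return
```

    On each hit the object's priority is recomputed against the current inflation value L, so objects that stop being referenced gradually sink below newly inserted ones; a single priority formula thereby trades off recency against per-object cost and size.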

    An Implementation Study of a Detection-Based Adaptive Block Replacement Scheme

    In this paper, we propose a new adaptive buffer management scheme called DEAR (DEtection-based Adaptive Replacement) that automatically detects the block reference patterns of applications and applies a different replacement policy to each application based on its detected pattern. The DEAR scheme performs detection periodically: at the i-th invocation, it associates block attribute values such as backward distance and frequency gathered at the (i-1)-th invocation with the forward distances of blocks referenced between the (i-1)-th and i-th invocations. We implemented the DEAR scheme in FreeBSD 2.2.5 and measured its performance using several real applications. The results show that, compared with the LRU buffer management scheme, the proposed scheme reduces the number of disk I/Os by up to 51% (with an average of 23%) and the response time by up to 35% (with an average of 12%) for single-application executions. For multiple applications, the proposed scheme reduces the number of disk I/Os by up to 20% (with an average of 12%) and the overall response time by up to 18% (with an average of 8%).
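
    The detection step can be sketched as follows. Since this summary does not give DEAR's exact procedure, the correlation test and the mapping from detected patterns to policies (LRU, LFU, MRU) below are simplified assumptions rather than the paper's algorithm.

```python
def correlation(xs, ys):
    """Pearson correlation; returns 0.0 when either input has no variance."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var if var else 0.0

def detect_pattern(backward_distance, frequency, forward_distance):
    """DEAR-style detection sketch: correlate attributes gathered at the
    previous invocation (backward distance, frequency) with the forward
    distances observed since, and pick a replacement policy accordingly.
    Each argument maps a block id to its measured value."""
    blocks = [b for b in forward_distance
              if b in backward_distance and b in frequency]
    if len(blocks) < 2:
        return "LRU"  # too little evidence; fall back (assumption)
    fwd = [forward_distance[b] for b in blocks]
    bd = correlation([backward_distance[b] for b in blocks], fwd)
    fq = correlation([frequency[b] for b in blocks], fwd)
    if bd > 0.5:
        return "LRU"  # recently used blocks return soon: temporal locality
    if fq < -0.5:
        return "LFU"  # frequently used blocks return soon
    if bd < -0.5:
        return "MRU"  # sequential/looping sweep larger than the cache
    return "LRU"      # no clear pattern detected
```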