
    Adapting ACME to the database caching environment : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Information Systems at Massey University

    The field of database cache replacement has seen a great many replacement policies proposed in recent years. As the search for the optimal replacement policy continues, new methods of selecting cache victims have been put forward, some with greater effect on results than others. Adaptive algorithms attempt to adapt to changing patterns of data access by combining the benefits of existing algorithms. Such adaptive algorithms have recently been proposed in the web-caching environment, but comparable research is lacking for database caching. This thesis investigates adapting a recently proposed adaptive web-caching algorithm, Adaptive Caching with Multiple Experts (ACME), to the database environment. Recently proposed replacement policies are integrated into ACME's existing policy pool to gauge how readily and robustly it incorporates new algorithms. The results suggest that ACME is well suited to the database environment and performs as well as the best caching policy in its policy pool at any given point in the request stream. Although execution time grows as more policies are integrated into ACME, the overall time saved increases because higher hit rates and fewer cache misses avoid disk reads.
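
    To illustrate the experts-based mechanism that ACME builds on, the following minimal sketch (not code from the thesis; class names such as ExpertCache, LRUPolicy, and LFUPolicy, and the multiplicative weight update, are assumptions for illustration) lets each member policy score the cached pages as eviction candidates, chooses the victim by a weighted vote, and penalizes a policy's weight when a page it voted to evict is requested again.

        from collections import Counter

        class LRUPolicy:
            """Scores cached pages for eviction by recency: older last use -> higher score."""
            def __init__(self):
                self.last_used = {}
                self.clock = 0

            def access(self, page):
                self.clock += 1
                self.last_used[page] = self.clock

            def evict_scores(self, pages):
                return {p: self.clock - self.last_used.get(p, 0) for p in pages}

            def forget(self, page):
                self.last_used.pop(page, None)

        class LFUPolicy:
            """Scores cached pages for eviction by inverse frequency: fewer uses -> higher score."""
            def __init__(self):
                self.counts = Counter()

            def access(self, page):
                self.counts[page] += 1

            def evict_scores(self, pages):
                return {p: 1.0 / self.counts.get(p, 1) for p in pages}

            def forget(self, page):
                self.counts.pop(page, None)

        class ExpertCache:
            """Hypothetical experts-style cache: eviction by weighted vote over member policies."""
            def __init__(self, capacity, policies, beta=0.9):
                self.capacity = capacity
                self.policies = policies
                self.weights = [1.0] * len(policies)
                self.beta = beta              # multiplicative penalty for bad eviction advice
                self.cache = set()
                self.evicted_by = {}          # page -> indices of policies that voted it out
                self.hits = self.misses = 0

            def request(self, page):
                for pol in self.policies:
                    pol.access(page)
                if page in self.cache:
                    self.hits += 1
                    return True
                self.misses += 1
                # Re-requesting a page that some policy voted to evict penalizes that policy.
                for i in self.evicted_by.pop(page, ()):
                    self.weights[i] *= self.beta
                if len(self.cache) >= self.capacity:
                    self._evict()
                self.cache.add(page)
                return False

            def _evict(self):
                per_policy = [pol.evict_scores(self.cache) for pol in self.policies]
                combined = Counter()
                for w, scores in zip(self.weights, per_policy):
                    top = max(scores.values()) or 1.0
                    for p, s in scores.items():
                        combined[p] += w * (s / top)   # normalized, weight-scaled vote
                victim = max(combined, key=combined.get)
                # Remember which policies would have evicted the victim, for later penalties.
                self.evicted_by[victim] = {
                    i for i, scores in enumerate(per_policy)
                    if scores[victim] == max(scores.values())
                }
                self.cache.discard(victim)
                for pol in self.policies:
                    pol.forget(victim)

        if __name__ == "__main__":
            cache = ExpertCache(capacity=3, policies=[LRUPolicy(), LFUPolicy()])
            for page in [1, 2, 3, 1, 4, 1, 2, 5, 1, 2]:
                cache.request(page)
            print("hits:", cache.hits, "misses:", cache.misses, "weights:", cache.weights)

    In the spirit of the thesis, newly proposed database replacement policies would be incorporated simply by appending further policy objects to the policies list, with the weight update arbitrating between them at run time.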

    Fundamental Limits of Coded Caching: Improved Delivery Rate-Cache Capacity Trade-off

    A centralized coded caching system is considered, in which a server delivers N popular files, each of size F bits, to K users through an error-free shared link. Each user is assumed to have a local cache of capacity MF bits, and contents can be proactively placed in these caches over a low-traffic period, without knowledge of the user demands. During the peak-traffic period each user requests a single file from the server. The goal is to minimize the worst-case number of bits delivered by the server over the shared link, known as the delivery rate, over all user demand combinations. A novel coded caching scheme for the cache capacity M = (N-1)/K is proposed. It is shown that the proposed scheme achieves a smaller delivery rate than the existing coded caching schemes in the literature when K > N >= 3. Furthermore, we argue that the delivery rate of the proposed scheme is within a constant multiplicative factor of 2 of the optimal delivery rate for cache capacities 1/K <= M <= (N-1)/K, when K > N >= 3. Comment: To appear in IEEE Transactions on Communications.
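
    For context only, the sketch below evaluates the delivery rate of the baseline centralized coded caching scheme of Maddah-Ali and Niesen against uncoded caching; it does not reproduce the improved scheme proposed in the paper, and the function names are hypothetical. The closed-form rate (K - t)/(t + 1) holds at the corner points where t = KM/N is an integer; M = (N-1)/K is not such a corner point, which is part of why a dedicated scheme for that capacity is of interest.

        def coded_delivery_rate(N, K, M):
            """Worst-case delivery rate (in files) of the baseline Maddah-Ali & Niesen
            scheme at the corner points where t = K*M/N is an integer."""
            t = K * M / N
            assert abs(t - round(t)) < 1e-9, "closed form holds only at integer t"
            t = round(t)
            return (K - t) / (t + 1)

        def uncoded_delivery_rate(N, K, M):
            """Conventional uncoded caching: each of up to min(N, K) distinct demands
            still needs a (1 - M/N) fraction of a file from the server."""
            return min(N, K) * (1 - M / N)

        if __name__ == "__main__":
            N, K = 3, 6  # an example with K > N >= 3, matching the regime in the abstract
            for t in range(K + 1):
                M = t * N / K
                print(f"M = {M:.2f}: coded = {coded_delivery_rate(N, K, M):.3f}, "
                      f"uncoded = {uncoded_delivery_rate(N, K, M):.3f}")

    Running the example with N = 3 and K = 6 prints the trade-off at the corner points of the baseline scheme, illustrating how coded delivery reduces the load on the shared link as the cache capacity M grows.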