570 research outputs found

    On-Line File Caching

    In the on-line file-caching problem, the input is a sequence of requests for files, given on-line (one at a time). Each file has a non-negative size and a non-negative retrieval cost. The problem is to decide which files to keep in a fixed-size cache so as to minimize the sum of the retrieval costs for files that are not in the cache when requested. The problem arises in web caching by browsers and by proxies. This paper describes a natural generalization of LRU called Landlord and gives an analysis showing that it has an optimal performance guarantee (among deterministic on-line algorithms). The paper also gives an analysis of the algorithm in a so-called "loosely" competitive model, showing that for a "typical" cache size, either the performance guarantee is O(1) or the total retrieval cost is insignificant. Comment: ACM-SIAM Symposium on Discrete Algorithms (1998).
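    For concreteness, here is a minimal sketch of Landlord as described in the abstract: on a miss, every cached file is charged "rent" in proportion to its size until enough zero-credit files have been evicted to make room. The dict-based layout and the choice to reset credit fully on a hit (which yields the LRU-like member of the Landlord family) are illustrative assumptions.

```python
# Minimal sketch of the Landlord algorithm, assuming a dict-based cache.
# Resetting credit fully to the file's cost on a hit is one valid choice.

class Landlord:
    def __init__(self, capacity):
        self.capacity = capacity   # total cache size
        self.used = 0              # space currently occupied
        self.credit = {}           # file name -> remaining credit
        self.size = {}             # file name -> file size

    def request(self, name, size, cost):
        """Serve one request; return the retrieval cost paid (0 on a hit)."""
        if name in self.credit:
            self.credit[name] = cost               # hit: refresh credit
            return 0
        # Miss: charge every cached file rent proportional to its size
        # until enough files reach zero credit and are evicted.
        while self.credit and self.used + size > self.capacity:
            delta = min(self.credit[f] / self.size[f] for f in self.credit)
            for f in list(self.credit):
                self.credit[f] -= delta * self.size[f]
                if self.credit[f] <= 1e-12:
                    self.used -= self.size.pop(f)
                    del self.credit[f]
        if self.used + size <= self.capacity:      # cache the new file
            self.credit[name] = cost
            self.size[name] = size
            self.used += size
        return cost
```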

    Hit Ratio Driven Mobile Edge Caching Scheme for Video on Demand Services

    More and more scholars are focusing on mobile edge computing (MEC) technology, because the strong storage and computing capabilities of MEC servers can reduce the long transmission delays, bandwidth waste, energy consumption, and privacy leaks in the data transmission process. In this paper, we study the cache placement problem to determine how to cache videos, and which videos to cache, in a mobile edge computing system. First, we derive the video request probability by taking into account video popularity, user preference, and the characteristics of video representations. Second, based on the acquired request probability, we formulate a cache placement problem with the objective of maximizing the cache hit ratio subject to the storage capacity constraints. Finally, in order to solve the formulated problem, we transform it into a grouping knapsack problem and develop a dynamic programming algorithm to obtain the optimal caching strategy. Simulation results show that the proposed algorithm can greatly improve the cache hit ratio.
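    The grouping-knapsack formulation lends itself to a compact dynamic program. The sketch below is a hedged illustration, assuming each group holds the representations of one video, at most one of which may be cached; the input format and the unit granularity of storage are assumptions, not the paper's exact model.

```python
# Hedged sketch of a grouping-knapsack DP for cache placement: each group
# holds the representations of one video; at most one may be cached.

def place_videos(groups, capacity):
    """groups: list of lists of (size, hit_prob) per video representation.
    Returns the maximum total hit probability under the storage budget."""
    dp = [0.0] * (capacity + 1)          # dp[c]: best value with budget c
    for group in groups:
        new_dp = dp[:]                   # option: cache nothing from group
        for size, value in group:
            for c in range(size, capacity + 1):
                cand = dp[c - size] + value
                if cand > new_dp[c]:
                    new_dp[c] = cand
        dp = new_dp
    return dp[capacity]

# Example: two videos, each with a low- and a high-quality representation.
print(place_videos([[(2, 0.3), (4, 0.5)], [(3, 0.4), (6, 0.7)]], 7))  # 0.9
```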

    A flexible receiver-driven cache replacement scheme for continuous media objects in best-effort networks

    In this paper, we investigate the potential of caching to improve quality of reception (QoR) in the context of continuous media applications over best-effort networks. Specifically, we investigate the influence of parameters such as loss rate, jitter, delay, and area in determining a proxy's cache contents. We propose the use of a flexible cost function in caching algorithms and develop a framework for benchmarking continuous media caching algorithms. The cost function incorporates parameters that an administrator and/or a client can tune to influence a proxy's cache. Traditional caching systems typically base decisions on static schemes that do not take into account the interests of their receiver pool. Based on the flexible cost function, an improvised Greedy Dual (GD) algorithm called GD-multi has been developed for layered multiresolution multimedia streams. The effectiveness of the proposed scheme is evaluated through simulation-based performance studies. The performance of several caching schemes is evaluated and compared with that of the proposed scheme. Our empirical results indicate that GD-multi performs well despite employing a generalized caching policy.
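    As one way to make the idea concrete, the following sketch shows a Greedy Dual style cache whose priority is driven by a tunable cost function over QoR parameters such as loss rate, jitter, and delay. The weights and the per-size normalization are illustrative assumptions; this is not the paper's GD-multi algorithm.

```python
# Sketch of a Greedy Dual (GD) style cache with a flexible, tunable cost
# function. The weights and the choice of QoR terms are assumptions.

def qor_cost(obj, w_loss=1.0, w_jitter=1.0, w_delay=1.0):
    """Higher cost means the object is more expensive to re-fetch."""
    return (w_loss * obj["loss_rate"] + w_jitter * obj["jitter"]
            + w_delay * obj["delay"])

class GreedyDualCache:
    def __init__(self, capacity, cost_fn=qor_cost):
        self.capacity, self.used = capacity, 0
        self.cost_fn = cost_fn
        self.inflation = 0.0   # the classic Greedy Dual "L" value
        self.h = {}            # key -> current H priority
        self.objs = {}         # key -> object metadata (including "size")

    def request(self, key, obj):
        """Returns True on a hit, False on a miss."""
        if key in self.h:      # hit: restore the object's H priority
            self.h[key] = self.inflation + self.cost_fn(obj) / obj["size"]
            return True
        # Miss: evict the lowest-priority objects until the new one fits.
        while self.objs and self.used + obj["size"] > self.capacity:
            victim = min(self.h, key=self.h.get)
            self.inflation = self.h.pop(victim)    # L rises to victim's H
            self.used -= self.objs.pop(victim)["size"]
        if self.used + obj["size"] <= self.capacity:
            self.h[key] = self.inflation + self.cost_fn(obj) / obj["size"]
            self.objs[key] = obj
            self.used += obj["size"]
        return False
```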

    Overview of Caching Mechanisms to Improve Hadoop Performance

    In today's distributed computing environments, large amounts of data are generated from different sources at high velocity, rendering the data difficult to capture, manage, and process within existing relational databases. Hadoop is a tool for storing and processing large datasets in a parallel manner across a cluster of machines in a distributed environment. Hadoop brings many benefits, such as flexibility, scalability, and high fault tolerance; however, it faces challenges in terms of data access time, I/O operations, and duplicate computations, resulting in extra overhead, resource wastage, and poor performance. Many researchers have utilized caching mechanisms to tackle these challenges. For example, they have presented approaches to improve data access time, enhance the data locality rate, remove repetitive calculations, reduce the number of I/O operations, decrease job execution time, and increase resource efficiency. In the current study, we provide a comprehensive overview of caching strategies for improving Hadoop performance. Additionally, a novel classification is introduced based on cache utilization. Using this classification, we analyze the impact of caching on Hadoop performance and discuss the advantages and disadvantages of each group. Finally, a novel hybrid approach called Hybrid Intelligent Cache (HIC), which combines the benefits of two methods from different groups, H-SVM-LRU and CLQLMRS, is presented. Experimental results show that our hybrid method achieves an average improvement of 31.2% in job execution time.

    Web-log mining for predictive web caching


    Context prediction-based prefetching in software-defined wireless networks

    In this master's thesis we focus on improving in-network caching for mobile users in a large campus WiFi network. First, we pinpoint the negative effects of mobility on network conditions and user experience. We propose a method leveraging SDN technology to redirect users' requests to optimally located cache servers, resulting in improved user experience and a lowered burden on the backhaul and core network links. Our contribution is a network application that controls the flows in the network via an SDN controller. The application takes a user's movement traces as input, computes the optimal location of cache servers in the network, and redirects the user's flows accordingly. We tested our solution in the Mininet network emulator. We devised multiple scenarios using real-world movement traces from the Dartmouth campus. We measured request delay as the main characteristic of user experience, and data traffic over core and backhaul links as an indicator of network health. Our experiments show that for mobile users our dynamic redirection approach provides noticeable improvements over traditional, static caching methods.
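    A hedged sketch of the redirection decision follows, assuming users and cache servers can be placed in a common coordinate space: pick the nearest cache server at each step of the movement trace and re-point the user's flows whenever it changes. The `install_redirect` hook stands in for the SDN controller interaction and is hypothetical, not the thesis's API.

```python
# Hypothetical sketch: redirect a mobile user's flows to the cache server
# nearest to their current position along a movement trace.

import math

def nearest_cache(user_pos, cache_servers):
    """cache_servers: dict name -> (x, y). Returns the closest server."""
    return min(cache_servers,
               key=lambda s: math.dist(user_pos, cache_servers[s]))

def follow_trace(trace, cache_servers, install_redirect):
    """Re-point the user's flows whenever the best cache changes."""
    current = None
    for position in trace:                 # one entry per mobility step
        best = nearest_cache(position, cache_servers)
        if best != current:
            install_redirect(best)         # would push flow rules via the controller
            current = best

# Example usage with a stubbed controller hook.
servers = {"cache-A": (0, 0), "cache-B": (10, 0)}
follow_trace([(1, 0), (4, 0), (9, 0)], servers,
             lambda s: print("redirect to", s))
```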

    Distributed Caching for Processing Raw Arrays

    As applications continue to generate multi-dimensional data at exponentially increasing rates, fast analytics to extract meaningful results is becoming extremely important. The database community has developed array databases that alleviate this problem through a series of techniques. In-situ mechanisms provide direct access to raw data in the original format, without loading and partitioning. Parallel processing scales to the largest datasets. In-memory caching reduces latency when the same data are accessed across a workload of queries. However, we are not aware of any work on distributed caching of multi-dimensional raw arrays. In this paper, we introduce a distributed framework for cost-based caching of multi-dimensional arrays in native format. Given a set of files that contain portions of an array and an online query workload, the framework computes an effective caching plan in two stages. First, the plan identifies the cells to be cached locally from each of the input files by continuously refining an evolving R-tree index. In the second stage, an optimal assignment of cells to nodes that collocates dependent cells is determined, in order to minimize the overall data transfer. We design cache eviction and placement heuristics that consider the historical query workload. A thorough experimental evaluation over two real datasets in three file formats confirms that the proposed framework outperforms existing techniques by as much as two orders of magnitude in terms of cache overhead and workload execution time.
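    As a rough illustration of cost-based eviction that considers the historical workload, the sketch below keeps the array cells whose past demand best justifies their footprint. The score (historical hits times retrieval cost per byte) is an assumption for illustration only, not the paper's heuristic.

```python
# Illustrative cost-based eviction heuristic: retain cached array cells
# whose historical demand justifies their footprint. The scoring formula
# is an assumption, not the paper's exact algorithm.

def evict_until_fits(cells, capacity):
    """cells: list of dicts with 'size', 'hits', 'retrieval_cost'.
    Returns the subset to keep, dropping the lowest-scoring cells."""
    def score(c):
        return c["hits"] * c["retrieval_cost"] / c["size"]
    ranked = sorted(cells, key=score, reverse=True)
    used, kept = 0, []
    for cell in ranked:                     # greedily keep high-value cells
        if used + cell["size"] <= capacity:
            kept.append(cell)
            used += cell["size"]
    return kept
```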

    From Traditional Adaptive Data Caching to Adaptive Context Caching: A Survey

    Context data is in demand more than ever with the rapid increase in the development of context-aware Internet of Things applications. Research in context and context-awareness is being conducted to broaden its applicability in light of many practical and technical challenges. One of these challenges is improving performance when responding to a large number of context queries. Context Management Platforms that infer and deliver context to applications measure this problem using Quality of Service (QoS) parameters. Although caching is a proven way to improve QoS, the transiency of context, together with features such as the variability and heterogeneity of context queries, poses an additional real-time cost management problem. This paper presents a critical survey of the state of the art in adaptive data caching, with the objective of developing a body of knowledge on cost- and performance-efficient adaptive caching strategies. We comprehensively survey a large number of research publications and evaluate, compare, and contrast different techniques, policies, approaches, and schemes in adaptive caching. Our critical analysis is motivated by the focus on adaptively caching context as a core research problem. A formal definition for adaptive context caching is then proposed, followed by the identified features and requirements of a well-designed, objectively optimal adaptive context caching strategy. Comment: This paper was under review with the ACM Computing Surveys journal at the time of publishing on arXiv.