
    A Low-Complexity Approach to Distributed Cooperative Caching with Geographic Constraints

    We consider caching in cellular networks in which each base station is equipped with a cache that can store a limited number of files. The popularity of the files is known, and the goal is to place files in the caches such that the probability that a user at an arbitrary location in the plane will find the file that she requires in one of the covering caches is maximized. We develop distributed asynchronous algorithms for deciding which contents to store in which cache. Such cooperative algorithms require communication only between caches with overlapping coverage areas and can operate in an asynchronous manner. The development of the algorithms rests principally on the observation that the problem can be viewed as a potential game. Our basic algorithm is derived from the best response dynamics. We demonstrate that the complexity of each best response step is independent of the number of files, linear in the cache capacity, and linear in the maximum number of base stations that cover a certain area. We then show that the overall algorithm complexity for a discrete cache placement is polynomial in both network size and catalog size. In practical examples, the algorithm converges in just a few iterations. Moreover, in most cases of interest, the basic algorithm finds the best Nash equilibrium, corresponding to the global optimum. We provide two extensions of our basic algorithm, based on stochastic and deterministic simulated annealing, which find the global optimum. Finally, we demonstrate the hit probability evolution on real and synthetic networks numerically and show that our distributed caching algorithm performs significantly better than caching the most popular content, probabilistic content placement, and Multi-LRU caching policies.
    Comment: 24 pages, 9 figures, presented at SIGMETRICS'1
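    The best-response step lends itself to a compact sketch. The following is a minimal illustration of the idea only, not the paper's algorithm: all names (`placements`, `coverage_overlap`, the region-weight encoding) are assumptions, and this naive version scans the whole catalog, so it does not reproduce the file-count-independent complexity claimed in the abstract.

```python
import heapq

def best_response(bs, placements, popularity, coverage_overlap, capacity):
    """One best-response step for base station `bs` (illustrative only).

    placements[j]     -- set of files currently cached by base station j
    popularity[f]     -- known request probability of file f
    coverage_overlap  -- coverage_overlap[bs] is a list of (weight, covering)
                         pairs: the fraction of users in a sub-region of
                         bs's cell and the base stations covering it
    capacity          -- number of files the cache can hold
    """
    gain = {f: 0.0 for f in popularity}
    for weight, covering in coverage_overlap[bs]:
        neighbours = [j for j in covering if j != bs]
        for f, p in popularity.items():
            # caching f at bs only adds hit probability in sub-regions where
            # no overlapping neighbour already provides f
            if all(f not in placements[j] for j in neighbours):
                gain[f] += weight * p
    # keep the `capacity` files with the highest marginal gain
    return set(heapq.nlargest(capacity, gain, key=gain.get))
```

    Repeating this step asynchronously across base stations is exactly the best-response dynamics of the potential game, so each update can only increase the (global) potential until a Nash equilibrium is reached.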

    Stochastic Game based Cooperative Alternating Q-Learning Caching in Dynamic D2D Networks

    Edge caching has become an effective solution to cope with the challenges brought by massive content delivery in cellular networks. In device-to-device (D2D)-enabled caching cellular networks, where the content popularity distribution and user terminal (UT) locations vary over time, we model the dynamic network as a stochastic game in order to design a cooperative cache placement policy. The cache placement reward of each UT is defined as the caching incentive minus the transmission power cost for content caching and sharing, and we consider the long-term cache placement reward of all UTs in this stochastic game. To solve the stochastic game, we propose a multi-agent cooperative alternating Q-learning (CAQL) based cache placement algorithm. A caching control unit executes the proposed CAQL, in which the cache placement policy of each UT is alternately updated according to the stable policies of the other UTs during the learning process, until a stable cache placement policy for all UTs in the cell is obtained. We discuss the convergence and complexity of CAQL, which obtains the stable cache placement policy with low space complexity. Simulation results show that the proposed algorithm can effectively reduce the backhaul load and the average content access delay in dynamic networks.
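    A toy rendering of the alternating update can make the idea concrete. This sketch illustrates "one agent learns while the others hold their policies fixed", not the paper's CAQL: the `env` interface (`actions`, `reset`, `step`, `done`) is hypothetical, and the state encoding, rewards, and stability test are deliberately simplified.

```python
import random
from collections import defaultdict

def caql_sketch(agents, env, episodes=200, alpha=0.1, gamma=0.9, eps=0.1):
    """Toy alternating Q-learning loop (illustrative, not the paper's CAQL).

    One agent at a time runs tabular Q-learning over the shared network
    state while every other agent follows its current greedy policy; the
    outer loop cycles until no agent's greedy policy changes, i.e. the
    joint cache-placement policy is stable.
    """
    Q = {a: defaultdict(float) for a in agents}

    def greedy(a, s):
        return max(env.actions(a), key=lambda x: Q[a][(s, x)])

    changed = True
    while changed:
        changed = False
        for a in agents:                          # alternate over agents
            old_policy = {s: greedy(a, s) for (s, _) in Q[a]}
            for _ in range(episodes):
                s = env.reset()
                while not env.done(s):
                    # epsilon-greedy action for the learning agent
                    x = (random.choice(env.actions(a))
                         if random.random() < eps else greedy(a, s))
                    # the other agents act with their fixed greedy policies
                    joint = {b: greedy(b, s) for b in agents if b != a}
                    s2, reward = env.step(s, a, x, joint)
                    best_next = max(Q[a][(s2, y)] for y in env.actions(a))
                    Q[a][(s, x)] += alpha * (reward + gamma * best_next
                                             - Q[a][(s, x)])
                    s = s2
            # keep cycling while this agent's greedy policy is still moving
            if any(greedy(a, s) != old_policy.get(s) for (s, _) in Q[a]):
                changed = True
    return Q
```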

    Bounding Preemption Delay within Data Cache Reference Patterns for Real-Time Tasks

    Caches have become invaluable for higher-end architectures to hide, in part, the increasing gap between processor speed and memory access times. While the effect of caches on the timing predictability of single real-time tasks has been the focus of much research, bounding the overhead of cache warm-ups after preemptions remains a challenging problem, particularly for data caches. In this paper, we bound the penalty of cache interference for real-time tasks by providing accurate predictions of data cache behavior across preemptions. For every task, we derive data cache reference patterns for all scalar and non-scalar references. Partial timing of a task is performed up to a preemption point using these patterns. The effects of cache interference are then analyzed using a set-theoretic approach, which identifies the number and location of additional misses due to preemption. A feedback mechanism provides the means to interact with the timing analyzer, which subsequently times another interval of a task bounded by the next preemption. Our experimental results demonstrate that it is sufficient to consider the n most expensive preemption points, where n is the maximum possible number of preemptions. Further, it is shown that such accurate modeling of data cache behavior in preemptive systems significantly improves WCET predictions for a task. To the best of our knowledge, our work on bounding preemption delay for data caches is unprecedented.
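    The "n most expensive preemption points" bound can be sketched in a few lines. This is a hedged reconstruction of the set-theoretic step, assuming per-point sets of live cache lines are already available from the reference-pattern analysis; the data structures and names below are illustrative, not the paper's.

```python
def preemption_delay_bound(live_lines_at, preempter_lines, miss_penalty, n):
    """Set-theoretic bound sketch: sum the n most expensive points.

    live_lines_at[p] -- set of data-cache lines still useful across
                        preemption point p (from the reference patterns)
    preempter_lines  -- set of cache lines the preempting task may evict
    miss_penalty     -- cost of one additional data-cache miss, in cycles
    n                -- maximum possible number of preemptions
    """
    # additional misses at a point: lines both live for the preempted task
    # and potentially evicted by the preempter
    costs = [len(live & preempter_lines) * miss_penalty
             for live in live_lines_at.values()]
    return sum(sorted(costs, reverse=True)[:n])
```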

    Predicting future location in mobile cache based on variable order of prediction-by-partial-matching algorithm

    Mobile caching at the edge of the wireless network has been regarded as an ideal approach to alleviate user access latencies. However, if a user terminal (UT) is moving too fast when it enters a serving cache area, it may not have enough time to acquire the required data from the cache. One solution is to predict the UT's future location and pre-place the requested content at the cache devices that will appear on the UT's future path. Once the UT arrives at the serving cache area, it can acquire the data directly, since the data is already at that location, rather than sending a request to update the cache. The key to achieving this reliably is the accuracy of the location prediction. This paper presents a location prediction model based on the prediction-by-partial-matching (PPM) algorithm for mobile cache design. The performance of this model is compared using first-order, second-order, and third-order contexts, respectively. All models are evaluated on real-world data.
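    A minimal sketch of variable-order PPM prediction over location sequences follows. It is an assumption-laden illustration (cell-ID sequences, raw frequency counts, longest-match backoff) rather than the paper's exact model, which may use PPM's full escape-probability mechanism.

```python
from collections import defaultdict

def train_ppm(paths, max_order=3):
    """Count next-location frequencies for every context up to `max_order`.

    paths -- historical trajectories, each a sequence of cell/cache ids.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for path in paths:
        for i in range(1, len(path)):
            for k in range(1, max_order + 1):
                if i - k >= 0:
                    ctx = tuple(path[i - k:i])   # the k locations before i
                    counts[ctx][path[i]] += 1
    return counts

def predict_next(counts, recent, max_order=3):
    """Back off from the longest matching context to shorter ones."""
    for k in range(min(max_order, len(recent)), 0, -1):
        ctx = tuple(recent[-k:])
        if ctx in counts:
            successors = counts[ctx]
            return max(successors, key=successors.get)  # most frequent next
    return None                                         # unseen context
```

    With this shape, the one-, two-, and three-order variants compared in the paper correspond simply to different `max_order` settings.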

    A Liquefaction Potential Map for Cache Valley, Utah

    The identification of liquefaction-susceptible soil deposits in Cache Valley, Utah, and the relative potential that these deposits have for liquefaction were the two main purposes of this study. A liquefaction susceptibility map was developed to outline areas where liquefaction might occur during an earthquake. The susceptibility map was combined with a liquefaction opportunity map to produce a liquefaction potential map for Cache Valley, Utah. The opportunity map for Cache Valley was developed in a companion study, Greenwood (1978). The development of the susceptibility and opportunity maps, and their combination into a liquefaction potential map for Cache Valley, was based on a procedure developed by Youd and Perkins (1977). The liquefaction potential map is a general location map and will be a useful tool for preliminary planning by governmental agencies, planners, developers, and contractors. The use of the liquefaction potential map by these various groups will aid them in avoiding possible problem areas for project locations. It will also be a guide for further analysis of specific sites where liquefaction is probable.
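    The map-combination step is, at heart, a cell-by-cell overlay of two rasters. The following is a purely illustrative sketch under assumed conventions (ordinal hazard classes on a common grid, potential limited by the lower of the two factors); the actual Youd and Perkins (1977) procedure is more involved.

```python
def liquefaction_potential(susceptibility, opportunity):
    """Cell-by-cell overlay of two hazard rasters (illustrative encoding).

    Assumes both grids use the same ordinal classes (e.g. 0 = very low,
    1 = low, 2 = moderate, 3 = high) on a common grid; potential at a cell
    is limited by whichever of the two factors is lower there, since both
    susceptible soil and seismic opportunity are needed for liquefaction.
    """
    return [[min(s, o) for s, o in zip(s_row, o_row)]
            for s_row, o_row in zip(susceptibility, opportunity)]
```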

    Joint Resource Allocation and Cache Placement for Location-Aware Multi-User Mobile Edge Computing

    With the growing demand for latency-critical and computation-intensive Internet of Things (IoT) services, mobile edge computing (MEC) has emerged as a promising technique to reinforce the computation capability of resource-constrained mobile devices. To exploit cloud-like functions at the network edge, service caching has been implemented to (partially) reuse computation tasks, thus effectively reducing the delay incurred by data retransmissions and/or the computation burden due to repeated execution of the same task. In a multi-user cache-assisted MEC system, designs for service caching depend on users' preferences for different types of services, which are at times highly correlated with the locations where the requests are made. In this paper, we exploit users' location-dependent service preference profiles to formulate a cache placement optimization problem in a multi-user MEC system. Specifically, we consider multiple representative locations, where users at the same location share the same preference profile for a given set of services. In a frequency-division multiple access (FDMA) setup, we jointly optimize the binary cache placement, edge computation resources, and bandwidth allocation to minimize the expected weighted-sum energy of the edge server and the users with respect to the users' preference profiles, subject to bandwidth and computation limitations and latency constraints. To effectively solve the mixed-integer non-convex problem, we propose a deep learning based offline cache placement scheme using a novel stochastic quantization based discrete-action generation method. In special cases, we also attain suboptimal caching decisions with low complexity by leveraging the structure of the optimal solution. The simulations verify the performance of the proposed scheme and the effectiveness of service caching in general.
    Comment: 32 pages, 9 figures, submitted for possible journal publication
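    The discrete-action generation step can be sketched as sampling binary placements around a relaxed network output. This is only a guess at the general shape of stochastic quantization, with illustrative names throughout; `evaluate` stands in for the paper's inner resource-allocation solver and is an assumption.

```python
import numpy as np

def stochastic_quantize(probs, capacity, n_candidates, evaluate):
    """Sample binary cache placements around relaxed DNN outputs (sketch).

    probs        -- relaxed caching probabilities in [0, 1], one per service
    capacity     -- number of services the edge cache can hold
    n_candidates -- how many binary candidates to sample
    evaluate     -- callable returning the weighted-sum energy of a binary
                    placement after solving the inner resource allocation
    """
    rng = np.random.default_rng()
    best, best_cost = None, np.inf
    for _ in range(n_candidates):
        # Bernoulli sample: cache service i with probability probs[i]
        x = (rng.random(probs.shape) < probs).astype(int)
        if x.sum() > capacity:
            # infeasible draw: keep only the top-`capacity` sampled services
            keep = np.argsort(probs * x)[-capacity:]
            x = np.zeros_like(x)
            x[keep] = 1
        cost = evaluate(x)
        if cost < best_cost:
            best, best_cost = x, cost
    return best, best_cost
```

    Sampling several candidates instead of deterministically rounding lets the scheme explore near-ties in the relaxed solution, which is what makes the discrete placement robust to quantization error.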

    Cache Coherence Protocol Design and Simulation Using IES (Invalid Exclusive read/write Shared) State

    In modern multiprocessor systems, cache memories are used to serve data accesses instead of main memory, reducing access latency. When each processor in a shared-memory architecture has its own cache, consistency must be maintained between the caches of the different processors, so a cache coherence protocol is essential in such systems. MSI, MESI, MOSI, and MOESI are well-known protocols for solving the cache coherence problem. In this research, we propose merging two states of the MESI cache coherence protocol, Exclusive and Modified, into a single state that holds exclusive permission for both read and write requests. The proposed protocol also removes the write-back to main memory from a processor holding a modified copy when that copy is invalidated by a write to the same address, because in all cases the latest written value prevails; where write-back would otherwise protect data from loss, the IES protocol instead uses a preprocessing step to save the data to main memory when it is evicted from the cache. All of this increases processor efficiency by reducing accesses to main memory.
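    The merged Exclusive read/write state can be illustrated with a small transition function. This sketch is one interpretation of the IES description above, with an assumed event vocabulary (`local_read`, `remote_write`, ...); it is not the paper's formal protocol specification.

```python
INVALID, EXCLUSIVE, SHARED = "I", "E", "S"   # the three IES states

def next_state(state, event, others_have_copy):
    """Next IES state of one cache line (illustrative event vocabulary).

    Because MESI's Modified is merged into Exclusive, a line in E may be
    both read and written locally with no state change and no separate
    Modified bookkeeping, which is the efficiency argument made above.
    """
    if event == "local_read":
        if state == INVALID:
            return SHARED if others_have_copy else EXCLUSIVE
        return state                       # E and S already permit reads
    if event == "local_write":
        return EXCLUSIVE                   # take exclusive read/write ownership
    if event == "remote_read":
        # another cache reads the line: downgrade a valid copy to Shared
        return SHARED if state != INVALID else INVALID
    if event == "remote_write":
        return INVALID                     # another cache takes ownership
    raise ValueError(f"unknown event: {event}")
```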

    A smart cache content update policy based on deep reinforcement learning

    This paper proposes a DRL-based cache content update policy for cache-enabled networks to improve the cache hit ratio and reduce the average latency. In contrast to existing policies, a more practical cache scenario is considered in this work, in which content requests vary by both time and location. Under the constraint of limited cache capacity, the dynamic content update problem is modeled as a Markov decision process (MDP), and the deep Q-network (DQN) algorithm is used to solve it. Specifically, a neural network is optimised to approximate the Q value, with training data sampled from the experience replay memory. The DQN agent derives the optimal policy for the cache decision. Compared with existing policies, simulation results show that our proposed policy improves the cache hit ratio by 56%-64% and reduces the average latency by 56%-59%.
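    A compact DQN skeleton shows how such a content-update agent fits together. Everything here is a generic sketch under assumptions (state = request-statistics feature vector, action = which cached item to replace, PyTorch as the framework); it omits refinements such as a separate target network and is not the paper's implementation.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class DQNCacheAgent:
    """Minimal DQN sketch for a cache-update MDP (illustrative only)."""

    def __init__(self, state_dim, n_actions, gamma=0.99, eps=0.1):
        # small Q-network mapping a state vector to one value per action
        self.q = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU(),
                               nn.Linear(128, n_actions))
        self.opt = torch.optim.Adam(self.q.parameters(), lr=1e-3)
        self.replay = deque(maxlen=10_000)   # experience replay memory
        self.gamma, self.eps, self.n_actions = gamma, eps, n_actions

    def act(self, state):
        if random.random() < self.eps:       # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        with torch.no_grad():
            q = self.q(torch.as_tensor(state, dtype=torch.float32))
            return int(q.argmax())

    def remember(self, s, a, r, s2):
        self.replay.append((s, a, r, s2))

    def train_step(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s  = torch.tensor([t[0] for t in batch], dtype=torch.float32)
        a  = torch.tensor([t[1] for t in batch])
        r  = torch.tensor([t[2] for t in batch], dtype=torch.float32)
        s2 = torch.tensor([t[3] for t in batch], dtype=torch.float32)
        # one-step TD target; a separate target network is omitted for brevity
        target = r + self.gamma * self.q(s2).max(dim=1).values.detach()
        pred = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(pred, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
```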