Using Grouped Linear Prediction and Accelerated Reinforcement Learning for Online Content Caching
Proactive caching is an effective way to alleviate peak-hour traffic congestion by prefetching popular contents at the wireless network edge. Maximizing the caching efficiency requires knowledge of the content popularity profile, which, however, is often unavailable in advance. In this paper, we first propose a new linear prediction model, named the grouped linear model (GLM), to estimate future content requests based on historical data. Unlike many existing works that assume a static content popularity profile, our model can adapt to the temporal variation of content popularity in practical systems caused by the arrival of new contents and the dynamics of user preference. Based on the predicted content requests, we then propose a reinforcement learning approach with model-free acceleration (RLMA) for online cache replacement, taking into account both cache hits and replacement cost. This approach accelerates the learning process in a non-stationary environment by generating imaginary samples for Q-value updates. Numerical results based on real-world traces show that the proposed prediction- and learning-based online caching policy outperforms all considered existing schemes.
Comment: 6 pages, 4 figures, ICC 2018 workshop
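The acceleration step can be illustrated with a small sketch: a tabular Q-learner that, after each real transition, replays stored transitions as "imaginary" samples to speed up convergence in a non-stationary setting. This is a minimal illustration in the spirit of the paper's RLMA approach; the names, constants, and replay scheme below are assumptions, not the authors' exact formulation.

    import random
    from collections import defaultdict

    # Tabular Q-learning accelerated with replayed ("imaginary") samples.
    # Illustrative sketch only; not the paper's exact algorithm.

    ALPHA, GAMMA = 0.1, 0.9   # learning rate, discount factor (assumed)
    Q = defaultdict(float)    # Q[(state, action)] -> estimated value
    memory = []               # observed transitions kept for replay

    def q_update(s, a, r, s_next, actions):
        """One standard tabular Q-learning update."""
        best_next = max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

    def observe(s, a, r, s_next, actions, k=5):
        """Learn from a real transition, then replay k imaginary samples."""
        q_update(s, a, r, s_next, actions)
        memory.append((s, a, r, s_next))
        # Acceleration: extra Q-value updates from stored transitions.
        for s_i, a_i, r_i, sn_i in random.sample(memory, min(k, len(memory))):
            q_update(s_i, a_i, r_i, sn_i, actions)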
Parallel data compression
Data compression schemes remove data redundancy in communicated and stored data and increase the effective capacities of communication and storage devices. Parallel algorithms and implementations for textual data compression are surveyed. Related concepts from parallel computation and information theory are briefly discussed. Static and dynamic methods for codeword construction and transmission on various models of parallel computation are described. Included are parallel methods which boost system speed by coding data concurrently, and approaches which employ multiple compression techniques to improve compression ratios. Theoretical and empirical comparisons are reported, and areas for future research are suggested.
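As a minimal illustration of coding data concurrently, the sketch below splits the input into fixed-size blocks and compresses them in parallel; the block size and the choice of zlib are assumptions for illustration. Compressing blocks independently trades some compression ratio (no dictionary is shared across blocks) for near-linear speedup on multiple cores.

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    CHUNK = 1 << 20  # 1 MiB blocks; an illustrative choice

    def parallel_compress(data: bytes) -> list[bytes]:
        # Split into independent blocks and compress them concurrently.
        chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
        with ProcessPoolExecutor() as pool:
            return list(pool.map(zlib.compress, chunks))

    def parallel_decompress(blocks: list[bytes]) -> bytes:
        # Blocks are self-contained, so decompression parallelizes too.
        with ProcessPoolExecutor() as pool:
            return b"".join(pool.map(zlib.decompress, blocks))

When run as a script, the pool calls belong under an if __name__ == "__main__": guard on platforms that spawn worker processes.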
Advanced propulsion system concept for hybrid vehicles
A series hybrid system, utilizing a free-piston Stirling engine with a linear alternator, and a parallel hybrid system, incorporating a kinematic Stirling engine, are analyzed for various specified reference missions/vehicles, ranging from a small two-passenger commuter vehicle to a van. Parametric studies for each configuration, detailed tradeoff studies to determine engine, battery, and system definition, a short-term energy storage evaluation, and detailed life-cycle cost studies were performed. Results indicate that selecting the parallel Stirling-engine/electric hybrid propulsion system can reduce petroleum consumption by 70 percent compared with present conventional vehicles.
Space power distribution system technology. Volume 2: Autonomous power management
Electrical power subsystem requirements, power management system functional requirements, algorithms, power management subsystem hardware development, and trade studies and analyses are discussed.
Efficient caching algorithms for memory management in computer systems
As disk performance continues to lag behind that of memory systems and processors, fully utilizing memory to reduce disk accesses is a highly effective way to improve overall system performance. Furthermore, to serve the applications running on a computer in distributed systems, not only the local memory but also the memory on remote servers must be effectively managed to minimize I/O operations. The critical challenges in effective memory cache management include: (1) insightfully understanding and quantifying the locality inherent in memory access requests; (2) effectively utilizing the locality information in replacement algorithms; (3) intelligently placing and replacing data in the multi-level caches of a distributed system; and (4) ensuring that the overheads of the proposed schemes are acceptable.

This dissertation provides solutions and makes unique and novel contributions in application locality quantification, general replacement algorithms, low-cost replacement policies, thrashing protection, and multi-level cache management in a distributed system. First, the dissertation proposes a new method to quantify locality strength and to accurately identify data with strong locality. It also provides a new replacement algorithm, which significantly outperforms existing algorithms. Second, considering the extremely low cost required of replacement policies in virtual memory management, the dissertation proposes a policy that meets the requirements while considerably exceeding the performance of existing policies. Third, the dissertation provides an effective scheme to protect the system from thrashing when running memory-intensive applications. Finally, the dissertation provides a multi-level block placement and replacement protocol in a distributed client-server environment, exploiting non-uniform locality strengths in the I/O access requests.

The methodology used in this study includes careful application behavior characterization, system requirement analysis, algorithm design, trace-driven simulation, and system implementation. A main conclusion of the work is that there is still much room for innovation and significant performance improvement in the seemingly mature and stable policies that have been broadly used in current operating system design.
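As a minimal illustration of quantifying locality strength, the sketch below measures reuse distance, the number of distinct blocks touched between two consecutive references to the same block; blocks with consistently small reuse distances exhibit strong locality and are good candidates to keep cached. This stack-based measurement is a generic textbook technique, not the dissertation's specific method.

    # Reuse-distance measurement: a simple way to quantify locality.
    # Illustrative sketch; not the dissertation's specific algorithm.

    def reuse_distances(trace):
        stack = []                 # most recently used block at the end
        dists = []
        for block in trace:
            if block in stack:
                pos = stack.index(block)
                dists.append(len(stack) - 1 - pos)  # distinct blocks in between
                stack.pop(pos)
            else:
                dists.append(float("inf"))          # first reference: cold miss
            stack.append(block)
        return dists

    print(reuse_distances(["a", "b", "c", "a", "b", "b"]))
    # -> [inf, inf, inf, 2, 2, 0]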
Cache Equalizer: A Cache Pressure Aware Block Placement Scheme for Large-Scale Chip Multiprocessors
This paper describes Cache Equalizer (CE), a novel distributed cache management scheme for large-scale chip multiprocessors (CMPs). Our work is motivated by the large asymmetry in cache set usage. CE decouples the physical locations of cache blocks from their addresses in order to reduce misses caused by destructive interference. Temporal pressure at the on-chip last-level cache is continuously collected at a group (comprised of cache sets) granularity and periodically recorded at the memory controller to guide the placement process. An incoming block is consequently placed at a cache group that exhibits the minimum pressure. CE provides quality of service (QoS) by robustly offering better performance than the baseline shared NUCA cache. Simulation results using a full-system simulator demonstrate that CE outperforms shared NUCA caches by an average of 15.5%, and by as much as 28.5%, for the benchmark programs we examined. Furthermore, our evaluations show that CE outperforms related CMP cache designs.
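The placement idea can be sketched simply: maintain a pressure counter per group of cache sets, age the counters periodically, and map an incoming block to the group currently under minimum pressure. The group count and decay policy below are illustrative assumptions, not the paper's exact mechanism.

    # Pressure-aware block placement in the spirit of Cache Equalizer.
    # Group count and decay policy are illustrative assumptions.

    NUM_GROUPS = 64
    pressure = [0] * NUM_GROUPS     # per-group temporal pressure counters

    def record_access(group: int) -> None:
        pressure[group] += 1        # pressure grows with set activity

    def decay() -> None:
        # Periodically age the counters so stale pressure fades.
        for g in range(NUM_GROUPS):
            pressure[g] >>= 1

    def place_block() -> int:
        # Place the incoming block in the minimum-pressure group; a
        # lookup structure would record this mapping so later accesses
        # can locate the block despite the decoupled placement.
        return min(range(NUM_GROUPS), key=pressure.__getitem__)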