70 research outputs found

    Dynamic Hierarchical Cache Management for Cloud RAN and Multi-Access Edge Computing in 5G Networks

    Cloud Radio Access Networks (CRAN) and Multi-Access Edge Computing (MEC) are two of the many emerging technologies proposed for 5G mobile networks. CRAN provides scalability, flexibility, and better resource utilization to support the dramatic increase in Internet of Things (IoT) and mobile devices. MEC aims to provide low latency, high bandwidth, and real-time access to radio networks. A cloud architecture is built on top of traditional Radio Access Networks (RAN) to realize CRAN, while in MEC, cloud computing services are brought near users to improve the user experience. A cache is added in both CRAN and MEC architectures to speed up mobile network services. This research focuses on cache management in CRAN and MEC because this limited cache resource must be managed and utilized efficiently. First, a new cache management algorithm, H-EXD-AHP (Hierarchical Exponential Decay and Analytical Hierarchy Process), is proposed to improve the existing EXD-AHP algorithm. Next, this paper designs three dynamic cache management algorithms, implemented on top of the proposed H-EXD-AHP algorithm and the existing H-PBPS (Hierarchical Probability Based Popularity Scoring) algorithm. In these designs, the cache sizes of users under different Service Level Agreements (SLAs) are adjusted dynamically to meet the guaranteed cache hit rate set for the corresponding SLA class. The minimum guaranteed cache hit rate is a configurable setting; without net neutrality, such prioritized treatment will be common practice. Finally, performance evaluation results show that these designs achieve the guaranteed cache hit rate for differentiated users according to their SLAs.
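The dynamic adjustment idea described above can be sketched as follows. This is a minimal illustration with hypothetical class names and a deliberately simplified rebalancing rule (move slots from the class most above its guarantee to the class most below it); the paper's actual H-EXD-AHP scoring and H-PBPS details are not reproduced here.

```python
# Sketch: per-SLA cache partitions whose sizes are rebalanced so each
# class meets its guaranteed hit rate. All names are illustrative.

class SlaPartition:
    def __init__(self, name, guaranteed_hit_rate, size):
        self.name = name
        self.guarantee = guaranteed_hit_rate
        self.size = size            # number of cache slots assigned
        self.hits = 0
        self.requests = 0

    def hit_rate(self):
        return self.hits / self.requests if self.requests else 1.0

def rebalance(partitions, step=1, min_size=1):
    """One adjustment round: move `step` slots from the partition most
    above its guarantee to the partition most below it."""
    below = [p for p in partitions if p.hit_rate() < p.guarantee]
    above = [p for p in partitions
             if p.hit_rate() >= p.guarantee and p.size > min_size]
    if not below or not above:
        return
    needy = min(below, key=lambda p: p.hit_rate() - p.guarantee)
    donor = max(above, key=lambda p: p.hit_rate() - p.guarantee)
    donor.size -= step
    needy.size += step
```

Calling `rebalance` periodically shrinks partitions that overshoot their guarantee and grows those that fall short, which is the essence of the dynamic designs the abstract describes.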

    A note on optimal performance of page storage


    CACHE DATA REPLACEMENT POLICY BASED ON RECENTLY USED ACCESS DATA AND EUCLIDEAN DISTANCE

    Data access management in web-based applications that use relational databases must be carefully designed because the data grows every day. A Relational Database Management System (RDBMS) has relatively slow access speeds because the data is stored on disk, which degrades database server performance and slows response times. One strategy to overcome this is to implement caching at the application level. This paper proposes the SIMGD framework, which models Application Level Caching (ALC) to speed up relational data access in web applications. The ALC strategy maps each controller and model that accesses the database to a node-data entry in an in-Memory Database (IMDB). Not all node-data can be kept in the IMDB due to limited capacity. Therefore, the SIMGD framework uses the Euclidean distance between each node-data and its top access data as a cache replacement policy. Node-data whose Euclidean distance to their top access data is smaller have higher priority to be kept in the caching server. Simulation results show that at the 25 KB cache configuration, the SIMGD framework achieves a higher hit ratio than the LRU algorithm: 6.46% versus 6.01%, respectively.
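The distance-based replacement policy can be sketched as below. This assumes each node-data is represented by a numeric feature vector paired with the vector of its top access data; the abstract does not specify the exact features, so the representation here is an assumption.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length numeric vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def choose_victim(cache):
    """cache: dict mapping node-data key -> (feature_vector, top_access_vector).
    Entries closer to their top access data have higher priority to stay,
    so the eviction victim is the entry FARTHEST from its top access data."""
    return max(cache, key=lambda k: euclidean(cache[k][0], cache[k][1]))
```

On eviction, the framework would remove `choose_victim(cache)` from the IMDB to make room for a new node-data entry.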

    Three-state disk model for high quality and energy efficient streaming media servers

    Energy conservation and emission reduction are increasingly prominent global issues in green computing. Among the various components of a streaming media server, the storage system is the biggest power consumer. In this paper, a Three-State Disk Model (3SDM) is proposed to conserve energy for streaming media servers without losing quality. According to a load threshold, the disks are dynamically divided into three states: overload, normal, and standby. As requests arrive and depart, disks transition among these three states. The purpose of 3SDM is to skew the load among the disks to achieve high quality and energy efficiency for streaming media applications. Load on disks in the overload state is moved to disks in the normal state to improve the quality of service (QoS) level, while load on disks in the normal state is packed together so that some disks can switch to the standby state to save energy. The key problem is identifying which blocks need to migrate among disks. A sliding window replacement (SWR) algorithm is developed for this purpose; it calculates each block's weight from the request frequency falling within that block's window. Employing a validated simulator, this paper evaluates the SWR algorithm for conventional disks under the proposed 3SDM model. The results show that this scheme yields energy-efficient streaming media servers.
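The windowed weighting idea behind SWR can be sketched as follows. This is a simplified illustration, assuming one shared window of the most recent requests; the actual algorithm's per-block windows and the migration logic among the three disk states are omitted.

```python
from collections import deque

class SlidingWindowWeights:
    """Weight each block by how many of its requests fall within a
    sliding window of the most recent `window_size` requests. Blocks
    with low weight are candidates for migration to standby disks."""
    def __init__(self, window_size):
        self.window = deque()        # recent block ids, oldest first
        self.window_size = window_size
        self.weight = {}             # block id -> count within window

    def record(self, block_id):
        self.window.append(block_id)
        self.weight[block_id] = self.weight.get(block_id, 0) + 1
        if len(self.window) > self.window_size:
            old = self.window.popleft()          # expire oldest request
            self.weight[old] -= 1
            if self.weight[old] == 0:
                del self.weight[old]

    def coldest(self):
        # block with the lowest in-window weight (migration candidate)
        return min(self.weight, key=self.weight.get)
```

Weights decay automatically as old requests slide out of the window, so recently popular blocks stay on active disks while cold blocks surface as migration candidates.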

    EFFECT ON 360 DEGREE VIDEO STREAMING WITH CACHING AND WITHOUT CACHING

    People around the world are becoming more and more accustomed to watching 360-degree videos, which offer a way to experience virtual reality. While watching, users can view video scenes from any perspective. To reduce bandwidth costs and deliver video with lower latency, caching 360-degree video at the edge server can be a smart option. A hypothetical 360-degree video streaming system can partition popular video material into tiles that are cached at the edge server. This study uses the Least Recently Used (LRU) and Least Frequently Used (LFU) algorithms to accomplish video caching and suggests a system architecture for 360-degree video caching. Two 360-degree videos with head movements from 48 users are used in the experiment, and LRU and LFU caching are compared across varying cache sizes. The findings demonstrate that, for the cache sizes tested, LFU caching outperforms LRU caching in terms of average cache hit rate. In the first part of the research, the LRU and LFU caching algorithms were compared. In the second part, a caching strategy model was developed based on the user's field of view. Field of view (FoV) refers to the portion of a 360-degree video that viewers typically see while watching. Edge caching can be a smart way to increase customer quality of experience (QoE) while making better use of bandwidth. A 360-degree video caching strategy was developed in this study using three machine learning models: random forest, linear regression, and Bayesian regression. Tile frequency, the user's view prediction probability, and resolution were used as features. The machine learning models are designed to decide the caching method for 360-degree video tiles and can forecast the viewing frequency of 360-degree video tiles (subsets of a full video). With a predictive R2 value of 0.79, the random forest regression model performs better than the other models. In the third part of the research, to compare the machine learning approach with the LRU algorithm, a Python test bench program was written to evaluate both algorithms on the test set by varying the cache size. The results demonstrate that the machine learning approach created for 360-degree video caching outperforms the LRU algorithm.
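The LRU-versus-LFU comparison can be sketched as a small hit-rate test bench over a synthetic tile trace. This is a minimal illustration, not the study's actual implementation; the real experiment used head-movement traces from 48 users.

```python
from collections import OrderedDict, Counter

def lru_hit_rate(trace, size):
    """Simulate an LRU cache over a request trace; return the hit rate."""
    cache, hits = OrderedDict(), 0
    for tile in trace:
        if tile in cache:
            hits += 1
            cache.move_to_end(tile)          # mark as most recently used
        else:
            if len(cache) >= size:
                cache.popitem(last=False)    # evict least recently used
            cache[tile] = True
    return hits / len(trace)

def lfu_hit_rate(trace, size):
    """Simulate an LFU cache over a request trace; return the hit rate."""
    cache, freq, hits = set(), Counter(), 0
    for tile in trace:
        freq[tile] += 1
        if tile in cache:
            hits += 1
        else:
            if len(cache) >= size:
                # evict the cached tile with the lowest request frequency
                cache.remove(min(cache, key=lambda t: freq[t]))
            cache.add(tile)
    return hits / len(trace)
```

Running both functions over the same trace while varying `size` reproduces the kind of comparison the abstract describes: when one tile dominates the trace, LFU keeps it cached while LRU may evict it.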