
    CRAID: Online RAID upgrades using dynamic hot data reorganization

    Current algorithms used to upgrade RAID arrays typically require large amounts of data to be migrated, even those that move only the minimum amount of data required to keep a balanced data load. This paper presents CRAID, a self-optimizing RAID array that performs an online block reorganization of frequently used, long-term accessed data in order to reduce this migration even further. To achieve this objective, CRAID tracks frequently used, long-term data blocks and copies them to a dedicated partition spread across all the disks in the array. When new disks are added, CRAID only needs to extend this process to the new devices to redistribute this partition, thus greatly reducing the overhead of the upgrade process. In addition, the reorganized access patterns within this partition improve the array's performance, amortizing the copy overhead and allowing CRAID to offer performance competitive with traditional RAIDs. We describe CRAID's motivation and design and evaluate it by replaying seven real-world workloads, including a file server, a web server, and a user share. Our experiments show that CRAID can successfully detect hot-data variations and begin using new disks as soon as they are added to the array. Moreover, the use of a dedicated partition improves the sequentiality of relevant data accesses, which amortizes the cost of reorganizations. Finally, we show that a full-HDD CRAID array with a small distributed partition (<1.28% per disk) can compete in performance with an ideally restriped RAID-5 and a hybrid RAID-5 with a small SSD cache.
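
    The abstract describes two mechanisms: tracking long-term hot blocks, and striping copies of them across a small partition on every disk so that an upgrade only re-stripes that partition. The Python sketch below illustrates this bookkeeping under assumed names and a hypothetical hot_threshold parameter; it is not CRAID's actual implementation, which operates on real block devices.

    from collections import Counter

    class CraidLikeArray:
        # Illustrative sketch of hot-data reorganization. Blocks that
        # cross an (assumed) access-count threshold are copied into a
        # small partition striped across all disks; adding a disk only
        # requires redistributing that partition, not the whole array.

        def __init__(self, num_disks, hot_threshold=8):
            self.num_disks = num_disks
            self.hot_threshold = hot_threshold  # hypothetical tunable
            self.access_counts = Counter()
            self.hot_partition = {}             # block -> disk holding its copy

        def on_access(self, block):
            self.access_counts[block] += 1
            if (self.access_counts[block] >= self.hot_threshold
                    and block not in self.hot_partition):
                # Copy the hot block into the dedicated partition,
                # striping round-robin across the current disks.
                self.hot_partition[block] = len(self.hot_partition) % self.num_disks
            return self.hot_partition.get(block)  # serve from the partition if present

        def add_disk(self):
            # Upgrade path: only the small hot partition is re-striped.
            self.num_disks += 1
            for i, block in enumerate(sorted(self.hot_partition)):
                self.hot_partition[block] = i % self.num_disks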

    NOMA Assisted Wireless Caching: Strategies and Performance Analysis

    Conventional wireless caching assumes that content can be pushed to local caching infrastructure during off-peak hours in an error-free manner; however, this assumption is not applicable if local caches need to be frequently updated via wireless transmission. This paper investigates a new approach to wireless caching for the case in which cache content has to be updated during on-peak hours. Two non-orthogonal multiple access (NOMA) assisted caching strategies are developed, namely the push-then-deliver strategy and the push-and-deliver strategy. In the push-then-deliver strategy, the NOMA principle is applied to push more content files to the content servers during a short time interval reserved for content pushing in on-peak hours and to provide more connectivity for content delivery, compared to the conventional orthogonal multiple access (OMA) strategy. The push-and-deliver strategy is motivated by the fact that some users' requests cannot be accommodated locally and the base station has to serve them directly. These events during the content delivery phase are exploited as opportunities for content pushing, which further facilitates the frequent update of the files cached at the content servers. It is also shown that this strategy can be straightforwardly extended to device-to-device caching, and various analytical results are developed to illustrate the superiority of the proposed caching strategies over OMA-based schemes.
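
    The gain NOMA offers here comes from superposition coding: the base station serves two receivers in the same time-frequency resource, and the stronger receiver cancels the weaker one's signal via successive interference cancellation (SIC). The sketch below compares two-user NOMA rates against orthogonal time sharing under standard Shannon-rate assumptions (unit noise power, fixed power split); the power-allocation value is an assumption for illustration, not a parameter from the paper.

    import math

    def noma_rates(p, g_strong, g_weak, alpha_weak=0.8):
        # Two-user downlink NOMA rates in bits/s/Hz, unit noise power.
        # alpha_weak is the power fraction given to the weaker user; the
        # stronger user removes the weaker user's signal via SIC.
        a_strong = 1.0 - alpha_weak
        r_weak = math.log2(1 + alpha_weak * p * g_weak / (a_strong * p * g_weak + 1))
        r_strong = math.log2(1 + a_strong * p * g_strong)  # after SIC
        return r_strong, r_weak

    def oma_rates(p, g_strong, g_weak):
        # Baseline: orthogonal time sharing, half the resource per user.
        return (0.5 * math.log2(1 + p * g_strong),
                0.5 * math.log2(1 + p * g_weak))

    # Example: at p = 10 with channel gains 1.0 and 0.1, NOMA attains a
    # higher sum rate and a higher weak-user rate than the OMA baseline.
    print(noma_rates(10, 1.0, 0.1), oma_rates(10, 1.0, 0.1))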

    Online Coded Caching

    We consider a basic content distribution scenario consisting of a single origin server connected through a shared bottleneck link to a number of users, each equipped with a cache of finite memory. The users issue a sequence of content requests from a set of popular files, and the goal is to operate the caches as well as the server such that these requests are satisfied with the minimum number of bits sent over the shared link. Assuming a basic Markov model for renewing the set of popular files, we approximately characterize the optimal long-term average rate of the shared link. We further prove that the optimal online scheme has approximately the same performance as the optimal offline scheme, in which the cache contents can be updated based on the entire set of popular files before each new request. To support these theoretical results, we propose an online coded caching scheme termed coded least-recently sent (LRS) and simulate it for a demand time series derived from the dataset made available by Netflix for the Netflix Prize. For this time series, we show that the proposed coded LRS algorithm significantly outperforms the popular least-recently used (LRU) caching algorithm.
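
    What distinguishes LRS from LRU is the eviction signal: caches keep (parts of) the files most recently sent over the shared link rather than those most recently requested locally, which keeps the caches aligned for coded multicast delivery. The sketch below isolates that replacement rule only; the coding layer of the actual scheme is omitted, and all names are illustrative.

    from collections import OrderedDict

    class LeastRecentlySentCache:
        # Sketch of the LRS replacement rule (coded delivery omitted).

        def __init__(self, capacity):
            self.capacity = capacity
            self.sent_order = OrderedDict()  # file_id -> None, oldest first

        def on_server_send(self, file_id):
            # The server just transmitted (parts of) file_id on the link;
            # the cache moves it to the most-recently-sent position.
            self.sent_order.pop(file_id, None)
            self.sent_order[file_id] = None
            while len(self.sent_order) > self.capacity:
                self.sent_order.popitem(last=False)  # evict least recently sent

        def cached(self, file_id):
            return file_id in self.sent_order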

    Cooperative Multi-Bitrate Video Caching and Transcoding in Multicarrier NOMA-Assisted Heterogeneous Virtualized MEC Networks

    Cooperative video caching and transcoding in mobile edge computing (MEC) networks is a new paradigm for future wireless networks, e.g., 5G and beyond, to reduce scarce and expensive backhaul resource usage by prefetching video files within radio access networks (RANs). Integrating this technique with other emerging technologies, such as wireless network virtualization and multicarrier non-orthogonal multiple access (MC-NOMA), provides more flexible video delivery opportunities, which benefits both the network's revenue and the end-users' service experience. In this regard, we propose a two-phase resource allocation framework (RAF) for parallel cooperative joint multi-bitrate video caching and transcoding in heterogeneous virtualized MEC networks. In the cache placement phase, we propose novel proactive delivery-aware cache placement strategies (DACPSs) that jointly allocate physical and radio resources based on network stochastic information to exploit flexible delivery opportunities. For the delivery phase, we propose a delivery policy based on the user requests and network channel conditions. The optimization problems corresponding to both phases aim to maximize the total revenue of network slices, i.e., virtual networks. Both problems are non-convex and suffer from high computational complexity. For each phase, we show how the problem can be solved efficiently. We also propose a low-complexity RAF in which the complexity of the delivery algorithm is significantly reduced. A delivery-aware cache refreshment strategy (DACRS) is also proposed for the delivery phase to tackle the dynamic changes of network stochastic information. Extensive numerical assessments demonstrate a performance improvement of up to 30% for our proposed DACPSs and DACRS over traditional approaches.
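
    The cache placement phase is, at its core, a constrained revenue-maximization problem: choose which bitrate variants to prefetch at the edge under a storage budget. As a loose intuition for that phase only (the paper's joint formulation also covers radio and transcoding resources and is non-convex), the following greedy sketch ranks files by expected revenue per stored bit; all inputs and names here are hypothetical, not the DACPS itself.

    def greedy_cache_placement(files, capacity_bits):
        # files: iterable of (file_id, size_bits, expected_revenue) tuples,
        # where expected_revenue would be derived from popularity and
        # channel statistics. Greedily pick by revenue density until the
        # edge cache is full; a knapsack-style stand-in for intuition.
        chosen, used = [], 0
        ranked = sorted(files, key=lambda f: f[2] / f[1], reverse=True)
        for file_id, size_bits, revenue in ranked:
            if used + size_bits <= capacity_bits:
                chosen.append(file_id)
                used += size_bits
        return chosen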