Offloading Content with Self-organizing Mobile Fogs
Mobile users in an urban environment access content on the Internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching to offload content at nodes closer to users alleviates the issue, though efficient cache management is required to determine who should cache what, when, and where in an urban environment, given the nodes' limited computing, communication, and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. However, to cater to rational nodes, we propose a coalition game in which nodes offer a maximum "virtual cache", assuming a monetary reward is paid to them by the service/content provider. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85% compared to the 30-40% obtained by existing schemes and 10% in the case of no coalition.
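The coalition idea can be sketched as a toy model: members pool their individual cache capacities into one "virtual cache" that is filled with the most popular content, so the coalition's hit ratio exceeds what any single node achieves alone. The function name and the fill-by-popularity rule below are illustrative assumptions, not the paper's actual mechanism.

```python
from collections import Counter

def coalition_hit_ratio(node_capacities, requests):
    """Toy model: a coalition pools member capacities into one 'virtual
    cache' and fills it with the most-requested content items."""
    virtual_capacity = sum(node_capacities)   # pooled cache slots
    popularity = Counter(requests)            # observed content popularity
    cached = {c for c, _ in popularity.most_common(virtual_capacity)}
    hits = sum(1 for r in requests if r in cached)
    return hits / len(requests)

requests = ["a"] * 6 + ["b"] * 3 + ["c"]
print(coalition_hit_ratio([1], requests))     # lone node caches only "a": 0.6
print(coalition_hit_ratio([1, 1], requests))  # coalition also caches "b": 0.9
```

The jump from 0.6 to 0.9 mirrors the abstract's point: merging into a coalition enlarges the distributed cache and raises the hit ratio without any node adding hardware.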
Compound popular content caching strategy to enhance the cache management performance in named data networking
Named Data Networking (NDN) is a leading research paradigm for the future Internet architecture. NDN offers in-network caching, its most beneficial feature, to reduce the difficulties of the location-based Internet paradigm. The objective of caching is to achieve scalable, effective, and consistent distribution of information. However, the main issue NDN faces is the selection of the appropriate router during content transmission, which can disrupt overall network performance: each router must decide which content to cache, and at what location, in a way that enhances overall caching performance. Several cache management strategies have therefore been
developed, yet it is still not clear which caching strategy is ideal for each situation. This study proposes a new cache management strategy named the Compound Popular Content Caching Strategy (CPCCS) to minimize cache redundancy with an enhanced diversity ratio and to improve the accessibility of cached content by providing short stretch paths. The CPCCS combines two mechanisms, Compound Popular Content Selection (CPCS) and Compound Popular Content Caching (CPCC), which differentiate contents by their Interest frequencies using a dynamic threshold and find the best possible caching positions, respectively. CPCCS is compared with other NDN-based caching strategies, such as Max-Gain In-network Caching, the WAVE popularity-based caching strategy, Hop-based Probabilistic Caching, Leaf Popular Down, Most Popular Cache, and Cache Capacity Aware Caching, in a simulation environment. The results show that CPCCS performs better: the diversity and cache hit ratios are increased by 34% and 14%, respectively, while redundancy and path stretch are decreased by 44% and 46%, respectively. The outcomes show that CPCCS achieves enhanced caching performance across different cache sizes (1 GB to 10 GB) and simulation parameters compared to the other caching strategies. Thus, CPCCS can be applied in future NDN-based emerging technologies such as the Internet of Things, fog, and edge computing.
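The dynamic-threshold selection step (CPCS) can be illustrated with a minimal sketch: contents whose Interest frequency exceeds a threshold derived from the current observations are marked popular. Using the mean frequency as the threshold is an assumption for illustration; the paper's actual threshold rule may differ.

```python
def select_popular(interest_counts):
    """Sketch of dynamic-threshold content selection: contents whose
    Interest frequency exceeds the mean frequency are marked popular.
    (The mean is an illustrative choice, not CPCS's exact rule.)"""
    threshold = sum(interest_counts.values()) / len(interest_counts)
    return {name for name, n in interest_counts.items() if n > threshold}

counts = {"/video/1": 50, "/video/2": 30, "/doc/1": 5, "/doc/2": 3}
print(select_popular(counts))  # only contents above the mean (22.0) qualify
```

Because the threshold is recomputed from the live counts, it adapts as request patterns shift, which is the point of a dynamic rather than fixed cut-off.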
Towards Optimized Traffic Provisioning and Adaptive Cache Management for Content Delivery
Content delivery networks (CDNs) deploy hundreds of thousands of servers around the world to cache and serve trillions of user requests every day for a diverse set of content such as web pages, videos, software downloads and images. In this dissertation, we propose algorithms to provision traffic across cache servers and manage the content they host to achieve performance objectives such as maximizing the cache hit rate, minimizing the bandwidth cost of the network and minimizing the energy consumption of the servers.
Traffic provisioning is the process of determining the set of content domains hosted on the servers. We propose footprint descriptors that effectively capture the popularity characteristics and caching performance of different content classes. We also propose a footprint descriptor calculus that can be used to decide how content should be mixed or partitioned to efficiently provision traffic. To automate traffic provisioning, we propose optimization models to provision traffic such that the cache miss traffic from the network is minimized without overloading the servers. We find that such optimization models produce significant reductions in the cache miss traffic when compared with traffic provisioning algorithms in use today.
Cache management is the process of deciding how content is cached on the servers of a CDN. We propose TTL-based caching algorithms that provably achieve performance targets specified by a CDN operator. We show that the proposed algorithms converge to the target hit rate and target cache size with low error. Finally, we propose cache management algorithms to make the servers energy-efficient using disk shutdown. We find that disk shutdown is well suited to CDN servers and provides energy savings without significantly impacting cache hit rates.
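The idea of a TTL cache that converges to an operator-specified hit rate can be sketched as a stochastic-approximation update: lengthen the TTL after a miss (objects live longer, so the hit rate rises) and shorten it after a hit. This update rule is a simplified illustration, not the dissertation's exact algorithm.

```python
def adapt_ttl(ttl, hit, target_rate, step=0.5):
    """One adaptation step (a sketch, not the thesis's exact rule):
    after a miss the signed error is +target_rate, growing the TTL;
    after a hit it is (target_rate - 1), shrinking it. The long-run
    hit rate then drifts toward the operator's target."""
    return max(0.0, ttl + step * (target_rate - (1.0 if hit else 0.0)))

ttl = 10.0
ttl = adapt_ttl(ttl, hit=False, target_rate=0.8)  # miss -> TTL grows to 10.4
ttl = adapt_ttl(ttl, hit=True, target_rate=0.8)   # hit  -> TTL shrinks to 10.3
```

Averaged over many requests, the update is zero exactly when the hit rate equals the target, which is the intuition behind the provable convergence the abstract claims.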
Efficient Traffic Management Algorithms for the Core Network using Device-to-Device Communication and Edge Caching
The exponentially growing number of communicating devices and the need for faster, more reliable, and more secure communication are becoming major challenges for the current mobile communication architecture. More connected devices mean more bandwidth and higher Quality of Service (QoS) requirements, which bring new challenges in terms of resource and traffic management. Traffic offloading to the edge has been introduced to tackle this demand explosion, letting the core network offload some content to the edge to reduce traffic congestion. Device-to-Device (D2D) communication and edge caching have been proposed as promising solutions for offloading data. D2D communication refers to a communication infrastructure in which users in proximity communicate with each other directly. D2D communication improves overall spectral efficiency; however, it introduces additional interference into the system. To enable D2D communication, efficient resource allocation must be introduced to minimize this interference, which benefits the system in terms of bandwidth efficiency. In the first part of this thesis, a low-complexity resource allocation algorithm using stable matching is proposed to optimally assign appropriate uplink resources to devices in order to minimize interference among D2D and cellular users.
Edge caching has recently been introduced as a modification of the caching scheme in the core network that enables a cellular Base Station (BS) to keep copies of contents in order to better serve users and enhance Quality of Experience (QoE). However, enabling BSs to cache data at the edge of the network brings new challenges, especially in deciding which contents should be cached and how. Since users in the same cell may share similar content needs, this temporal-spatial correlation can be exploited in favor of the caching system; it is referred to as local content popularity. Content popularity is the most important factor in the caching scheme, helping the BSs cache appropriate data in order to serve users more efficiently. In the edge caching scheme, the BS does not know the users' request pattern in advance. To overcome this bottleneck, content popularity prediction using a Markov Decision Process (MDP) is proposed in the second part of this thesis to let the BS know which data should be cached in each time slot. Using the proposed scheme, core network access requests can be significantly reduced, and the scheme performs better than caching based on historical data under both stable and unstable content popularity.
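The prediction step can be illustrated with a first-order Markov sketch: estimate transition counts between consecutive requests and predict the most likely successor of the latest request as the next item worth caching. The thesis formulates a full MDP; this toy captures only the underlying intuition, and all names here are illustrative.

```python
from collections import Counter, defaultdict

def predict_next(history):
    """First-order Markov sketch of content popularity prediction:
    count transitions between consecutive requests, then return the
    most frequent successor of the most recent request."""
    transitions = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        transitions[prev][nxt] += 1
    last = history[-1]
    return transitions[last].most_common(1)[0][0]

history = ["a", "b", "a", "b", "a", "c", "a", "b"]
print(predict_next(history))  # "b" has always been followed by "a" -> "a"
```

A BS running such a predictor each time slot would prefetch the predicted item, reducing requests that must travel to the core network, which is the effect the abstract reports.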
Cooperative announcement-based caching for video-on-demand streaming
Recently, video-on-demand (VoD) streaming services like Netflix and Hulu have gained a lot of popularity. This has led to a strong increase in bandwidth capacity requirements in the network. To reduce this network load, the design of appropriate caching strategies is of utmost importance. Based on the fact that a video stream is typically segmented temporally into smaller chunks that can be accessed and decoded independently, cache replacement strategies have been developed that take advantage of this temporal structure in the video. In this paper, two caching strategies are proposed that additionally take advantage of binge watching, the phenomenon in which users stream multiple consecutive episodes of the same series, which recent user behavior studies report is becoming everyday behavior. Taking this information into account allows future segment requests to be predicted even before video playout has started. The two proposed strategies differ in the level of coordination between the caches in the network. Using a VoD request trace based on binge-watching user characteristics, the presented algorithms have been thoroughly evaluated in multiple network topologies with different characteristics, showing their general applicability. In a realistic scenario, the proposed election-based caching strategy was shown to outperform the state of the art by 20% in terms of cache hit ratio while using 4% less network bandwidth.
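The binge-watching prediction can be sketched as follows: once a viewer is past the midpoint of an episode, the opening segments of the next episode become likely future requests and are worth prefetching into the cache. The function, the midpoint heuristic, and the three-segment lookahead are all illustrative assumptions, not the paper's announcement mechanism.

```python
def prefetch_candidates(series, episode, segment, episode_len):
    """Sketch of binge-aware prefetching: list the remaining segments of
    the current episode, and, past the midpoint (a toy heuristic), add
    the first few segments of the next episode as likely requests."""
    upcoming = [(series, episode, s) for s in range(segment + 1, episode_len)]
    if segment > episode_len // 2:  # viewer likely to continue the series
        upcoming += [(series, episode + 1, s) for s in range(3)]
    return upcoming

# Viewer on segment 8 of a 10-segment episode: segment 9 remains, and the
# start of the next episode is predicted before its playout begins.
print(prefetch_candidates("series-x", 1, 8, 10))
```

This is exactly the property the abstract highlights: segment requests for the next episode are anticipated before that video's playout has started, giving caches time to fetch them off-peak.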
Optimal Caching and Routing in Hybrid Networks
Hybrid networks consisting of MANET nodes and cellular infrastructure have
been recently proposed to improve the performance of military networks. Prior
work has demonstrated the benefits of in-network content caching in a wired,
Internet context. We investigate the problem of developing optimal routing and
caching policies in a hybrid network supporting in-network caching with the
goal of minimizing overall content-access delay. Here, needed content may
always be accessed at a back-end server via the cellular infrastructure;
alternatively, content may also be accessed via cache-equipped "cluster" nodes
within the MANET. To access content, MANET nodes must thus decide whether to
route to in-MANET cluster nodes or to back-end servers via the cellular
infrastructure; the in-MANET cluster nodes must additionally decide which
content to cache. We model the cellular path as either i) a
congestion-insensitive fixed-delay path or ii) a congestion-sensitive path
modeled as an M/M/1 queue. We demonstrate that under the assumption of
stationary, independent requests, it is optimal to adopt static caching (i.e.,
to keep a cache's content fixed over time) based on content popularity. We also
show that it is optimal to route to in-MANET caches for content cached there,
but to route requests for remaining content via the cellular infrastructure for
the congestion-insensitive case and to split traffic between the in-MANET
caches and cellular infrastructure for the congestion-sensitive case. We
develop a simple distributed algorithm for the joint routing/caching problem
and demonstrate its efficacy via simulation.
Comment: submitted to Milcom 201
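The optimal static policy for the congestion-insensitive case can be sketched directly from the abstract's claims: pin the most popular items in the in-MANET cache and route every other request over the cellular path to the back-end server. Function and label names below are illustrative, not the paper's notation.

```python
def static_policy(popularity, cache_size):
    """Sketch of the optimal static policy under stationary, independent
    requests: pin the cache_size most popular items in the in-MANET
    cache; all other requests go to the back-end server over the
    (congestion-insensitive, fixed-delay) cellular path."""
    ranked = sorted(popularity, key=popularity.get, reverse=True)
    cached = set(ranked[:cache_size])

    def route(item):
        # Cached content is served in-MANET; everything else via cellular.
        return "manet-cache" if item in cached else "cellular"

    return cached, route

pop = {"map": 0.5, "video": 0.3, "news": 0.2}
cached, route = static_policy(pop, cache_size=1)
print(route("map"), route("news"))  # manet-cache cellular
```

In the congestion-sensitive (M/M/1) case the abstract instead calls for splitting traffic between the in-MANET caches and the cellular path, since routing all cached-content requests one way would itself create queueing delay; that split is not modeled in this sketch.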