
    Performance Evaluation of Caching Policies in NDN - an ICN Architecture

    Information Centric Networking (ICN) advocates the philosophy of accessing content independent of its location. Owing to this location independence, routers en route can be enabled to cache content and serve future requests for the same content locally. Several ICN architectures have been proposed in the literature, along with various algorithms for caching and cache replacement at the en-route routers. The aim of this paper is to critically evaluate various caching policies using Named Data Networking (NDN), an ICN architecture proposed in the literature. We present a performance comparison of different caching policies, namely First In First Out (FIFO), Least Recently Used (LRU), and Universal Caching (UC), in two network models: the Watts-Strogatz (WS) model (suitable for dense short-link networks such as sensor networks) and the Sprint topology (better suited to large Internet Service Provider (ISP) networks), using ndnSIM, an ns-3 based discrete event simulator for the NDN architecture. Our results indicate that UC outperforms LRU and FIFO, making it a better alternative for both sensor networks and ISP networks.
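
    The abstract does not spell out how Universal Caching works, so the Python sketch below only illustrates the two classical replacement policies it is compared against, FIFO and LRU; the class names and capacity handling are illustrative assumptions, not ndnSIM code.

        from collections import OrderedDict, deque

        class FIFOCache:
            """Evicts the content object that was inserted earliest."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = {}
                self.order = deque()          # insertion order of content names

            def get(self, name):
                return self.store.get(name)   # a hit does not change eviction order

            def put(self, name, data):
                if name in self.store:
                    return
                if len(self.store) >= self.capacity:
                    oldest = self.order.popleft()
                    del self.store[oldest]
                self.store[name] = data
                self.order.append(name)

        class LRUCache:
            """Evicts the content object that was requested least recently."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = OrderedDict()

            def get(self, name):
                if name not in self.store:
                    return None
                self.store.move_to_end(name)  # a hit refreshes the entry
                return self.store[name]

            def put(self, name, data):
                if name in self.store:
                    self.store.move_to_end(name)
                    return
                if len(self.store) >= self.capacity:
                    self.store.popitem(last=False)  # drop the least recently used entry
                self.store[name] = data

    Both classes expose the same get/put interface, so a simulation loop could swap policies without any other changes.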

    A Content-based Centrality Metric for Collaborative Caching in Information-Centric Fogs

    Information-Centric Fog Computing enables a multitude of nodes near the end-users to provide storage, communication, and computing, rather than relying on the cloud. In a fog network, nodes connect with each other directly to retrieve content locally whenever possible. As the topology of the network directly influences the nodes' connectivity, prior work has computed the graph centrality of each node within that network topology. The centrality is then used to distinguish nodes in the fog network, or to prioritize some nodes over others to participate in the caching fog. We argue that, for an Information-Centric Fog Computing approach, graph centrality is not an appropriate metric. Indeed, a node with low connectivity that caches a lot of content may play a very valuable role in the network. To capture this, we introduce a content-based centrality (CBC) metric which takes into account how well a node is connected to the content the network is delivering, rather than to the other nodes in the network. To illustrate the validity of considering content-based centrality, we use this new metric in a collaborative caching algorithm. We compare the performance of the proposed collaborative caching with typical centrality-based, non-centrality-based, and non-collaborative caching mechanisms. Our simulation implements CBC on three instances of a large-scale realistic network topology comprising 2,896 nodes with three content replication levels. Results show that CBC outperforms benchmark caching schemes and yields a roughly 3x improvement in the average cache hit rate.
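
    As a rough illustration of the idea, the following Python sketch scores each node by its proximity to cached replicas of the content catalogue instead of by its position in the graph; the 1/(1 + distance) weighting and the networkx-based helper are assumptions made for illustration, not the CBC formula from the paper.

        import networkx as nx

        def content_based_centrality(graph, replica_placement):
            """replica_placement: dict mapping content name -> set of nodes caching it."""
            scores = {node: 0.0 for node in graph.nodes}
            for content, replicas in replica_placement.items():
                # shortest-path distance from every node to the nearest replica of this item
                dist = nx.multi_source_dijkstra_path_length(graph, replicas)
                for node in graph.nodes:
                    if node in dist:              # unreachable nodes contribute 0 for this item
                        scores[node] += 1.0 / (1.0 + dist[node])
            return scores

        if __name__ == "__main__":
            g = nx.path_graph(5)                  # 0 - 1 - 2 - 3 - 4
            placement = {"/videos/a": {0}, "/videos/b": {4}, "/videos/c": {4}}
            cbc = content_based_centrality(g, placement)
            degree = nx.degree_centrality(g)
            # Node 4 has low degree centrality but a high content-based score,
            # because it holds two of the three catalogue items.
            print(cbc, degree)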

    Content Delivery Latency of Caching Strategies for Information-Centric IoT

    In-network caching is a central aspect of Information-Centric Networking (ICN). It enables the rapid distribution of content across the network, alleviating strain on content producers and reducing content delivery latencies. ICN has emerged as a promising candidate for use in the Internet of Things (IoT). However, IoT devices operate under severe constraints, most notably limited memory. This means that nodes cannot indiscriminately cache all content; instead, there is a need for a caching strategy that decides what content to cache. Furthermore, many applications in the IoT space are time-sensitive; therefore, finding a caching strategy that minimises the latency between content request and delivery is desirable. In this paper, we evaluate a number of ICN caching strategies with regard to latency and hop count reduction using IoT devices in a physical testbed. We find that the topology of the network, and thus the routing algorithm used to generate forwarding information, has a significant impact on the performance of a given caching strategy. To the best of our knowledge, this is the first study that focuses on latency effects in ICN-IoT caching while using real IoT hardware, and the first to explicitly discuss the link between routing algorithm, network topology, and caching effects. Comment: 10 pages, 9 figures, journal paper
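
    The abstract does not enumerate the strategies evaluated, but a common ICN caching strategy for memory-constrained IoT nodes is probabilistic on-path caching. The Python sketch below shows how such a node might decide what to cache; the class name, LRU-style eviction, and the 0.3 default probability are illustrative assumptions.

        import random

        class ProbabilisticOnPathCache:
            """A constrained IoT node that caches passing content with probability p,
            evicting the least recently used entry when its small cache is full."""
            def __init__(self, capacity, cache_prob=0.3):
                self.capacity = capacity
                self.cache_prob = cache_prob
                self.entries = {}      # content name -> (data, last_access_tick)
                self.tick = 0

            def lookup(self, name):
                self.tick += 1
                entry = self.entries.get(name)
                if entry is None:
                    return None        # cache miss: forward the Interest upstream
                data, _ = entry
                self.entries[name] = (data, self.tick)
                return data            # cache hit: reply from this node

            def on_data_passing(self, name, data):
                """Called when a Data packet travels back towards the consumer."""
                self.tick += 1
                if name in self.entries or random.random() > self.cache_prob:
                    return
                if len(self.entries) >= self.capacity:
                    lru = min(self.entries, key=lambda n: self.entries[n][1])
                    del self.entries[lru]
                self.entries[name] = (data, self.tick)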

    Fog-enabled Edge Learning for Cognitive Content-Centric Networking in 5G

    By caching content at network edges close to the users, content-centric networking (CCN) has been considered a way to enable efficient content retrieval and distribution in fifth generation (5G) networks. Due to the volume, velocity, and variety of data generated by various 5G users, an urgent and strategic issue is how to elevate the cognitive ability of the CCN to realize context-awareness, timely response, and traffic offloading for 5G applications. In this article, we envision that the fundamental work of designing a cognitive CCN (C-CCN) for the upcoming 5G is exploiting fog computing to associatively learn and control the states of edge devices (such as phones, vehicles, and base stations) and in-network resources (computing, networking, and caching). Moreover, we propose a fog-enabled edge learning (FEL) framework for C-CCN in 5G, which can aggregate the idle computing resources of neighbouring edge devices into virtual fogs to take on heavy delay-sensitive learning tasks. By leveraging artificial intelligence (AI) to jointly process sensed environmental data, handle the massive content statistics, and enforce mobility control at the network edges, the FEL makes it possible for mobile users to cognitively share their data over the C-CCN in 5G. To validate the feasibility of the proposed framework, we design two FEL-advanced cognitive services for C-CCN in 5G: 1) personalized network acceleration and 2) enhanced mobility management. We also present simulations to show FEL's efficiency in serving mobile users' delay-sensitive content retrieval and distribution in 5G. Comment: Submitted to IEEE Communications Magazine, under review, Feb. 09, 201
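
    As a very rough sketch of the virtual-fog idea, the Python snippet below splits a delay-sensitive learning workload across neighbouring devices in proportion to their idle capacity; the device names, capacities, and the proportional-split rule are illustrative assumptions, not details from the article.

        from dataclasses import dataclass

        @dataclass
        class EdgeDevice:
            name: str
            idle_cycles: float     # idle compute the device is willing to contribute

        def form_virtual_fog(devices, min_idle=1.0):
            """Select neighbours with enough spare capacity to join the fog."""
            return [d for d in devices if d.idle_cycles >= min_idle]

        def split_task(total_work, fog):
            """Divide a learning workload proportionally to each member's idle capacity."""
            capacity = sum(d.idle_cycles for d in fog)
            return {d.name: total_work * d.idle_cycles / capacity for d in fog}

        if __name__ == "__main__":
            neighbours = [EdgeDevice("phone", 2.0), EdgeDevice("vehicle", 6.0),
                          EdgeDevice("base-station", 12.0), EdgeDevice("sensor", 0.2)]
            fog = form_virtual_fog(neighbours)      # the sensor is too constrained to join
            print(split_task(100.0, fog))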

    ADN: An Information-Centric Networking Architecture for the Internet of Things

    Forwarding data by name has been assumed to be a necessary aspect of an information-centric redesign of the current Internet architecture that makes content access, dissemination, and storage more efficient. The Named Data Networking (NDN) and Content-Centric Networking (CCNx) architectures are the leading examples of such an approach. However, forwarding data by name incurs storage and communication complexities that are orders of magnitude larger than those of solutions based on forwarding data using addresses. Furthermore, the specific algorithms used in NDN and CCNx have been shown to have a number of limitations. The Addressable Data Networking (ADN) architecture is introduced as an alternative to NDN and CCNx. ADN is particularly attractive for large-scale deployments of the Internet of Things (IoT), because it requires far less storage and processing in relaying nodes than NDN. ADN allows things and data to be denoted by names, just as NDN and CCNx do. However, instead of replacing the waist of the Internet with named-data forwarding, ADN uses an address-based forwarding plane and introduces an information plane that seamlessly maps names to addresses without the involvement of end-user applications. Simulation results illustrate the order-of-magnitude savings in complexity that can be attained with ADN compared to NDN. Comment: 10 pages
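
    A minimal Python sketch of the split the abstract describes: an information plane resolves names to addresses once, while the forwarding plane routes only on addresses. The dictionary-based resolver and the example names, addresses, and interfaces are illustrative assumptions, not the ADN protocol itself.

        class InformationPlane:
            """Maps content or thing names to addresses, outside the forwarding path."""
            def __init__(self):
                self.name_to_addr = {}            # name -> address of a node holding the data

            def register(self, name, address):
                self.name_to_addr[name] = address

            def resolve(self, name):
                return self.name_to_addr.get(name)

        class AddressForwarder:
            """Forwarding plane: routes only on addresses, as in today's Internet."""
            def __init__(self, routing_table):
                self.routing_table = routing_table  # destination address -> next hop

            def next_hop(self, address):
                return self.routing_table.get(address)

        if __name__ == "__main__":
            info = InformationPlane()
            info.register("/home/sensor/temperature", "10.0.0.7")
            fwd = AddressForwarder({"10.0.0.7": "eth1"})
            addr = info.resolve("/home/sensor/temperature")   # name lookup happens once
            print(addr, fwd.next_hop(addr))                   # packets are then forwarded by address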

    Offloading Content with Self-organizing Mobile Fogs

    Mobile users in an urban environment access content on the Internet from different locations. It is challenging for current service providers to cope with the increasing content demand from a large number of collocated mobile users. In-network caching to offload content at nodes closer to users alleviates the issue, though efficient cache management is required to determine who should cache what, when, and where in an urban environment, given the nodes' limited computing, communication, and caching resources. To address this, we first define a novel relation between content popularity and availability in the network and investigate a node's eligibility to cache content based on its urban reachability. We then allow nodes to self-organize into mobile fogs to increase the distributed cache and maximize content availability in a cost-effective manner. However, to cater to rational nodes, we propose a coalition game in which nodes offer a maximum "virtual cache", assuming a monetary reward is paid to them by the service/content provider. Nodes are allowed to merge into different spatio-temporal coalitions in order to increase the distributed cache size at the network edge. Results obtained through simulations using a realistic urban mobility trace validate the performance of our caching system, showing a cache hit ratio of 60-85% compared to the 30-40% obtained by existing schemes and 10% in the case of no coalition.
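
    As a rough illustration of the "virtual cache" idea, the Python sketch below pools the caches of coalition members and splits a monetary reward in proportion to each member's contribution; the per-unit reward and the proportional split are illustrative assumptions, not the paper's payoff model.

        def virtual_cache_size(coalition):
            """Total cache the coalition can offer the service/content provider."""
            return sum(cache for _, cache in coalition)

        def split_reward(coalition, reward_per_unit):
            """Each node's payoff for the storage it contributes to the coalition."""
            total = virtual_cache_size(coalition)
            payout = reward_per_unit * total
            return {node: payout * cache / total for node, cache in coalition}

        if __name__ == "__main__":
            # (node id, cache units it can contribute while moving through the city)
            coalition = [("bus-12", 40), ("taxi-7", 10), ("kiosk-3", 50)]
            print(virtual_cache_size(coalition))               # 100 units of distributed cache
            print(split_reward(coalition, reward_per_unit=0.5))

    A rational node would join the coalition only if its share of the payout exceeds what it could earn caching alone, which is the incentive the coalition game formalizes.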