243 research outputs found

    Performance Evaluation of Caching Policies in NDN - an ICN Architecture

    Full text link
    Information Centric Networking (ICN) advocates the philosophy of accessing content independently of its location. Owing to this location independence, the routers en route can be enabled to cache content and serve future requests for the same content locally. Several ICN architectures have been proposed in the literature, along with various algorithms for caching and cache replacement at the en-route routers. The aim of this paper is to critically evaluate various caching policies using Named Data Networking (NDN), an ICN architecture proposed in the literature. We present a performance comparison of different caching policies, namely First In First Out (FIFO), Least Recently Used (LRU), and Universal Caching (UC), in two network models: the Watts-Strogatz (WS) model (suitable for dense, short-link networks such as sensor networks) and the Sprint topology (better suited for large Internet Service Provider (ISP) networks), using ndnSIM, an ns-3-based discrete-event simulator for the NDN architecture. Our results indicate that UC outperforms the other caching policies, LRU and FIFO, making it a better alternative for both sensor networks and ISP networks.
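
    As background for the comparison above, the sketch below illustrates the difference between the two standard replacement policies evaluated in the paper: FIFO evicts strictly by insertion order, while LRU refreshes an entry on every hit and evicts the least recently used one. This is a generic, illustrative Python sketch (class and method names are ours), not the ndnSIM content-store implementation; Universal Caching (UC) is specific to the paper and omitted here.

        from collections import OrderedDict

        class FIFOCache:
            """Evict the item that entered the cache first, regardless of use."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = OrderedDict()  # insertion order = eviction order

            def get(self, name):
                return self.store.get(name)  # a hit does not change eviction order

            def put(self, name, data):
                if name in self.store:
                    return
                if len(self.store) >= self.capacity:
                    self.store.popitem(last=False)  # drop the oldest entry
                self.store[name] = data

        class LRUCache:
            """Evict the item that has gone unused for the longest time."""
            def __init__(self, capacity):
                self.capacity = capacity
                self.store = OrderedDict()  # most recently used kept at the end

            def get(self, name):
                if name not in self.store:
                    return None
                self.store.move_to_end(name)  # a hit refreshes the entry
                return self.store[name]

            def put(self, name, data):
                if name in self.store:
                    self.store.move_to_end(name)
                elif len(self.store) >= self.capacity:
                    self.store.popitem(last=False)  # drop the least recently used
                self.store[name] = data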

    The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions

    Full text link
    In recent years, the Internet has experienced an unexpected paradigm shift in its usage model, which has pushed researchers towards the design of the Information-Centric Networking (ICN) paradigm as a possible replacement for the existing architecture. Even though both academia and industry have investigated the feasibility and effectiveness of ICN, achieving a complete replacement of the Internet Protocol (IP) is a challenging task. Some research groups have already addressed coexistence by designing their own architectures, but none of these constitutes a definitive path towards the future Internet, given the largely unaltered state of today's networking. To design such an architecture, the research community now needs a comprehensive overview of the existing solutions that have so far addressed coexistence. The purpose of this paper is to reach this goal by providing the first comprehensive survey and classification of coexistence architectures according to their features (i.e., deployment approach, deployment scenarios, addressed coexistence requirements, and architecture or technology used) and evaluation parameters (i.e., challenges emerging during deployment and the runtime behaviour of an architecture). We believe that this paper finally fills the gap that must be closed before the final coexistence architecture can be designed. (Comment: 23 pages, 16 figures, 3 tables)

    Content Delivery Latency of Caching Strategies for Information-Centric IoT

    Full text link
    In-network caching is a central aspect of Information-Centric Networking (ICN). It enables the rapid distribution of content across the network, alleviating strain on content producers and reducing content delivery latencies. ICN has emerged as a promising candidate for use in the Internet of Things (IoT). However, IoT devices operate under severe constraints, most notably limited memory. This means that nodes cannot indiscriminately cache all content; instead, a caching strategy is needed to decide which content to cache. Furthermore, many applications in the IoT space are time-sensitive; therefore, finding a caching strategy that minimises the latency between content request and delivery is desirable. In this paper, we evaluate a number of ICN caching strategies with regard to latency and hop-count reduction, using IoT devices in a physical testbed. We find that the topology of the network, and thus the routing algorithm used to generate forwarding information, has a significant impact on the performance of a given caching strategy. To the best of our knowledge, this is the first study that focuses on latency effects in ICN-IoT caching while using real IoT hardware, and the first to explicitly discuss the link between routing algorithm, network topology, and caching effects. (Comment: 10 pages, 9 figures, journal paper)
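
    To make the hop-count metric concrete, the toy Python sketch below models a linear consumer-to-producer path with a single on-path cache and counts how many hops a request travels with and without a cache hit. It is an illustrative model only; the topology, names, and numbers are assumptions, not values from the paper's testbed.

        # Toy model: consumer -> r1 -> r2 -> r3 -> producer (4 hops end to end).
        # An on-path content store at r2 answers repeated requests early.

        PATH = ["consumer", "r1", "r2", "r3", "producer"]
        CACHE_NODE = "r2"          # assumed cache placement
        cache = set()              # names currently stored at r2

        def fetch(name):
            """Return the number of hops the request travels before data is found."""
            for hops, node in enumerate(PATH[1:], start=1):
                if node == CACHE_NODE and name in cache:
                    return hops            # cache hit: request stops here
                if node == "producer":
                    cache.add(name)        # cache-everything strategy on the way back
                    return hops
            raise RuntimeError("unreachable")

        first = fetch("/sensor/temp/42")    # 4 hops: must reach the producer
        second = fetch("/sensor/temp/42")   # 2 hops: served from the cache at r2
        print(f"hop-count reduction: {first - second} hops ({first} -> {second})")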

    Caching Techniques in Next Generation Cellular Networks

    Get PDF
    Content caching will be an essential feature of the next generations of cellular networks. Indeed, a network equipped with caching capabilities allows users to retrieve content with reduced access delays and consequently reduces the traffic passing through the network backhaul. However, the deployment of caching nodes in the network is hindered by two challenges. First, the storage space of a cache is limited as well as expensive, so it is not possible to store in the cache every content item that users may request. This calls for efficient techniques to determine which contents must be stored in the cache. Second, efficient ways are needed to implement and control the caching node. In this thesis, we investigate caching techniques that address the above-mentioned challenges, so that the overall system performance is increased.

    To tackle the challenge of limited storage capacity, smart proactive caching strategies are needed. In the context of vehicular users served by edge nodes, we believe a caching strategy should be adapted to the mobility characteristics of the cars. In this regard, we propose a scheme called RICH (RoadsIde CacHe), which optimally caches content at the edge nodes where connected vehicles require it most. In particular, our scheme is designed to ensure in-order delivery of content chunks to end users. Unlike blind popularity-based decisions, the probabilistic caching used by RICH considers vehicular trajectory predictions as well as content service time by edge nodes. We evaluate our approach on realistic mobility datasets against a popularity-based edge approach called POP and a mobility-aware caching strategy known as netPredict. In terms of content availability, our RICH edge caching scheme provides an enhancement of up to 33% and 190% when compared with netPredict and POP, respectively. At the same time, the backhaul bandwidth penalty is reduced by between 57% and 70%.

    The caching node is also a key component of Named Data Networking (NDN), an innovative paradigm for providing content-based services in future networks. Compared to legacy networks, the naming of network packets and the in-network caching of content make NDN better suited to content dissemination. However, implementing NDN requires drastic changes to the existing network infrastructure. One feasible approach is to use Software Defined Networking (SDN), in which control of the network is delegated to a centralized controller that configures the forwarding data plane. This approach leads to large signaling overhead as well as large end-to-end (e2e) delays. To overcome these issues, in this work we provide an efficient way to implement and control an NDN node. We propose to enable NDN using a stateful data plane in the SDN network. In particular, we realize the functionality of an NDN node using a stateful SDN switch equipped with a local cache for content storage, and we use OpenState to implement this approach. In our solution, no involvement of the controller is required once the OpenState switch has been configured. We benchmark the performance of our solution against the traditional SDN approach considering several relevant metrics. Experimental results highlight the benefits of a stateful approach and of our implementation, which avoids signaling overhead and significantly reduces e2e delays.
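
    As an illustration of the kind of mobility-aware admission decision described above (not the actual RICH algorithm, whose details are in the thesis), the Python sketch below admits a chunk into an edge node's cache with a probability driven by how long a vehicle is predicted to stay within the node's coverage relative to the time needed to serve the chunk, weighted by content popularity. All names, formulas, and numbers here are our own assumptions.

        import random

        def cache_probability(predicted_dwell_s, chunk_service_s, popularity):
            """Illustrative score: favour chunks the edge node can fully serve
            before the vehicle leaves coverage, weighted by content popularity."""
            if predicted_dwell_s <= 0 or chunk_service_s <= 0:
                return 0.0
            deliverable = min(1.0, predicted_dwell_s / chunk_service_s)
            return min(1.0, deliverable * popularity)

        def maybe_cache(edge_cache, name, chunk, predicted_dwell_s,
                        chunk_service_s, popularity, rng=random.random):
            """Probabilistically admit a chunk into the edge node's cache."""
            p = cache_probability(predicted_dwell_s, chunk_service_s, popularity)
            if rng() < p:
                edge_cache[name] = chunk
            return p

        edge_cache = {}
        # A vehicle predicted to remain 12 s under this edge node; the chunk
        # takes 4 s to serve and is moderately popular (all values assumed).
        p = maybe_cache(edge_cache, "/video/seg/07", b"...", 12.0, 4.0, 0.6)
        print(f"admission probability: {p:.2f}, cached: {'/video/seg/07' in edge_cache}")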