1,263 research outputs found

    Generalized Virtual Networking: an enabler for Service Centric Networking and Network Function Virtualization

    In this paper we introduce the Generalized Virtual Networking (GVN) concept. GVN provides a framework to influence the routing of packets based on service level information carried in the packets. It is based on a protocol header inserted between the Network and Transport layers, and can therefore be seen as a layer 3.5 solution. Technically, GVN is proposed as a new transport layer protocol in the TCP/IP protocol suite; an IP router that is not GVN capable will simply process the IP destination address as usual. Similar concepts have been proposed in other works under the names Service Oriented Networking, Service Centric Networking, and Application Delivery Networking, but they are now generalized in the proposed GVN framework. In this respect, the GVN header is a generic container that can be adapted to serve the needs of arbitrary service level routing solutions. The GVN header can be managed by GVN capable end-hosts and applications, or pushed/popped at the edge of a GVN capable network (like a VLAN tag). In this position paper, we show that Generalized Virtual Networking is a powerful enabler for SCN (Service Centric Networking) and NFV (Network Function Virtualization), and how it couples with the SDN (Software Defined Networking) paradigm.
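
    The abstract does not give the header layout, but the push/pop mechanics it describes can be illustrated. The sketch below packs a hypothetical GVN header (next-protocol, length, service identifier, plus a variable-length metadata container) in front of a transport segment, the way an edge node would push a VLAN-like tag; all field names and sizes here are assumptions, not the authors' specification.

```python
import struct

# Hypothetical GVN header; the fields and sizes are illustrative
# assumptions, not the layout proposed in the paper.
#
#  0        8        16               32
#  +--------+--------+----------------+
#  | next   | length | service id     |
#  | proto  | (bytes)| (16 bits)      |
#  +--------+--------+----------------+
#  | variable-length service metadata |
#  +----------------------------------+
GVN_HEADER_FMT = "!BBH"  # next-protocol, total header length, service id

def push_gvn_header(transport_segment: bytes, next_proto: int,
                    service_id: int, metadata: bytes = b"") -> bytes:
    """Insert a GVN header in front of the transport segment,
    as a GVN-capable edge node or end-host would."""
    length = struct.calcsize(GVN_HEADER_FMT) + len(metadata)
    header = struct.pack(GVN_HEADER_FMT, next_proto, length, service_id)
    return header + metadata + transport_segment

def pop_gvn_header(packet: bytes):
    """Strip the GVN header at the egress edge; returns
    (next_proto, service_id, metadata, transport_segment)."""
    next_proto, length, service_id = struct.unpack_from(GVN_HEADER_FMT, packet)
    fixed = struct.calcsize(GVN_HEADER_FMT)
    metadata = packet[fixed:length]
    return next_proto, service_id, metadata, packet[length:]
```

    A GVN-capable node would read the service id to make its service-level forwarding decision, then hand the inner segment to the standard transport stack at egress.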

    An integrated authentication and authorization approach for the network of information architecture

    Several projects propose an information centric approach to the network of the future. Such an approach makes efficient content distribution possible by making information retrieval host-independent and by integrating storage into the network for caching information. Requests for particular content can thus be satisfied by any host or server holding a copy. One well-established information centric architecture is the Network of Information (NetInf), developed as part of the EU FP7 project SAIL. The approach is based on the Publish/Subscribe model, where hosts can join a network, publish data, and subscribe to publications. NetInf introduces two main stages, namely Publication and Data Retrieval, through which hosts publish and retrieve data. A distributed Name Resolution System (NRS) has also been introduced to map data to its publishers. The NRS is vulnerable to masquerading and content poisoning attacks through invalid data registration. Therefore, the paper proposes a Registration stage that takes place before the Publication and Data Retrieval stages. This new stage identifies and authenticates hosts before they can access the NetInf system. Furthermore, the Registration stage uses a capabilities-based access policy to mitigate unauthorized access to data objects. The proposed solutions have been formally verified using a formal methods approach.
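
    The protocol details are not reproduced in the abstract, so the following is only a minimal sketch of how a Registration stage with a capabilities-based policy could look, assuming an HMAC-based capability token minted at registration and checked before publication. The token format, rights encoding, and the authenticate() placeholder are illustrative assumptions, not the paper's formally verified design.

```python
import hmac, hashlib, secrets

# Sketch of the proposed Registration stage: hosts authenticate
# before publishing or retrieving, and receive a capability token
# scoping what they may do. All formats here are assumptions.
REGISTRY_KEY = secrets.token_bytes(32)  # NRS-side secret

def authenticate(host_id: str, credentials: str) -> bool:
    # Stand-in for a real credential check (certificates, etc.).
    return bool(host_id) and bool(credentials)

def register_host(host_id: str, credentials: str, rights: str) -> bytes:
    """Authenticate a host and mint a capability token binding
    host_id to its access rights."""
    if not authenticate(host_id, credentials):
        raise PermissionError("registration rejected")
    msg = f"{host_id}|{rights}".encode()
    return hmac.new(REGISTRY_KEY, msg, hashlib.sha256).digest()

def authorize_publish(host_id: str, rights: str, token: bytes) -> bool:
    """Verify the capability before accepting a publication, so
    invalid registrations cannot poison the NRS mappings."""
    msg = f"{host_id}|{rights}".encode()
    expected = hmac.new(REGISTRY_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, token) and "publish" in rights
```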

    Edge Data Repositories - The design of a store-process-send system at the Edge

    The Edge of the Internet currently accommodates large numbers of devices, and these numbers will dramatically increase with the advancement of technology. Edge devices and their associated service bandwidth requirements are predicted to become a major problem in the near future. As a result, the popularity of data management, analysis and processing at the edges is also increasing. This paper proposes Edge Data Repositories and analyses their performance. In this context, we provide a service quality and resource allocation feedback algorithm for the processing and storage capabilities of Edge Data Repositories. A suitable simulation environment was created for this system with the help of the ONE Simulator. The simulations were further used to evaluate the Edge Data Repository cluster within different scenarios, providing a range of service models. From there, with the help and adaptation of a few basic network management concepts, the feedback algorithm was developed. As an initial step, we assess and provide measurable performance feedback, through this algorithm, for the most essential parts of our envisioned system: network metrics and service and resource status.
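
    The abstract does not specify the feedback algorithm itself, so the sketch below shows only one assumed form: a weighted score over network, storage, and processing metrics that a cluster could use to pick the best node for a store-process-send request. The metrics, weights, and normalization are hypothetical, not the evaluated algorithm.

```python
# Illustrative feedback score for an Edge Data Repository node;
# the metrics, weights, and latency bound are assumptions.
WEIGHTS = {"latency": 0.4, "storage_free": 0.3, "cpu_free": 0.3}

def node_score(latency_ms: float, storage_used: float,
               cpu_used: float, max_latency_ms: float = 200.0) -> float:
    """Combine normalized network, storage, and processing
    metrics into a single quality score in [0, 1]."""
    latency_ok = max(0.0, 1.0 - latency_ms / max_latency_ms)
    return (WEIGHTS["latency"] * latency_ok
            + WEIGHTS["storage_free"] * (1.0 - storage_used)
            + WEIGHTS["cpu_free"] * (1.0 - cpu_used))

def allocate(nodes):
    """Feedback step: route the next store-process-send request
    to the cluster node currently reporting the best score."""
    best = max(nodes, key=lambda n: node_score(n["latency_ms"],
                                               n["storage_used"],
                                               n["cpu_used"]))
    return best["id"]
```

    Under these assumed weights, a node reporting 50 ms latency with 70% storage and 40% CPU used would score 0.4*0.75 + 0.3*0.3 + 0.3*0.6 = 0.57, and the allocator simply prefers the highest-scoring node.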

    Content caching in ICN using Bee-Colony optimization algorithm

    Information dissemination has recently been overtaken by the huge volume of media-driven data shared across different platforms. The Future Internet will be greatly concerned with the pervasion and ubiquity of data on all devices. Information-Centric Networking appears to be the paradigm that aims at guaranteeing the flexibility needed when this data explosion occurs. Caching is thus an option that provides the flexibility needed to manage data exchange practices. Different caching issues have raised concern about the content flooded all over the Internet. In line with these challenges, a Bee-Colony Optimization Algorithm (B-COA) is proposed in this paper to make content available on the Internet with lower referral cost and without a heavy monopoly of data on hosts. It is believed that the advantages of the grouping and waggle phases could be used to place contents faster in ICN.
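
    The grouping and waggle phases are not detailed in the abstract, so the following sketch only maps the general bee-colony idea onto cache placement: scout bees sample candidate sets of k cache nodes, and the waggle phase recruits the search toward placements with lower referral cost. The referral-cost function, colony size, and iteration counts are all assumptions, not the paper's B-COA.

```python
import random

# Minimal bee-colony-style sketch for ICN cache placement.
# Assumes len(nodes) >= k; everything here is illustrative.
def referral_cost(placement, demand, distance):
    """Hypothetical cost: hop distance from each requesting node
    to its nearest cached copy, weighted by request demand."""
    return sum(d * min(distance[req][node] for node in placement)
               for req, d in demand.items())

def bcoa_place(nodes, k, demand, distance, scouts=10, rounds=50):
    """Scout bees perturb the best-known placement (grouping
    phase); improvements recruit the colony there (waggle)."""
    best = random.sample(nodes, k)
    best_cost = referral_cost(best, demand, distance)
    for _ in range(rounds):
        for _ in range(scouts):
            # Grouping phase: swap one cache node for a random
            # alternative, keeping the placement duplicate-free.
            cand = best[:]
            cand[random.randrange(k)] = random.choice(nodes)
            cand = list(dict.fromkeys(cand))
            while len(cand) < k:
                cand.append(random.choice(
                    [n for n in nodes if n not in cand]))
            cost = referral_cost(cand, demand, distance)
            if cost < best_cost:  # waggle: recruit toward this site
                best, best_cost = cand, cost
    return best, best_cost
```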