30 research outputs found

    Content delivery network for secure of software defined networking by using IPv4, OpenFlow, and ALTO

    Get PDF
    Software-defined networking (SDN) makes a network programmable, easing maintenance and configuration: network administrators can change traffic rules while the network is running. As an emerging, centralized, and programmable network architecture, SDN also introduces potential security concerns. Although TLS can secure the control plane, it is computationally heavy, complex to configure, and not compatible with the OpenFlow protocol. A content delivery network (CDN) can instead be used to scale network services dynamically and automatically. To mitigate these threats, we propose a CDN-based architecture for SDN. In the proposed architecture, Application-Layer Traffic Optimization (ALTO) servers map the network and produce a summarized view of it. We also hide the identity of the forwarding devices by exploiting the IPv4 and OpenFlow transaction-identification fields of control packets, implementing two authentication structures based on the efficient Salsa20 stream cipher. The results show that the proposed architecture can efficiently eliminate several types of attack and make attackers easier to detect.
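
    As a rough, hedged illustration of the kind of lightweight authentication the abstract describes, the sketch below derives a 32-bit tag from a Salsa20 keystream and places it in an OpenFlow-style transaction-identifier (xid) field. It is not the authors' implementation: the key handling, nonce transport, and field layout are assumptions, and PyCryptodome's Salsa20 API is used for the cipher.

```python
# Illustrative sketch only: derive a 32-bit authentication tag from a Salsa20
# keystream and embed it in an OpenFlow-style transaction ID (xid). Key
# management, nonce distribution, and the exact field layout are assumptions,
# not the paper's design.
from Crypto.Cipher import Salsa20   # pip install pycryptodome
import os, struct

KEY = os.urandom(32)                # 256-bit key, assumed pre-shared

def tag_xid(switch_id: int, sequence: int, key: bytes = KEY) -> tuple[int, bytes]:
    """Return (xid, nonce): a 32-bit tag bound to the switch and message sequence."""
    nonce = os.urandom(8)                       # Salsa20 uses an 8-byte nonce
    cipher = Salsa20.new(key=key, nonce=nonce)
    block = cipher.encrypt(struct.pack("!IQ", switch_id, sequence))
    return struct.unpack("!I", block[:4])[0], nonce

def verify_xid(xid: int, nonce: bytes, switch_id: int, sequence: int,
               key: bytes = KEY) -> bool:
    """Controller-side check: recompute the tag and compare."""
    cipher = Salsa20.new(key=key, nonce=nonce)
    block = cipher.encrypt(struct.pack("!IQ", switch_id, sequence))
    return xid == struct.unpack("!I", block[:4])[0]

if __name__ == "__main__":
    xid, nonce = tag_xid(switch_id=7, sequence=42)
    assert verify_xid(xid, nonce, switch_id=7, sequence=42)
```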

    SDN Controller Mechanisms for Flexible and Customized Networking

    Get PDF
    Software-Defined Networking (SDN) is seen as the most promising networking technology today. The spread of a new technology depends on its acceptance by the engineers implementing the networks. Typically, when engineers start conceptualizing new network devices that work with a new paradigm and that should provide the expected business value, they must identify and utilize technical enablers for the defined business use cases. This paper summarizes essential SDN applications and defines the technical enablers for advanced and efficient SDN networking. To this end, we identify the core technical mechanisms, aiming to provide a useful analysis for the design of new SDN networks.

    Cache as a service: leveraging SDN to efficiently and transparently support Video-on-Demand on the last mile

    Get PDF
    High-quality online video streaming, both live and on-demand, has become an essential part of consumers' everyday lives. The popularity of video streaming has placed a heavy burden on the network infrastructure, which now has to transfer an enormous amount of data very quickly to the end-user. To further exacerbate the situation, the Video-on-Demand (VoD) distribution paradigm uses an independent unicast flow for each user request. This results in multiple duplicate flows carrying the same video assets many times end-to-end. We present OpenCache: a highly configurable, efficient and transparent in-network caching service that aims to improve VoD distribution efficiency by caching video assets as close to the end-user as possible. OpenCache leverages Software Defined Networking to benefit last-mile environments by improving network utilisation and increasing the Quality of Experience for the end-user. Our evaluation on a pan-European OpenFlow testbed uses adaptive video streaming and demonstrates that, with the use of OpenCache, external link utilisation is reduced by 100%. Furthermore, the streaming application receives better quality video and observes higher throughput, lower latency, and shorter start-up and buffering times.
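
    To make the caching idea concrete, here is a minimal, hypothetical sketch of the redirect decision an SDN-based VoD cache could take: on a cache hit, traffic is steered to a cache node inside the access network instead of crossing the external link. The class and field names are illustrative and do not reflect OpenCache's actual API; matching on a URL would in practice need application-layer handling beyond plain OpenFlow.

```python
# Hypothetical sketch of an in-network VoD cache redirect decision; names and
# rule format are illustrative, not OpenCache's actual API.
from dataclasses import dataclass, field

@dataclass
class CacheNode:
    ip: str
    assets: set[str] = field(default_factory=set)

@dataclass
class CacheService:
    origin_ip: str
    node: CacheNode            # cache placed on the last mile

    def handle_request(self, client_ip: str, asset: str) -> dict:
        """Return a flow-rule description steering the client's request."""
        if asset in self.node.assets:
            # Cache hit: rewrite the destination so the request never leaves
            # the access network (this is what removes external link traffic).
            return {"match": {"ipv4_src": client_ip, "asset": asset},
                    "actions": {"set_ipv4_dst": self.node.ip}}
        # Cache miss: forward to the origin and remember the asset for next time.
        self.node.assets.add(asset)
        return {"match": {"ipv4_src": client_ip, "asset": asset},
                "actions": {"set_ipv4_dst": self.origin_ip}}

svc = CacheService(origin_ip="198.51.100.10", node=CacheNode(ip="10.0.0.2"))
print(svc.handle_request("10.0.0.42", "/movies/clip.mpd"))   # miss -> origin
print(svc.handle_request("10.0.0.43", "/movies/clip.mpd"))   # hit  -> local cache
```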

    End-to-end resource management for federated delivery of multimedia services

    Get PDF
    Recently, the Internet has become a popular platform for the delivery of multimedia content. Currently, multimedia services are either offered by Over-the-top (OTT) providers or by access ISPs over a managed IP network. As OTT providers offer their content across the best-effort Internet, they cannot offer any Quality of Service (QoS) guarantees to their users. On the other hand, users of managed multimedia services are limited to the relatively small selection of content offered by their own ISP. This article presents a framework that combines the advantages of both existing approaches, by dynamically setting up federations between the stakeholders involved in the content delivery process. Specifically, the framework provides an automated mechanism to set up end-to-end federations for QoS-aware delivery of multimedia content across the Internet. QoS contracts are automatically negotiated between the content provider, its customers, and the intermediary network domains. Additionally, a federated resource reservation algorithm is presented, which allows the framework to identify the optimal set of stakeholders and resources to include within a federation. Its goal is to minimize delivery costs for the content provider, while satisfying customer QoS requirements. Moreover, the presented framework allows intermediary storage sites to be included in these federations, supporting on-the-fly deployment of content caches along the delivery paths. The algorithm was thoroughly evaluated in order to validate our approach and assess the merits of including intermediary storage sites. The results clearly show the benefits of our method, with delivery cost reductions of up to 80% in the evaluated scenario.
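
    As a toy illustration of the reservation problem described above (minimize delivery cost while meeting a customer QoS bound, optionally including intermediary storage sites), the sketch below brute-forces the cheapest feasible combination. All names, costs, and latencies are invented for the example; this is not the article's algorithm.

```python
# Toy model of federated delivery-path selection: pick the cheapest combination
# of transit domain and (optional) intermediary cache that still meets the
# customer's latency bound. All numbers and structures are illustrative.
from itertools import product

domains = [                      # candidate transit domains
    {"name": "ISP-A", "cost": 10, "latency_ms": 40},
    {"name": "ISP-B", "cost": 6,  "latency_ms": 80},
]
storage_sites = [                # optional intermediary caches (None = direct)
    None,
    {"name": "cache-1", "cost": 3, "latency_saving_ms": 35},
]

def plan(max_latency_ms: int):
    best = None
    for dom, site in product(domains, storage_sites):
        cost = dom["cost"] + (site["cost"] if site else 0)
        latency = dom["latency_ms"] - (site["latency_saving_ms"] if site else 0)
        if latency <= max_latency_ms and (best is None or cost < best[0]):
            best = (cost, dom["name"], site["name"] if site else "direct")
    return best

print(plan(max_latency_ms=50))   # -> (9, 'ISP-B', 'cache-1'): the cache pays off
```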

    The growing complexity of content delivery networks: Challenges and implications for the Internet ecosystem

    Get PDF
    Since the commercialization of the Internet, content and related applications, including video streaming, news, advertisements, and social interaction, have moved online. It is broadly recognized that the rise of all of these different types of content (static and dynamic, and increasingly multimedia) has been one of the main forces behind the phenomenal growth of the Internet and its emergence as essential infrastructure for how individuals across the globe gain access to the content sources they want. To accelerate the delivery of diverse content on the Internet and to provide commercial-grade performance for video delivery and the Web, Content Delivery Networks (CDNs) were introduced. This paper describes the current CDN ecosystem and the forces that have driven its evolution. We outline the different CDN architectures and consider their relative strengths and weaknesses. Our analysis highlights the role of location, the growing complexity of the CDN ecosystem, and their relationship to and implications for interconnection markets.

    Development of a system compliant with the Application-Layer Traffic Optimization Protocol

    Get PDF
    Master's dissertation in Informatics Engineering. With the ever-increasing Internet usage that accompanies the start of the new decade, optimizing this world-scale network of computers has become a priority in a technological sphere whose number of users keeps rising, as do the Quality of Service (QoS) demands of applications in domains such as media streaming and virtual reality. In the face of rising traffic and stricter application demands, a better understanding of how Internet Service Providers (ISPs) should manage their assets is needed. An important concern is how applications utilize the underlying network infrastructure over which they reside. Most of these applications act with little regard for ISP preferences, as exemplified by their lack of care in achieving traffic locality, a feature that network administrators would prefer and that could also improve application performance. However, even a best-effort attempt by applications to cooperate will hardly succeed if ISP policies are not clearly communicated to them. A system that bridges the interests of the two layers therefore has much potential to help achieve a mutually beneficial scenario. The main focus of this thesis is the Application-Layer Traffic Optimization (ALTO) working group, formed by the Internet Engineering Task Force (IETF) to explore standards for network information retrieval. This group specified a request-response protocol in which authoritative entities provide resources containing network status information and administrative preferences. Sharing this infrastructural insight is intended to enable a cooperative environment between the network overlay and underlay during application operation, in order to use infrastructural resources more efficiently and consequently minimize the associated operational costs. This work gives an overview of the historical network tussle between applications and service providers, presents the ALTO working group's project as a solution, implements an extended system built upon its ideas, and finally verifies the developed system's efficiency, in a simulation, compared to classical alternatives.
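
    To make the ALTO request-response model concrete, the sketch below shows a minimal client that fetches a cost map (RFC 7285 media type application/alto-costmap+json) and ranks candidate peers by the provider-declared routing cost. The server URL and resource path are assumptions; a real client would discover them from the server's Information Resource Directory.

```python
# Minimal ALTO (RFC 7285) client sketch: fetch a cost map and rank candidate
# endpoints by the provider-declared routing cost. The base URL and resource
# path are assumptions; a real deployment advertises them in its IRD.
import requests

ALTO_SERVER = "https://alto.example.net"        # hypothetical server

def get_cost_map() -> dict:
    r = requests.get(f"{ALTO_SERVER}/costmap/routingcost",
                     headers={"Accept": "application/alto-costmap+json"})
    r.raise_for_status()
    return r.json()["cost-map"]                 # {src_pid: {dst_pid: cost}}

def rank_peers(my_pid: str, peers: dict[str, str]) -> list[str]:
    """Order candidate peers (addr -> PID) by ascending ALTO cost from my_pid."""
    costs = get_cost_map().get(my_pid, {})
    return sorted(peers, key=lambda addr: costs.get(peers[addr], float("inf")))

# Example usage: prefer peers the ISP declares "closer" in its own cost metric.
# rank_peers("PID1", {"203.0.113.7": "PID2", "198.51.100.9": "PID3"})
```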

    Network Slicing Landscape: A holistic architectural approach, orchestration and management with applicability in mobile and fixed networks and clouds

    Get PDF
    Tutorial at IEEE NetSoft 2018, 29 June 2018, Montreal. Abstract: a holistic architectural approach, orchestration and management with applicability in mobile and fixed networks and clouds. Topics: key slicing concepts and history; slicing key characteristics, usage scenarios and value chain; multi-domain Network Function Virtualisation; review of research projects and results in network and cloud slicing; open-source orchestrators; standards organization activities (NGMN, ITU-T, ONF, 3GPP, ETSI, BBF, IETF); industrial perspective on network slicing; review of industry use cases; network slicing challenges; concluding remarks; acknowledgements and references.

    The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions

    Full text link
    In recent years, the Internet has experienced an unexpected paradigm shift in its usage model, which has pushed researchers towards the design of the Information-Centric Networking (ICN) paradigm as a possible replacement for the existing architecture. Even though both academia and industry have investigated the feasibility and effectiveness of ICN, completely replacing the Internet Protocol (IP) is a challenging task. Some research groups have already addressed coexistence by designing their own architectures, but none of those is the definitive path towards the future Internet, given the largely unaltered state of today's networks. To design such an architecture, the research community now needs a comprehensive overview of the existing solutions that have so far addressed coexistence. The purpose of this paper is to reach this goal by providing the first comprehensive survey and classification of the coexistence architectures according to their features (i.e., deployment approach, deployment scenarios, addressed coexistence requirements, and architecture or technology used) and evaluation parameters (i.e., challenges emerging during deployment and the runtime behaviour of an architecture). We believe that this paper finally fills the gap that must be closed before moving towards the design of the final coexistence architecture.

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Get PDF
    Since the appearance of OpenFlow in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards-developing organizations working on SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In this scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the type of interface on which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the Path Computation Element (PCE)-based architecture. The PCE-based architecture was selected to measure the impact of SDN on TE because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
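
    Since the survey singles out traffic splitting among multiple paths as a common congestion-minimization technique, the snippet below sketches one simple, generic form of it: each new flow is placed on the candidate path whose resulting utilization is lowest. The paths and numbers are invented, and the code is not tied to any of the surveyed protocols.

```python
# Generic illustration of traffic splitting for congestion minimization:
# assign each new flow to the candidate path that ends up least utilized.
# Paths, capacities, and demands are invented for the example.
paths = [
    {"name": "path-1", "capacity_mbps": 100, "load_mbps": 60},
    {"name": "path-2", "capacity_mbps": 40,  "load_mbps": 10},
]

def assign_flow(demand_mbps: float) -> str:
    """Place the flow on the path with the lowest resulting utilization."""
    best = min(paths,
               key=lambda p: (p["load_mbps"] + demand_mbps) / p["capacity_mbps"])
    best["load_mbps"] += demand_mbps
    return best["name"]

for demand in (5, 5, 20):
    print(assign_flow(demand))   # -> path-2, path-2, path-1
```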

    Unikernels Everywhere: The Case for Elastic CDNs

    Full text link
    Video streaming dominates the Internet's overall traffic mix, with reports stating that it will constitute 90% of all consumer traffic by 2019. Most of this video is delivered by Content Delivery Networks (CDNs), and, while they optimize QoE metrics such as buffering ratio and start-up time, no single CDN provides optimal performance. In this paper we make the case for elastic CDNs: the ability to build virtual CDNs on-the-fly on top of shared, third-party infrastructure, at scale. To bring this idea closer to reality, we begin with large-scale simulations that quantify the effects elastic CDNs would have if deployed, and then build and evaluate MiniCache, a specialized, minimalistic virtualized content cache that runs on the Xen hypervisor. MiniCache is able to serve content at rates of up to 32 Gb/s and handle up to 600K reqs/sec on a single CPU core, and it boots in about 90 milliseconds on x86 and around 370 milliseconds on ARM32.