331 research outputs found

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow in 2008, software-defined networking (SDN) has gained momentum. Although the standards-developing organizations working on SDN disagree somewhat about what SDN is and how it is defined, they all identify traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as splitting traffic among multiple paths or advance-reservation systems are used. In this context, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols are categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to establish how the type of interface in which a protocol operates influences TE. In addition, the impact of the SDN protocols on TE is evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture was selected as the yardstick because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously results in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
Funding: European Commission, Horizon 2020 Research and Innovation Programme (GN4), Grant 691567; Spanish Ministry of Economy and Competitiveness, Secure Deployment of Services Over SDN and NFV-based Networks Project (S&NSEC), Grant TEC2013-47960-C4-3-
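The congestion-minimization objective mentioned in the abstract is often pursued by splitting traffic among multiple paths. A minimal sketch of that idea, with a hypothetical topology, link capacities, and flow demands invented purely for illustration (none of this is from the survey):

```python
# Greedy multipath traffic splitting: place each flow on the candidate path
# whose bottleneck link (most loaded, after adding the flow) stays smallest.
# Topology, capacities, and demands below are illustrative only.

def bottleneck_util(path, load, capacity, extra):
    """Max utilization over the links of `path` if `extra` Mbit/s is added."""
    return max((load[link] + extra) / capacity[link] for link in path)

def split_flows(flows, paths, capacity):
    load = {link: 0.0 for link in capacity}   # current load per link (Mbit/s)
    placement = {}
    for flow, demand in flows:
        # choose the candidate path that minimizes the resulting bottleneck
        best = min(paths[flow],
                   key=lambda p: bottleneck_util(p, load, capacity, demand))
        for link in best:
            load[link] += demand
        placement[flow] = best
    return placement, load

capacity = {"a-b": 100.0, "a-c": 100.0, "b-d": 100.0, "c-d": 100.0}
paths = {"f1": [["a-b", "b-d"], ["a-c", "c-d"]],
         "f2": [["a-b", "b-d"], ["a-c", "c-d"]]}
flows = [("f1", 60.0), ("f2", 60.0)]

placement, load = split_flows(flows, paths, capacity)
# The two 60 Mbit/s flows land on disjoint paths, keeping every link at 60%
# utilization instead of overloading one path to 120%.
```

A real TE controller would solve this globally (e.g., as a multi-commodity flow problem); the greedy pass above only illustrates why splitting across paths lowers the worst-case link utilization.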

    The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions

    In recent years, the Internet has experienced an unexpected paradigm shift in its usage model, which has pushed researchers towards the design of the Information-Centric Networking (ICN) paradigm as a possible replacement for the existing architecture. Even though both academia and industry have investigated the feasibility and effectiveness of ICN, completely replacing the Internet Protocol (IP) is a challenging task. Some research groups have already addressed the coexistence of the two by designing their own architectures, but none of these is the definitive solution for moving towards the future Internet while leaving current networking untouched. To design such an architecture, the research community now needs a comprehensive overview of the existing solutions that have addressed coexistence so far. The purpose of this paper is to reach this goal by providing the first comprehensive survey and classification of the coexistence architectures according to their features (i.e., deployment approach, deployment scenarios, addressed coexistence requirements, and architecture or technology used) and evaluation parameters (i.e., challenges emerging during deployment and the runtime behaviour of an architecture). We believe that this paper will fill the gap that must be closed before the final coexistence architecture can be designed.
Comment: 23 pages, 16 figures, 3 tables

    A policy-based framework towards smooth adaptive playback for dynamic video streaming over HTTP

    The growth of video streaming on the Internet in the last few years has been highly significant and promises to continue in the future. This is related to the growth in Internet users and especially to the diversification of end-user devices seen nowadays. Earlier video streaming solutions did not adequately consider Quality of Experience (QoE) from the user's perspective. This weakness has since been overcome with DASH video streaming. The main feature of this protocol is to provide different versions, in terms of quality, of the same content. Depending on the status of the network infrastructure between the video server and the user device, the DASH protocol automatically selects the most adequate content version, thus providing the user with the best possible quality for the consumption of that content. The main issue with the DASH protocol is associated with the control loop, between each client and the video server, which governs the rate of the video stream. As network congestion increases, the client requests from the server a video stream with a lower rate. Nevertheless, due to network latency, the DASH protocol on its own may not be able to stabilize the video stream rate at a level that guarantees a satisfactory QoE to end-users. Network programming is a very active and popular topic in the field of network infrastructure management. In this area, the Software-Defined Networking paradigm is an approach in which a network controller, with a relatively abstracted view of the physical network infrastructure, tries to perform more efficient management of the data path.
The current work studies the combination of the DASH protocol and the Software-Defined Networking paradigm in order to achieve a more adequate sharing of network resources that can benefit both the users' QoE and network management.
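The client-side control loop described in the abstract can be pictured as a throughput-based bitrate selector: the client requests the highest representation its measured throughput can sustain. The bitrate ladder and safety margin below are illustrative assumptions, not values from this work:

```python
# Simplified throughput-based DASH rate adaptation. The client picks the
# highest representation whose bitrate fits under the measured throughput
# times a safety margin. Ladder values (kbit/s) are illustrative only.

BITRATE_LADDER = [350, 750, 1500, 3000, 6000]  # available representations

def select_bitrate(measured_kbps, margin=0.8):
    """Return the highest ladder entry <= margin * measured throughput,
    falling back to the lowest representation when none fits."""
    budget = margin * measured_kbps
    feasible = [b for b in BITRATE_LADDER if b <= budget]
    return feasible[-1] if feasible else BITRATE_LADDER[0]

# As congestion grows and measured throughput drops, the requested rate drops:
for tput in (8000, 4000, 1200, 300):
    print(tput, "->", select_bitrate(tput))
```

The instability the abstract points at arises because the measurement feeding this loop is stale by one round-trip under congestion; an SDN controller with a network-wide view can damp the resulting oscillations, which is the combination this work studies.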

    A patient agent controlled customized blockchain based framework for internet of things

    Although Blockchain implementations have emerged as revolutionary technologies for various industrial applications, including cryptocurrencies, they have not been widely deployed to store data streaming from sensors to remote servers in architectures known as the Internet of Things. New Blockchain-for-the-Internet-of-Things models promise secure solutions for eHealth, smart cities, and other applications. These models pave the way for continuous monitoring of patients' physiological signs with wearable sensors to augment traditional medical practice without recourse to storing data with a trusted authority. However, existing Blockchain algorithms cannot accommodate the huge volumes, security, and privacy requirements of health data. In this thesis, our first contribution is an end-to-end secure eHealth architecture that introduces an intelligent Patient Centric Agent. The Patient Centric Agent, executing on dedicated hardware, manages the storage and access of streams of sensor-generated health data in a customized Blockchain and other, less secure repositories. As IoT devices cannot host Blockchain technology due to their limited memory, power, and computational resources, the Patient Centric Agent coordinates and communicates with a private customized Blockchain on behalf of the wearable devices. While the adoption of a Patient Centric Agent offers solutions for continuous monitoring of patients' health and for dealing with storage, data privacy, and network security issues, the architecture is vulnerable to Denial of Service (DoS) and single-point-of-failure attacks. To address this issue, we advance a second contribution: a decentralised eHealth system in which the Patient Centric Agent is replicated at three levels: the Sensing Layer, the NEAR Processing Layer, and the FAR Processing Layer. The functionalities of the Patient Centric Agent are customized to manage the tasks of the three levels. Simulations confirm the protection of the architecture against DoS attacks.
Few patients require all their health data to be stored in Blockchain repositories; instead, an appropriate storage medium must be selected for each chunk of data by matching personal needs and preferences with the features of candidate storage media. Motivated by this context, we advance a third contribution: a recommendation model for health data storage that can accommodate patient preferences and make storage decisions rapidly, in real time, even with streamed data. The mapping between health data features and the characteristics of each repository is learned using machine learning. The Blockchain's capacity to make transactions and store records without central oversight enables its application to IoT networks outside health, such as underwater IoT networks, where the unattended nature of the nodes threatens their security and privacy. However, underwater IoT differs from ground IoT, as acoustic signals are the communication medium, leading to high propagation delays and high error rates exacerbated by turbulent water currents. Our fourth contribution is a customized Blockchain-leveraged framework, in which the Patient Centric Agent model is renamed the Smart Agent, for securely monitoring underwater IoT. Finally, the Smart Agent has been investigated in developing an IoT smart-home and smart-city monitoring framework. The key algorithms underpinning each contribution have been implemented and analysed using simulators.
Doctor of Philosophy
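The tamper-evidence that motivates storing sensor readings in a Blockchain comes from hash chaining, where each block commits to its predecessor. A generic sketch of that mechanism (not the thesis's actual customized data structures, which the abstract does not detail):

```python
import hashlib
import json

# Minimal hash-chained ledger for sensor readings (generic sketch only).
# Each block stores the SHA-256 hash of the previous block, so altering any
# stored reading invalidates every later link in the chain.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, reading):
    prev = block_hash(chain[-1]) if chain else "0" * 64  # genesis sentinel
    chain.append({"prev": prev, "reading": reading})
    return chain

def verify(chain):
    """True iff every block's `prev` matches the hash of its predecessor."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"sensor": "hr", "bpm": 72})
append_block(chain, {"sensor": "hr", "bpm": 75})
assert verify(chain)

chain[0]["reading"]["bpm"] = 10   # tamper with a stored reading
assert not verify(chain)          # the chain no longer verifies
```

A real deployment adds consensus, signatures, and block batching on top of this link structure; the point here is only why wearable-class devices, which cannot afford that machinery, delegate it to an agent.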

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential in terms of supporting a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios in future wireless networks.
Comment: 46 pages, 22 figures
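The "interactive decision making" the article attributes to reinforcement learning can be illustrated with the simplest such learner, an epsilon-greedy bandit choosing among wireless channels. The channel count and success probabilities below are invented for this sketch and are not from the paper:

```python
import random

# Epsilon-greedy bandit selecting among wireless channels (illustrative:
# the hidden per-channel success probabilities are invented).
random.seed(0)
TRUE_RATES = [0.2, 0.5, 0.8]

def run_bandit(steps=5000, eps=0.1):
    counts = [0] * len(TRUE_RATES)    # transmissions per channel
    values = [0.0] * len(TRUE_RATES)  # running mean reward per channel
    for _ in range(steps):
        if random.random() < eps:
            arm = random.randrange(len(TRUE_RATES))   # explore
        else:
            arm = values.index(max(values))           # exploit best estimate
        reward = 1.0 if random.random() < TRUE_RATES[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

values, counts = run_bandit()
# After enough steps, the estimates approach the true success rates and the
# vast majority of transmissions go to the best channel.
```

Full RL as surveyed in the article generalizes this from a single repeated choice to sequential states; the bandit isolates the exploration-exploitation trade-off that all of those methods share.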

    Quality of service, security and trustworthiness for network slices

    Telecommunications systems are becoming much more intelligent and dynamic due to the expansion of multiple network types (i.e., wired, wireless, Internet of Things (IoT), and cloud-based networks). Owing to this variety, the old model of designing a specific network for a single purpose, with the resulting coexistence of multiple different control systems, is evolving towards a new model in which a more unified control system offers a wide range of services for multiple purposes with different requirements and characteristics. To achieve this, networks have become more digital and virtual thanks to Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Network Slicing takes the strengths of these two technologies and allows network control systems to improve their performance, as services may be deployed, and their interconnection configured, across multiple transport domains using NFV/SDN tools such as NFV Orchestrators (NFV-O) and SDN Controllers. The main objective of this thesis is to contribute to the state of the art of Network Slicing, with a special focus on security aspects of the architectures and processes used to deploy, monitor, and enforce secured and trusted resources composing network slices. This document is structured in eight chapters. Chapter 1 provides the motivation and objectives of this thesis, describing where it contributes and what it set out to study, evaluate, and research. Chapter 2 presents the background necessary to understand the following chapters.
This chapter presents a state of the art with three clear sections: 1) the key technologies necessary to create network slices; 2) an overview of the relationship between Service Level Agreements (SLAs) and network slices, with a specific view on Security Service Level Agreements (SSLAs); and 3) the literature on distributed architectures and systems and on the use of abstraction models to generate trust and security and to avoid management centralization. Chapter 3 introduces the research on Network Slicing: first the creation of network slices using resources placed in multiple computing and transport domains, and then how the use of multiple virtualization technologies allows more efficient network slice deployments and where each technology fits best to achieve the performance improvements. Chapter 4 presents the research on the management of network slices and on the definition of SLAs and SSLAs capturing the service and security requirements needed to achieve the expected QoS and the right security level. Chapter 5 studies the possibility of shifting, to a certain degree, the trend of centralizing control and management architectures towards a distributed design. Chapter 6 focuses on the generation of trust among service resource providers; it first describes how the concept of trust is mapped into an analytical system and then how trust management between providers and clients is done in a transparent and fair way. Chapter 7 is devoted to dissemination and presents the set of scientific publications produced in the form of journals, international conferences, and collaborations.
Chapter 8 concludes the work and outcomes previously presented and discusses possible future research.
Postprint (published version)
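One common way to map trust into an analytical system, shown here purely as a hedged illustration (the thesis's actual trust model is not specified in the abstract and may differ), is an exponentially weighted reputation score updated after each service interaction:

```python
# Exponentially weighted reputation score per provider (illustrative only).
# Recent interactions weigh more than old ones, so a misbehaving provider's
# score drops quickly even after a long good history.

def update_reputation(current, outcome, alpha=0.3):
    """Blend the latest interaction outcome (0.0 = failed, 1.0 = met the
    agreed service level) into the running score."""
    return (1 - alpha) * current + alpha * outcome

score = 0.5                       # neutral prior for an unknown provider
for outcome in (1.0, 1.0, 1.0):   # three successful service deliveries
    score = update_reputation(score, outcome)
good = score                      # reputation after the good streak

score = update_reputation(score, 0.0)  # a single SLA violation
# The one failure pulls the score noticeably below its previous value,
# while keeping it inside the [0, 1] range.
```

Recording such updates on a shared ledger, rather than at a central broker, is what lets multiple providers and clients audit the same scores without trusting a single manager, which matches the distributed direction described for Chapters 5 and 6.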

    A Survey of Machine Learning Techniques for Video Quality Prediction from Quality of Delivery Metrics

    A growing number of video streaming networks are incorporating machine learning (ML) applications. The growth of video streaming services places enormous pressure on network and video content providers, who need to proactively maintain high levels of video quality. ML has been applied to predict the quality of video streams. Quality of delivery (QoD) measurements, which capture the end-to-end performance of network services, have been leveraged in video quality prediction. The drive for end-to-end encryption, for privacy and digital rights management, has brought about a lack of visibility for operators who desire insights from video quality metrics. In response, numerous solutions have been proposed to tackle the challenge of video quality prediction from QoD-derived metrics. This survey reviews studies that focus on ML techniques for predicting video quality from QoD metrics in video streaming services. In the context of video quality measurements, we focus on QoD metrics, which are not tied to a particular type of video streaming service. Unlike previous reviews in the area, this contribution considers papers published between 2016 and 2021. Approaches for predicting video quality from QoD are grouped under the following headings: (1) video quality prediction under QoD impairments, (2) prediction of video quality from encrypted video streaming traffic, (3) predicting video quality in HTTP adaptive streaming (HAS) applications, (4) predicting video quality in SDN applications, (5) predicting video quality in wireless settings, and (6) predicting video quality in WebRTC applications. Throughout the survey, research challenges and directions in this area are discussed, including (1) machine learning versus deep learning; (2) adaptive deep learning for improved video delivery; (3) computational cost and interpretability; and (4) self-healing networks and failure recovery.
The survey findings reveal that traditional ML algorithms are the most widely adopted models for solving video quality prediction problems. This family of algorithms holds considerable potential because its members are well understood, easy to deploy, and have lower computational requirements than deep learning techniques.
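As a toy instance of the "traditional ML" family the survey finds most widely adopted, a least-squares fit predicting a quality score from a single QoD metric. The data points are synthetic, invented for this sketch; real studies use many QoD features and validated subjective scores:

```python
# Least-squares fit of a quality score against one QoD metric (throughput).
# Data are synthetic. Real QoD-based predictors use several features
# (throughput, loss, delay, stalls) and cross-validated models.

throughput = [0.5, 1.0, 2.0, 4.0, 8.0]   # Mbit/s (synthetic)
mos        = [1.5, 2.0, 3.0, 4.0, 4.5]   # mean opinion score (synthetic)

n = len(throughput)
mean_x = sum(throughput) / n
mean_y = sum(mos) / n

# Closed-form simple linear regression: slope = cov(x, y) / var(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(throughput, mos))
         / sum((x - mean_x) ** 2 for x in throughput))
intercept = mean_y - slope * mean_x

def predict(x_mbps):
    """Predicted quality score for a given measured throughput."""
    return intercept + slope * x_mbps

# Higher measured throughput predicts a higher quality score.
```

The appeal the survey notes is visible even here: the model is a closed-form formula, trivially deployable in an operator pipeline, and interpretable (the slope directly states how much quality each extra Mbit/s buys), at the cost of missing the nonlinear effects deeper models capture.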