4 research outputs found

    Revealing the Evolution of a Cloud Provider Through its Network Weather Map

    Peer reviewed
    Researchers often face a lack of data on large operational networks to understand how they are used, how they behave, and sometimes how they fail. Such data is crucial for driving the evolution of Internet protocols and for developing techniques such as traffic engineering and DDoS detection and mitigation. Companies that have access to measurements from operational networks and services leverage this data to improve the availability, speed, and resilience of their Internet services. Unfortunately, collecting large datasets, especially regularly over a long period of time, is a daunting task, and such datasets remain scarce in the literature. We tackle this problem by releasing a dataset collected over roughly two years of observations of a major cloud company (OVH). Our dataset, called the OVH Weather dataset, captures the evolution over time of more than 180 routers, 1,100 internal links, 500 external links, and their load percentages in the backbone network. The dataset has a high density, with snapshots taken every five minutes, totaling more than 500,000 files. In this paper, we also illustrate how our dataset can be used to study the evolution of backbone networks. Finally, our dataset opens several exciting research questions that we make available to the research community.
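The abstract does not specify the snapshot file format, but a typical first step with such a dataset is aggregating per-link load across the five-minute snapshots into time series. A minimal sketch, assuming hypothetical JSON snapshots (the `timestamp`, `links`, `src`, `dst`, and `load_pct` field names are invented for illustration):

```python
import json
from collections import defaultdict

def link_load_series(snapshot_texts):
    """Aggregate per-link load percentages across a sequence of
    five-minute snapshots into per-link time series."""
    series = defaultdict(list)
    for text in snapshot_texts:
        snap = json.loads(text)
        for link in snap["links"]:
            key = (link["src"], link["dst"])
            series[key].append((snap["timestamp"], link["load_pct"]))
    return dict(series)

# Two toy snapshots five minutes (300 s) apart; format is illustrative only.
snap0 = json.dumps({"timestamp": 0,
                    "links": [{"src": "r1", "dst": "r2", "load_pct": 41.5}]})
snap1 = json.dumps({"timestamp": 300,
                    "links": [{"src": "r1", "dst": "r2", "load_pct": 43.0}]})
print(link_load_series([snap0, snap1])[("r1", "r2")])
```

At the dataset's real scale (500,000+ files), the same loop would stream files from disk rather than hold all snapshots in memory.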

    Context-based security function orchestration for the network edge

    Over the last few years, the number of interconnected devices has increased dramatically, generating zettabytes of traffic each year. In order to cater to the requirements of end-users, operators have deployed network services to enhance their infrastructure. Nowadays, telecommunications service providers are making use of virtualised, flexible, and cost-effective network-wide services, under what is known as Network Function Virtualisation (NFV). Future network and application requirements necessitate services to be delivered at the edge of the network, in close proximity to end-users; this has the potential to reduce end-to-end latency and minimise the utilisation of the core infrastructure while providing flexible allocation of resources. One class of functionality that NFV facilitates is the rapid deployment of network security services. However, the urgency of assuring connectivity to an ever-increasing number of devices, together with their resource-constrained nature, has led to the neglect of security principles and best practices. These low-cost devices are often exploited for malicious purposes in targeting the network infrastructure, with recent volumetric Distributed Denial of Service (DDoS) attacks often surpassing 1 terabyte per second of network traffic. The work presented in this thesis aims to identify the unique requirements of security modules implemented as Virtual Network Functions (VNFs), and the associated challenges in providing management and orchestration of complex chains consisting of multiple VNFs. The work presented here focuses on deployment, placement, and lifecycle management of microservice-based security VNFs in resource-constrained environments using contextual information on device behaviour.
Furthermore, the thesis presents a formulation of the latency-optimal placement of service chains at the network edge, provides an optimal solution using Integer Linear Programming, and presents an associated near-optimal heuristic that solves larger problem instances in reduced time and can be used in conjunction with context-based security paradigms. The results of this work demonstrate that lightweight security VNFs can be tailored for, and hosted on, a variety of devices, including commodity resource-constrained systems found in edge networks. Furthermore, a context-based implementation of the management and orchestration of lightweight services enables the deployment of real-world complex security service chains tailored to the user’s performance demands on the network. Finally, the results show that on-path placement of service chains reduces end-to-end latency and minimises the number of service-level agreement violations, thereby enabling secure use of latency-critical networks.
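The thesis's exact ILP formulation and heuristic are not reproduced in the abstract, but the underlying placement problem can be sketched: assign each VNF in a chain to a node so that processing latency plus inter-node path latency is minimised. A hedged brute-force illustration (the topology, latency values, and node names are invented; the thesis solves this with ILP and a near-optimal heuristic rather than exhaustive search):

```python
from itertools import product

def chain_latency(placement, path_latency, proc_latency):
    """End-to-end latency of a service chain: processing cost at each
    hosting node plus path latency between consecutive hosting nodes."""
    total = sum(proc_latency[n] for n in placement)
    total += sum(path_latency[(a, b)] for a, b in zip(placement, placement[1:]))
    return total

def best_placement(chain_len, nodes, path_latency, proc_latency):
    """Exhaustively search all node assignments for the chain's VNFs.
    Exponential in chain length; fine for a toy, not for real topologies."""
    return min(product(nodes, repeat=chain_len),
               key=lambda p: chain_latency(p, path_latency, proc_latency))

# Toy three-node edge topology (all values illustrative).
nodes = ["edge1", "edge2", "core"]
proc = {"edge1": 2.0, "edge2": 2.5, "core": 4.0}
lat = {(a, b): (0.0 if a == b else 5.0) for a in nodes for b in nodes}
lat[("edge1", "edge2")] = lat[("edge2", "edge1")] = 1.0

print(best_placement(2, nodes, lat, proc))
```

Even this toy shows the trade-off the thesis studies: a well-connected edge node can beat a faster but more distant core node once path latency is accounted for.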

    A survey on Machine Learning Techniques for Routing Optimization in SDN

    In conventional networks, there was a tight bond between the control plane and the data plane. The introduction of Software-Defined Networking (SDN) separated these planes and provided additional features and tools to address some of the problems of traditional networks (i.e., latency, consistency, efficiency). SDN is a flexible networking paradigm that boosts network control, programmability, and automation. It offers benefits in many areas, including routing. More specifically, efficiently organizing, managing, and optimizing routing in networks requires some intelligence, and SDN makes such intelligence easy to integrate. To this end, many researchers have implemented different machine learning (ML) techniques to enhance SDN routing applications. This article surveys the use of ML techniques for routing optimization in SDN based on three core categories (i.e., supervised learning, unsupervised learning, and reinforcement learning). The main contributions of this survey are threefold. Firstly, it presents detailed summary tables of these studies and discusses their comparison, including a summary of the best works according to our analysis. Secondly, it summarizes the main findings, best works, and missing aspects, and includes a quick guideline for choosing the best ML technique in this field (based on available resources and objectives). Finally, it provides specific future research directions, divided into six sections, to conclude the survey. Our conclusion is that there is a strong trend towards intelligence-based routing in programmable networks, particularly over the last three years, but considerable effort is still required to achieve comprehensive comparisons and synergies of approaches, meaningful evaluations based on open datasets and topologies, and detailed practical implementations (following recent standards) that could be adopted by industry.
In summary, future efforts should focus on reproducible research rather than on new isolated ideas; otherwise, most of these applications will barely be implemented in practice.
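Of the three categories the survey covers, reinforcement learning maps most directly onto routing: a controller learns which next hop minimises path cost. A minimal tabular Q-learning sketch on a toy topology (the graph, latencies, and hyperparameters are invented for illustration and are far simpler than the surveyed SDN systems):

```python
import random

# Toy topology: adjacency map with per-hop latencies (illustrative values).
GRAPH = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 1.0, "D": 6.0},
    "C": {"A": 4.0, "B": 1.0, "D": 1.0},
    "D": {},  # destination, terminal
}
DEST = "D"

def train(episodes=2000, alpha=0.5, gamma=1.0, eps=0.2, seed=0):
    """Tabular Q-learning: Q[node][next_hop] estimates the negated
    remaining latency to DEST when forwarding via next_hop."""
    rng = random.Random(seed)
    q = {n: {m: 0.0 for m in nbrs} for n, nbrs in GRAPH.items()}
    for _ in range(episodes):
        node = rng.choice(["A", "B", "C"])
        while node != DEST:
            nbrs = list(GRAPH[node])
            # Epsilon-greedy exploration over next hops.
            if rng.random() < eps:
                nxt = rng.choice(nbrs)
            else:
                nxt = max(nbrs, key=lambda m: q[node][m])
            reward = -GRAPH[node][nxt]  # negative latency of this hop
            future = 0.0 if nxt == DEST else max(q[nxt].values())
            q[node][nxt] += alpha * (reward + gamma * future - q[node][nxt])
            node = nxt
    return q

def route(q, src):
    """Follow the greedy learned policy from src to DEST."""
    path = [src]
    while path[-1] != DEST:
        path.append(max(q[path[-1]], key=q[path[-1]].get))
    return path

print(route(train(), "A"))
```

Here the learned policy recovers the lowest-latency path; the surveyed works extend this idea with richer state (link loads, QoS metrics) and function approximation instead of a table.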