Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization
Network softwarization plays a significant role in the development and deployment of communication systems for 5G and beyond. It enables a more flexible and intelligent network architecture that supports agile network management and the rapid launch of innovative network services, with considerable reductions in Capital Expenditure (CAPEX) and Operating Expenditure (OPEX). Despite these benefits, the 5G system also raises unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, state-of-the-art (SOTA) technologies and systems cannot achieve the one-millisecond end-to-end latency required by the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation provides a comprehensive introduction to three innovative approaches that improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented and rigorously evaluated. The measurement results show that these novel approaches advance the design and implementation of an ultra-reliable, low-latency, energy-efficient and computing-first software data plane for 5G communication systems and beyond.
QOE-AWARE CONTENT DISTRIBUTION SYSTEMS FOR ADAPTIVE BITRATE VIDEO STREAMING
A prodigious increase in video streaming content along with a simultaneous rise in end system capabilities has led to the proliferation of adaptive bit rate video streaming users in the Internet. Today, video streaming services range from Video-on-Demand services like traditional IP TV to more recent technologies such as immersive 3D experiences for live sports events. In order to meet the demands of these services, the multimedia and networking research community continues to strive toward efficiently delivering high quality content across the Internet while also trying to minimize content storage and delivery costs.
The introduction of flexible and adaptable technologies such as compute and storage clouds, Network Function Virtualization and Software Defined Networking continues to fuel content provider revenue. Today, content providers such as Google and Facebook build their own Software-Defined WANs to efficiently serve millions of users worldwide, while Netflix partners with ISPs such as AT&T (using OpenConnect) and cloud providers such as Amazon EC2 to serve their content and manage the delivery of several petabytes of high-quality video content for millions of subscribers at a global scale, respectively. In recent years, the unprecedented growth of video traffic in the Internet has seen the emergence of several innovative systems such as Software Defined Networks and Information Centric Networks, as well as inventive protocols such as QUIC, in an effort to keep up with the effects of this remarkable growth. While most existing systems continue to sub-optimally satisfy user requirements, future video streaming systems will require optimal management of storage and bandwidth resources that are several orders of magnitude larger than what is implemented today. Moreover, Quality-of-Experience metrics are becoming increasingly fine-grained in order to accurately quantify diverse content and consumer needs.
In this dissertation, we design and investigate innovative adaptive bit rate video streaming systems and analyze the implications of recent technologies on traditional streaming approaches using real-world experimentation methods. We provide useful insights for current and future content distribution network administrators to tackle Quality-of-Experience dilemmas and serve high quality video content to many users at a global scale. In order to show how Quality-of-Experience can benefit from core network architectural modifications, we design and evaluate prototypes for video streaming in Information Centric Networks and Software-Defined Networks. We also present a real-world, in-depth analysis of adaptive bitrate video streaming over protocols such as QUIC and MPQUIC to show how end-to-end protocol innovation can contribute to substantial Quality-of-Experience benefits for adaptive bit rate video streaming systems. We investigate a cross-layer approach based on QUIC and observe that application-layer information can be successfully used to determine transport layer parameters for ABR streaming applications.
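A minimal sketch of the kind of throughput-driven bitrate selection an ABR client performs; the harmonic-mean estimator, the safety factor, and all names are illustrative assumptions, not the dissertation's actual algorithm:

```python
def select_bitrate(throughput_samples_kbps, ladder_kbps, safety_factor=0.8):
    """Pick the highest rung of the bitrate ladder that fits within a
    conservative (harmonic-mean) estimate of recent download throughput."""
    n = len(throughput_samples_kbps)
    harmonic_mean = n / sum(1.0 / s for s in throughput_samples_kbps)
    budget = harmonic_mean * safety_factor     # leave headroom for variance
    feasible = [r for r in sorted(ladder_kbps) if r <= budget]
    # Fall back to the lowest rung when even it exceeds the budget.
    return feasible[-1] if feasible else min(ladder_kbps)
```

In a cross-layer design, the same application-level throughput estimate could also inform transport parameters (e.g., how aggressively QUIC streams are scheduled), which is the direction the abstract hints at.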
Experimentation on Dynamic Congestion Control in Software Defined Networking (SDN) and Network Function Virtualisation (NFV)
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
In this thesis, a novel framework for dynamic congestion control is proposed. The study concerns congestion control in broadband communication networks. Congestion results when demand temporarily exceeds capacity, leading to severe degradation of Quality of Service (QoS) and possibly loss of traffic. Since traffic is stochastic in nature, high demand may arise anywhere in a network and cause congestion. There are different ways to mitigate the effects of congestion: by rerouting, by aggregation to take advantage of statistical multiplexing, and by discarding overly demanding traffic, which is known as admission control. This thesis tries to accommodate as much traffic as possible and studies the effect of routing and aggregation on a rather general mix of traffic types.
Software Defined Networking (SDN) and Network Function Virtualization (NFV) are concepts that allow dynamic configuration of network resources by decoupling control from payload data and by allocating network functions to the most suitable physical node. This allows the implementation of a centralised control that takes the state of the entire network into account and configures nodes dynamically to avoid congestion. It is assumed that node controls can be expressed in commands supported by OpenFlow v1.3. Due to state dependencies in space and time, the network dynamics are very complex, so the thesis resorts to a simulation approach. The load in the network depends on many factors, such as the traffic characteristics and traffic matrix, the topology and the node capacities. To be able to study the impact of control functions, some parts of the environment are fixed, such as the topology and the node capacities, while the traffic distribution in the network is statistically averaged over randomly generated traffic matrices. The traffic consists of approximately equal intensities of smooth, bursty and long-memory traffic.
The thesis designs an algorithm that routes traffic and configures queue resources so that delay is minimised; delay is chosen as the optimisation parameter because it is additive and real-time applications are delay sensitive. The optimisation is studied both with respect to total end-to-end delay and maximum end-to-end delay. The delay is used as the link weight, and paths are determined by Dijkstra's algorithm. Furthermore, nodes are configured to serve the traffic optimally, which in turn depends on the routing. The proposed algorithm is a fixed-point system of equations that iteratively evaluates routing, aggregation and delay until an equilibrium point is found.
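The routing-delay fixed point described above can be sketched in a few lines; the M/M/1-style delay model (1/(capacity - load)), the function names and the toy directed topology are illustrative assumptions, not the thesis's exact queueing formulation:

```python
import heapq

def shortest_path(delays, src, dst):
    """Dijkstra with link delays as weights; delays[(u, v)] = delay of edge u->v."""
    dist = {n: float("inf") for e in delays for n in e}
    prev = {}
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                      # stale queue entry
        for (a, b), w in delays.items():
            if a == u and d + w < dist[b]:
                dist[b], prev[b] = d + w, u
                heapq.heappush(pq, (dist[b], b))
    path, node = [dst], dst
    while node != src:                    # reconstruct the path backwards
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def fixed_point(capacity, demands, max_iter=100, tol=1e-9):
    """Iterate routing -> link load -> link delay until delays stabilise."""
    delays = {e: 1.0 / c for e, c in capacity.items()}   # empty-network delays
    for _ in range(max_iter):
        load = {e: 0.0 for e in capacity}
        for (src, dst), rate in demands.items():
            path = shortest_path(delays, src, dst)
            for e in zip(path, path[1:]):
                load[e] += rate
        # M/M/1-style delay, clamped to avoid division by zero at saturation.
        new = {e: 1.0 / max(capacity[e] - load[e], 1e-9) for e in capacity}
        if max(abs(new[e] - delays[e]) for e in delays) < tol:
            return new                     # equilibrium point reached
        delays = new
    return delays
```

In general such an iteration can oscillate (route flapping), which is why the thesis frames it as a fixed-point system and checks for an equilibrium.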
Three strategies are compared: static node configuration, where each queue is allocated one third of the node resources and no aggregation is used; aggregation of real-time (taken as smooth and bursty) traffic onto the same queue; and dynamic aggregation based on the entropy of the traffic streams and their aggregates. The simulation study shows gains of 10-40% in the QoS parameters, demonstrating the positive effects of the proposed routing and aggregation strategy and the usefulness of the algorithm. The proposed algorithm constitutes the central control logic, and the resulting control actions are realisable through the SDN/NFV architecture.
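The entropy-based aggregation idea can be illustrated with a small sketch; the histogram-based entropy estimator and the merge rule below are illustrative assumptions, not the thesis's actual criterion:

```python
from collections import Counter
from math import log2

def entropy(samples, bins=10, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of a histogram of inter-arrival times."""
    width = (hi - lo) / bins
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

def should_aggregate(stream_a, stream_b, **kw):
    """Illustrative rule: merge two streams onto one queue only when the
    aggregate is no less predictable (no higher entropy) than either stream."""
    return entropy(stream_a + stream_b, **kw) <= max(entropy(stream_a, **kw),
                                                     entropy(stream_b, **kw))
```

Two streams with similar statistics merge cleanly (statistical multiplexing gain), while merging dissimilar streams raises the aggregate's entropy and is rejected.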
On the design of a cost-efficient resource management framework for low latency applications
The ability to offer low latency communications is one of the critical design requirements for the upcoming 5G era. The current practice for achieving low latency is to overprovision network resources (e.g., bandwidth and computing resources). However, this approach is not cost-efficient and cannot be applied at large scale. To solve this, more cost-efficient resource management is required to dynamically and efficiently exploit network resources to guarantee low latencies. The advent of network virtualization provides novel opportunities for achieving cost-efficient low latency communications. It decouples network resources from physical machines through virtualization and groups resources in the form of virtual machines (VMs). By doing so, network resources can be flexibly increased at any network location through VM auto-scaling to alleviate network delays due to lack of resources. At the same time, the operational cost can be greatly reduced by shutting down under-utilized VMs (e.g., for energy saving). Network virtualization also enables the emerging concept of mobile edge computing, whereby VMs can be utilized to host low latency applications at the network edge to shorten communication latency. Despite these advantages, a key challenge is the optimal management of the different physical and virtual resources for low latency communications. This thesis addresses the challenge by developing a novel cost-efficient resource management framework that targets the cost-efficient design of 1) low latency communication infrastructures; 2) dynamic resource management for low latency applications; and 3) fault-tolerant resource management.
Compared to the current practices, the proposed framework achieves an 80% deployment cost reduction for the design of low latency communication infrastructures; continuously saves up to 33% of operational cost through dynamic resource management while always achieving low latencies; and succeeds in providing fault tolerance to low latency communications with a guaranteed operational cost.
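The trade-off between latency and operational cost in VM auto-scaling can be sketched as a simple threshold policy; the thresholds and function names are illustrative assumptions, not the thesis's optimization framework:

```python
def autoscale(active_vms, utilization, latency_ms,
              latency_target_ms=10.0, low_util=0.3, min_vms=1):
    """Illustrative threshold-based auto-scaler: add a VM when latency
    breaches the target, retire one when average utilization is low."""
    if latency_ms > latency_target_ms:
        return active_vms + 1          # scale out to restore low latency
    if utilization < low_util and active_vms > min_vms:
        return active_vms - 1          # scale in to cut operational cost
    return active_vms                  # steady state: no change
```

The thesis replaces such a naive policy with cost-optimal dynamic resource management, but the two opposing pressures (latency guarantee vs. shutting down low-utilized VMs) are the same.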
Performance Benchmarking of State-of-the-Art Software Switches for NFV
With the ultimate goal of replacing proprietary hardware appliances with
Virtual Network Functions (VNFs) implemented in software, Network Function
Virtualization (NFV) has been gaining popularity in the past few years.
Software switches route traffic between VNFs and physical Network Interface
Cards (NICs). It is of paramount importance to compare the performance of
different switch designs and architectures. In this paper, we propose a
methodology to compare fairly and comprehensively the performance of software
switches. We first explore the design spaces of seven state-of-the-art software
switches and then compare their performance under four representative test
scenarios. Each scenario corresponds to a specific case of routing NFV traffic
between NICs and/or VNFs. In our experiments, we evaluate the throughput and
latency between VNFs in two of the most popular virtualization environments,
namely virtual machines (VMs) and containers. Our experimental results show
that no single software switch prevails in all scenarios. It is, therefore,
crucial to choose the most suitable solution for the given use case. At the
same time, the presented results and analysis provide a deeper insight into the
design tradeoffs and identify potential performance bottlenecks that could
inspire new designs.
Flexible Resource Management in Next-Generation Networks with SDN
Abstract: 5G and beyond-5G/6G are expected to shape the future economic growth of multiple vertical industries by providing the network infrastructure required to enable innovation and new business models. They have the potential to offer a wide spectrum of services, namely higher data rates, ultra-low latency, and high reliability. To achieve their promises, 5G and beyond-5G/6G rely on software-defined networking (SDN), edge computing, and radio access network (RAN) slicing technologies. In this thesis, we aim to use SDN as a key enabler to enhance resource management in next-generation networks. SDN allows programmable management of edge computing resources and dynamic orchestration of RAN slicing. However, achieving efficient performance based on SDN capabilities is a challenging task due to the permanent fluctuations of traffic in next-generation networks and the diversified quality of service requirements of emerging applications. Toward our objective, we address the load balancing problem in distributed SDN architectures, and we optimize the RAN slicing of communication and computation resources at the edge of the network. In the first part of this thesis, we present a proactive approach to balance the load in a distributed SDN control plane using the data plane component migration mechanism. First, we propose prediction models that forecast the load of SDN controllers in the long term. By using these models, we can preemptively detect whether the load will be unbalanced in the control plane and, thus, schedule migration operations in advance. Second, we improve the migration operation performance by optimizing the tradeoff between a load balancing factor and the cost of migration operations. This proactive load balancing approach not only prevents SDN controllers from being overloaded, but also allows a judicious selection of which data plane component should be migrated and where the migration should happen.
In the second part of this thesis, we propose two RAN slicing schemes that efficiently allocate the communication and computation resources at the edge of the network. The first RAN slicing scheme performs the allocation of radio resource blocks (RBs) to end-users on two time-scales, namely a large time-scale and a small time-scale. On the large time-scale, an SDN controller allocates to each base station a number of RBs from a shared radio RB pool, according to its requirements in terms of delay and data rate. On the small time-scale, each base station assigns its available resources to its end-users and requests, if needed, additional resources from adjacent base stations. The second RAN slicing scheme jointly allocates the RBs and the computation resources available in edge computing servers based on an open RAN architecture. For the proposed RAN slicing schemes, we develop reinforcement learning and deep reinforcement learning algorithms to dynamically allocate RAN resources.
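The large time-scale step, a controller splitting a shared RB pool across base stations according to their requirements, can be sketched as a demand-proportional split; the largest-remainder rounding and all names are illustrative assumptions, not the thesis's reinforcement-learning allocation:

```python
def allocate_rbs(pool, demands):
    """Large time-scale sketch: split a shared pool of `pool` resource
    blocks across base stations in proportion to their demands, using
    largest-remainder rounding so every RB is assigned."""
    total = sum(demands.values())
    shares = {bs: pool * d / total for bs, d in demands.items()}
    alloc = {bs: int(s) for bs, s in shares.items()}      # integer floor
    leftover = pool - sum(alloc.values())
    # Hand remaining RBs to the stations with the largest fractional parts.
    for bs in sorted(shares, key=lambda b: shares[b] - alloc[b], reverse=True):
        if leftover == 0:
            break
        alloc[bs] += 1
        leftover -= 1
    return alloc
```

On the small time-scale, each base station would then assign its share to users and, when short, borrow from neighbours; the learning algorithms in the thesis replace this static proportional rule with policies adapted to delay and rate requirements.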
FloWatcher-DPDK: lightweight line-rate flow-level monitoring in software
In the last few years, several software-based solutions have proven to be very efficient for high-speed packet processing, traffic generation and monitoring, and can be considered valid alternatives to expensive and inflexible hardware-based solutions. In our work, we first benchmark heterogeneous design choices for software-based packet monitoring systems in terms of achievable performance and required resources (i.e., the number of CPU cores). Building on this extensive analysis, we design FloWatcher-DPDK, a DPDK-based high-speed software traffic monitor that we provide to the community as an open source project. In a nutshell, FloWatcher-DPDK provides tunable fine-grained statistics at packet and flow levels. Experimental results demonstrate that FloWatcher-DPDK sustains per-flow statistics with 5-nines precision at high speed (e.g., 14.88 Mpps) using a limited amount of resources. Finally, we showcase the usage of FloWatcher-DPDK by configuring it to analyze the performance of two open source prototypes for stateful flow-level end-host and in-network packet processing.
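Conceptually, per-flow monitoring keeps counters keyed by the packet 5-tuple; the minimal sketch below shows the idea in plain Python and is an illustration in the spirit of, not the actual DPDK-based implementation of, FloWatcher-DPDK:

```python
from collections import defaultdict

class FlowStats:
    """Toy flow-level monitor: per-5-tuple packet and byte counters."""

    def __init__(self):
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})

    def update(self, src_ip, dst_ip, src_port, dst_port, proto, size):
        """Account one packet to its flow, identified by the 5-tuple."""
        key = (src_ip, dst_ip, src_port, dst_port, proto)
        rec = self.flows[key]
        rec["packets"] += 1
        rec["bytes"] += size

    def top_flows(self, n=10):
        """Heaviest flows by byte count."""
        return sorted(self.flows.items(),
                      key=lambda kv: kv[1]["bytes"], reverse=True)[:n]
```

The real system achieves line rate by replacing this dictionary with pre-allocated hash tables and per-core batched processing over DPDK, which is where the measured 14.88 Mpps figure comes from.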
PREDICTING INTERNET TRAFFIC BURSTS USING EXTREME VALUE THEORY
Computer networks play an important role in today’s organization and people life.
These interconnected devices share a common medium and they tend to compete for
it. Quality of Service (QoS) comes into play as to define what level of services users
get. Accurately defining the QoS metrics is thus important.
Bursts and serious deteriorations are omnipresent in Internet and considered as an
important aspects of it. This thesis examines bursts and serious deteriorations in
Internet traffic and applies Extreme Value Theory (EVT) to their prediction and
modelling. EVT itself is a field of statistics that has been in application in fields like
hydrology and finance, with only a recent introduction to the field of
telecommunications. Model fitting is based on real traces from Belcore laboratory
along with some simulated traces based on fractional Gaussian noise and linear
fractional alpha stable motion. QoS traces from University of Napoli are also used in
the prediction stage.
Three methods from EVT are successfully used for the bursts prediction problem.
They are Block Maxima (BM) method, Peaks Over Threshold (POT) method, and RLargest
Order Statistics (RLOS) method. Bursts in internet traffic are predicted using
the above three methods. A clear methodology was developed for the bursts
prediction problem. New metrics for QoS are suggested based on Return Level and
Return Period. Thus, robust QoS metrics can be defined. In turn, a superior QoS will
be obtained that would support mission critical applications
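As an illustration of the POT approach, the sketch below fits a Generalised Pareto Distribution (GPD) to threshold excesses by the method of moments and computes a return level (the burst size expected once every m observations); the estimator choice and function names are assumptions for illustration, and the thesis's actual fitting procedure may differ:

```python
import math
from statistics import mean, variance

def fit_gpd_mom(excesses):
    """Method-of-moments GPD fit to threshold excesses (shape < 0.5):
    mean^2/var = 1 - 2*shape, scale = mean*(1 - shape)."""
    m, v = mean(excesses), variance(excesses)
    shape = 0.5 * (1.0 - m * m / v)
    scale = m * (1.0 - shape)
    return shape, scale

def return_level(data, threshold, m_obs):
    """POT return level: value exceeded on average once per m_obs
    observations, using the fitted GPD tail above `threshold`."""
    excesses = [x - threshold for x in data if x > threshold]
    shape, scale = fit_gpd_mom(excesses)
    zeta = len(excesses) / len(data)       # empirical exceedance rate
    if abs(shape) < 1e-12:                 # exponential-tail limit
        return threshold + scale * math.log(m_obs * zeta)
    return threshold + scale / shape * ((m_obs * zeta) ** shape - 1.0)
```

A return level computed this way is exactly the kind of Return Level/Return Period quantity the thesis proposes as a robust QoS metric: a burst magnitude with a stated recurrence interval rather than a bare average.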