10 research outputs found

    SCHEMA: Service Chain Elastic Management with distributed reinforcement learning

    As the demand for Network Function Virtualization accelerates, service providers must advance the way they manage and orchestrate their network services to offer lower-latency services to future users. Modern services require complex data flows between Virtual Network Functions placed in separate network domains, risking latency increases that violate the offered latency constraints. This shift requires high levels of automation to cope with the scale and load of future networks. In this paper, we formulate the Service Function Chaining (SFC) placement problem and tackle it by introducing SCHEMA, a distributed Reinforcement Learning (RL) algorithm that performs complex SFC orchestration for low-latency services. We combine multiple RL agents with a bidding mechanism to enable scalability on multi-domain networks. Finally, we evaluate SCHEMA with a simulation model and demonstrate a 60.54% reduction in average service latency compared to a centralised RL solution.
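    The abstract does not detail how the agents and the bidding mechanism interact, so the following is only a minimal sketch of one plausible reading: one agent per domain estimates the latency cost of hosting the next VNF and the mechanism awards each VNF of the chain to the lowest bidder. All names here (DomainAgent, run_auction, estimate_latency) and the bid function are hypothetical, not SCHEMA's actual design.

```python
# Illustrative sketch only; SCHEMA's real RL state/action design and
# bidding rules are not specified in the abstract above.
from dataclasses import dataclass

@dataclass
class DomainAgent:
    """One agent per network domain; bids to host the next VNF."""
    domain_id: str
    base_latency_ms: float   # propagation latency into this domain
    load: float              # current utilisation in [0, 1]

    def bid(self, vnf_demand: float) -> float:
        # Stand-in for a learned value estimate: latency grows with
        # current load and with the VNF's resource demand.
        return self.base_latency_ms * (1.0 + self.load) * (1.0 + vnf_demand)

def run_auction(agents, sfc_demands):
    """Assign each VNF of the chain to the lowest-latency bidder."""
    placement = []
    for demand in sfc_demands:
        winner = min(agents, key=lambda a: a.bid(demand))
        placement.append(winner.domain_id)
        winner.load = min(1.0, winner.load + demand)  # occupy resources
    return placement

agents = [DomainAgent("edge-A", 2.0, 0.3),
          DomainAgent("edge-B", 3.0, 0.1),
          DomainAgent("core", 8.0, 0.05)]
print(run_auction(agents, sfc_demands=[0.2, 0.4, 0.1]))
```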

    QVIA-SDN: Towards QoS-Aware Virtual Infrastructure Allocation on SDN-based Clouds

    Virtual Infrastructures (VIs) emerged as a potential solution for network evolution and cloud service provisioning on the Internet. Deploying VIs, however, is still challenging, mainly due to the rigid management of networking resources. By splitting the control and data planes, Software-Defined Networks (SDN) enable custom and more flexible management, allowing data center usage to be reduced and providing mechanisms to guarantee bandwidth and latency control on switches and endpoints. However, reaping the benefits of SDN for VI embedding in cloud data centers is not trivial. Allocation frameworks require combined information from the control plane (e.g., isolation policies, flow identification) and the data plane (e.g., storage capacity, flow table configuration) to find a suitable solution. In this context, the present work proposes a mixed integer programming formulation for the VI allocation problem that considers the main challenges of SDN-based cloud data centers. Some constraints are then relaxed, resulting in a linear program, for which a heuristic is introduced. Experimental results of the mechanism, termed QVIA-SDN, highlight that an SDN-aware allocation solution can reduce data center usage and improve the quality of service perceived by hosted tenants.
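    To make the kind of formulation concrete, here is a deliberately simplified mixed-integer sketch in Python with PuLP: binary variables place each virtual node on a substrate host, and the objective minimises the number of active hosts. The actual QVIA-SDN model additionally covers SDN-specific constraints such as flow-table capacity, isolation policies, and link bandwidth/latency, which are omitted here; all data below is invented for illustration.

```python
import pulp

vms   = {"v1": 2, "v2": 4, "v3": 1}   # CPU demand per virtual node (made up)
hosts = {"h1": 4, "h2": 8}            # CPU capacity per substrate host

prob = pulp.LpProblem("vi_allocation", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (vms, hosts), cat="Binary")   # v placed on h
y = pulp.LpVariable.dicts("y", hosts, cat="Binary")          # host switched on

# Objective: minimise data-center usage, i.e. the number of active hosts.
prob += pulp.lpSum(y[h] for h in hosts)

for v in vms:                                  # every VM placed exactly once
    prob += pulp.lpSum(x[v][h] for h in hosts) == 1
for h in hosts:                                # capacity only on active hosts
    prob += pulp.lpSum(vms[v] * x[v][h] for v in vms) <= hosts[h] * y[h]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in vms:
    for h in hosts:
        if x[v][h].value() == 1:
            print(v, "->", h)
```

    Relaxing the `Binary` variables to continuous ones in [0, 1] yields the linear program mentioned above, whose fractional solution a heuristic can then round into a feasible placement.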

    Empirical Evaluation of Cloud IAAS Platforms using System-level Benchmarks

    Cloud Computing is an emerging paradigm in the field of computing where scalable IT-enabled capabilities are delivered 'as a service' using Internet technology. The cloud industry adopted three basic types of computing service models based on software-level abstraction: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). Infrastructure-as-a-Service allows customers to outsource fundamental computing resources such as servers, networking, and storage, as well as services, where the provider owns and manages the entire infrastructure. This allows customers to pay only for the resources they consume. In a fast-growing IaaS market with multiple cloud platforms offering IaaS services, a user's decision on the best IaaS platform is quite challenging. It is therefore very important for organizations to evaluate and compare the performance of different IaaS cloud platforms in order to minimize cost and maximize performance. Using a vendor-neutral approach, this research focused on four of the top IaaS cloud platforms: Amazon EC2, Microsoft Azure, Google Compute Engine, and Rackspace cloud services. It compared the performance of these platforms using system-level parameters covering the server, file I/O, and the network. System-level benchmarking provides an objective comparison of IaaS cloud platforms from a performance perspective. UnixBench, Dbench, and Iperf are the system-level benchmarks chosen to test the performance of the server, file I/O, and network, respectively. To capture performance variability, the benchmark tests were performed at different time periods on weekdays and weekends. Each IaaS platform's performance was also tested using various parameters. The benchmark tests conducted on different virtual machine (VM) configurations should help cloud users select the best IaaS platform for their needs. Based on their applications' requirements, cloud users should also get a clearer picture of which VM configuration to choose. In addition to the performance evaluation, the price-per-performance value of all the IaaS cloud platforms was also examined.
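    A minimal sketch of the repeated-measurement methodology described above: run a benchmark several times, spaced out to sample different conditions, and summarise the variability. The exact invocations and result parsing for UnixBench, Dbench, and Iperf are tool- and version-specific, so the command below is a placeholder, not the study's actual harness.

```python
import statistics
import subprocess
import time

def run_benchmark(cmd, repetitions=5, pause_s=60):
    """Run `cmd` several times, returning wall-clock durations in seconds."""
    durations = []
    for _ in range(repetitions):
        start = time.monotonic()
        subprocess.run(cmd, check=True, capture_output=True)
        durations.append(time.monotonic() - start)
        time.sleep(pause_s)   # space out runs to sample varying conditions
    return durations

if __name__ == "__main__":
    # Placeholder command; substitute a UnixBench, Dbench, or Iperf run.
    runs = run_benchmark(["sleep", "1"], repetitions=3, pause_s=0)
    print(f"mean={statistics.mean(runs):.3f}s stdev={statistics.stdev(runs):.3f}s")
```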

    A Lightweight Method for Evaluating the Cloud Compatibility of Software

    Cloud services have gained popularity in the past few years, and many companies are offering their software as a service. Cloud environments offer scalability, and it is easy to start using a cloud service instead of acquiring the required hardware. However, some architectural patterns suit a cloud environment better than others. Business-critical software that has existed for a long time, such as the operations and business support systems (OSS/BSS) of telecommunication operators, may require extensive changes in order to stay competitive and gain the benefits of cloud environments. The number of mobile device and Internet users continues to grow, and the scalability provided by cloud environments could help OSS/BSS systems handle the growing load. This thesis focuses on the opportunities that the cloud provides and the problems faced by companies looking for ways to move their mature products to a cloud environment. Moving software from customer premises to the cloud introduces security and latency problems, but offers scalability benefits, provided the legacy software can be transformed into a cloud-compatible architecture, such as a microservices architecture. Such a transition also affects the way the software is developed and deployed. The result of this thesis is a method for evaluating the cloud compatibility of a software product. The method was also used to evaluate the feasibility of deploying Comptel InstantLink to a cloud environment. The architecture of Comptel InstantLink requires changes before it can be scaled automatically; nevertheless, cloud environments would provide value to its users, and a private cloud deployed on the telecommunication operator's own infrastructure would be a suitable environment for it. The method created in this thesis proved to be a useful starting point for evaluating cloud compatibility and helps detect the main areas of concern in cloud migration.
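    The thesis method itself is not reproduced in this abstract; the sketch below only illustrates the general shape of a lightweight, checklist-style assessment. The criteria and weights are invented for the example and are not the thesis's actual evaluation items.

```python
# Hypothetical weighted checklist for cloud compatibility.
CRITERIA = {
    "stateless_components":  3,   # can instances be added/removed freely?
    "horizontal_scaling":    3,
    "automated_deployment":  2,
    "external_config":       1,
    "no_local_file_state":   2,
}

def cloud_compatibility_score(answers: dict) -> float:
    """Return a 0..1 score from yes/no answers to the checklist."""
    total = sum(CRITERIA.values())
    achieved = sum(w for name, w in CRITERIA.items() if answers.get(name))
    return achieved / total

answers = {"stateless_components": False, "horizontal_scaling": True,
           "automated_deployment": True, "external_config": True,
           "no_local_file_state": False}
print(f"compatibility: {cloud_compatibility_score(answers):.0%}")
```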

    Algorithms for the Efficient Deployment of Virtual Network Services

    Network Function Virtualization (NFV) is considered an emerging key technology for future mobile network infrastructures. In classical networks, network functions are tightly bound to specific hardware boxes. In NFV networks, by contrast, (software) functionality is separated from the hardware components: highly specific hardware boxes are replaced by commodity computing, networking, and storage equipment that can host and run more than one specific type of network function. One of the key concepts of NFV is the integration of cloud computing technology into the network core. This enables virtual network functions to be installed and deployed where they are needed; additional resources can be provided dynamically in times of high demand, and virtual functions can be consolidated onto a smaller hardware footprint when demand decreases. NFV enables operators to manage network functions much more flexibly, without instructing technicians to manually reconfigure hardware equipment on-site; instead, network functions can be deployed and managed remotely. Additionally, for reliability reasons, virtual network functions can easily be migrated to backup resources in case of hardware or software failures. This thesis discusses how such virtual network functions can be deployed efficiently within the provider's physical network infrastructure. From a theoretical perspective, finding the optimal deployment of virtual network services (e.g., in terms of embedding cost) is an NP-hard optimization problem, so optimal algorithms are applicable only in very small scenarios; for scenarios of realistic size, the thesis therefore introduces heuristic approaches for solving the deployment problem.
    To this end, first, an extensible simulation framework is presented that enables researchers to thoroughly evaluate both existing and novel deployment algorithms. Second, a distributed algorithm (DPVNE, Distributed and Parallel Virtual Network Embedding) is presented for embedding virtual networks into a shared physical cloud infrastructure. The main idea is to partition the physical network into several smaller, hierarchically organized, non-overlapping network regions; embeddings in those partitions can then be performed in parallel, spreading the computational effort across multiple distributed nodes. This makes the NP-hard optimization problem tractable even in large-scale network scenarios where virtual network deployment requests arrive continuously. Third, a backtracking-based algorithm (CoordVNF, Coordinated deployment of Virtual Network Functions) is presented for deploying virtual network functions in NFV scenarios. In contrast to cloud scenarios, the exact chaining of network functions in NFV scenarios is not always predefined: the same network service can often be provided by several different chainings of network functions. CoordVNF is among the first approaches able to deploy such flexible virtual network services in a cost- and time-efficient way, even in large-scale scenarios. Finally, the thesis discusses the deployment of resilient NFV services; in this context, an extension of CoordVNF (SVNF, Survivable deployment of Virtual Network Functions) is presented that allocates additional backup resources to protect network services from failures.
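    As a rough illustration of the backtracking idea behind CoordVNF, the sketch below tries to map each VNF of a chain onto a node with enough capacity and undoes earlier choices when a later VNF cannot be placed. The real algorithm also handles link mapping and the flexible chaining orders mentioned above, which are omitted here; the function and data are hypothetical.

```python
def place_chain(vnf_demands, node_capacity, i=0, placement=None):
    """Return a list of node ids, one per VNF, or None if no mapping exists."""
    placement = placement or []
    if i == len(vnf_demands):
        return placement                            # whole chain placed
    for node, free in node_capacity.items():
        if free >= vnf_demands[i]:
            node_capacity[node] -= vnf_demands[i]   # tentatively place VNF i
            result = place_chain(vnf_demands, node_capacity, i + 1,
                                 placement + [node])
            if result is not None:
                return result
            node_capacity[node] += vnf_demands[i]   # backtrack: undo placement
    return None                                     # no feasible embedding

print(place_chain([4, 3, 3], {"n1": 5, "n2": 6}))   # -> ['n1', 'n2', 'n2']
```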

    Information Technology Service Continuity Practices in Disadvantaged Business Enterprises

    Disadvantaged business enterprises (DBEs) not using cloud solutions to ensure information technology (IT) service continuity may not withstand the impacts of IT disruption caused by human-made and natural disasters. The loss of critical IT resources can lead to business closure and resource losses for the community, employees, and families. Grounded in the technology acceptance model, the purpose of this qualitative multiple case study was to explore the strategies IT leaders in DBEs use to implement cloud solutions that minimize IT disruption. Participants included 16 IT leaders in DBEs in the U.S. state of Maryland. Data were generated through semi-structured interviews and reviews of 10 organizational documents. Data were analyzed using inductive analysis, and three themes were identified: alignment with business requirements, sustaining business growth, and trust in cloud services. One recommendation is for IT leaders in DBEs to ensure that cloud-based IT service continuity practices are built into all aspects of small business operations. The implications for positive social change include the potential for economic stability for the families and communities that rely on DBEs for continuing business and employment.

    Accelerating orchestration with in-network offloading

    The demand for low-latency Internet applications has pushed functionality originally hosted on commodity hardware into the network. Either in the form of binaries for the programmable data plane or as virtualised network functions, services are implemented within the network fabric with the aim of improving their performance and placing them close to the end user. Training of machine learning algorithms, aggregation of networking traffic, and virtualised radio access components are just some of the functions that have been deployed within the network. As the network fabric becomes the accelerator for various applications, it is therefore imperative that the orchestration of their components is adapted to the constraints and capabilities of the deployment environment. This work identifies performance limitations of in-network compute use cases for both cloud and edge environments and makes suitable adaptations. Within cloud infrastructure, this thesis proposes a platform that relies on programmable switches to accelerate data replication. It then discusses design adaptations of an orchestrator that allow in-network data offloading and enable accelerated service deployment. At the edge, the topic of inefficient orchestration of virtualised network functions is explored, mainly with respect to energy usage and resource contention. An orchestrator is adapted to schedule requests by taking edge constraints into account, in order to minimise resource contention and accelerate service processing times. With data transfers consuming valuable resources at the edge, an efficient data representation mechanism is implemented to provide statistical insight into the provenance of data at the edge and to enable smart query allocation to nodes with relevant data. Compared with the previous state of the art, the proposed data plane replication method appears to be the most computationally efficient and scalable in-network data replication platform available, with significant improvements in throughput and a decrease in latency of up to an order of magnitude. The orchestrator of virtual network functions at the edge was shown to reduce event rejections, total processing time, and energy consumption imbalances relative to the default orchestrator, demonstrating more efficient use of the infrastructure. Lastly, computational cost at the edge was further reduced with the proposed query allocation mechanism, which minimised redundant engagement of nodes.
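    A minimal sketch of the contention-aware scheduling idea described above: admit a request only on a node that can fit it, and among the feasible nodes pick the one whose post-placement utilisation is lowest, spreading load to limit contention and energy imbalance. The function name, data shapes, and scoring rule are illustrative, not the thesis's actual orchestrator logic.

```python
def schedule(request_cpu, nodes):
    """nodes: {name: {"cap": total_cpu, "used": cpu_in_use}}.
    Returns the chosen node name, or None to reject the request."""
    feasible = {n: s for n, s in nodes.items()
                if s["cap"] - s["used"] >= request_cpu}
    if not feasible:
        return None                     # corresponds to an event rejection
    # Choose the node with the lowest utilisation after placement.
    best = min(feasible,
               key=lambda n: (feasible[n]["used"] + request_cpu) / feasible[n]["cap"])
    nodes[best]["used"] += request_cpu
    return best

nodes = {"edge-1": {"cap": 8, "used": 6}, "edge-2": {"cap": 8, "used": 2}}
print(schedule(3, nodes))   # -> 'edge-2'
```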