
    Online Load Balancing for Network Functions Virtualization

    Network Functions Virtualization (NFV) aims to help service providers deploy various services in a more agile and cost-effective way. However, the softwarization and cloudification of network functions can result in severe congestion and low network performance. In this paper, we propose a solution to address this issue. We analyze and solve the online load balancing problem using multipath routing in NFV to optimize network performance in response to dynamic changes in user demands. In particular, we first formulate the load balancing optimization problem as a mixed integer linear program to obtain the optimal solution. We then develop the ORBIT algorithm, which solves the online load balancing problem. The performance guarantee of ORBIT is proved analytically in comparison with the optimal offline solution. Experimental results on real-world datasets show that ORBIT performs well at distributing the traffic of each service demand across multiple paths without knowledge of future demands, especially under high-load conditions.
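
    A generic multipath load-balancing program of the kind described above (a sketch only, not the paper's exact ORBIT formulation) minimizes the maximum link utilization U when splitting each demand d with rate lambda_d over its candidate paths P_d on a network with link capacities c_e:

        \begin{align*}
          \min\;        & U \\
          \text{s.t.}\; & \sum_{p \in P_d} x_{d,p} = 1 && \forall d \in D \\
                        & \sum_{d \in D} \sum_{p \in P_d:\, e \in p} \lambda_d\, x_{d,p} \le U\, c_e && \forall e \in E \\
                        & x_{d,p} \ge 0 && \forall d \in D,\ p \in P_d
        \end{align*}

    Here x_{d,p} is the fraction of demand d sent on path p; the integrality (path-activation) variables that make the paper's program a mixed integer linear program are omitted from this sketch.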

    Allocation des ressources dans les environnements informatiques en périphérie des réseaux mobiles [Resource allocation in edge computing environments for mobile networks]

    Abstract: The evolution of information technology is increasing the diversity of connected devices and leading to the expansion of new application areas. These applications require ultra-low latency, which cannot be achieved by legacy cloud infrastructures given their distance from users. By placing resources closer to users, the recently developed edge computing paradigm aims to meet the needs of these applications. Edge computing is inspired by cloud computing and extends it to the edge of the network, in proximity to where the data is generated. This paradigm leverages the proximity between the processing infrastructure and the users to ensure ultra-low latency and high data throughput. The aim of this thesis is to improve resource allocation at the network edge to provide better quality of service and experience for low-latency applications. Better resource allocation requires reliable knowledge of the resources available at any moment. The first contribution of this thesis is a resource representation that allows the supervisory entity to acquire information about the resources available on each device. This information is then used by the resource allocation scheme to allocate resources appropriately to the different services. The resource allocation scheme is based on Lyapunov optimization, and it is executed only when resource allocation is required, which reduces the latency and resource consumption on each edge device. The second contribution of this thesis focuses on resource allocation for edge services. The services are created by chaining a set of virtual network functions. Resource allocation for services consists of finding an adequate placement, routing, and scheduling for these virtual network functions. We propose a solution based on game theory and machine learning to find a suitable placement and routing as well as an appropriate scheduling of these functions at the network edge. Finding the placement and routing of network functions is formulated as a mean field game solved by iterative Ishikawa-Mann learning. In addition, the scheduling of the network functions on the different edge nodes is formulated as a matching game, which is solved using an improved version of the deferred acceptance algorithm that we propose. The third contribution of this thesis is resource allocation for vehicular services at the edge of the network. In this contribution, services are migrated and moved across the different edge infrastructures to ensure service continuity. Vehicular services are particularly delay sensitive and relate mainly to road safety and security. The migration of vehicular services is therefore a complex operation. We propose an approach based on deep reinforcement learning to proactively migrate the different services while ensuring their continuity under high mobility constraints.
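
    The scheduling step above is described as a matching game solved with an improved deferred acceptance algorithm. The improved variant is not given in the abstract; the sketch below shows only the classic deferred acceptance (Gale-Shapley) matching between VNFs and capacity-limited edge nodes, with hypothetical preference lists.

        # Illustrative sketch only: classic deferred-acceptance (Gale-Shapley) matching
        # between VNFs and edge nodes; the thesis uses an improved variant not shown here.
        # Preference lists and capacities below are hypothetical.

        def deferred_acceptance(vnf_prefs, node_prefs, node_capacity):
            """Match each VNF to an edge node it proposes to, respecting node capacities.

            vnf_prefs: dict vnf -> list of nodes, most preferred first
            node_prefs: dict node -> list of vnfs, most preferred first
            node_capacity: dict node -> max number of hosted VNFs
            """
            rank = {n: {v: i for i, v in enumerate(prefs)} for n, prefs in node_prefs.items()}
            next_choice = {v: 0 for v in vnf_prefs}      # index of next node to propose to
            hosted = {n: [] for n in node_prefs}         # tentative assignments per node
            free = list(vnf_prefs)                       # VNFs still unmatched

            while free:
                v = free.pop()
                if next_choice[v] >= len(vnf_prefs[v]):
                    continue                             # v has exhausted its preference list
                n = vnf_prefs[v][next_choice[v]]
                next_choice[v] += 1
                hosted[n].append(v)
                hosted[n].sort(key=lambda x: rank[n][x]) # keep the node's preferred VNFs first
                if len(hosted[n]) > node_capacity[n]:
                    free.append(hosted[n].pop())         # evict the least preferred VNF

            return {v: n for n, vs in hosted.items() for v in vs}

        # Hypothetical example: two edge nodes, three VNFs.
        print(deferred_acceptance(
            vnf_prefs={"fw": ["n1", "n2"], "nat": ["n1", "n2"], "dpi": ["n2", "n1"]},
            node_prefs={"n1": ["dpi", "fw", "nat"], "n2": ["fw", "nat", "dpi"]},
            node_capacity={"n1": 1, "n2": 2},
        ))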

    Clustering algorithms for dynamic adaptation of service function chains

    Network function virtualization is a cornerstone of today's network architectures, as it offers better management and elasticity and also allows flexible maintenance of services running on shared resources in cloud environments. Network functions traditionally hosted on dedicated hardware are now provided as software-based components that may run either on virtual machines or on containers. The major advantage of this transition is that it makes the deployment of new services easier while optimizing the management and administration of network architectures. It is much easier to spin up a new virtual machine or container hosting a network function or a specific application described as a service function chain than to deploy new hardware-based equipment and check its compatibility with the rest of the architecture. With all the advantages this new paradigm offers comes a set of challenges, related mainly to: 1) optimizing resource consumption on the shared infrastructure; and 2) placing the virtual functions in a way that both respects clients' requirements and leverages the available resources of the substrate network in terms of different metrics (e.g., CPU, memory, latency, bandwidth). This aspect of Network Function Virtualization (NFV) and Service Function Chain (SFC) placement has been treated in many research works that propose approaches ensuring optimal placement and chaining of VNFs in virtualized networks, but as the adoption of these technologies grows in real network setups, and given the strict requirements of today's applications (e.g., highly latency-sensitive applications or highly availability-sensitive services), it remains important to consider all the parameters impacting network management in cloud environments. In this research project, we develop new approaches for the placement and chaining of virtual network functions in cloud-based environments. The first approach forms on-demand clusters of servers deployed in a physical infrastructure. These servers are grouped according to their similar attributes (e.g., CPU-intensive servers, energy-efficient servers). This process is a proactive measure to ensure that SFCs are hosted on servers that meet their specific resource requirements (CPU, memory, disk, etc.). It employs a meta-heuristic called Chemical Reaction Optimization (CRO) to decide on the best VNF placement, guaranteeing optimal resource consumption in terms of CPU and memory. We also employ CRO to ensure the lowest latencies when routing between the different VNFs; indeed, the end-to-end (E2E) delay is an important aspect to consider, as most current applications require low latencies and short run times. In the second approach, the clusters are formed using meta-heuristic-based algorithms, including CRO, which improves the quality of the formed clusters in terms of similarity, density and modularity.
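
    The abstract states that CRO searches for placements that jointly optimize CPU/memory consumption and inter-VNF latency, but the exact objective is not given. The following is a minimal sketch of a fitness function such a meta-heuristic could score candidate placements with, using assumed weights and data structures.

        # Illustrative fitness function a meta-heuristic such as CRO might optimize when
        # placing an SFC: balance CPU/memory consumption against inter-VNF path latency.
        # Weights, data structures and the latency matrix are hypothetical.

        def placement_fitness(chain, placement, servers, latency, w_cpu=1.0, w_mem=1.0, w_lat=1.0):
            """Lower is better. chain: ordered VNF names; placement: vnf -> server id;
            servers: server -> {"cpu": free_cpu, "mem": free_mem};
            latency: (server_a, server_b) -> one-way delay in ms."""
            cpu_load = sum(1.0 / servers[placement[v]]["cpu"] for v in chain)  # prefer roomy servers
            mem_load = sum(1.0 / servers[placement[v]]["mem"] for v in chain)
            e2e_delay = sum(latency[(placement[a], placement[b])]
                            for a, b in zip(chain, chain[1:]))                 # delay along the chain
            return w_cpu * cpu_load + w_mem * mem_load + w_lat * e2e_delay

        # Hypothetical example: a 3-VNF chain placed on two servers.
        servers = {"s1": {"cpu": 8, "mem": 16}, "s2": {"cpu": 4, "mem": 8}}
        latency = {("s1", "s1"): 0.1, ("s1", "s2"): 2.0, ("s2", "s1"): 2.0, ("s2", "s2"): 0.1}
        print(placement_fitness(["fw", "nat", "dpi"],
                                {"fw": "s1", "nat": "s1", "dpi": "s2"}, servers, latency))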

    A Reliability-Aware Approach for Resource Efficient Virtual Network Function Deployment

    Network function virtualization (NFV) is a promising technique aimed at reducing capital expenditures (CAPEX) and operating expenditures (OPEX) and improving the flexibility and scalability of an entire network. In contrast to traditional dispatching, NFV can separate network functions from proprietary infrastructure and gather these functions into a resource pool that can efficiently modify and adjust service function chains (SFCs). However, this emerging technique has some challenges. A major problem is reliability, which involves ensuring the availability of deployed SFCs, namely the probability of successfully chaining a series of virtual network functions (VNFs) while considering both feasibility and the specific requirements of clients, because the substrate network remains vulnerable to earthquakes, floods and other natural disasters. Based on users' demands for SFC requirements, we present an Ensure Reliability Cost Saving (ER_CS) algorithm to reduce the CAPEX and OPEX of telecommunication service providers (TSPs) while ensuring the reliability of the SFC deployments. The results of extensive experiments indicate that the proposed algorithm performs efficiently in terms of the blocking ratio, resource consumption, time consumption and the first block.
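
    The abstract defines SFC reliability as the probability of successfully chaining a series of VNFs but does not give the underlying model; a common (assumed) model treats VNF failures as independent, optionally with a redundant backup instance per VNF, as in this minimal sketch.

        # Illustrative sketch only: SFC reliability as the probability that every VNF in the
        # chain is available, assuming independent failures and optional per-VNF backups.

        def sfc_reliability(vnf_availability, backup=None):
            """vnf_availability: list of per-VNF availabilities in [0, 1];
            backup: optional list of backup-instance availabilities (same length), or None."""
            r = 1.0
            for i, a in enumerate(vnf_availability):
                if backup is not None:
                    a = 1.0 - (1.0 - a) * (1.0 - backup[i])  # VNF works if primary or backup works
                r *= a                                        # chain works only if every VNF works
            return r

        print(sfc_reliability([0.99, 0.98, 0.995]))                          # no redundancy
        print(sfc_reliability([0.99, 0.98, 0.995], backup=[0.9, 0.9, 0.9]))  # with backups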

    DQN-based intelligent controller for multiple edge domains

    Advanced technologies like network function virtualization (NFV) and multi-access edge computing (MEC) have been used to build flexible, highly programmable, and autonomously manageable infrastructures close to the end-users, at the edge of the network. In this vein, the use of single-board computers (SBCs) in commodity clusters has gained attention for deploying virtual network functions (VNFs) due to their low cost, low energy consumption, and easy programmability. This paper deals with the problem of deploying VNFs in a multi-cluster system formed by this kind of node, which is characterized by limited computational and battery capacities. Additionally, existing platforms for orchestrating and managing VNFs do not consider energy levels in their placement decisions and are therefore not optimized for energy-constrained environments. In this regard, this study proposes an intelligent controller as a global allocation mechanism based on deep reinforcement learning (DRL), specifically on deep Q-network (DQN). The proposed mechanism optimizes energy consumption in SBCs by selecting the most suitable nodes across several clusters to deploy event requests, taking into account the nodes' resources and the events' demands. A comparison with available allocation algorithms revealed that our solution incurred 28% lower resource costs and reduced the energy consumption in the clusters' computing nodes by 35% while maintaining a high acceptance ratio. This work has been supported in part (50%) by the Agencia Estatal de Investigación of Ministerio de Ciencia e Innovación of Spain under projects PID2019-108713RB-C51 & PID2019-108713RB-C52 MCIN/AEI/10.13039/501100011033; and in part (50%) by AI@EDGE H2020-ICT-52-2020 under grant agreement No. 10101592.
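
    The abstract does not detail the DQN's state, action or reward design. The sketch below illustrates one plausible interface for such a controller; the feature layout, energy model and reward shaping are assumptions, and the trained Q-network is represented by a placeholder callable.

        # Illustrative sketch only: a state/action/reward interface a DQN-based controller
        # of this kind could use for selecting the node that hosts an event request.
        import random

        def build_state(nodes, event):
            """nodes: list of dicts with free 'cpu', 'mem', 'battery'; event: demanded 'cpu', 'mem'."""
            state = []
            for n in nodes:
                state += [n["cpu"], n["mem"], n["battery"]]
            return state + [event["cpu"], event["mem"]]

        def select_node(q_values, state, n_nodes, epsilon=0.1):
            """Epsilon-greedy action: q_values(state) returns one score per candidate node."""
            if random.random() < epsilon:
                return random.randrange(n_nodes)                 # explore
            scores = q_values(state)
            return max(range(n_nodes), key=lambda i: scores[i])  # exploit

        def reward(node, event, energy_per_cpu=0.5, reject_penalty=10.0):
            """Negative energy cost when the event fits on the node, a penalty otherwise."""
            fits = node["cpu"] >= event["cpu"] and node["mem"] >= event["mem"]
            return -energy_per_cpu * event["cpu"] if fits else -reject_penalty

        # Hypothetical example with a dummy Q-network standing in for the trained model.
        nodes = [{"cpu": 2, "mem": 1, "battery": 80}, {"cpu": 4, "mem": 2, "battery": 55}]
        event = {"cpu": 1, "mem": 1}
        s = build_state(nodes, event)
        a = select_node(lambda _s: [0.2, 0.7], s, len(nodes))
        print(a, reward(nodes[a], event))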