
    Resource Allocation in SDN/NFV-Enabled Core Networks

    For next-generation core networks, communication, storage, and computing resources are expected to be integrated into one unified, programmable, and flexible infrastructure. Software-defined networking (SDN) and network function virtualization (NFV) are the two key enablers. SDN decouples the network control and forwarding functions, which simplifies network management and enables network programmability. NFV allows network functions to be virtualized and placed on high-capacity servers located anywhere in the network, rather than only on dedicated devices as in current networks. Driven by SDN and NFV, the future network architecture is expected to feature centralized network management, virtualized function chaining, reduced capital and operational costs, and enhanced service quality. The combination of SDN and NFV thus offers a promising technical route toward future communication networks. To achieve better quality-of-service (QoS) provisioning, it is imperative to efficiently manage, allocate, and optimize the heterogeneous resources, including computing, storage, and communication resources, across customized services. This thesis presents in-depth research on efficient resource allocation for SDN/NFV-enabled core networks along multiple dimensions.

    Resource allocation proceeds in three stages. First, given the traffic metrics, QoS requirements, and resource constraints of the substrate network, a virtual network function (VNF) chain is composed to form a virtual network (VN) topology, and the virtual resources allocated to each VNF and virtual link are optimized to minimize the provisioning cost while satisfying the QoS requirements. Second, the VN (i.e., the VNF chain) is embedded onto the substrate network, assigning physical resources economically to meet the resource demands of the VNFs and links; this involves determining the locations of the NFV nodes that host the VNFs and the routing from source to destination. Third, the VNFs of multiple services are scheduled to minimize the service completion time and maximize network performance. This thesis studies resource allocation in SDN/NFV-enabled core networks from these three aspects.

    First, we jointly study how to design the topology of a VN and embed the resultant VN onto a substrate network, with the objective of minimizing the embedding cost while satisfying the QoS requirements. In VN topology design, the resource requirement of each virtual node and link must be optimized: without such optimization, the resources assigned to the VN may be insufficient or redundant, leading to degraded service quality or increased embedding cost. The joint problem is formulated as a mixed-integer nonlinear program (MINLP), in which queueing theory is used to analyze the network delay and to derive the optimal physical resource requirements at the network elements. Two algorithms are proposed to obtain optimal/near-optimal solutions of the MINLP model.
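
    The abstract states that queueing theory drives the resource sizing but does not specify the model. As a minimal illustrative sketch, assume each VNF behaves as an M/M/1 queue, so its mean sojourn time is W = 1/(mu - lambda); inverting gives the smallest service rate that meets a per-VNF delay budget. The even split of the end-to-end budget below is an invented simplification; the thesis optimizes this split jointly through the MINLP.

```python
# A minimal sketch of queueing-based resource sizing, assuming (not from the
# abstract) that each VNF is an M/M/1 queue. The mean sojourn time is
# W = 1 / (mu - lam), so the smallest service rate meeting a delay budget d
# is mu = lam + 1/d.

def min_service_rate(lam: float, delay_budget: float) -> float:
    """Smallest mu such that 1 / (mu - lam) <= delay_budget."""
    if lam <= 0 or delay_budget <= 0:
        raise ValueError("arrival rate and delay budget must be positive")
    return lam + 1.0 / delay_budget

def size_chain(lam: float, e2e_delay: float, n_vnfs: int) -> list:
    """Split the end-to-end delay budget evenly across the chain and return
    per-VNF service rates. The even split is a crude illustrative policy;
    the thesis optimizes the split jointly via the MINLP."""
    per_vnf_budget = e2e_delay / n_vnfs
    return [min_service_rate(lam, per_vnf_budget) for _ in range(n_vnfs)]

if __name__ == "__main__":
    # 200 req/s through a 3-VNF chain with a 100 ms end-to-end budget.
    print(size_chain(lam=200.0, e2e_delay=0.1, n_vnfs=3))  # [230.0, 230.0, 230.0]
```
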
    Second, we address the multi-SFC embedding problem with a game-theoretic approach that accounts for the heterogeneity of NFV nodes, the effect of processing-resource sharing among VNFs, and the capacity constraints of NFV nodes. In the proposed resource-constrained multi-SFC embedding game (RC-MSEG), each SFC is a player whose objective is to minimize the overall latency experienced by the supported service flow while satisfying the capacity constraints of all its NFV nodes. Processing-resource sharing incurs additional delay, which is incorporated into each SFC's overall latency. The capacity constraints of NFV nodes are handled by adding a penalty term to each player's cost function and are guaranteed by a prioritized admission control mechanism. We first prove that RC-MSEG is an exact potential game, admitting at least one pure Nash equilibrium (NE) and possessing the finite improvement property (FIP). We then design two iterative algorithms: a best response (BR) algorithm with fast convergence, and a spatial adaptive play (SAP) algorithm well suited to finding the best NE of the game.

    Third, the VNF scheduling problem is investigated to minimize the makespan (i.e., the overall completion time) of all services while satisfying their different end-to-end (E2E) delay requirements. The problem is formulated as a mixed-integer linear program (MILP), which is NP-hard, with computational complexity that grows exponentially with the network size. To solve it efficiently and accurately, the original problem is reformulated as a Markov decision process (MDP) with a variable action set, and a reinforcement learning (RL) algorithm is developed to learn the best scheduling policy by continuously interacting with the network environment. The proposed algorithm determines the variable action set at each decision-making state and accommodates the different execution times of the actions; its reward function is carefully designed to realize delay-aware VNF scheduling.

    In summary, it is of great importance to integrate SDN and NFV in the same network to accelerate the evolution toward software-enabled network services. We have studied VN topology design, multi-VNF chain embedding, and delay-aware VNF scheduling to achieve efficient resource allocation in different dimensions. The proposed approaches pave the way for exploiting network slicing to improve resource utilization and facilitate QoS-guaranteed service provisioning in SDN/NFV-enabled networks.
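    The abstract does not give the cost functions of RC-MSEG. The toy sketch below illustrates best-response dynamics in a congestion-style game of the kind RC-MSEG generalizes: each player (standing in for an SFC) places one VNF on an NFV node, and its cost is the node's base latency plus a sharing delay that grows with the number of co-located VNFs. All node latencies and constants are invented for illustration; in such congestion games an exact potential exists, so the FIP guarantees the loop terminates at a pure NE.

```python
# Illustrative best-response dynamics for a congestion-style embedding game.
# Assumptions (not from the abstract): one VNF per player, a common base
# latency per node, and a sharing delay linear in the node's load.

import random

BASE_LATENCY = [1.0, 2.0, 3.0]   # per-node fixed latency (invented)
SHARING_DELAY = 1.5              # extra delay per co-located VNF (invented)

def cost(node: int, placement: list, player: int) -> float:
    """Latency the player would experience on `node` given others' choices."""
    others = sum(1 for p, n in enumerate(placement) if n == node and p != player)
    return BASE_LATENCY[node] + SHARING_DELAY * (others + 1)

def best_response(n_players: int, n_nodes: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    placement = [rng.randrange(n_nodes) for _ in range(n_players)]
    improved = True
    while improved:                      # FIP guarantees termination
        improved = False
        for p in range(n_players):
            best = min(range(n_nodes), key=lambda n: cost(n, placement, p))
            if cost(best, placement, p) < cost(placement[p], placement, p):
                placement[p] = best
                improved = True
    return placement                     # a pure Nash equilibrium

if __name__ == "__main__":
    print(best_response(n_players=5, n_nodes=3))
```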

    Machine Learning Algorithms for Provisioning Cloud/Edge Applications

    Reinforcement Learning (RL), in which an agent is trained to make the most favourable decisions in the long run, is an established technique in artificial intelligence. Its popularity has increased in the recent past, largely due to deep neural networks spawning deep reinforcement learning algorithms such as Deep Q-Learning, which have been used to solve previously insurmountable problems, such as playing the famed game of Go, that earlier algorithms could not. Many such problems suffer from the curse of dimensionality, in which the sheer number of possible states is so overwhelming that it is impractical to explore every option. While these recent techniques have been successful, they may not be strictly necessary or practical for applications such as cloud provisioning: the action space is not as vast, and the workload data required to train such systems is not widely shared, as it is considered commercially sensitive by the Application Service Provider (ASP). Given that provisioning decisions evolve over time in response to incident workloads, they fit the sequential decision process that legacy RL was designed to solve. However, because time-series data are highly correlated, states are not independent of each other, and legacy Markov Decision Processes (MDPs) must be carefully adapted to yield robust provisioning algorithms.

    As the first contribution of this thesis, we exploit knowledge of both the application and its configuration to create an adaptive provisioning system leveraging stationary Markov distributions. We then develop algorithms that, with neither application nor configuration knowledge, solve the underlying MDP to create provisioning systems. Our Q-Learning algorithms factor in the correlation between states and the consequent transitions between them, yielding provisioning systems that not only adapt to workloads but can also exploit similarities between them, thereby reducing the retraining overhead. Our algorithms also converge in fewer learning steps, since we restructure the state and action spaces to avoid the curse of dimensionality without resorting to the function approximation used by deep Q-Learning systems.

    A crucial use case of future networks will be the support of low-latency applications involving highly mobile users. With these in mind, the European Telecommunications Standards Institute (ETSI) has proposed the Multi-access Edge Computing (MEC) architecture, in which computing capabilities are located close to the network edge, where the data is generated. Provisioning such applications therefore entails migrating them to the most suitable location on the network edge as the users move. In this thesis, we also tackle this type of provisioning by considering vehicle platooning, or Cooperative Adaptive Cruise Control (CACC), on the edge. We show that our Q-Learning algorithm can be adapted to minimize the number of migrations required to effectively run such an application on MEC hosts, which may also be subject to traffic from other competing applications.

    This work has been supported by IMDEA Networks Institute. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid, with International Mention. Examination committee: Antonio Fernández Anta (President), Diego Perino (Secretary), Ilenia Tinnirello (Member).
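
    The thesis's exact restructuring of the state and action spaces is not described in the abstract. The sketch below shows the general shape of a tabular Q-Learning provisioner under invented assumptions: the state is the current demand level plus VM count, actions scale the fleet by one VM at a time, and the reward trades SLA violations against idle capacity.

```python
# Minimal tabular Q-Learning sketch for cloud provisioning. All state,
# action, and reward choices here are invented for illustration; they are
# not the thesis's formulation.

import random
from collections import defaultdict

ACTIONS = (-1, 0, +1)          # remove / keep / add one VM

def reward(vms: int, demand: int) -> float:
    if vms < demand:
        return -10.0 * (demand - vms)   # SLA violation dominates
    return -1.0 * (vms - demand)        # cost of idle capacity

def q_learning(trace: list, episodes: int = 500, alpha: float = 0.1,
               gamma: float = 0.9, eps: float = 0.1, seed: int = 0):
    rng = random.Random(seed)
    q = defaultdict(float)              # q[((demand, vms), action)]
    for _ in range(episodes):
        vms = 1
        for t in range(len(trace) - 1):
            s = (trace[t], vms)
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: q[(s, x)]))
            vms = max(1, vms + a)
            r = reward(vms, trace[t + 1])
            s2 = (trace[t + 1], vms)
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

if __name__ == "__main__":
    workload = [1, 2, 3, 3, 2, 1] * 4   # a toy periodic demand trace
    q = q_learning(workload)
    state = (1, 1)                      # demand level 1, one VM running
    print(max(ACTIONS, key=lambda a: q[(state, a)]))  # learned action
```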

    Machine Learning-based Orchestration Solutions for Future Slicing-Enabled Mobile Networks

    Fifth-generation (5G) mobile networks will incorporate novel technologies such as network programmability and virtualization, enabled by the Software-Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms, which have recently attracted major interest from both academic and industrial stakeholders. Building on these concepts, Network Slicing has emerged as the main driver of a novel business model in which mobile operators may open, i.e., "slice", their infrastructure to new business players and offer independent, isolated, and self-contained sets of network functions and physical/virtual resources tailored to specific service requirements. While Network Slicing has the potential to increase the revenue sources of service providers, it involves a number of technical challenges that must be carefully addressed. End-to-end (E2E) network slices encompass time and spectrum resources in the radio access network (RAN), transport resources on the fronthaul/backhaul links, and computing and storage resources at core and edge data centers. Additionally, the heterogeneity of vertical service requirements (e.g., high throughput, low latency, high reliability) exacerbates the need for novel orchestration solutions able to manage end-to-end network slice resources across different domains while satisfying stringent service level agreements and specific traffic requirements.

    An end-to-end network slicing orchestration solution shall i) admit network slice requests such that the overall system revenues are maximized, ii) provide the required resources across different network domains to fulfill the Service Level Agreements (SLAs), and iii) dynamically adapt the resource allocation based on the real-time traffic load, end-users' mobility, and instantaneous wireless channel statistics. A mobile network is a fast-changing scenario characterized by complex spatio-temporal relationships connecting end-users' traffic demand with social activities and the economy. Legacy models that provide dynamic resource allocation based on traditional traffic-demand forecasting fail to capture these important aspects. To close this gap, machine learning-aided solutions are quickly emerging as promising technologies to sustain, in a scalable manner, the set of operations required in the network slicing context. How to implement such resource allocation schemes among slices, while making the most efficient use of the networking resources composing the mobile infrastructure, are the key problems underlying the network slicing paradigm that this thesis addresses.
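
    The abstract frames admission control as revenue maximization under capacity constraints but names no algorithm. As a deliberately simple stand-in for the learning-based policies the thesis targets, the sketch below admits slice requests greedily by revenue density, a classic knapsack heuristic; all slice names and numbers are invented.

```python
# Illustrative greedy admission control for network slice requests.
# Assumptions (not from the abstract): each request has a revenue and a
# scalar resource demand; requests are admitted by revenue density until
# capacity runs out.

from typing import NamedTuple

class SliceRequest(NamedTuple):
    name: str
    revenue: float
    demand: float   # abstract capacity units

def admit(requests: list, capacity: float) -> list:
    """Admit requests in decreasing revenue-per-unit-demand order."""
    admitted, used = [], 0.0
    for r in sorted(requests, key=lambda r: r.revenue / r.demand, reverse=True):
        if used + r.demand <= capacity:
            admitted.append(r.name)
            used += r.demand
    return admitted

if __name__ == "__main__":
    reqs = [SliceRequest("eMBB", 8.0, 5.0),
            SliceRequest("URLLC", 6.0, 2.0),
            SliceRequest("mMTC", 3.0, 4.0)]
    print(admit(reqs, capacity=8.0))  # -> ['URLLC', 'eMBB']
```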