
    Efficient Resource Allocation for Throughput Maximization in Next-Generation Networks

    Software-Defined Networking (SDN) and Network Function Virtualization (NFV) have emerged as the foundation of the next-generation network architecture by introducing great flexibility and network automation capabilities, including automatic response to faults and load changes and programmatic provisioning of network resources and connections. It has been envisioned that the SDN- and NFV-based next-generation network architecture will play a critical role in providing network services to users, where the desired network services, including data transfer and policy enforcement, are fulfilled by allocating network resources using virtualization technologies. However, the disparity between ever-growing user demands and scarce network resources makes resource allocation critical to the performance of a network service, because only by effectively allocating these scarce resources can a network service provider satisfy users and maximize the gain from running the service. In this thesis, we study efficient resource allocation for network throughput maximization in next-generation networks, while meeting user resource demands and Quality of Service (QoS) requirements, subject to network resource capacities. This, however, poses great challenges, namely: (1) how to maximize network throughput, considering that both SDN-enabled switches and links are capacitated; (2) how to maximize network throughput while taking into account the network function and QoS requirements of users; (3) how to dynamically scale and readjust resource allocation for user requests; and (4) how to provision a network service that satisfies user reliability requirements. To address these challenges, we provide a thorough study of network throughput maximization problems in the context of the next-generation network architecture, formulating them as optimization problems and developing novel optimization frameworks and algorithms for them. Specifically, this thesis makes the following contributions. Firstly, we consider dynamic user request admissions, where user requests arrive one by one and knowledge of future request arrivals is not available a priori. We develop a novel cost model that accurately captures the usage costs of network resources and propose online algorithms with provable performance guarantees. Secondly, we study the problem of realizing user requests with network function requirements, with the objective of maximizing network throughput while meeting user QoS requirements, subject to resource capacity constraints. For this problem, we develop two algorithms that strive for a trade-off between the quality of a solution and the running time needed to obtain it. Thirdly, we investigate maximizing network throughput by dynamically scaling network resources while minimizing the overall operational cost of the network. We propose a unified framework for two types of resource scaling: vertical scaling and horizontal scaling. Through non-trivial reductions of the problem of concern to several classic problems, we propose an algorithm that is empirically demonstrated to deliver near-optimal solutions. Fourthly, we deal with the problem of reliability-aware provisioning of network resources for users, with the aim of maximizing network throughput. We devise an approximation algorithm with a logarithmic approximation ratio for the general case of this problem.
We also develop a constant-factor approximation algorithm and an exact algorithm for two special cases of the problem, respectively. The formulated problem is a generalization of several classic optimization problems. Finally, in addition to extensive theoretical analyses, we evaluate the performance of the proposed algorithms empirically through simulations based on real and synthetic datasets. Experimental results show that the proposed algorithms significantly outperform existing ones.
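
    The abstract does not reproduce the cost model or the online algorithms themselves. As a rough, hypothetical sketch of the general idea of usage-cost-driven online admission, the Python fragment below prices each capacitated resource (switch or link) exponentially in its current utilization and admits an arriving request only if its value exceeds the marginal cost of the resources it would consume; all names and numbers are illustrative, not taken from the thesis.

        def price(util, capacity, alpha=64.0):
            # marginal price of one resource, growing exponentially with its load (illustrative model)
            return capacity * (alpha ** (util / capacity) - 1.0)

        def admit(request_value, demands, utilization, capacity):
            # demands / utilization / capacity: dicts keyed by resource (switch or link) id
            cost = sum(demands[r] / capacity[r] * price(utilization[r], capacity[r]) for r in demands)
            if cost > request_value:
                return False                     # reject: marginal cost exceeds the request's value
            for r in demands:                    # admit: commit the demanded resources
                utilization[r] += demands[r]
            return True

        # toy run: two capacitated resources, two arriving requests
        cap  = {"link1": 10.0, "node1": 8.0}
        util = {"link1": 0.0,  "node1": 0.0}
        print(admit(5.0, {"link1": 2.0, "node1": 1.0}, util, cap))   # admitted at low load
        print(admit(0.1, {"link1": 6.0, "node1": 6.0}, util, cap))   # rejected: low value, higher load

    Exponential pricing of this kind is a standard device in online admission analyses; the thesis develops its own cost model and proves performance guarantees for its algorithms.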

    Efficient Virtualized Network Service Provisioning in Mobile Edge Computing

    The usage of mobile devices is growing substantially. These devices, including smartphones, sensors, and wearables, are limited in their computational and energy capacities due to their portable size. Mobile edge computing (MEC), which provides cloud resources at the edge of the mobile network in close proximity to mobile users, is a promising technology to reduce response delays, ensure network operation efficiency, and improve user service satisfaction, by enabling mobile devices to offload tasks to nearby edge clouds (cloudlets) for processing. Furthermore, Network Function Virtualization (NFV) is another promising technique that implements various network functions for many applications as pieces of software running on servers or cloudlets in MEC networks. The provisioning of virtualized network services in MEC can improve user service experience, simplify network service deployment, and ease network resource management. In this thesis, we focus on efficient virtualized network service provisioning in MEC networks through judicious resource allocation and request admission, to maximize network throughput and minimize request admission cost in different application scenarios. We firstly address dynamic request admissions with service function chain requirements in MEC, with the objective of maximizing the profit collected by the network service provider, assuming that cloudlets are deployed at different geographical locations with different electricity prices. We formulate an integer linear programming (ILP) solution to the offline problem and devise an online algorithm with a provable competitive ratio for the online problem, in which requests arrive one by one without knowledge of future request arrivals. We then study NFV-enabled multicasting, a fundamental routing problem in an MEC network, subject to resource capacities on both its cloudlets and links. We devise an admission framework for single NFV-enabled multicast request admission with the aim of minimizing the request admission cost, develop an efficient algorithm for the throughput maximization problem for the admission of a given set of NFV-enabled multicast requests, and devise an online algorithm with a provable competitive ratio for online NFV-enabled multicast request admissions. We thirdly investigate virtualized network function service provisioning for mobile users in MEC, taking into account user mobility and service delay requirements. We formulate two novel optimization problems of user service request admission with the aims of maximizing the accumulative network utility and the accumulative network throughput over a given time horizon, respectively, where the network utility is a submodular function that can be used to trade off individual user service satisfaction against accumulative network throughput. We then devise a constant approximation algorithm for the utility maximization problem and develop an online algorithm for the accumulative throughput maximization problem. We fourthly explore a non-trivial trade-off between different types of resources in NFV-enabled request scheduling in MEC, with the objective of minimizing request admission cost, by introducing a novel concept, the load factor.
We formulate the cost minimization problem of admitting all requests, assuming that there is sufficient computing resource in the MEC network to accommodate the requested VNF instances of all requests, for which we formulate an ILP solution and devise two efficient heuristic algorithms. We also deal with the problem under computing resource constraints, for which we formulate an ILP solution when the problem size is small and devise efficient algorithms otherwise. We finally summarize the thesis and explore several potential research topics based on the work in this thesis.
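
    As a minimal sketch of the kind of profit-maximizing admission ILP alluded to above, the following fragment (written with the PuLP modelling library and entirely made-up request and cloudlet data) admits requests so that revenue minus location-dependent electricity cost is maximized, subject to cloudlet computing capacities; the thesis's actual formulations additionally handle service function chains, multicasting, and online arrivals.

        from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

        # hypothetical data: per-request CPU demand and revenue, per-cloudlet capacity and electricity price
        requests  = {"r1": {"cpu": 2, "revenue": 9}, "r2": {"cpu": 3, "revenue": 7}}
        cloudlets = {"c1": {"cap": 4, "price": 1.0}, "c2": {"cap": 3, "price": 2.5}}

        x = {(r, c): LpVariable(f"x_{r}_{c}", cat=LpBinary) for r in requests for c in cloudlets}

        prob = LpProblem("profit_max_admission", LpMaximize)
        # objective: revenue of admitted requests minus electricity cost at the hosting cloudlet
        prob += lpSum(x[r, c] * (requests[r]["revenue"] - cloudlets[c]["price"] * requests[r]["cpu"])
                      for r in requests for c in cloudlets)
        for r in requests:                                  # each request is served by at most one cloudlet
            prob += lpSum(x[r, c] for c in cloudlets) <= 1
        for c in cloudlets:                                 # cloudlet computing capacity
            prob += lpSum(x[r, c] * requests[r]["cpu"] for r in requests) <= cloudlets[c]["cap"]

        prob.solve()
        print([(r, c) for (r, c) in x if x[r, c].value() == 1])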

    Virtual Service Provisioning for Internet of Things Applications in Mobile Edge Computing

    The Internet of Things (IoT) paradigm is paving the way for many new emerging technologies, such as the smart grid, Industry 4.0, connected cars, and smart cities. Mobile Edge Computing (MEC) provides promising solutions to reduce service delays for delay-sensitive IoT applications, where cloudlets are co-located with wireless access points in the proximity of IoT devices. Most mobile users have specific Service Function Chain (SFC) requirements, where an SFC is a sequence of Virtual Network Functions (VNFs). Meanwhile, edge intelligence has arisen to provision real-time deep neural network (DNN) inference services for users. To accelerate the processing of the DNN inference of a request in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the local IoT device of the request, and the other part on a cloudlet (server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this thesis, we focus on virtual service provisioning for IoT applications in MEC environments. Firstly, we consider the problem of user satisfaction with services jointly provided by an MEC network and a remote cloud for delay-sensitive IoT applications, by maximizing the accumulative user satisfaction when different user services have different service delay requirements. A novel metric to measure user satisfaction with a service is proposed, and efficient approximation and online algorithms for the defined problems under both static and dynamic user service demands are devised and analyzed. Secondly, we study service provisioning in an MEC network for multi-source IoT applications with SFC requirements, with the aim of minimizing the service provisioning cost, where each IoT application has multiple data streams from different sources to be uploaded to the MEC network for processing and storage, and each data stream must pass through the network functions of the SFC of the IoT application prior to reaching its destination. A service provisioning framework for such multi-source IoT applications is proposed, built on uploading stream data from multiple IoT sources, VNF instance placement and sharing, in-network aggregation of data streams, and workload balancing among cloudlets. Efficient algorithms for service provisioning of multi-source IoT applications in MEC networks, built upon the proposed framework, are also proposed. Thirdly, we investigate a novel DNN inference throughput maximization problem in an MEC network, with the aim of maximizing the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and inference parallelism. We devise a constant approximation algorithm for the offline setting and an online algorithm with a provable competitive ratio for the online setting. Fourthly, we address a robust SFC placement problem with the aim of maximizing the expected profit collected by the service provider of an MEC network under both computing resource and data rate demand uncertainties.
We start with a special case of the problem in which the measurement of the expected demanded resources for each request admission is accurate, for which we propose a near-optimal algorithm with a provable optimality gap by adopting the Markov approximation technique. We then extend the proposed approach to the problem of concern, showing that the algorithm is still applicable and that the solution delivered has a moderate optimality gap under bounded perturbation errors in the profit measurement. Finally, we summarize the thesis work and explore several potential research topics based on the studies in this thesis.
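
    The third contribution above jointly chooses a DNN partition point (device-side versus cloudlet-side layers) and the number of cloudlet threads. The sketch below, with hypothetical per-layer latencies, intermediate tensor sizes, and an idealized linear speedup in the thread count, enumerates the candidate splits and thread allocations and returns the delay-minimizing pair; it only illustrates the structure of the decision, not the thesis's algorithms or their guarantees.

        def best_partition(local_ms, remote_ms, out_kb, uplink_kbps, max_threads):
            # layers 0..k-1 run on the IoT device, the layer-k input is uploaded,
            # layers k..L-1 run on the cloudlet, sped up (idealized) by t threads
            best = None
            layers = len(local_ms)
            for k in range(layers + 1):
                for t in range(1, max_threads + 1):
                    upload = (out_kb[k] * 8) / uplink_kbps * 1000 if k < layers else 0.0
                    delay = sum(local_ms[:k]) + upload + sum(remote_ms[k:]) / t
                    if best is None or delay < best[0]:
                        best = (delay, k, t)
            return best                         # (total delay in ms, split layer, thread count)

        local_ms  = [40, 60, 80, 50]        # per-layer latency on the device (hypothetical)
        remote_ms = [ 8, 12, 16, 10]        # per-layer single-thread latency on the cloudlet (hypothetical)
        out_kb    = [300, 120, 40, 10, 2]   # size of the tensor crossing each candidate split point
        print(best_partition(local_ms, remote_ms, out_kb, uplink_kbps=5000, max_threads=4))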

    Efficient sharing mechanisms for virtualized multi-tenant heterogeneous networks

    The explosion in data traffic, physical resource constraints, and insufficient financial incentives for deploying 5G networks stress the need for a paradigm shift in network upgrades. Typically, operators are also the service providers, charging end users low, flat tariffs regardless of the service enjoyed. Fine-scale management of network resources is needed, both for optimizing costs and resource utilization and for enabling new synergies among network owners and third parties. In particular, operators could open their networks to third parties by means of fine-scale sharing agreements over customized networks for enhanced service provision, in exchange for an adequate return on investment for upgrading their infrastructures. The main objective of this thesis is to study the potential of fine-scale resource management and sharing mechanisms for enhancing service provision and for contributing to a sustainable road to 5G. More precisely, the state-of-the-art architectures and technologies for network programmability and scalability are studied, together with a novel paradigm for supporting service diversity and fine-scale sharing. We review the limits of conventional networks, extend existing standardization efforts, and define an enhanced architecture for enabling 5G network features (e.g., network-wide centralization and programmability). The potential of the proposed architecture is assessed in terms of flexible sharing and enhanced service provision, while the advantages of alternative business models are studied in terms of additional profits to the operators. We first study the data rate improvement achievable by means of spectrum and infrastructure sharing among operators and evaluate the profit increase justified by the better service provided. We present a scheme based on coalitional game theory for assessing the capability of accommodating more service requests when a cooperative approach is adopted, and for studying the conditions under which sharing is beneficial for coalitions of operators. Results show that: i) collaboration can be beneficial even when costs are redistributed unevenly within a coalition; ii) coalitions of equal-sized operators provide better profit opportunities and require lower tariffs. The second kind of sharing interaction that we consider is the one between operators and third-party service providers, in the form of fine-scale provisioning of customized portions of the network resources. We define a policy-based admission control mechanism, whose performance is compared with reference strategies. The proposed mechanism is based on auction theory and computes the optimal admission policy at reduced complexity for different traffic loads and allocation frequencies. Because next-generation services include delay-critical services, we compare the admission control performance of conventional approaches with that of the proposed one, which proves to offer near-real-time service provision and reduced complexity. Besides, it guarantees high revenues and low expenditures in exchange for negligible losses in terms of fairness towards service providers. To conclude, we study the case where adaptable timescales are adopted for the policy-based admission control, in order to promptly guarantee service requirements under traffic fluctuations.
In order to reduce complexity, we consider the offline precomputation of admission strategies with respect to reference network conditions, and then study their extension to unexplored conditions by means of computationally efficient methodologies. Performance is compared for different admission strategies by means of a proof of concept on real network traces. Results show that the proposed strategy provides a trade-off between complexity and performance with respect to reference strategies, while reducing resource utilization and requirements on network awareness.
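
    As a toy illustration of the coalitional-game reasoning above (with an entirely hypothetical characteristic function, not data from the thesis), the fragment below computes the Shapley value of each operator in a three-operator grand coalition and checks whether cooperating earns each operator at least what it would earn alone, which is one simple reading of "beneficial sharing".

        from itertools import permutations

        operators = ("opA", "opB", "opC")
        # v(S): profit a coalition S of operators earns jointly (hypothetical numbers)
        v = {frozenset(): 0, frozenset({"opA"}): 4, frozenset({"opB"}): 3, frozenset({"opC"}): 2,
             frozenset({"opA", "opB"}): 9, frozenset({"opA", "opC"}): 8, frozenset({"opB", "opC"}): 6,
             frozenset({"opA", "opB", "opC"}): 14}

        def shapley(players, value):
            # average of each player's marginal contribution over all arrival orders
            shares = {p: 0.0 for p in players}
            orders = list(permutations(players))
            for order in orders:
                seen = set()
                for p in order:
                    shares[p] += value[frozenset(seen | {p})] - value[frozenset(seen)]
                    seen.add(p)
            return {p: s / len(orders) for p, s in shares.items()}

        for p, share in shapley(operators, v).items():
            alone = v[frozenset({p})]
            print(p, round(share, 2), "beneficial" if share >= alone else "not beneficial")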

    Resource Allocation in Mobile Network Edge Computing Environments

    The evolution of information technology is increasing the diversity of connected devices and leading to the expansion of new application areas. These applications require ultra-low latency, which cannot be achieved by legacy cloud infrastructures given their distance from users. By placing resources closer to users, the recently developed edge computing paradigm aims to meet the needs of these applications. Edge computing is inspired by cloud computing and extends it to the edge of the network, in proximity to where the data is generated. This paradigm leverages the proximity between the processing infrastructure and the users to ensure ultra-low latency and high data throughput. The aim of this thesis is to improve resource allocation at the network edge to provide better quality of service and experience for low-latency applications. For better resource allocation, it is necessary to have reliable knowledge about the resources available at any moment. The first contribution of this thesis is a resource representation that allows the supervisory entity to acquire information about the resources available on each device. This information is then used by the resource allocation scheme to allocate resources appropriately to the different services. The resource allocation scheme is based on Lyapunov optimization, and it is executed only when resource allocation is required, which reduces the latency and resource consumption on each edge device. The second contribution of this thesis focuses on resource allocation for edge services. The services are created by chaining a set of virtual network functions. Resource allocation for services consists of finding an adequate placement, routing, and scheduling for these virtual network functions. We propose a solution based on game theory and machine learning to find a suitable placement and routing for these functions, as well as an appropriate scheduling of them, at the network edge. Finding the placement and routing of network functions is formulated as a mean field game solved by iterative Ishikawa-Mann learning. In addition, the scheduling of the network functions on the different edge nodes is formulated as a matching game, which is solved using an improved version of the deferred acceptance algorithm that we propose. The third contribution of this thesis is resource allocation for vehicular services at the edge of the network. In this contribution, services are migrated across the different edge infrastructures to ensure service continuity. Vehicular services are particularly delay sensitive and relate mainly to road safety and security; therefore, the migration of vehicular services is a complex operation. We propose an approach based on deep reinforcement learning to proactively migrate the different services while ensuring their continuity under high mobility constraints.
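
    The scheduling part of the second contribution is formulated as a matching problem solved with an improved deferred-acceptance variant. The plain, uncapacitated deferred-acceptance sketch below (hypothetical virtual network functions, edge nodes, and preference lists) conveys only the underlying proposal-and-rejection mechanism, not the thesis's improved algorithm.

        def deferred_acceptance(vnf_prefs, node_prefs):
            # VNFs propose to edge nodes in preference order; each node keeps its best proposer so far
            rank = {n: {v: i for i, v in enumerate(prefs)} for n, prefs in node_prefs.items()}
            next_choice = {v: 0 for v in vnf_prefs}
            matched = {}                                    # node -> vnf currently held
            free = list(vnf_prefs)
            while free:
                v = free.pop()
                n = vnf_prefs[v][next_choice[v]]            # propose to the next preferred node
                next_choice[v] += 1
                holder = matched.get(n)
                if holder is None:
                    matched[n] = v
                elif rank[n][v] < rank[n][holder]:          # the node prefers the new proposer
                    matched[n] = v
                    free.append(holder)
                else:
                    free.append(v)                          # rejected, will try its next choice
            return {v: n for n, v in matched.items()}

        vnf_prefs  = {"firewall": ["edge1", "edge2"], "nat": ["edge1", "edge2"]}
        node_prefs = {"edge1": ["nat", "firewall"], "edge2": ["firewall", "nat"]}
        print(deferred_acceptance(vnf_prefs, node_prefs))   # {'nat': 'edge1', 'firewall': 'edge2'}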