
    Bidirectional LiFi Attocell Access Point Slicing Scheme

    LiFi attocell access networks are expected to be deployed widely to support diverse applications and service provisioning to various end-users. LiFi infrastructure providers will therefore need to offer LiFi access point (AP) resources as a service. This, however, requires solving the research challenge of dynamically and effectively allocating resources among service providers (SPs) while guaranteeing performance isolation among them and their respective users. This paper introduces an autonomic resource slicing (virtualization) scheme that realizes autonomic management and configuration of virtual APs in a LiFi attocell access network, based on the service requirements of SPs and their users. The developed scheme comprises traffic analysis and classification, a local AP controller, downlink and uplink slice resource managers, and traffic measurement and information collection modules. It also contains a hybrid medium access protocol and an extended token bucket fair queueing algorithm to support uplink access virtualization and spectrum slicing. The proposed resource slicing scheme collects and analyzes the traffic statistics of the different applications supported on the slices defined in each LiFi AP and distributes the available resources fairly and proportionally among them. It uses a control algorithm that adjusts the minimum contention window of user devices to achieve the target throughput and ensure airtime fairness among SPs and their users. The developed scheme has been extensively evaluated using OMNeT++. The obtained results demonstrate its resource slicing capabilities in supporting differentiated services and performance isolation.
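    A minimal sketch of the contention-window control idea described in the abstract, assuming a simple proportional update rule: a slice that exceeds its weighted airtime share has its users' CW_min enlarged (less aggressive channel access), while a starved slice becomes more aggressive. The Slice fields, gain, and bounds are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: proportional CW_min controller for per-slice airtime fairness.
# The update rule and parameters are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    weight: float                  # airtime share promised to this SP slice
    measured_airtime: float = 0.0  # fraction of airtime observed in the last window
    cw_min: int = 16               # current minimum contention window of its users

def adjust_cw(slices, cw_floor=8, cw_ceiling=1024, gain=0.5):
    """Nudge each slice's CW_min so measured airtime tracks its weighted target."""
    total_weight = sum(s.weight for s in slices)
    for s in slices:
        target = s.weight / total_weight
        error = s.measured_airtime - target
        # Over-served slice backs off (larger CW_min); starved slice gets smaller CW_min.
        factor = 1.0 + gain * (error / max(target, 1e-9))
        s.cw_min = int(min(cw_ceiling, max(cw_floor, round(s.cw_min * factor))))

# Example: two SP slices, one currently hogging the channel.
slices = [Slice("SP-A", weight=2.0, measured_airtime=0.8, cw_min=16),
          Slice("SP-B", weight=1.0, measured_airtime=0.2, cw_min=16)]
adjust_cw(slices)
for s in slices:
    print(s.name, s.cw_min)
```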

    Gestion flexible des ressources dans les réseaux de nouvelle génération avec SDN (Flexible resource management in next-generation networks with SDN)

    Abstract: 5G and beyond-5G/6G networks are expected to shape the future economic growth of multiple vertical industries by providing the network infrastructure required to enable innovation and new business models. They have the potential to offer a wide spectrum of services, namely higher data rates, ultra-low latency, and high reliability. To achieve their promises, 5G and beyond-5G/6G rely on software-defined networking (SDN), edge computing, and radio access network (RAN) slicing technologies. In this thesis, we aim to use SDN as a key enabler to enhance resource management in next-generation networks. SDN allows programmable management of edge computing resources and dynamic orchestration of RAN slicing. However, achieving efficient performance based on SDN capabilities is a challenging task due to the permanent fluctuations of traffic in next-generation networks and the diversified quality-of-service requirements of emerging applications. Toward this objective, we address the load balancing problem in distributed SDN architectures, and we optimize the RAN slicing of communication and computation resources at the edge of the network. In the first part of this thesis, we present a proactive approach to balance the load in a distributed SDN control plane using the data plane component migration mechanism. First, we propose prediction models that forecast the load of SDN controllers in the long term. By using these models, we can preemptively detect whether the load will become unbalanced in the control plane and, thus, schedule migration operations in advance. Second, we improve the performance of migration operations by optimizing the tradeoff between a load balancing factor and the cost of migration operations. This proactive load balancing approach not only prevents SDN controllers from being overloaded, but also allows a judicious selection of which data plane component should be migrated and where the migration should take place. In the second part of this thesis, we propose two RAN slicing schemes that efficiently allocate the communication and computation resources at the edge of the network. The first RAN slicing scheme performs the allocation of radio resource blocks (RBs) to end-users on two time scales, namely a long time scale and a short time scale. On the long time scale, an SDN controller allocates to each base station a number of RBs from a shared radio RB pool, according to its requirements in terms of delay and data rate. On the short time scale, each base station assigns its available resources to its end-users and requests, if needed, additional resources from adjacent base stations. The second RAN slicing scheme jointly allocates the RBs and the computation resources available in edge computing servers based on an open RAN architecture. For the proposed RAN slicing schemes, we develop reinforcement learning and deep reinforcement learning algorithms to dynamically allocate RAN resources.
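    A minimal sketch of the two-time-scale RB allocation idea described above, assuming a proportional long-time-scale split of a shared RB pool and a simple short-time-scale borrowing rule between base stations; the thesis itself solves this with optimisation and (deep) reinforcement learning, which is not reproduced here.

```python
# Hedged sketch: two-time-scale RB allocation. The proportional split and the
# borrowing heuristic are illustrative assumptions, not the thesis's algorithms.

def long_timescale_allocation(pool_size, demands):
    """demands: dict base_station -> required RBs; proportional split of the shared pool."""
    total = sum(demands.values())
    alloc = {bs: int(pool_size * d / total) for bs, d in demands.items()}
    # Hand out RBs lost to rounding, one by one, to the most under-served stations.
    leftover = pool_size - sum(alloc.values())
    for bs in sorted(demands, key=lambda b: demands[b] - alloc[b], reverse=True)[:leftover]:
        alloc[bs] += 1
    return alloc

def short_timescale_borrow(alloc, usage, bs, needed):
    """If `bs` needs more RBs than it has free, borrow from the least-loaded neighbour."""
    deficit = needed - (alloc[bs] - usage[bs])
    if deficit <= 0:
        return alloc
    donor = min((b for b in alloc if b != bs), key=lambda b: usage[b] / max(alloc[b], 1))
    transfer = min(deficit, alloc[donor] - usage[donor])
    alloc[donor] -= transfer
    alloc[bs] += transfer
    return alloc

alloc = long_timescale_allocation(100, {"BS1": 30, "BS2": 50, "BS3": 20})
alloc = short_timescale_borrow(alloc, usage={"BS1": 10, "BS2": 49, "BS3": 5}, bs="BS2", needed=5)
print(alloc)
```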

    SDN-based Flexible Resource Management and Service-Oriented Virtualization for 5G Mobile Networks and Beyond

    This thesis examines how Software Defined Networking (SDN) and Network Virtualization (NV) technologies can make 5G and beyond mobile networks more flexible, scalable, and programmable to support the performance demands of emerging heterogeneous applications. In this direction, concepts such as mobile network slicing, multi-tenancy, and multi-connectivity have been investigated and their performance analyzed. The SDN paradigm is used to enable flexible resource allocation to end users, improve network resource utilization, and avoid or rapidly resolve network congestion. The proposed network architectures are 3rd Generation Partnership Project (3GPP) standards compliant and integrate Open Networking Foundation (ONF) SDN specifications to ensure seamless interoperability between different standards and backward/forward compatibility. Novel mechanisms and algorithms are introduced to manage the resources of evolving 5G Time-Division Duplex (TDD) networks efficiently and flexibly. These mechanisms enable the on-demand formation of virtual cells, which allows users to draw resources from multiple eNBs. Within the scope of this thesis, SDN-based frameworks to enhance the QoE of end-user applications over Time Division-Long Term Evolution (TD-LTE) small cells have also been developed, and network resource sharing scenarios with Frequency-Division Duplex (FDD)/TDD coexistence have been studied. In addition, this thesis proposes and investigates a novel service-oriented network slicing concept for evolving 5G TDD networks that involves traffic prediction mechanisms and accounts for user mobility. An analytical model is also introduced that formulates network slice resource allocation as a weighted optimization problem. The proposed solutions are evaluated using 3GPP standard compliant simulation settings, compared with state-of-the-art schemes, and the performance gains they offer are demonstrated. Performance is evaluated using metrics such as throughput, delay, and network resource utilization. The Mean Opinion Score (MOS) metric is used to evaluate the Quality of Experience (QoE) of end-user applications. With the help of the SDN-based network management algorithms investigated in this work, it is shown how 5G+ networks can be managed efficiently while providing enhanced flexibility and programmability to improve the performance of the diverse applications and services delivered over the network to end users.
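    A minimal sketch of a weighted slice resource allocation in the spirit of the analytical model mentioned above, assuming capacity is shared among slices in proportion to their weights, capped by per-slice demand, with freed capacity redistributed; the weights, demands, and redistribution loop are illustrative, not the thesis's exact formulation.

```python
# Hedged sketch: weighted, demand-capped slice allocation with redistribution.

def weighted_slice_allocation(capacity, weights, demands):
    """weights/demands: dict slice -> weight / demanded resource units."""
    alloc = {s: 0.0 for s in weights}
    active = set(weights)
    remaining = float(capacity)
    while active and remaining > 1e-9:
        wsum = sum(weights[s] for s in active)
        satisfied = set()
        for s in active:
            share = remaining * weights[s] / wsum     # proportional share of what is left
            grant = min(share, demands[s] - alloc[s])  # never exceed the slice's demand
            alloc[s] += grant
            if demands[s] - alloc[s] < 1e-9:
                satisfied.add(s)
        remaining = capacity - sum(alloc.values())
        if not satisfied:
            break  # every active slice consumed its full share; nothing to redistribute
        active -= satisfied
    return alloc

print(weighted_slice_allocation(100, {"eMBB": 3, "URLLC": 2, "mMTC": 1},
                                {"eMBB": 80, "URLLC": 15, "mMTC": 40}))
```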

    Progressive introduction of network softwarization in operational telecom networks: advances at architectural, service and transport levels

    Technological paradigms such as Software Defined Networking, Network Function Virtualization, and Network Slicing together offer new ways of providing services. This process is widely known as Network Softwarization, whereby traditional operational networks adopt capabilities and mechanisms inherited from the computing world, such as programmability, virtualization, and multi-tenancy. This adoption brings a number of challenges, from both the technological and operational perspectives. At the same time, these paradigms provide unprecedented flexibility, opening opportunities to develop new services and new ways of exploiting and consuming telecom networks. This thesis first overviews the implications of the progressive introduction of network softwarization in operational networks and then details advances at different levels, namely the architectural, service, and transport levels. This is done through specific exemplary use cases and evolution scenarios, with the goal of illustrating both new possibilities and existing gaps in the ongoing transition towards an advanced future mode of operation. The analysis is performed from the perspective of a telecom operator, paying special attention to how to integrate all these paradigms into operational networks to assist their evolution towards new, more sophisticated service demands.
    Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: President: Eduardo Juan Jacob Taquet; Secretary: Francisco Valera Pintor; Member: Jorge López Vizcaín.

    Admission Control Optimisation for QoS and QoE Enhancement in Future Networks

    Recent exponential growth in demand for traffic heterogeneity support and in the number of associated devices has considerably increased demand for network resources and induced numerous challenges for networks, such as bottleneck congestion and inefficient admission control and resource allocation. Such challenges degrade network Quality of Service (QoS) and user-perceived Quality of Experience (QoE). This work studies admission control from various perspectives. Two novel single-objective optimisation-based admission control models, Dynamic Slice Allocation and Admission Control (DSAAC) and Signalling and Admission Control (SAC), are presented to enhance the Grade of Service (GoS) of future limited-capacity networks and to optimise control signalling, respectively. DSAAC is an integrated model in which a cost-estimation function based on user demand and network capacity quantifies resource allocation among users. Moreover, to maximise resource utility, adjustable minimum and maximum slice resource bounds are derived. For the case where a user is blocked from its primary slice due to congestion or resource scarcity, a set of optimisation algorithms for inter-slice admission control and resource allocation, exploiting the adaptability of slice elasticity, is proposed. The novel SAC model uses an unsupervised learning technique (ranking-based clustering) to cluster users optimally according to their homogeneous demand characteristics and thereby minimise signalling redundancy in the access network. Reducing redundant signalling relieves the network of unnecessary resource utilisation and computational time. Moreover, dynamically reconfigurable QoE-based slice performance bounds are also derived in the SAC model from multiple demand characteristics for admitting clustered users to the most suitable network. A set of optimisation algorithms is also proposed to attain efficient slice allocation and enhance users' QoE by assessing the capability of slice QoE elasticity. An enhancement of the SAC model is proposed through a novel multi-objective optimisation model named Edge Redundancy Minimisation and Admission Control (E-RMAC). The E-RMAC model is the first to consider the issue of redundant signalling between the edge and core networks. It minimises redundant signalling using two classical unsupervised learning algorithms, K-means and ranking-based clustering, and maximises the efficiency of the link (bandwidth resources) between the edge and core networks. For multi-operator environments such as Open RAN, a novel Forecasting and Admission Control (FAC) model for tenant-aware network selection and configuration is proposed. The model features a dynamic demand-estimation scheme embedded with fuzzy-logic-based optimisation for optimal network selection and admission control. FAC is the first to consider the coexistence of the various heterogeneous cellular technologies (2G, 3G, 4G, and 5G) and their integration to enhance overall network throughput through efficient resource allocation and utilisation within a multi-operator environment. A QoS/QoE-based service monitoring feature is also presented to update the demand estimates with the support of a forecasting modifier. This service monitoring feature keeps the resources allocated to tenants close to their actual demand, improving tenant-acquired QoE and overall network performance.
    Finally, a novel dynamic admission control model named Slice Congestion and Admission Control (SCAC) is presented in this thesis. SCAC employs machine learning (unsupervised, reinforcement, and transfer learning) and multi-objective optimisation techniques (the Non-dominated Sorting Genetic Algorithm II, NSGA-II) to minimise bottleneck and intra-slice congestion. Knowledge transfer among requests, in the form of coefficients, is employed for the first time for optimal queuing of slice requests. A unified cost-estimation function is also derived in this model for slice selection to ensure fairness in the admission of slice requests. In view of instantaneous network circumstances and load, a reinforcement-learning-based admission control policy is established to take appropriate action on guaranteed, soft, and best-effort slice request admissions. Intra-slice as well as inter-slice resource allocation, along with the adaptability of slice elasticity, is also proposed to maximise the slice acceptance ratio and resource utilisation. Extensive simulation results are obtained and compared with similar models found in the literature. The proposed E-RMAC model is 35% better at reducing redundant signalling between the edge and core networks than recent work, and it reduces the complexity from O(U) to O(R) for service signalling and O(N) for resource signalling, a significant saving in uplink control-plane signalling and link capacity compared to results in the existing literature. Similarly, the SCAC model reduces bottleneck congestion by approximately 56% over the entire load range compared to the ground truth and increases the slice acceptance ratio. Inter-slice admission and resource allocation offer admission gains of 25% and 51% over cooperative slice-based and intra-slice-based admission control and resource allocation, respectively. Detailed analysis of the results suggests that the proposed models can efficiently manage future heterogeneous traffic flows in terms of enhanced throughput, maximum network resource utilisation, better admission gain, and congestion control.
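    A minimal sketch of the demand-clustering idea used to cut redundant signalling: users with near-identical demand profiles are grouped so that one aggregated request per cluster can be signalled instead of one per user. Plain k-means over assumed (data-rate, latency) demand vectors stands in here for the ranking-based clustering used in SAC/E-RMAC.

```python
# Hedged sketch: cluster users by demand so one aggregated request is signalled
# per cluster. Features, k, and the use of plain k-means are assumptions.
import random

def kmeans(points, k, iters=50):
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        new_centroids = []
        for c, members in enumerate(clusters):
            if members:
                new_centroids.append(tuple(sum(x) / len(members) for x in zip(*members)))
            else:
                new_centroids.append(centroids[c])  # keep an empty cluster's centroid
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return centroids, clusters

# Each user: (requested Mbps, latency budget in ms). One aggregated admission
# request per cluster replaces per-user signalling.
users = [(50, 10), (52, 12), (5, 100), (6, 95), (400, 5), (390, 6)]
random.seed(0)
centroids, clusters = kmeans(users, k=3)
print("signalling messages reduced from", len(users), "to", len(centroids))
```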

    Distributed collaborative knowledge management for optical networks

    Network automation has long been envisioned. In fact, the Telecommunications Management Network (TMN), defined by the International Telecommunication Union (ITU), is a hierarchy of management layers (network element, network, service, and business management), where high-level operational goals propagate from upper to lower layers. The network management architecture has evolved with the development of the Software Defined Networking (SDN) concept, which brings programmability to simplify configuration (it breaks down high-level service abstractions into lower-level device abstractions), orchestrates operation, and automatically reacts to changes or events. In addition, the development and deployment of solutions based on Artificial Intelligence (AI) and Machine Learning (ML) for making decisions (control loop) based on collected monitoring data enables network automation, which aims to reduce operational costs. AI/ML approaches usually require large datasets for training purposes, which are difficult to obtain. The lack of data can be compensated with a collective self-learning approach. In this thesis, we go beyond the aforementioned traditional control loop to achieve an efficient knowledge management (KM) process that enhances network intelligence while bringing down complexity. We propose a general architecture to support the KM process based on four main pillars, which enable creating, sharing, assimilating, and using knowledge. Next, two alternative strategies, based on model inaccuracies and on model combination, are proposed. To highlight the capacity of KM to adapt to different applications, two use cases are considered that implement KM in purely centralized and in distributed optical network architectures. Along with them, various policies are considered for evaluating KM under data-based and model-based strategies. The evaluation targets minimizing the amount of data that needs to be shared and reducing the convergence error. We apply KM to multilayer networks and propose the PILOT methodology for modeling connectivity services in a sandbox domain. PILOT uses active probes deployed in Central Offices (COs) to obtain real measurements that are used to tune a simulation scenario reproducing the real deployment with high accuracy. A simulator is then used to generate large amounts of realistic synthetic data for ML training and validation. We also apply the KM process to a more complex network system that consists of several domains, where intra-domain controllers assist a broker plane in estimating accurate inter-domain delay. In addition, the broker identifies and corrects intra-domain model inaccuracies and computes an accurate compound model. Such models can be used for quality of service (QoS) assurance and accurate end-to-end delay estimation. Finally, we investigate the application of KM in the context of Intent-Based Networking (IBN). Knowledge in terms of a traffic model and/or traffic perturbation is transferred among agents in a hierarchical architecture. This architecture can support autonomous network operation, such as capacity management.
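    A minimal sketch of the broker-plane idea above, assuming linear per-domain delay models that the broker composes into an end-to-end estimate, plus a crude residual-attribution heuristic for flagging a possibly inaccurate domain model; the thesis's actual inaccuracy detection and compound-model computation are not reproduced here.

```python
# Hedged sketch: compose per-domain delay models into an end-to-end estimate
# and flag a suspect domain. Models, loads, and the heuristic are assumptions.

def domain_delay_model(base_ms, per_unit_load_ms):
    return lambda load: base_ms + per_unit_load_ms * load

def end_to_end_estimate(models, loads):
    return sum(models[d](loads[d]) for d in models)

def flag_inaccurate_domain(models, loads, measured_e2e_ms, tolerance_ms=1.0):
    """Attribute the residual to the most heavily loaded domain (crude heuristic)."""
    residual = measured_e2e_ms - end_to_end_estimate(models, loads)
    if abs(residual) <= tolerance_ms:
        return None, residual
    suspect = max(loads, key=loads.get)
    return suspect, residual

models = {"domainA": domain_delay_model(2.0, 0.05),
          "domainB": domain_delay_model(1.5, 0.10),
          "domainC": domain_delay_model(3.0, 0.02)}
loads = {"domainA": 40, "domainB": 70, "domainC": 20}
print("estimated e2e delay:", end_to_end_estimate(models, loads), "ms")
print("suspect, residual:", flag_inaccurate_domain(models, loads, measured_e2e_ms=18.0))
```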

    An intelligent call admission control algorithm for load balancing in 5G-satellite networks

    A thesis submitted in partial fulfilment of the requirements of the University of Wolverhampton for the degree of Doctor of Philosophy.
    Cellular networks are projected to deal with an immense rise in data traffic, an enormous number of diverse devices, and advanced use cases in the near future; hence, future networks are being developed to integrate not only 5G but also other radio access technologies (RATs). In addition to 5G, the user's device (UD) will be able to connect to the network via LTE, WiMAX, Wi-Fi, Satellite, and other technologies. Satellite has been suggested as a preferred network to support 5G use cases. Satellite networks are among the most sophisticated communication technologies and offer specific benefits in geographically dispersed and dynamic networks. Exploiting their inherent advantages in broadcasting capabilities, global coverage, decreased dependency on terrestrial infrastructure, and high security, they offer highly efficient, effective, and rapid network deployments. Satellites are better suited to large-scale communications than terrestrial networks: owing to their extensive service coverage and strong multilink transmission capabilities, they offer global high-speed connectivity and adaptable access systems. The convergence of 5G technology and satellite networks therefore marks a significant milestone in the evolution of global connectivity. However, this integration introduces a complex resource management problem, particularly in Satellite-Terrestrial Integrated Networks (STINs). The key issue is the efficient allocation of resources in STINs to enhance Quality of Service (QoS) for users. This issue originates from the vast number of users sharing these resources, the dynamic nature of the generated traffic, the scarcity of wireless spectrum resources, and the random allocation of wireless channels. Hence, resource allocation is critical to ensure user satisfaction, fair traffic distribution, maximised throughput, and minimised congestion. Achieving load balancing is essential to guarantee a fair distribution of traffic between the different RATs in a heterogeneous wireless network; this enables optimal utilisation of radio resources and lowers the likelihood of call blocking/dropping. This research addresses this challenge through the development and evaluation of an intelligent call admission control (CAC) algorithm based on Enhanced Particle Swarm Optimization (EPSO). The primary aim of this research is to design an EPSO-based CAC algorithm tailored specifically for 5G-satellite heterogeneous wireless networks. The algorithm's objectives include maximising the number of admitted calls while maintaining QoS for existing users, improving network resource utilisation, reducing congestion, ensuring fairness, and enhancing user satisfaction. To achieve these objectives, a detailed research methodology is outlined, encompassing algorithm development, numerical simulations, and comparative analysis. The proposed EPSO algorithm is benchmarked against alternative artificial intelligence and machine learning algorithms, including the Artificial Bee Colony algorithm, the Simulated Annealing algorithm, and the Q-Learning algorithm. Performance metrics such as throughput, call blocking rate, and fairness are employed to evaluate the algorithms' efficacy in achieving the load-balancing objectives.
    The experimental findings yield insights into the performance of the EPSO-based CAC algorithm and its comparative advantages over alternative techniques. Through rigorous analysis, this research elucidates the EPSO algorithm's strengths in dynamically adapting to changing network conditions, optimising resource allocation, and ensuring equitable distribution of traffic among the different RATs. The results show that the EPSO algorithm outperforms the other three algorithms in all scenarios. The contributions of this thesis extend beyond academic research, with potential societal implications including enhanced connectivity, efficiency, and user experience in 5G-satellite heterogeneous wireless networks. By advancing intelligent resource management techniques, this research paves the way for improved network performance and reliability in the evolving landscape of wireless communication.
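    A minimal sketch of a plain particle swarm optimiser applied to the load-balancing objective described above: each particle encodes a split of offered call load across the available RATs, and fitness penalises uneven utilisation and overload. The capacities, offered load, parameters, and fitness function are illustrative assumptions, not the thesis's EPSO variant or its full QoS model.

```python
# Hedged sketch: plain PSO searching for a call-load split across RATs that
# balances utilisation. All numbers and the fitness shape are assumptions.
import random

CAPACITY = {"5G": 100.0, "LTE": 60.0, "Satellite": 40.0}  # assumed RAT capacities
OFFERED_LOAD = 150.0                                       # assumed total call demand
RATS = list(CAPACITY)

def fitness(split):
    """Lower is better: variance of per-RAT utilisation plus an overload penalty."""
    util = [split[i] * OFFERED_LOAD / CAPACITY[r] for i, r in enumerate(RATS)]
    mean = sum(util) / len(util)
    variance = sum((u - mean) ** 2 for u in util) / len(util)
    overload = sum(max(0.0, u - 1.0) for u in util)
    return variance + 10.0 * overload

def normalise(x):
    """Project a position back onto valid splits (non-negative, summing to 1)."""
    x = [max(0.0, v) for v in x]
    s = sum(x)
    return [1.0 / len(x)] * len(x) if s == 0.0 else [v / s for v in x]

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(RATS)
    pos = [normalise([random.random() for _ in range(dim)]) for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            pos[i] = normalise(pos[i])
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)
    return gbest

random.seed(1)
best = pso()
print({r: round(f, 2) for r, f in zip(RATS, best)})
```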

    The 6G Architecture Landscape: European Perspective


    Mobile cloud computing and network function virtualization for 5G systems

    The recent growth in the number of smart mobile devices and the emergence of complex multimedia mobile applications have brought new challenges to the design of wireless mobile networks. The envisioned Fifth-Generation (5G) systems are equipped with different technical solutions that can accommodate the increasing demand for high-data-rate, latency-limited, energy-efficient, and reliable mobile communication networks. Mobile Cloud Computing (MCC) is a key technology in 5G systems that enables the offloading of computationally heavy applications, such as augmented or virtual reality, object recognition, or gaming, from mobile devices to cloudlet or cloud servers, which are connected to wireless access points either directly or through finite-capacity backhaul links. Given the battery-limited nature of mobile devices, mobile cloud computing is deemed an important enabler for the provision of such advanced applications. However, due to the variability of the communication network through which the cloud or cloudlet is accessed, offloading computational tasks may incur unpredictable energy expenditure or intolerable delay in the communications between mobile devices and the cloud or cloudlet servers. Therefore, the design of a mobile cloud computing system is investigated by jointly optimizing the allocation of radio, computational, and backhaul resources in both uplink and downlink directions. Moreover, the users selected for cloud offloading need to have an energy consumption that is smaller than the amount required for local computing, which is achieved by means of user scheduling. Motivated by the application-centric drift of 5G systems and advances in smart device manufacturing technologies, a new breed of mobile applications is being developed that is immersive, ubiquitous, and highly collaborative in nature. For example, Augmented Reality (AR) mobile applications have inherent collaborative properties in terms of data collection in the uplink, computing at the cloud, and data delivery in the downlink. Therefore, the optimization of the shared computing and communication resources in MCC not only benefits from the joint allocation of both resources, but can also be enhanced by sharing the offloaded data and computations among multiple users. As a result, a resource allocation approach whereby transmitted, received, and processed data are shared partially among the users leads to more efficient utilization of the communication and computational resources. As suggested in 5G architectures, MCC decouples the computing functionality from the platform location through software virtualization to allow flexible provisioning of the offered services. Another virtualization-based technology in 5G systems is Network Function Virtualization (NFV), which prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution to this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. For that reason, the development of fault-tolerant virtualization strategies for MCC and NFV is necessary to ensure the reliability of the provided services.
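    A minimal sketch of the offloading criterion discussed above, assuming simple transmit-energy and local-CPU-energy models: a user is scheduled for cloud offloading only when uploading the task costs less energy than computing it locally. The models and parameter values are illustrative, not the joint radio/backhaul/computation optimisation developed in the thesis.

```python
# Hedged sketch: offload-vs-local decision under assumed energy models.
from dataclasses import dataclass

@dataclass
class Task:
    input_bits: float          # data to upload for the offloaded task
    cpu_cycles: float          # work required to compute it

@dataclass
class Device:
    tx_power_w: float          # radio transmit power
    uplink_bps: float          # achievable uplink rate (varies with the channel)
    energy_per_cycle_j: float  # local CPU energy per cycle

def local_energy(dev, task):
    return dev.energy_per_cycle_j * task.cpu_cycles

def offload_energy(dev, task):
    return dev.tx_power_w * task.input_bits / dev.uplink_bps

def schedule(dev, task):
    return "offload" if offload_energy(dev, task) < local_energy(dev, task) else "local"

dev = Device(tx_power_w=0.5, uplink_bps=5e6, energy_per_cycle_j=1e-9)
ar_frame = Task(input_bits=2e6, cpu_cycles=1e9)   # e.g. one AR recognition task
print(schedule(dev, ar_frame),
      f"(offload {offload_energy(dev, ar_frame):.3f} J vs local {local_energy(dev, ar_frame):.3f} J)")
```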