
    Resource Management From Single-domain 5G to End-to-End 6G Network Slicing:A Survey

    Network Slicing (NS) is one of the pillars of the fifth/sixth generation (5G/6G) of mobile networks. It provides the means for Mobile Network Operators (MNOs) to leverage physical infrastructure across different technological domains to support different applications. This survey analyzes the progress made on NS resource management across these domains, with a focus on the interdependence between domains and unique issues that arise in cross-domain and End-to-End (E2E) settings. Based on a generic problem formulation, NS resource management functionalities (e.g., resource allocation and orchestration) are examined across domains, revealing their limits when applied separately per domain. The appropriateness of different problem-solving methodologies is critically analyzed, and practical insights are provided, explaining how resource management should be rethought in cross-domain and E2E contexts. Furthermore, the latest advancements are reported through a detailed analysis of the most relevant research projects and experimental testbeds. Finally, the core issues facing NS resource management are dissected, and the most pertinent research directions are identified, providing practical guidelines for new researchers.

    Learning Augmented Optimization for Network Softwarization in 5G

    The rapid uptake of mobile devices and applications is posing unprecedented traffic burdens on existing networking infrastructures. In order to maximize both user experience and return on investment, networking and communications systems are evolving to the next generation, 5G, which is expected to support more flexibility, agility, and intelligence in service provisioning and infrastructure management. Fulfilling these tasks is challenging, as today's networks are increasingly heterogeneous, dynamic, and large in scale. Network softwarization is one of the critical enabling technologies for implementing these requirements in 5G. Beyond the problems investigated in early research on this technology, many emerging application requirements and advanced optimization and learning techniques introduce further challenges and opportunities for its full application in practical production environments. This motivates this thesis to develop a new learning-augmented optimization technology, which merges advanced optimization and learning techniques to meet the distinct characteristics of the new application environment. The key contributions of this thesis are as follows:
    • We first develop a stochastic solution to augment the optimization of Network Function Virtualization (NFV) services in dynamic networks. In contrast to the dominant NFV solutions designed for deterministic networking environments, the inherent network dynamics and uncertainties of 5G infrastructure are impeding the rollout of NFV in many emerging networking applications. Chapter 3 therefore investigates the network utility degradation that arises when implementing NFV in dynamic networks, and proposes a robust NFV solution that fully respects the underlying stochastic features. By exploiting the hierarchical decision structure of this problem, a distributed computing framework with two-level decomposition is designed to facilitate a distributed implementation of the proposed model in large-scale networks.
    • Next, Chapter 4 intertwines traditional optimization and learning technologies. In order to reap the merits of both optimization and learning while avoiding their limitations, promising integrative approaches are investigated that combine traditional optimization theory with advanced learning methods. An online optimization process is then designed to learn the system dynamics for the network slicing problem, another critical challenge for network softwarization. Specifically, we first present a two-stage slicing optimization model with time-averaged constraints and objective to safeguard network slicing operations in time-varying networks. Directly solving this problem offline is intractable, since future system realizations are unknown at decision time. To address this, we combine historical learning with Lyapunov stability theory and develop a learning-augmented online optimization approach (see the drift-plus-penalty sketch after this list). This enables the system to learn a safe slicing solution from both historical records and real-time observations. We prove that the proposed solution is always feasible and nearly optimal, up to a constant additive factor. Simulation experiments are also provided to demonstrate the considerable improvement of the proposals.
    • The success of traditional approaches to optimizing stochastic systems often requires solving a base optimization program repeatedly until convergence. Across iterations, the base program has the same model structure and differs only in its input data. This property of stochastic optimization systems motivates the work of Chapter 5, in which we apply recent deep learning techniques to abstract the core structure of an optimization model and then use the learned deep learning model to directly generate solutions to the equivalent optimization model. In this respect, an encoder-decoder based learning model is developed in Chapter 5 to improve the optimization of network slices. To solve the constrained combinatorial optimization program in a deep learning manner, we design a problem-specific decoding process that integrates program constraints and problem context information into the training process. The deep learning model, once trained, can be used to directly generate the solution to any specific problem instance. This avoids the extensive computation of traditional approaches, which re-solve the whole combinatorial optimization problem from scratch for every instance. With the help of the REINFORCE gradient estimator (see the training sketch below), the deep learning model obtained in the experiments achieves significantly reduced computation time and optimality loss.
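    The abstract above only names the ingredients of the Chapter 4 approach; as a point of reference, a generic Lyapunov drift-plus-penalty formulation for a problem with time-averaged constraints (notation assumed here, not taken from the thesis) reads:

```latex
% Virtual queue per time-averaged constraint  \bar{g}_k \le 0:
Q_k(t+1) = \max\bigl\{\, Q_k(t) + g_k\bigl(x(t)\bigr),\; 0 \,\bigr\}, \qquad k = 1,\dots,K
% Per-slot decision: minimize the drift-plus-penalty bound over feasible actions:
x(t) \in \arg\min_{x \in \mathcal{X}} \; V\, f(x) \;+\; \sum_{k=1}^{K} Q_k(t)\, g_k(x)
% V > 0 trades an O(1/V) optimality gap against O(V) constraint-violation backlog.
```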
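    Similarly, the REINFORCE gradient estimator mentioned in the Chapter 5 contribution admits a compact sketch (Python/PyTorch; the toy policy, cost function, and tensor shapes below are illustrative assumptions, not the thesis' encoder-decoder model):

```python
import torch
import torch.nn as nn

class TinySlicePolicy(nn.Module):
    """Toy stand-in for an encoder-decoder placement model: scores each of
    n_nodes candidate hosts for every VNF of a slice request."""
    def __init__(self, n_features: int, n_nodes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.decoder = nn.Linear(64, n_nodes)             # per-VNF placement logits

    def forward(self, vnf_features):                      # (n_vnfs, n_features)
        return self.decoder(self.encoder(vnf_features))   # (n_vnfs, n_nodes)

def placement_cost(node_ids, instance):
    """Hypothetical cost of a sampled placement (e.g., delay plus resource penalty)."""
    return float(sum(instance["node_cost"][n] for n in node_ids))

def reinforce_step(policy, optimizer, instance, baseline=0.0):
    logits = policy(instance["vnf_features"])
    dist = torch.distributions.Categorical(logits=logits)
    nodes = dist.sample()                                 # one host index per VNF
    cost = placement_cost(nodes.tolist(), instance)
    # REINFORCE: push down the log-probability of costly placements.
    loss = (cost - baseline) * dist.log_prob(nodes).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return cost
```

    In the thesis' setting the decoder would additionally mask infeasible nodes during decoding; that constraint handling is omitted here.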

    Machine Learning for Performance Aware Virtual Network Function Placement

    With the growing demand for data connectivity, network service providers are faced with the task of reducing their capital and operational expenses while simultaneously improving network performance and addressing the increased connectivity demand. Although Network Function Virtualization has been identified as a potential solution, several challenges must be addressed to ensure its feasibility. The work presented in this thesis addresses the Virtual Network Function (VNF) placement problem through the development of a machine learning-based Delay-Aware Tree (DAT), which learns from previous placements of the VNF instances forming a Service Function Chain. The DAT is able to predict VNF instance placements with an average of 34 μs of additional delay compared to the near-optimal BACON heuristic VNF placement algorithm. The DAT's max-depth hyperparameter is then optimized using Particle Swarm Optimization (PSO), and its performance is improved by an average of 44 μs through the introduction of the Depth-Optimized Delay-Aware Tree (DO-DAT).
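    As a rough illustration of the PSO tuning step (a sketch only; the swarm constants, cross-validation setup, and the use of scikit-learn regression trees are assumptions, not the thesis' implementation):

```python
import random
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

def pso_tune_max_depth(X, y, n_particles=8, n_iters=20, depth_range=(2, 30)):
    """Minimize cross-validated error of a delay-predicting tree over max_depth."""
    lo, hi = depth_range
    pos = np.random.uniform(lo, hi, n_particles)   # particle positions (continuous)
    vel = np.zeros(n_particles)

    def fitness(p):
        tree = DecisionTreeRegressor(max_depth=int(round(p)))
        # cross_val_score returns negative MSE, so negate it to get an error to minimize
        return -cross_val_score(tree, X, y, cv=3,
                                scoring="neg_mean_squared_error").mean()

    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    g = pbest[pbest_val.argmin()]                  # global best position
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = 0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i]) + 1.5 * r2 * (g - pos[i])
            pos[i] = np.clip(pos[i] + vel[i], lo, hi)
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
        g = pbest[pbest_val.argmin()]
    return int(round(g))
```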

    End-to-end network slices: from network function profile extraction to granular SLAs

    Advisor: Christian Rodolfo Esteve Rothenberg. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    In the last ten years, network softwarisation processes have been continuously diversified and gradually incorporated into production, mainly through the paradigms of Software Defined Networking (e.g., programmable network flow rules) and Network Functions Virtualization (e.g., orchestration of virtualized network functions). Building on this process, the concept of a network slice emerges as a way of defining programmable end-to-end network paths, possibly over shared infrastructures, with strict performance requirements associated with a particular business case. This thesis investigates the hypothesis that the disaggregation of network function performance metrics impacts and composes a network slice footprint, incurring diverse slicing feature options which, when realized, should have their Service Level Agreement (SLA) life-cycle management transparently implemented in correspondence with their end-to-end communication business case. The validation of this assertion takes place in three aspects: the degrees of freedom by which the performance of virtualized network functions can be expressed; methods for rationalizing the footprint of network slices; and transparent ways to track and manage network assets among multiple administrative domains. To achieve these goals, the thesis makes several contributions, among them: the construction of a platform for automating performance-testing methodologies for virtualized network functions; the elaboration of a methodology for analyzing the footprint features of network slices based on a machine learning classifier and a multi-criteria analysis algorithm; and the construction of a prototype that uses blockchain smart contracts for service level agreements between administrative domains. Through experiments and analysis we suggest that: performance metrics of virtualized network functions depend on resource allocation, internal configuration, and test traffic stimulus; network slices can have their resource allocations consistently analyzed and classified according to different criteria; and agreements between administrative domains can be established transparently and at various levels of granularity through blockchain smart contracts. At the end of the thesis, a broad discussion answers the research questions associated with the investigated hypothesis, so that the hypothesis is evaluated in view of the thesis' contributions and future work.
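    The multi-criteria analysis step mentioned above is not detailed in the abstract; a minimal weighted-sum illustration of ranking candidate slice allocations (criteria, weights, and numbers are invented for the example) could look like:

```python
import numpy as np

def rank_allocations(options, weights, benefit):
    """
    options: (n_options x n_criteria) matrix of measured metrics per allocation.
    weights: importance of each criterion (summing to 1).
    benefit: True if higher is better for that criterion, False if lower is better.
    Returns option indices sorted from best to worst (simple weighted-sum model).
    """
    m = np.asarray(options, dtype=float)
    # Min-max normalize each criterion, flipping cost criteria so higher is better.
    lo, hi = m.min(axis=0), m.max(axis=0)
    norm = (m - lo) / np.where(hi - lo == 0, 1, hi - lo)
    norm = np.where(benefit, norm, 1.0 - norm)
    scores = norm @ np.asarray(weights)
    return list(np.argsort(-scores))

# Example: three candidate allocations scored on throughput (benefit),
# latency (cost), and CPU footprint (cost).
ranking = rank_allocations(
    options=[[900, 12, 4], [700, 8, 2], [950, 20, 6]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, False, False],
)
```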

    Towards edge robotics: the progress from cloud-based robotic systems to intelligent and context-aware robotic services

    Current robotic systems handle a diverse range of applications such as video surveillance, delivery of goods, cleaning, material handling, assembly, painting, or pick-and-place services. These systems have been embraced not only by the general population but also by vertical industries to help them perform daily activities. Traditionally, robotic systems have been deployed as standalone robots exclusively dedicated to a specific task, such as cleaning floors in indoor environments. In recent years, cloud providers started to offer their infrastructure to robotic systems for offloading some of the robots' functions. This distributed form of robotic system was first introduced ten years ago as cloud robotics, and nowadays many robotic solutions appear in this form. As a result, standalone robots became software-enhanced objects with increased reconfigurability as well as decreased complexity and cost. Moreover, by offloading the heavy processing from the robot to the cloud, it is easier to share services and information from various robots or agents to achieve better cooperation and coordination. Cloud robotics is suitable for human-scale responsive and delay-tolerant robotic functionalities (e.g., monitoring, predictive maintenance). However, there is a whole set of real-time robotic applications (e.g., remote control, motion planning, autonomous navigation) that cannot be executed with cloud robotics solutions, mainly because cloud facilities traditionally reside far away from the robots. While cloud providers can ensure certain performance in their infrastructure, very little can be ensured in the network between the robots and the cloud, especially in the last hop where wireless radio access networks are involved. Over the last few years, advances in edge computing, fog computing, 5G NR, network slicing, Network Function Virtualization (NFV), and network orchestration have been stimulating the interest of the industrial sector in satisfying the stringent, real-time requirements of their applications. Robotic systems are a key piece in the industrial digital transformation and their benefits are very well studied in the literature. However, designing and implementing a robotic system that integrates all the emerging technologies and meets the connectivity requirements (e.g., latency, reliability) is an ambitious task. This thesis studies the integration of modern Information and Communication Technologies (ICTs) in robotic systems and proposes robotic enhancements that tackle the real-time constraints of robotic services. To evaluate the performance of the proposed enhancements, the thesis starts from the design and prototype implementation of an edge-native robotic system that embodies the concepts of edge computing, fog computing, orchestration, and virtualization. The proposed edge robotics system represents two exemplary robotic applications: autonomous navigation of mobile robots and remote control of a robot manipulator, where the end-to-end robotic system is distributed between the robots and the edge server. The open-source prototype implementation of the designed edge-native robotic system resulted in the creation of two real-world testbeds that are used in this thesis as a baseline scenario for the evaluation of new, innovative solutions in robotic systems.
    After detailing the design and prototype implementation of the end-to-end edge-native robotic system, this thesis proposes several enhancements that can be offered to robotic systems by adapting the concept of edge computing via the Multi-Access Edge Computing (MEC) framework. First, it proposes exemplary network context-aware enhancements in which real-time information about robot connectivity and location is used to dynamically adapt the end-to-end system behavior to the actual status of the communication (e.g., radio channel). Three exemplary context-aware enhancements are proposed that aim to optimize the end-to-end edge-native robotic system. The thesis then studies the capability of the edge-native robotic system to offer potential savings by means of computation offloading for robot manipulators in different deployment configurations. Further, the impact of different wireless channels (e.g., 5G, 4G and Wi-Fi) on the data exchange between a robot manipulator and its remote controller is assessed. In the following part of the thesis, the focus is set on how orchestration solutions can support mobile robot systems in making high-quality decisions. The application of OKpi as an orchestration algorithm and DLT-based federation are studied to meet the KPIs that autonomously controlled mobile robots must satisfy in order to provide uninterrupted connectivity over the radio access network. The elaborated solutions present high compatibility with the designed edge robotics system, where the robot driving range is extended without any interruption of the end-to-end edge robotics service. While the DLT-based federation extends the robot driving range by deploying an access point extension on top of external domain infrastructure, OKpi selects the most suitable access point and computing resource in the cloud-to-thing continuum in order to fulfill the latency requirements of autonomously controlled mobile robots. To conclude the thesis, the focus is set on how robotic systems can improve their performance by leveraging Artificial Intelligence (AI) and Machine Learning (ML) algorithms to generate smart decisions. To do so, the edge-native robotic system is presented as a true embodiment of a Cyber-Physical System (CPS) in Industry 4.0, showing the mission of AI in such a concept. The key enabling technologies of the edge robotic system, such as edge, fog, and 5G, are presented, where the physical processes are integrated with the computing and network domains. The role of AI in each technology domain is identified by analyzing a set of AI agents at the application and infrastructure level. In the last part of the thesis, movement prediction is selected to study the feasibility of applying a forecast-based recovery mechanism for real-time remote control of robotic manipulators (FoReCo), which uses ML to infer lost commands caused by interference in the wireless channel. The obtained results showcase its potential in both simulation and real-world experimentation. Doctoral Programme in Telematic Engineering, Universidad Carlos III de Madrid. Committee: Karl Holger (president), Joerg Widmer (secretary), Claudio Cicconett (member).
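    FoReCo itself is not specified in this abstract; as a toy illustration of forecast-based command recovery (the window length, command format, and the linear-extrapolation model are assumptions), a receiver-side helper could look like:

```python
from collections import deque
import numpy as np

class CommandRecovery:
    """Keep a short history of received control commands and predict the next one
    (simple linear extrapolation) when a command is lost on the wireless link."""
    def __init__(self, window: int = 10):
        self.history = deque(maxlen=window)

    def received(self, cmd):
        self.history.append(np.asarray(cmd, dtype=float))

    def recover(self):
        h = np.array(self.history)
        if len(h) < 2:                        # not enough context: hold the last value
            return h[-1] if len(h) else np.zeros(1)
        t = np.arange(len(h))
        # Fit a line per command dimension and extrapolate one step ahead.
        coeffs = np.polyfit(t, h, deg=1)      # shape (2, n_dims)
        return coeffs[0] * len(h) + coeffs[1]

# Usage: call received() for every command that arrives; if the control period
# elapses without a packet, send recover() to the manipulator instead.
```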

    Artificial intelligence empowered virtual network function deployment and service function chaining for next-generation networks

    The entire Internet of Things (IoT) ecosystem is moving towards a high volume of diverse applications. From smart healthcare to smart cities, every ubiquitous digital sector provisions automation for an immersive experience. Augmented/virtual reality, remote surgery, and autonomous driving expect high data rates and ultra-low latency. Network Function Virtualization (NFV)-based IoT infrastructure, which decouples software services from proprietary devices, has become extremely popular because it cuts significant deployment and maintenance expenditure in the telecommunication industry. Another prominent technological trend for delay-sensitive IoT applications is multi-access edge computing (MEC). MEC brings NFV to the network edge (in closer proximity to users) for faster computation. Among the massive pool of IoT services in the NFV context, the need for efficient edge service orchestration is constantly growing. The emerging challenges concern the collaborative optimization of resource utilization and the assurance of Quality-of-Service (QoS) with prompt orchestration in dynamic, congested, and resource-hungry IoT networks. The underlying mathematical programming problems are NP-hard, making traditional exact methods inappropriate for time-sensitive IoT environments. In this thesis, we promote the need to go beyond these limits and leverage artificial intelligence (AI)-based decision-makers for "smart" service management. We offer different methods of integrating supervised and reinforcement learning techniques to support future-generation wireless network optimization problems. Due to the combinatorial explosion of some service orchestration problems, supervised learning outperforms reinforcement learning in terms of performance. Unfortunately, open-access and standardized datasets for this research area are still in their infancy. Thus, we utilize the optimal results retrieved by Integer Linear Programming (ILP) to build labeled datasets for training supervised models (e.g., artificial neural networks, convolutional neural networks). Furthermore, we find that ensemble models are better than complex single networks for control-layer intelligent service orchestration. In contrast, we employ Deep Q-learning (DQL) for heavily constrained service function chaining optimization. We carefully address key performance indicators (e.g., optimality gap, service time, relocation and communication costs, resource utilization, scalability intelligence) to evaluate the viability of prospective orchestration schemes. We envision that AI-enabled network management can be regarded as a pioneering trend to scale down massive IoT resource fabrication costs, improve profit margins for providers, and sustain QoS mutually.
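    As a schematic of the "label with the optimizer, then imitate it" idea described above (a sketch only: exhaustive search over a toy single-VNF placement stands in for the ILP solver, and the instance features are invented):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

N_NODES = 4

def optimal_node(instance):
    """Stand-in for the ILP: pick the cheapest feasible node for one VNF,
    given its CPU demand and the current load of every node."""
    cpu_demand, loads = instance[0], instance[1:]
    feasible = [(loads[n] + cpu_demand, n) for n in range(N_NODES)
                if loads[n] + cpu_demand <= 1.0]
    return min(feasible)[1] if feasible else int(np.argmin(loads))

# Labeled dataset: features = [demand, load_1..load_N], label = optimizer's choice.
rng = np.random.default_rng(0)
demands = rng.uniform(0.05, 0.4, size=(2000, 1))
loads = rng.uniform(0.0, 0.9, size=(2000, N_NODES))
X = np.hstack([demands, loads])
y = np.array([optimal_node(x) for x in X])

# Train a small neural network to imitate the optimizer's decisions.
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
```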

    Softwarization of Large-Scale IoT-based Disasters Management Systems

    The Internet of Things (IoT) enables objects to interact and cooperate with each other to reach common objectives. It is very useful in large-scale disaster management systems, where humans are likely to fail when attempting to perform search and rescue operations in high-risk sites. IoT can indeed play a critical role in all phases of large-scale disasters (i.e., preparedness, relief, and recovery). Network softwarization aims at designing, architecting, deploying, and managing network components primarily based on software programmability properties. It relies on key technologies such as cloud computing, Network Functions Virtualization (NFV), and Software Defined Networking (SDN). The key benefits are agility and cost efficiency. This thesis proposes softwarization approaches to tackle the key challenges related to large-scale IoT-based disaster management systems. A first challenge faced by such systems is the dynamic formation of an optimal coalition of IoT devices for the tasks at hand; meeting this challenge is critical for cost efficiency. A second challenge is interoperability: IoT environments remain highly heterogeneous, yet IoT devices need to interact. A third challenge is Quality of Service (QoS); disaster management applications are known to be very QoS-sensitive, especially when it comes to delay. To tackle the first challenge, we propose a cloud-based architecture that enables the formation of efficient coalitions of IoT devices for search and rescue tasks. The proposed architecture enables the publication and discovery of IoT devices belonging to different cloud providers, and it comes with a coalition formation algorithm. For the second challenge, we propose an NFV- and SDN-based architecture for on-the-fly IoT gateway provisioning. The gateway functions are provisioned as Virtual Network Functions (VNFs) that are chained on the fly in the IoT domain using SDN. For the third challenge, we rely on fog computing to meet the QoS requirements and propose algorithms that provision IoT application components in hybrid NFV-based clouds/fogs. Both stationary and mobile fog nodes are considered. In the case of mobile fog nodes, a Tabu Search-based heuristic is proposed; it finds a near-optimal solution, and we show numerically that it is faster than the Integer Linear Programming (ILP) solution by several orders of magnitude.
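    The Tabu Search heuristic itself is not given in the abstract; a generic skeleton of the technique (the cost function, neighborhood, and tenure are placeholders) has roughly this shape:

```python
def tabu_search(initial, neighbors, cost, n_iters=200, tenure=15):
    """Generic Tabu Search: move to the best neighbor each iteration, forbid
    recently visited solutions for `tenure` iterations, keep the best overall."""
    current = initial
    best, best_cost = current, cost(current)
    tabu = {current: 0}                 # solution -> iteration when its ban expires
    for it in range(n_iters):
        candidates = [s for s in neighbors(current)
                      if tabu.get(s, -1) <= it or cost(s) < best_cost]  # aspiration rule
        if not candidates:
            break
        current = min(candidates, key=cost)
        tabu[current] = it + tenure
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best

# For the mobile-fog placement problem, a solution could be a (hashable) tuple
# mapping each application component to a fog/cloud node, `neighbors` would move
# one component at a time, and `cost` would combine delay violations and resource usage.
```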

    Virtual Service Provisioning for Internet of Things Applications in Mobile Edge Computing

    The Internet of Things (IoT) paradigm is paving the way for many new emerging technologies, such as the smart grid, Industry 4.0, connected cars, and smart cities. Mobile Edge Computing (MEC) provides promising solutions to reduce service delays for delay-sensitive IoT applications, where cloudlets are co-located with wireless access points in the proximity of IoT devices. Most mobile users have specified Service Function Chain (SFC) requirements, where an SFC is a sequence of Virtual Network Functions (VNFs). Meanwhile, edge intelligence arises to provision real-time deep neural network (DNN) inference services for users. To accelerate the processing of the DNN inference of a request in an MEC network, the DNN inference model can usually be partitioned into two connected parts: one part is processed on the local IoT device of the request, and the other part is processed on a cloudlet (server) in the MEC network. The DNN inference can be further accelerated by allocating multiple threads of the cloudlet to which the request is assigned. In this thesis, we focus on virtual service provisioning for IoT applications in MEC environments. Firstly, we consider the user satisfaction problem of using services jointly provided by an MEC network and a remote cloud for delay-sensitive IoT applications, by maximizing the cumulative user satisfaction when different user services have different service delay requirements. A novel metric to measure user satisfaction of using a service is proposed, and efficient approximation and online algorithms for the defined problems under both static and dynamic user service demands are devised and analyzed. Secondly, we study service provisioning in an MEC network for multi-source IoT applications with SFC requirements, with the aim of minimizing the service provisioning cost, where each IoT application has multiple data streams from different sources to be uploaded to the MEC network for processing and storage, while each data stream must pass through the network functions of the SFC of the IoT application prior to reaching its destination. A service provisioning framework for such multi-source IoT applications is proposed, built on uploading stream data from multiple IoT sources, VNF instance placement and sharing, in-network aggregation of data streams, and workload balancing among cloudlets. Efficient algorithms for service provisioning of multi-source IoT applications in MEC networks, built upon the proposed framework, are also proposed. Thirdly, we investigate a novel DNN inference throughput maximization problem in an MEC network, with the aim of maximizing the number of delay-aware DNN service requests admitted, by accelerating each DNN inference through jointly exploring DNN partitioning and inference parallelism (a toy split-point sketch is given after this abstract). We devise a constant approximation algorithm for the problem under the offline setting, and an online algorithm with a provable competitive ratio under the online setting. Fourthly, we address a robust SFC placement problem with the aim of maximizing the expected profit collected by the service provider of an MEC network, under uncertainty in both computing resource and data rate demands.
    We start with a special case of the problem in which the measurement of the expected resources demanded for each request admission is accurate; for this case we propose a near-optimal approximation algorithm based on the Markov approximation technique, which achieves a provable optimality gap. We then extend the proposed approach to the general problem of concern, for which we show that the proposed algorithm is still applicable and that the delivered solution has a moderate optimality gap with bounded perturbation errors on the profit measurement. Finally, we summarize the thesis work and explore several potential research topics based on the studies in this thesis.
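    A toy version of the device/cloudlet split decision described above (per-layer timings, the transfer model, and the assumption of near-linear thread speedup are all illustrative, not the thesis' algorithm):

```python
def best_split(device_ms, cloudlet_ms, act_mbits, uplink_mbps, threads=1):
    """
    device_ms[i]   : latency of layer i on the IoT device (ms)
    cloudlet_ms[i] : latency of layer i on one cloudlet thread (ms)
    act_mbits[i]   : data uploaded if the cut is made before layer i
                     (act_mbits[0] is the raw input; index n means nothing to upload)
    Layers < k run locally, layers >= k run on the cloudlet with `threads` threads.
    Returns the split index and the resulting end-to-end latency.
    """
    n = len(device_ms)
    best_k, best_latency = 0, float("inf")
    for k in range(n + 1):                      # k == 0: all remote, k == n: all local
        local = sum(device_ms[:k])
        upload = (act_mbits[k] / uplink_mbps) * 1e3 if k < n else 0.0  # Mbit/Mbps -> ms
        remote = sum(cloudlet_ms[k:]) / threads                        # idealized parallelism
        latency = local + upload + remote
        if latency < best_latency:
            best_k, best_latency = k, latency
    return best_k, best_latency
```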

    Using OSM for real-time redeployment of VNFs based on network status

    In this thesis we examine the Network Functions Virtualisation (NFV) framework as a suitable architecture for implementing a network appropriate for the Internet of Things (IoT), which needs to be flexible and scalable. More precisely, we focus on how Open Source MANO (OSM) can be efficiently utilized in a solution that monitors the network status of Virtual Network Functions (VNFs) and, in case of bad network status (e.g., network congestion), triggers the redeployment of the affected VNFs to another Virtual Infrastructure Manager (VIM) to prevent the underperformance of running services.
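    The thesis realizes this workflow through OSM; the sketch below shows only the monitor-and-trigger logic, with `read_vnf_metrics()` and `redeploy_to_vim()` as hypothetical placeholders for the actual OSM monitoring and instantiation calls (they are not osmclient API):

```python
import time

CONGESTION_THRESHOLD = 0.8   # assumed fraction of link capacity
CHECK_PERIOD_S = 30

def read_vnf_metrics(vnf_id):
    """Placeholder: fetch link utilisation / packet loss for a VNF from the
    monitoring stack. Expected to return a dict such as {"link_utilisation": 0.85}."""
    raise NotImplementedError

def redeploy_to_vim(vnf_id, target_vim):
    """Placeholder: terminate the VNF on the congested VIM and re-instantiate it
    on `target_vim` through the orchestrator."""
    raise NotImplementedError

def monitor_loop(vnf_ids, fallback_vim):
    while True:
        for vnf_id in vnf_ids:
            metrics = read_vnf_metrics(vnf_id)
            if metrics["link_utilisation"] > CONGESTION_THRESHOLD:
                # Network status is degraded: move the affected VNF elsewhere.
                redeploy_to_vim(vnf_id, fallback_vim)
        time.sleep(CHECK_PERIOD_S)
```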

    Resource Allocation in Next Generation Mobile Networks

    The increasing heterogeneity of the mobile network infrastructure, together with the explosively growing demand for bandwidth-hungry services with diverse quality of service (QoS) requirements, leads to a degradation in the performance of traditional networks. To address this issue in next-generation mobile networks (NGMN), various technologies such as software-defined networking (SDN), network function virtualization (NFV), mobile edge/cloud computing (MEC/MCC), non-terrestrial networks (NTN), and edge ML are essential. In this direction, the optimal allocation and management of heterogeneous network resources to achieve the required low latency, energy efficiency, high reliability, and enhanced coverage and connectivity is a key challenge that must be solved urgently. In this dissertation, we address four critical and challenging resource allocation problems in NGMN and propose efficient solutions to tackle them. In the first part, we address the network slice resource provisioning problem in NGMN for delivering the wide range of services promised by 5G systems and beyond, including enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (URLLC), and massive machine-type communication (mMTC). Network slicing is one of the major solutions needed to meet the differentiated service requirements of NGMN under one common network infrastructure. Towards robust mobile network slicing, we propose a novel approach for end-to-end (E2E) resource allocation in a realistic scenario with uncertainty in slices' demands, using stochastic programming. The effectiveness of our proposed methodology is validated through simulations. Despite the significant benefits that network slicing brings to the management and performance of NGMN, the real-time response required by many emerging delay-sensitive applications, such as autonomous driving, remote health, and smart manufacturing, necessitates the integration of multi-access edge computing (MEC) into network slicing for 5G networks and beyond. To this end, we discuss a novel collaborative cloud-edge-local computation offloading scheme in the next two parts of this dissertation. The first of these parts studies the problem from the perspective of the infrastructure provider and shows the effectiveness of the proposed approach in addressing the rising number of latency-sensitive services and improving energy efficiency, which has become a primary concern in NGMN. Moreover, taking the perspective of the application (higher layer), we propose a novel framework for the optimal reservation of resources by applications, resulting in significant resource savings and reduced cost. The proposed method utilizes application-specific resource coupling relationships modeled using linear regression analysis. We further improve this approach by using Reinforcement Learning to automatically derive resource coupling functions in dynamic environments. Enhanced connectivity and coverage are other key objectives of NGMN. In this regard, unmanned aerial vehicles (UAVs) have been extensively utilized to provide wireless connectivity in rural and under-developed areas, enhance network capacity, and provide support for peaks or unexpected surges in user demand. The popularity of UAVs in such scenarios is mainly due to their fast deployment, cost efficiency, and superior communication performance resulting from line-of-sight (LoS)-dominated wireless channels.
    In the fifth part of this dissertation, we formulate the problem of aerial platform resource allocation and traffic routing in multi-UAV relaying systems in which UAVs are deployed as flying base stations. Our proposed solution is shown to improve the supported traffic with minimum deployment cost. Moreover, the new breed of intelligent devices and applications, such as UAVs, AR/VR, remote health, and autonomous vehicles, requires a paradigm shift from traditional cloud-based learning to distributed, low-latency, and reliable ML at the network edge. To this end, Federated Learning (FL) has been proposed as a learning scheme that enables devices to collaboratively learn a shared model while keeping the training data local. However, the performance of FL is significantly affected by various security threats, such as data and model poisoning attacks. Towards reliable edge learning, in the last part of this dissertation, we propose trust as a metric to measure the trustworthiness of the FL agents and thereby enhance the reliability of FL.
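    The trust metric itself is not defined in this abstract; as a generic illustration, a server-side aggregation that down-weights low-trust agents (trust scores, threshold, and update format are assumptions, and this is a FedAvg variant rather than the dissertation's exact scheme) might look like:

```python
import numpy as np

def trust_weighted_aggregate(client_updates, trust_scores, trust_floor=0.2):
    """
    client_updates: list of model-parameter vectors (np.ndarray), one per FL agent.
    trust_scores:   list of scores in [0, 1] reflecting each agent's trustworthiness.
    Agents below `trust_floor` are excluded; the rest are averaged with
    trust-proportional weights.
    """
    kept = [(u, t) for u, t in zip(client_updates, trust_scores) if t >= trust_floor]
    if not kept:
        raise ValueError("no trusted updates to aggregate")
    weights = np.array([t for _, t in kept])
    weights = weights / weights.sum()
    stacked = np.stack([u for u, _ in kept])
    return np.tensordot(weights, stacked, axes=1)   # trust-weighted average of updates
```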