7 research outputs found

    Joint Explainability and Sensitivity-Aware Federated Deep Learning for Transparent 6G RAN Slicing

    In recent years, wireless networks have grown increasingly complex, which has driven the adoption of zero-touch artificial intelligence (AI)-driven network automation in the telecommunication industry. In particular, network slicing, one of the most promising technologies beyond 5G, is expected to embrace AI models to manage the complex communication network. It is also essential to establish trust in AI black boxes in actual deployments, where AI performs complex resource management and anomaly detection. Inspired by closed-loop automation and Explainable Artificial Intelligence (XAI), we design an explainable federated deep learning (FDL) model to predict per-slice RAN dropped-traffic probability while jointly considering sensitivity- and explainability-aware metrics as constraints in a non-IID setup. Specifically, we quantitatively validate the faithfulness of the explanations via the attribution-based log-odds metric, which is included as a constraint in the run-time FL optimization task. Simulation results confirm its superiority over an unconstrained integrated-gradients (IG) post-hoc FDL baseline.
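The log-odds faithfulness check described above can be sketched in a few lines. The idea: mask the top-k most highly attributed input features and measure how far the model's log-odds drop; a large drop suggests the attributions point at genuinely influential inputs. This is a minimal illustration, not the paper's implementation; the `predict` model, the zero baseline, and all values are hypothetical.

```python
import math

def log_odds_drop(predict, x, attributions, k, baseline=0.0):
    """Faithfulness sketch: mask the k highest-attributed features and
    return the resulting drop in the prediction's log-odds."""
    def log_odds(p):
        p = min(max(p, 1e-9), 1 - 1e-9)  # clamp away from 0 and 1
        return math.log(p / (1 - p))

    # Indices of the k most highly attributed features.
    top_k = sorted(range(len(x)), key=lambda i: attributions[i], reverse=True)[:k]
    x_masked = [baseline if i in top_k else v for i, v in enumerate(x)]
    return log_odds(predict(x)) - log_odds(predict(x_masked))

# Toy model: the predicted probability depends only on the first feature,
# so a faithful attribution should rank that feature highest.
predict = lambda x: 1 / (1 + math.exp(-x[0]))
drop = log_odds_drop(predict, x=[2.0, 0.1], attributions=[0.9, 0.05], k=1)
```

For a sigmoid model, masking the decisive feature to zero drops the log-odds by exactly that feature's logit contribution, which is what makes the metric usable as a quantitative constraint.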

    The Network Slicing and Performance Analysis of 6G Networks using Machine Learning

    6G technology is designed to provide users with faster and more reliable data transfer than the current 5G technology. 6G is rapidly evolving and provides large bandwidth, even in underserved areas. The technology is highly anticipated for its ability to deliver massive network capacity, low latency, and a greatly improved user experience. Its scope is immense, and it is designed to connect everyone and everything in the world, with new deployment models and services and extended user capacity. This study proposes a network slicing simulator that uses hardcoded base station coordinates and randomly distributed client locations to help analyse the performance of a particular base station architecture. When a client wants to locate the closest base station, it queries the simulator, which stores the base station coordinates in a k-dimensional (K-D) tree. Throughout the simulation, each user follows a movement pattern that continues until the time limit is reached. The simulator gauges multiple statistics such as client connection ratio, client count per second, client count per slice, latency, and the new location of the client. The K-D tree handover algorithm proposed here connects the user to the nearest base station that fulfils the required criteria, ensuring the quality requirements and deciding which base station the user connects to.
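A K-D tree nearest-base-station query of the kind this simulator relies on can be sketched as follows. This is a minimal 2-D version built from scratch for illustration; the station coordinates and client position are made up, and the paper's simulator would add the handover criteria on top of the raw nearest-neighbour lookup.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a 2-D k-d tree over (x, y) base-station coordinates,
    splitting on x and y axes alternately."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Return the stored station closest (Euclidean) to the client location."""
    if node is None:
        return best
    point = node["point"]
    if best is None or math.dist(point, target) < math.dist(best, target):
        best = point
    axis = depth % 2
    near, far = ((node["left"], node["right"]) if target[axis] < point[axis]
                 else (node["right"], node["left"]))
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than the best hit.
    if abs(target[axis] - point[axis]) < math.dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

stations = [(0, 0), (5, 5), (9, 1), (2, 8)]  # hypothetical base stations
tree = build_kdtree(stations)
closest = nearest(tree, (8, 2))  # client at (8, 2) -> station (9, 1)
```

Queries take O(log n) time on average, which is what makes the structure attractive when many moving clients repeatedly ask for their closest base station.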

    Predictive closed-loop service automation in O-RAN based network slicing

    Network slicing introduces customized and agile network deployment for managing different service types for various verticals under the same infrastructure. To cater to the dynamic service requirements of these verticals and meet the quality-of-service (QoS) targets specified in the service-level agreement (SLA), network slices need to be isolated through dedicated elements and resources. Additionally, the resources allocated to these slices need to be continuously monitored and intelligently managed, enabling immediate detection and correction of any SLA violation to support automated service assurance in a closed-loop fashion. By reducing human intervention, intelligent closed-loop resource management reduces the cost of offering flexible services. Resource management in a network shared among verticals (potentially administered by different providers) is further facilitated through open and standardized interfaces. Open radio access network (O-RAN) is perhaps the most promising RAN architecture that inherits all the aforementioned features, namely intelligence, open and standard interfaces, and a closed control loop. Inspired by this, in this article we provide a closed-loop, intelligent resource-provisioning scheme for O-RAN slicing to prevent SLA violations. To maintain realism, a real-world dataset of a large operator is used to train a learning solution that optimizes resource utilization in the proposed closed-loop service automation process. Moreover, the deployment architecture and the corresponding workflow, which are cognizant of the O-RAN requirements, are also discussed.
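The monitor-decide-act loop described above can be illustrated with a deliberately simple threshold policy: scale a slice's allocation up before utilization crosses the SLA threshold, and release resources when there is clear headroom. Everything here is an assumption for illustration — the thresholds, the multiplicative scaling, and the PRB-style demand figures; the paper's scheme is learned from operator data rather than hand-coded.

```python
def closed_loop_step(measured_prbs, allocated_prbs, sla_threshold=0.8, margin=0.1):
    """One iteration of a hypothetical closed-loop slice provisioning policy.

    Scale up proactively when utilization nears the SLA threshold, scale
    down when there is ample headroom, otherwise hold the allocation."""
    utilization = measured_prbs / allocated_prbs
    if utilization > sla_threshold:               # imminent SLA violation
        return allocated_prbs * (1 + margin)      # scale up
    if utilization < sla_threshold - 2 * margin:  # clear headroom
        return allocated_prbs * (1 - margin)      # scale down
    return allocated_prbs                         # hold

alloc = 100.0
for demand in [60, 85, 90, 70, 40]:  # observed per-epoch demand (hypothetical)
    alloc = closed_loop_step(demand, alloc)
```

Running the loop over this demand trace scales the slice up through the peak and back down afterwards, which is the proactive-assurance behaviour the closed loop is meant to automate.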

    Cost-efficient Slicing in Virtual Radio Access Networks

    Network slicing is a promising technique that has vastly increased the variety of network services that can be supported through isolated slices in a shared radio access network (RAN). Due to resource isolation, effective resource allocation for multiple coexisting network slices is essential to maximize network resource efficiency. However, the increased network flexibility and programmability offered by virtualized radio access networks (vRANs) come at the expense of higher consumption of computing resources at the network edge. Additionally, the relationship between resource efficiency and computing-cost minimization remains unclear. In this paper, we first perform extensive experiments using the vRAN testbed we developed and assess vRAN resource consumption under different settings and a varying number of users. Then, leveraging our experimental findings, we formulate the problem of cost-efficient network slice dimensioning, named cost-efficient slicing (CES), which maximizes the difference between the total utility and the CPU cost of network slices. Numerical results confirm that our solution leads to cost-efficient resource slicing, while also accomplishing performance isolation and guaranteeing the target data rate and delay specified in the service-level agreements.
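The "utility minus CPU cost" objective has a clean closed form in the simplest case, which helps build intuition for what CES optimizes. Assuming (purely for illustration — the paper's utility model may differ) a concave per-slice utility w·log(1 + x) and a linear CPU cost c·x, the net benefit is maximized where marginal utility equals marginal cost:

```python
def ces_allocation(weights, cpu_costs):
    """Hypothetical per-slice cost-efficient dimensioning sketch.

    With utility w * log(1 + x) and CPU cost c * x, setting the marginal
    utility equal to the marginal cost, w / (1 + x) = c, gives the
    closed-form optimum x* = max(w / c - 1, 0)."""
    return [max(w / c - 1.0, 0.0) for w, c in zip(weights, cpu_costs)]

# Three hypothetical slices: high-value, low-value, and mid-value.
cpu = ces_allocation(weights=[4.0, 1.0, 2.0], cpu_costs=[1.0, 2.0, 1.0])
# The low-value slice (w/c < 1) receives no extra CPU at all.
```

Slices whose utility weight does not cover their CPU cost are allocated nothing, which is exactly the cost-efficiency trade-off the CES formulation captures at scale.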

    Network slice reconfiguration by exploiting deep reinforcement learning with large action space

    It is widely acknowledged that network slicing can address the diverse usage scenarios and connectivity services that the 5G-and-beyond system needs to support. To guarantee performance isolation while maximizing network resource utilization under dynamic traffic load, network slices need to be reconfigured adaptively. However, the fine-grained resource reconfiguration problem is commonly believed to be intractable due to the extremely high computational complexity caused by its numerous variables. In this paper, we investigate reconfiguration within a core network slice with the aim of minimizing long-term resource consumption by exploiting deep reinforcement learning (DRL). This problem is also intractable for a conventional Deep Q-Network (DQN), as it has a multi-dimensional discrete action space that is difficult to explore efficiently. To address the curse of dimensionality, we propose a discrete Branching Dueling Q-network (discrete BDQ) that incorporates the action-branching architecture into DQN, drastically decreasing the number of estimated actions. Based on the discrete BDQ network, we develop an intelligent network slice reconfiguration algorithm (INSRA). Extensive simulation experiments are conducted to evaluate the performance of INSRA, and the numerical results reveal that INSRA can minimize long-term resource consumption and achieve high resource efficiency compared with several benchmark algorithms.
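The dimensionality argument behind action branching is easy to make concrete. A flat DQN over a multi-dimensional discrete action space must estimate one Q-value per joint action, which grows exponentially; branching keeps one head per dimension and picks each sub-action independently. The sketch below (toy numbers, no neural network) shows only that combinatorial reduction and the branch-wise argmax, not the paper's BDQ architecture itself.

```python
def joint_action_size(bins, dims):
    """Number of joint actions a flat DQN must enumerate: bins ** dims."""
    return bins ** dims

def branched_action(q_values_per_branch):
    """Action branching: each branch holds Q-estimates for one action
    dimension only, so we take an independent argmax per branch and the
    network needs just bins * dims outputs instead of bins ** dims."""
    return [max(range(len(q)), key=q.__getitem__) for q in q_values_per_branch]

# 10 resource types, each reconfigurable to one of 8 levels (hypothetical):
flat = joint_action_size(8, 10)  # over a billion joint actions
branched_outputs = 8 * 10        # only 80 Q-outputs with branching
action = branched_action([[0.1, 0.9, 0.3], [0.5, 0.2, 0.7]])  # one index per branch
```

In the full BDQ design the branches share a common state representation and a dueling value stream, so the per-branch argmaxes are coordinated through the shared features rather than being fully independent.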

    Resource Allocation in SDN/NFV-Enabled Core Networks

    For next-generation core networks, it is anticipated that communication, storage, and computing resources will be integrated into one unified, programmable, and flexible infrastructure. Software-defined networking (SDN) and network function virtualization (NFV) are two key enablers. SDN decouples the network control and forwarding functions, which facilitates network management and enables network programmability. NFV allows network functions to be virtualized and placed on high-capacity servers located anywhere in the network, not only on dedicated devices as in current networks. Driven by SDN and NFV platforms, the future network architecture is expected to feature centralized network management, virtualized function chaining, reduced capital and operational costs, and enhanced service quality. The combination of SDN and NFV provides a viable technical route toward future communication networks. It is imperative to efficiently manage, allocate, and optimize the heterogeneous resources, including computing, storage, and communication resources, for the customized services to achieve better quality-of-service (QoS) provisioning. This thesis conducts in-depth research on efficient resource allocation for SDN/NFV-enabled core networks across multiple aspects and dimensions. The resource allocation task comprises three aspects. Given the traffic metrics, QoS requirements, and resource constraints of the substrate network, we first need to compose a virtual network function (VNF) chain to form a virtual network (VN) topology. Then, the virtual resources allocated to each VNF or virtual link need to be optimized to minimize the provisioning cost while satisfying the QoS requirements. Next, we need to embed the virtual network (i.e., the VNF chain) onto the substrate network, assigning physical resources in an economical way to meet the resource demands of the VNFs and links.
This involves determining the locations of the NFV nodes that host the VNFs and the routing from source to destination. Finally, we need to schedule the VNFs of multiple services to minimize the service completion time and maximize network performance. In this thesis, we study resource allocation in SDN/NFV-enabled core networks from the aforementioned three aspects. First, we jointly study how to design the topology of a VN and embed the resultant VN onto a substrate network with the objective of minimizing the embedding cost while satisfying the QoS requirements. In VN topology design, optimizing the resource requirement of each virtual node and link is necessary: without topology optimization, the resources assigned to the virtual network may be insufficient or redundant, leading to degraded service quality or increased embedding cost. The joint problem is formulated as a mixed-integer nonlinear program (MINLP), where queueing theory is used to analyze the network delay and to define the optimal set of physical resource requirements at network elements. Two algorithms are proposed to obtain optimal/near-optimal solutions of the MINLP model. Second, we address the multi-SFC embedding problem via a game-theoretical approach, considering the heterogeneity of NFV nodes, the effect of processing-resource sharing among various VNFs, and the capacity constraints of NFV nodes. In the proposed resource-constrained multi-SFC embedding game (RC-MSEG), each SFC is treated as a player whose objective is to minimize the overall latency experienced by the supported service flow, while satisfying the capacity constraints of all its NFV nodes. Due to processing-resource sharing, additional delay is incurred and integrated into the overall latency of each SFC. The capacity constraints of NFV nodes are enforced by adding a penalty term to the cost function of each player and guaranteed by a prioritized admission control mechanism. 
We first prove that the proposed game RC-MSEG is an exact potential game admitting at least one pure Nash equilibrium (NE) and possessing the finite improvement property (FIP). Then, we design two iterative algorithms, namely the best-response (BR) algorithm, with fast convergence, and the spatial adaptive play (SAP) algorithm, with great potential to obtain the best NE of the proposed game. Third, the VNF scheduling problem is investigated to minimize the makespan (i.e., overall completion time) of all services while satisfying their different end-to-end (E2E) delay requirements. The problem is formulated as a mixed-integer linear program (MILP), which is NP-hard, with computational complexity increasing exponentially as the network size expands. To solve the MILP with high efficiency and accuracy, the original problem is reformulated as a Markov decision process (MDP) with a variable action set. Then, a reinforcement learning (RL) algorithm is developed to learn the best scheduling policy by continuously interacting with the network environment. The proposed learning algorithm determines the variable action set at each decision-making state and accommodates the different execution times of the actions. The reward function in the proposed algorithm is carefully designed to realize delay-aware VNF scheduling. To sum up, it is of great importance to integrate SDN and NFV in the same network to accelerate the evolution toward software-enabled network services. We have studied VN topology design, multi-VNF chain embedding, and delay-aware VNF scheduling to achieve efficient resource allocation in different dimensions. The proposed approaches pave the way for exploiting network slicing to improve resource utilization and facilitate QoS-guaranteed service provisioning in SDN/NFV-enabled networks.
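The queueing-theoretic dimensioning step in the VN topology design can be illustrated with the simplest model. Assuming (for illustration only — the thesis's model may be richer) each VNF behaves as an M/M/1 queue with Poisson arrivals at rate λ, the mean sojourn time is T = 1/(μ − λ), so meeting a per-VNF delay target D requires a service rate μ ≥ λ + 1/D. The even split of the end-to-end budget across VNFs below is likewise an assumption, not the thesis's allocation rule.

```python
def min_service_rate(arrival_rate, delay_target):
    """Minimum M/M/1 service rate meeting a mean-delay target:
    T = 1 / (mu - lambda) <= D  implies  mu >= lambda + 1 / D."""
    return arrival_rate + 1.0 / delay_target

def chain_rates(arrival_rate, e2e_delay, n_vnfs):
    """Dimension every VNF in a chain against an even share of the
    end-to-end delay budget (a simplifying assumption)."""
    per_vnf_delay = e2e_delay / n_vnfs
    return [min_service_rate(arrival_rate, per_vnf_delay) for _ in range(n_vnfs)]

# 100 pkt/s through a 5-VNF chain with a 50 ms end-to-end budget:
rates = chain_rates(arrival_rate=100.0, e2e_delay=0.05, n_vnfs=5)
```

Mappings like this are how the delay constraints of the MINLP translate into concrete physical resource requirements at each network element.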