2,484 research outputs found

    Mobile traffic forecasting for maximizing 5G network slicing resource utilization

    Get PDF
    IEEE INFOCOM 2017 - IEEE Conference on Computer Communications. Abstract: The emerging network slicing paradigm for 5G provides new business opportunities by enabling multi-tenancy support. At the same time, new technical challenges are introduced, as novel resource allocation algorithms are required to accommodate different business models. In particular, infrastructure providers need to implement radically new admission control policies to decide on network slice requests depending on their Service Level Agreements (SLAs). When implementing such admission control policies, infrastructure providers may apply forecasting techniques in order to adjust the allocated slice resources so as to optimize network utilization while meeting the network slices' SLAs. This paper focuses on the design of three key network slicing building blocks responsible for (i) traffic analysis and prediction per network slice, (ii) admission control decisions for network slice requests, and (iii) adaptive correction of the forecasted load based on measured deviations. Our results show very substantial potential gains in system utilization, as well as a trade-off between conservative forecasting configurations and more aggressive ones (higher gains at higher SLA risk). This work has been partially funded by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 671584 (5G NORMA).
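    The three building blocks above admit a compact illustration. The sketch below is our own hedged reading of the idea (the exponential-smoothing forecaster, the over-booking margin parameter, and all class and slice names are assumptions, not the paper's algorithm): an admission controller books capacity against forecasted rather than peak SLA loads and corrects its forecasts from measured deviations.

```python
# Illustrative sketch, assuming a simple exponential-smoothing forecaster and a
# tunable over-booking margin; names are ours, not the paper's.
from dataclasses import dataclass, field

@dataclass
class Slice:
    name: str
    sla_peak: float                      # peak load guaranteed by the SLA
    history: list = field(default_factory=list)

    def forecast(self, alpha: float = 0.3) -> float:
        """Exponentially smoothed load forecast; falls back to the SLA peak."""
        if not self.history:
            return self.sla_peak
        est = self.history[0]
        for x in self.history[1:]:
            est = alpha * x + (1 - alpha) * est
        return est

class AdmissionController:
    def __init__(self, capacity: float, margin: float = 1.1):
        self.capacity = capacity
        self.margin = margin             # >1 is conservative, <1 is aggressive
        self.slices: list[Slice] = []

    def admit(self, candidate: Slice) -> bool:
        """Admit a slice if the forecasted load of admitted slices still
        leaves room for the candidate's SLA peak."""
        booked = sum(self.margin * s.forecast() for s in self.slices)
        if booked + candidate.sla_peak <= self.capacity:
            self.slices.append(candidate)
            return True
        return False

    def observe(self, name: str, measured_load: float) -> None:
        """Feed measured load back so later forecasts self-correct."""
        for s in self.slices:
            if s.name == name:
                s.history.append(measured_load)

ctrl = AdmissionController(capacity=100.0)
print(ctrl.admit(Slice("eMBB", sla_peak=60.0)))   # True: empty system
ctrl.observe("eMBB", 35.0)                        # measured well below SLA peak
print(ctrl.admit(Slice("IoT", sla_peak=50.0)))    # True only thanks to forecasting
```

    Without the measured-load feedback, the second request would be rejected against the first slice's full SLA peak; the margin parameter mirrors the conservative-versus-aggressive trade-off discussed above.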

    Explanation-Guided Deep Reinforcement Learning for Trustworthy 6G RAN Slicing

    Full text link
    The complexity of emerging sixth-generation (6G) wireless networks has sparked an upsurge in adopting artificial intelligence (AI) to underpin the challenges in network management and resource allocation under strict service level agreements (SLAs). It inaugurates the era of massive network slicing as a distributive technology, where tenancy would be extended to the final consumer by pervading the digitalization of vertical immersive use-cases. Despite the promising performance of deep reinforcement learning (DRL) in network slicing, its lack of transparency and interpretability, and concerns over its opaque models, impede users from trusting the DRL agent's decisions or predictions. This problem becomes even more pronounced when there is a need to provision highly reliable and secure services. Leveraging eXplainable AI (XAI) in conjunction with an explanation-guided approach, we propose an eXplainable reinforcement learning (XRL) scheme to surmount the opaqueness of black-box DRL. The core concept behind the proposed method is the intrinsic interpretability of the reward hypothesis, aiming to encourage DRL agents to learn the best actions for specific network slice states while coping with the conflict-prone and complex relations of state-action pairs. To validate the proposed framework, we target a resource allocation optimization problem where multi-agent XRL strives to allocate the optimal available radio resources to meet the SLA requirements of slices. Finally, we present numerical results to showcase the superiority of the adopted XRL approach over the DRL baseline. To the best of our knowledge, this is the first work that studies the feasibility of an explanation-guided DRL approach in the context of 6G networks. Comment: 6 pages, 6 figures
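    As a rough, self-contained illustration of the reward-centric explainability idea (not the paper's XRL algorithm; all function names, weights, and numbers are our assumptions), the sketch below decomposes a slice-allocation reward into per-slice SLA terms, so every candidate action carries a per-slice attribution that can serve as an explanation of why the agent prefers it.

```python
# Illustrative sketch (not the paper's XRL method): a reward decomposed into
# per-slice SLA terms, whose per-slice attribution "explains" each action.
import numpy as np

def sla_satisfaction(allocated: np.ndarray, demanded: np.ndarray) -> np.ndarray:
    """Per-slice satisfaction in [0, 1]: fraction of demanded resources served."""
    return np.clip(allocated / np.maximum(demanded, 1e-9), 0.0, 1.0)

def decomposed_reward(allocated, demanded, weights):
    """Total reward plus its per-slice attribution (the 'explanation')."""
    terms = weights * sla_satisfaction(allocated, demanded)
    return terms.sum(), terms

# Toy state: three slices competing for 100 resource blocks.
demanded = np.array([50.0, 40.0, 30.0])
weights = np.array([1.0, 2.0, 1.0])        # a URLLC-like slice weighted higher

candidate_actions = [
    np.array([50.0, 30.0, 20.0]),
    np.array([30.0, 40.0, 30.0]),
]
for action in candidate_actions:
    total, terms = decomposed_reward(action, demanded, weights)
    print(f"action={action}, reward={total:.2f}, per-slice explanation={terms.round(2)}")
```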

    Elastic Multi-resource Network Slicing: Can Protection Lead to Improved Performance?

    Full text link
    In order to meet the performance/privacy requirements of future data-intensive mobile applications, e.g., self-driving cars, mobile data analytics, and AR/VR, service providers are expected to draw on shared storage/computation/connectivity resources at the network "edge". To be cost-effective, a key functional requirement for such infrastructure is enabling the sharing of heterogeneous resources amongst tenants/service providers supporting spatially varying and dynamic user demands. This paper proposes a resource allocation criterion, namely Share Constrained Slicing (SCS), for slices allocated predefined shares of the network's resources, which extends the traditional alpha-fairness criterion by striking a balance between inter- and intra-slice fairness and overall efficiency. We show that SCS has several desirable properties, including slice-level protection, envy-freeness, and load-driven elasticity. In practice, mobile users' dynamics could make the cost of implementing SCS high, so we discuss the feasibility of using a simpler (dynamically) weighted max-min as a surrogate resource allocation scheme. For a setting with stochastic loads and elastic user requirements, we establish a sufficient condition for the stability of the associated coupled network system. Finally, and perhaps surprisingly, we show via extensive simulations that while SCS (and/or the surrogate weighted max-min allocation) provides inter-slice protection, it can achieve improved job delay and/or perceived throughput compared to other weighted max-min based allocation schemes whose intra-slice weight allocation is not share-constrained, e.g., traditional max-min or discriminatory processor sharing.
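    The surrogate scheme mentioned above, a (dynamically) weighted max-min allocation with share-constrained weights, can be sketched for a single resource as follows; this is a minimal illustration under our own assumptions, not the paper's exact SCS criterion.

```python
# Minimal single-resource sketch, under our own assumptions, of a
# share-constrained weighted max-min allocation; not the paper's exact SCS.

def share_constrained_weights(shares, active_users):
    """Per-user weight = slice share / number of active users in that slice."""
    weights = []
    for share, users in zip(shares, active_users):
        weights += [share / users] * users
    return weights

def weighted_max_min(capacity, weights, demands):
    """Progressive (water-)filling: raise a common level scaled by weights,
    capping users whose demand saturates first."""
    demands = list(demands)
    alloc = [0.0] * len(weights)
    remaining = set(range(len(weights)))
    while remaining and capacity > 1e-12:
        wsum = sum(weights[i] for i in remaining)
        level = min(min(demands[i] / weights[i] for i in remaining),
                    capacity / wsum)
        saturated = set()
        for i in remaining:
            give = weights[i] * level
            alloc[i] += give
            demands[i] -= give
            capacity -= give
            if demands[i] <= 1e-12:
                saturated.add(i)
        if not saturated:     # capacity exhausted before any demand saturates
            break
        remaining -= saturated
    return alloc

# Slice A (share 0.5) has one active user; slice B (share 0.5) has three.
weights = share_constrained_weights(shares=[0.5, 0.5], active_users=[1, 3])
print(weighted_max_min(capacity=100.0, weights=weights,
                       demands=[80.0, 30.0, 30.0, 30.0]))
# Slice A's lone user keeps ~50 units: share-constrained weights protect slices.
```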

    A novel admission control scheme for network slicing based on squatting and kicking strategies

    Get PDF
    New services and applications impose different quality of service (QoS) requirements on network slicing. To meet differentiated service requirements, the current Internet service model has to support emerging real-time applications from 5G networks. Admission control mechanisms are expected to be one of the key components of the future integrated-service Internet model, providing multi-level service guarantees for the different classes (slices) of service. Therefore, this paper introduces a new flexible admission control mechanism based on squatting and kicking techniques (SKM), which can be employed under a network slicing scenario. From the results, SKM provides 100% total resource utilization in the bandwidth context and a 100% acceptance ratio for the highest-priority class under different input traffic volumes, which cannot be achieved by other existing schemes such as the AllocTC-Sharing model due to priority constraints.
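    Reading only from the abstract, the squatting-and-kicking idea can be sketched roughly as below; the class structure, the all-or-nothing rejection rule, and every number are our assumptions rather than the authors' exact SKM algorithm.

```python
# Hedged sketch of a squatting-and-kicking admission rule: a class first uses
# free capacity (its own quota plus "squatted" spare quota of other classes)
# and, if still short, "kicks" bandwidth held by lower-priority classes.
# Class indices, numbers, and the rejection rule are illustrative assumptions.

class SKMLink:
    def __init__(self, capacity, num_classes):
        self.capacity = capacity
        self.held = [0.0] * num_classes   # index 0 = highest priority

    def admit(self, cls, demand):
        free = self.capacity - sum(self.held)
        need = max(0.0, demand - free)              # shortfall after squatting
        kickable = sum(self.held[cls + 1:])         # lower-priority holdings
        if need > kickable + 1e-9:
            return False                            # reject; nothing changes
        # kick the lowest priorities first until the shortfall is covered
        for lower in range(len(self.held) - 1, cls, -1):
            if need <= 0:
                break
            taken = min(need, self.held[lower])
            self.held[lower] -= taken
            need -= taken
        self.held[cls] += demand
        return True

link = SKMLink(capacity=100.0, num_classes=3)
print(link.admit(2, 60.0))   # low priority fills most of the link -> True
print(link.admit(1, 30.0))   # fits in the remaining spare capacity -> True
print(link.admit(0, 40.0))   # top priority kicks class 2 down to 30 -> True
print(link.held)             # [40.0, 30.0, 30.0]
```

    The last request succeeds only because the top-priority class is allowed to kick bandwidth away from the lowest-priority class, which is the behaviour behind the 100% acceptance ratio reported for the highest-priority class.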

    Deep Reinforcement Learning for Resource Management in Network Slicing

    Full text link
    Network slicing has emerged as a new business opportunity for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that keep resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which focuses on how to interact with the environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is assumed to be a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate the application of DRL to some typical resource management scenarios for network slicing, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we also discuss the possible challenges of applying DRL to network slicing from a general perspective. Comment: The manuscript has been accepted by IEEE Access in Nov. 2018
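    As a toy illustration of DRL-style resource management for slicing (far simpler than the scenarios evaluated in the paper, and entirely our own assumptions), the sketch below uses tabular Q-learning to learn a bandwidth split between two slices from a coarse demand state and an SLA-penalized reward.

```python
# Toy tabular Q-learning sketch for radio resource slicing; the state space,
# action set, and reward are illustrative assumptions, not the paper's setup.
import random
from collections import defaultdict

random.seed(0)

SPLITS = [(0.2, 0.8), (0.4, 0.6), (0.6, 0.4), (0.8, 0.2)]   # bandwidth shares

def sample_demands():
    """Coarse per-slice demand levels (fractions of total bandwidth)."""
    return (random.choice([0.2, 0.5, 0.8]), random.choice([0.2, 0.5, 0.8]))

def reward(split, demands):
    served = sum(min(s, d) for s, d in zip(split, demands))
    violations = sum(1 for s, d in zip(split, demands) if s < d)
    return served - 0.5 * violations      # utilization minus an SLA penalty

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = sample_demands()
for _ in range(20000):
    if random.random() < eps:                      # epsilon-greedy exploration
        a = random.randrange(len(SPLITS))
    else:
        a = max(range(len(SPLITS)), key=lambda i: Q[(state, i)])
    r = reward(SPLITS[a], state)
    nxt = sample_demands()
    best_next = max(Q[(nxt, i)] for i in range(len(SPLITS)))
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

for demands in [(0.2, 0.8), (0.8, 0.2), (0.5, 0.5)]:
    best = max(range(len(SPLITS)), key=lambda i: Q[(demands, i)])
    print(demands, "->", SPLITS[best])
```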

    Performance Assessment of ITU-T Compliant Machine Learning Enhancements for 5G RAN Network Slicing

    Get PDF
    Network slicing is a technique introduced by 3GPP to enable multi-tenant operation in 5G systems. However, the support of slicing at the air interface requires not only efficient optimization algorithms operating in real time but also the tight integration of slicing into the 5G control plane. In this paper, we first present a priority-based mechanism enabling well-defined performance isolation among slices competing for resources. Then, to speed up the resource arbitration process, we propose and compare several supervised machine learning (ML) techniques. We show how to embed the proposed approach into the ITU-T standardized ML architecture. The proposed ML enhancement is evaluated under realistic traffic conditions with respect to the performance criteria defined by GSMA, while explicitly accounting for 5G millimeter-wave channel conditions. Our results show that ML techniques are able to provide suitable approximations for the resource allocation process, ensuring slice performance isolation, efficient resource use, and fairness. Among the considered algorithms, polynomial regression shows the best results, outperforming the exact solution algorithm by 5–6 orders of magnitude in execution time, and outperforming both neural network and random forest algorithms in accuracy (by 20–40%), in sensitivity to workload variations, and in required training sample size. Finally, ML algorithms are generally prone to service level agreement (SLA) violations under high load and time-varying channel conditions, implying that an SLA enforcement system is needed in ITU-T's 5G ML framework.
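    The speed/accuracy idea behind the ML enhancement can be illustrated with a hedged sketch: train a polynomial-regression surrogate on the outputs of an exact but slow arbitration routine. The brute-force "exact" solver, the proportional-fair utility, and all parameters below are stand-ins of our own, not the algorithm evaluated in the paper.

```python
# Hedged illustration: a polynomial-regression surrogate for a slow "exact"
# resource arbitration routine (the solver here is our own stand-in).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def exact_share(d1, d2, steps=2000):
    """Brute-force search for slice 1's bandwidth share maximizing a
    weighted proportional-fair utility (deliberately slow)."""
    grid = np.linspace(0.01, 0.99, steps)
    utility = d1 * np.log(grid) + d2 * np.log(1.0 - grid)
    return grid[np.argmax(utility)]

# Generate training data from the exact solver.
demands = rng.uniform(0.1, 1.0, size=(500, 2))
targets = np.array([exact_share(d1, d2) for d1, d2 in demands])

# Degree-3 polynomial regression as the fast surrogate.
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(demands, targets)

test = np.array([[0.9, 0.3], [0.2, 0.8]])
print("surrogate:", model.predict(test).round(3))
print("exact    :", [round(exact_share(*d), 3) for d in test])
```

    Once fitted, the surrogate answers allocation queries with a single polynomial evaluation instead of a grid search, which is the kind of execution-time gain the abstract attributes to polynomial regression.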