
    GAN-powered Deep Distributional Reinforcement Learning for Resource Management in Network Slicing

    Network slicing is a key technology in 5G communication systems. Its purpose is to dynamically and efficiently allocate resources for diversified services with distinct requirements over a common underlying physical infrastructure. Therein, demand-aware resource allocation is of significant importance to network slicing. In this paper, we consider a scenario that contains several slices in a radio access network with base stations that share the same physical resources (e.g., bandwidth or slots). We leverage deep reinforcement learning (DRL) to solve this problem by treating the varying service demands as the environment state and the allocated resources as the environment action. To reduce the effects of the randomness and noise embedded in the received service level agreement (SLA) satisfaction ratio (SSR) and spectrum efficiency (SE), we first propose a generative adversarial network-powered deep distributional Q network (GAN-DDQN) that learns the action-value distribution by minimizing the discrepancy between the estimated and the target action-value distributions. We put forward a reward-clipping mechanism to stabilize GAN-DDQN training against the effects of widely-spanning utility values. Moreover, we develop Dueling GAN-DDQN, which uses a specially designed dueling generator to learn the action-value distribution by estimating the state-value distribution and the action advantage function. Finally, we verify the performance of the proposed GAN-DDQN and Dueling GAN-DDQN algorithms through extensive simulations.
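    To make the distributional idea concrete, a minimal sketch follows: a generator network maps a state plus random noise to samples of the per-action return distribution, and rewards are clipped before use. Layer sizes, the noise dimension, the sample count, and the clipping range are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the GAN-DDQN generator idea: map (state, noise) to
# samples of the action-value distribution Z(s, a). Hyperparameters here
# are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class DistributionalGenerator(nn.Module):
    def __init__(self, state_dim, n_actions, noise_dim=8, n_samples=32):
        super().__init__()
        self.noise_dim = noise_dim
        self.n_samples = n_samples
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state):
        # Draw several noise vectors per state; each yields one sample
        # of the per-action return distribution.
        b = state.shape[0]
        z = torch.randn(b, self.n_samples, self.noise_dim)
        s = state.unsqueeze(1).expand(-1, self.n_samples, -1)
        return self.net(torch.cat([s, z], dim=-1))  # (b, n_samples, n_actions)

def clip_reward(r, lo=-1.0, hi=1.0):
    # Reward clipping to guard training against widely-spanning utilities.
    return max(lo, min(hi, r))

gen = DistributionalGenerator(state_dim=4, n_actions=3)
samples = gen(torch.zeros(2, 4))
q_mean = samples.mean(dim=1)            # estimate Q(s, a) as the sample mean
greedy_action = q_mean.argmax(dim=-1)   # act greedily on the mean
```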

    Slice-Aware Radio Resource Management for Future Mobile Networks

    The concept of network slicing has been introduced to enable mobile networks to accommodate the multiple heterogeneous use cases that are anticipated to be served within a single physical infrastructure. Slices are end-to-end virtual networks that share the resources of a physical network, spanning the core network (CN) and the radio access network (RAN). RAN slicing can be more challenging than CN slicing because it deals with the distribution of radio resources, whose capacity is not constant over time and is hard to extend. The main challenge in RAN slicing is to improve multiplexing gains while assuring sufficient isolation between slices, meaning that no slice can negatively influence the others. In this work, a flexible and configurable framework for RAN slicing is provided, where the diverse requirements of slices are taken into account and slice management algorithms adjust the control parameters of different radio resource management (RRM) mechanisms to satisfy the slices' service level agreements (SLAs). A new entity, called the RAN slice orchestrator, is introduced to translate the key performance indicator (KPI) targets of the SLAs into these control parameters. Diverse algorithms governing this entity are introduced, ranging from heuristics-based to model-free methods. In addition, a protection mechanism prevents slices from negatively influencing each other's performance. The simulation-based analysis demonstrates the feasibility of slicing the RAN with multiplexing gains and slice isolation.
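    As a rough illustration of the orchestrator's KPI-to-parameter translation, the sketch below implements a simple hypothetical heuristic: slices that miss their SLA satisfaction target receive a larger minimum resource share, renormalized so the total budget is preserved. The update rule and all names are assumptions for illustration, not the thesis's algorithms.

```python
# Hypothetical KPI-to-control-parameter translation: nudge up the resource
# share of any slice missing its SLA satisfaction target, then renormalize
# so shares still sum to 1 (isolation via a bounded budget).

def update_resource_shares(shares, kpi_measured, kpi_target, step=0.05):
    for s in shares:
        if kpi_measured[s] < kpi_target[s]:
            shares[s] += step
    total = sum(shares.values())
    return {s: v / total for s, v in shares.items()}

shares = {"eMBB": 0.5, "URLLC": 0.3, "mMTC": 0.2}
measured = {"eMBB": 0.90, "URLLC": 0.99, "mMTC": 0.97}  # SLA satisfaction ratios
target = {"eMBB": 0.95, "URLLC": 0.95, "mMTC": 0.95}
print(update_resource_shares(shares, measured, target))  # eMBB share grows
```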

    Network Flow Optimization Using Reinforcement Learning


    AI gym for Networks

    5G networks are delivering better services and connecting more devices, but at the same time are becoming more complex. Problems like resource management and control optimization are increasingly dynamic and difficult to model, making it very hard to use traditional model-based optimization techniques. Artificial intelligence (AI) offers techniques such as deep reinforcement learning (DRL), which uses the interaction between an agent and its environment to learn which action to take to obtain the best possible result. Researchers usually need to create and develop a simulation environment for their scenario of interest before they can experiment with DRL algorithms. This takes a large amount of time away from the research itself, while the lack of a common environment makes it difficult to compare algorithms. The proposed solution aims to fill this gap with a tool that facilitates the setup of DRL training environments for network scenarios. The developed tool combines three open-source packages: Containernet to simulate the connections between devices, the Ryu Controller as the software-defined network controller, and OpenAI Gym, which is responsible for the communication between the environment and the DRL agent. With this tool, users can create more scenarios in a short period, opening space to set up different environments and solve various problems, as well as providing a common environment in which different agents can be compared. The developed software is used to compare the performance of several DRL agents in two different network control problems: routing and network slice admission control. For slice admission control, a novel DRL-based solution jointly optimizes the admission of a network slice and the placement of its traffic onto the physical resources.
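    A skeletal Gym environment of the kind the tool sets up might look as follows, assuming the classic `gym` API (pre-0.26); in the real tool the observations and rewards would come from Containernet and the Ryu Controller rather than the toy dynamics used here.

```python
# Toy slice-admission environment in the classic OpenAI Gym API: observe the
# current load and an arriving request, decide admit (1) or reject (0).
# Capacity, ranges, and rewards are illustrative assumptions.
import gym
import numpy as np
from gym import spaces

class SliceAdmissionEnv(gym.Env):
    def __init__(self, capacity=10.0):
        super().__init__()
        self.capacity = capacity
        self.action_space = spaces.Discrete(2)  # 0 = reject, 1 = admit
        self.observation_space = spaces.Box(
            0.0, capacity, shape=(2,), dtype=np.float32)

    def reset(self):
        self.load = 0.0
        self.request = float(np.random.uniform(0.5, 3.0))
        return np.array([self.load, self.request], dtype=np.float32)

    def step(self, action):
        reward = 0.0
        if action == 1:
            if self.load + self.request <= self.capacity:
                self.load += self.request
                reward = self.request   # revenue for served traffic
            else:
                reward = -1.0           # penalty for over-admission
        self.request = float(np.random.uniform(0.5, 3.0))
        obs = np.array([self.load, self.request], dtype=np.float32)
        return obs, reward, False, {}
```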

    On the Specialization of FDRL Agents for Scalable and Distributed 6G RAN Slicing Orchestration

    ©2022 IEEE. Reprinted, with permission, from Rezazadeh, F., Zanzi, L., Devoti, F. et al., On the Specialization of FDRL Agents for Scalable and Distributed 6G RAN Slicing Orchestration, IEEE Transactions on Vehicular Technology (online), October 2022.
    Network slicing enables multiple virtual networks to be instantiated and customized to meet heterogeneous use case requirements over 5G and beyond network deployments. However, most of the solutions available today face scalability issues when considering many slices, due to centralized controllers requiring a holistic view of the resource availability and consumption over different networking domains. To tackle this challenge, we design a hierarchical architecture to manage network slice resources in a federated manner. Driven by the rapid evolution of deep reinforcement learning (DRL) schemes and the Open RAN (O-RAN) paradigm, we propose a set of traffic-aware local decision agents (DAs) dynamically placed in the radio access network (RAN). These federated decision entities tailor their resource allocation policy to the long-term dynamics of the underlying traffic, defining specialized clusters that enable faster training and reduced communication overhead. Aided by a traffic-aware agent selection algorithm, our proposed federated DRL approach provides higher resource efficiency than benchmark solutions by quickly reacting to end-user mobility patterns and reducing costly interactions with centralized controllers.
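    A hedged sketch of the federated mechanics described above: agents are grouped by traffic-profile similarity, and model weights are averaged only within a cluster, reducing interactions with any central controller. The cosine-similarity metric and threshold are assumptions, not the paper's agent selection algorithm.

```python
# Group decision agents by traffic similarity, then run a FedAvg step only
# inside each cluster. Similarity metric and threshold are assumptions.
import numpy as np

def cluster_by_traffic(profiles, threshold=0.9):
    """Greedy clustering of agents whose normalized traffic profiles have
    cosine similarity above `threshold` with the cluster's first member."""
    clusters = []
    for i, p in enumerate(profiles):
        for c in clusters:
            q = profiles[c[0]]
            sim = p @ q / (np.linalg.norm(p) * np.linalg.norm(q))
            if sim >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def federated_average(weight_sets):
    # FedAvg step: element-wise mean of the cluster members' parameters.
    return np.mean(weight_sets, axis=0)

profiles = [np.array([0.8, 0.2]), np.array([0.75, 0.25]), np.array([0.1, 0.9])]
print(cluster_by_traffic(profiles))   # -> [[0, 1], [2]]
```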

    Learning from Peers: Deep Transfer Reinforcement Learning for Joint Radio and Cache Resource Allocation in 5G RAN Slicing

    Full text link
    Radio access network (RAN) slicing is an important pillar in cross-domain network slicing, which covers RAN, edge, transport and core slicing. The evolving network architecture requires the orchestration of multiple network resources such as radio and cache resources. In recent years, machine learning (ML) techniques have been widely applied to network management. However, most existing works do not take advantage of the knowledge transfer capability in ML. In this paper, we propose a deep transfer reinforcement learning (DTRL) scheme for joint radio and cache resource allocation to serve 5G RAN slicing. We first define a hierarchical architecture for the joint resource allocation. Then we propose two DTRL algorithms: Q-value-based deep transfer reinforcement learning (QDTRL) and action selection-based deep transfer reinforcement learning (ADTRL). In the proposed schemes, learner agents utilize expert agents' knowledge to improve their performance on target tasks. The proposed algorithms are compared with both the model-free exploration bonus deep Q-learning (EB-DQN) and the model-based priority proportional fairness and time-to-live (PPF-TTL) algorithms. Compared with EB-DQN, our proposed DTRL-based method presents 21.4% lower delay for the Ultra Reliable Low Latency Communications (URLLC) slice and 22.4% higher throughput for the enhanced Mobile Broadband (eMBB) slice, while achieving significantly faster convergence than EB-DQN. Moreover, 40.8% lower URLLC delay and 59.8% higher eMBB throughput are observed with respect to PPF-TTL.
    Comment: Under review at IEEE Transactions on Cognitive Communications and Networking
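    As a rough sketch of action-selection-based transfer (the ADTRL flavor), the learner below follows the expert's greedy action with a decaying probability and otherwise acts epsilon-greedily on its own Q-values. The advice probability and decay schedule are illustrative assumptions, not the paper's exact algorithm.

```python
# Action-selection-based transfer sketch: with probability p_advice the
# learner takes the expert's greedy action; p_advice decays each episode
# so the learner gradually becomes independent of the expert.
import random

def select_action(learner_q, expert_q, state, epsilon, p_advice):
    actions = range(len(learner_q[state]))
    if random.random() < p_advice:                     # consult the expert
        return max(actions, key=lambda a: expert_q[state][a])
    if random.random() < epsilon:                      # explore
        return random.choice(list(actions))
    return max(actions, key=lambda a: learner_q[state][a])  # exploit own Q

def decay(p_advice, rate=0.995):
    # Per-episode decay so expert influence fades over training.
    return p_advice * rate

# Usage with toy tabular Q-values for a single state:
learner_q = {0: [0.1, 0.0]}
expert_q = {0: [0.0, 1.0]}
a = select_action(learner_q, expert_q, state=0, epsilon=0.1, p_advice=0.8)
```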