Reinforcement Learning for Traffic-Adaptive Sleep Mode Management in 5G Networks

Abstract

The ever-increasing energy consumption of mobile networks has raised operators' concerns about growing network operating costs. Among the different segments of mobile networks, base stations (BSs) account for a major share of energy consumption. To reduce BS consumption, BS components with similar (de)activation times can be grouped and put to sleep during their periods of inactivity. The deeper and more energy-efficient a sleep mode (SM) is, the longer its (de)activation transition time, which incurs a proportionally longer service interruption. It is therefore challenging to decide on the best SM in a timely manner, given the daily traffic fluctuation and the service-level constraints imposed on delay/dropping. In this study, we leverage an online reinforcement learning technique, SARSA, and propose an algorithm that selects the SM given the time and the BS load. We use real mobile traffic obtained from a BS in Stockholm to evaluate the performance of the proposed algorithm. Simulation results show that considerable energy saving can be achieved at the cost of an acceptable packet dropping level, compared to two lower/upper baselines, namely fixed (non-adaptive) SMs and an optimal non-causal solution.
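The abstract does not include code; as a rough illustration of the described approach, the sketch below shows a tabular SARSA agent whose state is a (time-of-day, BS load) pair and whose actions are SM depths. The state/action discretization, hyperparameters, and the reward trade-off (e.g., drop_penalty) are assumptions made for illustration, not the authors' implementation.

```python
import random
from collections import defaultdict

# Illustrative (assumed) discretization: 24 time-of-day bins, 10 load bins,
# and four sleep-mode depths (0 = fully active, 3 = deepest sleep).
TIME_BINS, LOAD_BINS = 24, 10
ACTIONS = [0, 1, 2, 3]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration
Q = defaultdict(float)                   # Q[(state, action)] -> estimated value

def epsilon_greedy(state):
    """Choose a sleep mode: explore with probability EPSILON, otherwise exploit Q."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def reward(energy_saved, packets_dropped, drop_penalty=5.0):
    """Hypothetical reward trading off energy saving against dropped packets."""
    return energy_saved - drop_penalty * packets_dropped

def sarsa_update(s, a, r, s_next, a_next):
    """On-policy SARSA update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    td_target = r + GAMMA * Q[(s_next, a_next)]
    Q[(s, a)] += ALPHA * (td_target - Q[(s, a)])

# Usage sketch: at each decision epoch, observe (hour, load) from the traffic
# trace, pick an SM, observe the resulting energy saving and drops, then update.
state, action = (0, 0), 0
for hour, load, energy_saved, packets_dropped in []:   # placeholder trace iterator
    next_state = (hour % TIME_BINS, min(load, LOAD_BINS - 1))
    next_action = epsilon_greedy(next_state)
    sarsa_update(state, action, reward(energy_saved, packets_dropped),
                 next_state, next_action)
    state, action = next_state, next_action
```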
