
    Enhancing user fairness in OFDMA radio access networks through machine learning

    Radio resource scheduling subject to fairness constraints remains very challenging, even in future radio access networks. Standard fairness criteria aim to find the best trade-off between overall throughput maximization and user fairness satisfaction under various network conditions. However, at the Radio Resource Management (RRM) level, existing schedulers are rather static and unable to react to momentary network conditions, so the user fairness measure is not maximized at all times. This paper proposes a dynamic scheduler framework able to parameterize the proportional fair scheduling rule at each Transmission Time Interval (TTI) in order to improve user fairness. To deal with the framework's complexity, the parameterization decisions are approximated by using neural networks as non-linear functions. An actor-critic Reinforcement Learning (RL) algorithm is used to learn the set of non-linear functions that approximate the best fairness parameters to be applied in each momentary state. Simulation results reveal that the proposed framework outperforms existing fairness adaptation techniques as well as other types of RL-based schedulers.
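A minimal sketch of the parameterized proportional fair (PF) idea described above: the generalized PF priority metric exposes exponents that a scheduler (here, the RL agent in the paper) could retune at each TTI. The parameter names `alpha` and `beta` and the sample rates are illustrative assumptions, not taken from the paper.

```python
def pf_priority(instantaneous_rate, average_rate, alpha=1.0, beta=1.0):
    """Generalized PF metric: r^alpha / R^beta.

    Illustrative parameterization (assumed, not from the paper):
      alpha = 1, beta = 0  -> max-throughput behaviour
      alpha = 0, beta = 1  -> strongly fairness-oriented behaviour
      alpha = 1, beta = 1  -> classic proportional fair
    """
    return (instantaneous_rate ** alpha) / (average_rate ** beta)


# At each TTI the scheduler ranks users by the metric and serves the
# highest-priority one; (instantaneous, average) rates below are made up.
users = [(12.0, 4.0), (8.0, 2.0), (20.0, 10.0)]
best = max(range(len(users)), key=lambda k: pf_priority(*users[k]))
# best -> index 1 (priority 8.0 / 2.0 = 4.0)
```

An RL agent as in the paper would adjust `alpha`/`beta` per TTI based on the observed fairness measure rather than keeping them fixed.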

    A comparison of reinforcement learning algorithms in fairness-oriented OFDMA schedulers

    Due to large-scale control problems in 5G access networks, the complexity of radio resource management is expected to increase significantly. Reinforcement learning is seen as a promising solution that can enable intelligent decision-making and reduce the complexity of different optimization problems in radio resource management. The packet scheduler is an important entity of radio resource management that allocates users' data packets in the frequency domain according to the implemented scheduling rule. In this context, by making use of reinforcement learning, we can determine, in each state, the most suitable scheduling rule to employ in order to improve quality of service provisioning. In this paper, we propose a reinforcement learning-based framework to solve scheduling problems with the main focus on meeting user fairness requirements. This framework makes use of feed-forward neural networks to map momentary states to proper parameterization decisions for the proportional fair scheduler. The simulation results show that our reinforcement learning framework outperforms conventional adaptive schedulers oriented on the fairness objective. Discussions are also raised to determine the best reinforcement learning algorithm to implement in the proposed framework based on various scheduler settings.

    Sustainable scheduling policies for radio access networks based on LTE technology

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.

    In LTE access networks, the Radio Resource Management (RRM) module is one of the most important components, responsible for the overall management of radio resources. The packet scheduler is a particular sub-module which assigns the existing radio resources to each user in order to deliver the requested services in the most efficient manner. Data packets are scheduled dynamically at every Transmission Time Interval (TTI), a time window used to take the users' requests and to respond to them accordingly. The scheduling procedure is conducted by using scheduling rules which select different users to be scheduled at each TTI based on priority metrics. Various scheduling rules exist, and they behave differently by balancing scheduler performance in the direction imposed by one of the following objectives: increasing system throughput, maintaining user fairness, or respecting the Guaranteed Bit Rate (GBR), Head of Line (HoL) packet delay, packet loss rate and queue stability requirements. Most static scheduling rules follow sequential multi-objective optimization, in the sense that when the first targeted objective is satisfied, other objectives can be prioritized. When the targeted scheduling objective(s) can be satisfied at each TTI, the LTE scheduler is considered optimal or feasible. The scheduling performance therefore depends on the exploited rule being focused on particular objectives. This study aims to increase the percentage of feasible TTIs for a given downlink transmission by applying a mixture of scheduling rules instead of using one discipline adopted across the entire scheduling session.

    Two types of optimization problems are proposed in this sense: Dynamic Scheduling Rule based Sequential Multi-Objective Optimization (DSR-SMOO), when the applied scheduling rules address the same objective, and Dynamic Scheduling Rule based Concurrent Multi-Objective Optimization (DSR-CMOO), when the pool of rules addresses different scheduling objectives. The best way of solving such complex optimization problems is to adapt and refine scheduling policies that are able to call different rules at each TTI based on the best-matching scheduler conditions (states). The idea is to develop a set of non-linear functions which map the scheduler state at each TTI to optimal probability distributions for selecting the best scheduling rule. Due to the multi-dimensional and continuous characteristics of the scheduler state space, the scheduling functions must be approximated. Moreover, the function approximations are learned through interaction with the RRM environment. Reinforcement Learning (RL) algorithms are used to evaluate and refine the scheduling policies for the considered DSR-SMOO/CMOO optimization problems. Neural networks are used to train the non-linear mapping functions based on the interaction among the intelligent controller, the LTE packet scheduler and the RRM environment. In order to enhance convergence to the feasible state and to reduce the scheduler state space dimension, meta-heuristic approaches are used for channel statement aggregation. Simulation results show that the proposed aggregation scheme is able to outperform other heuristic methods.

    When the aggregation scheme of the channel statements is exploited, the proposed DSR-SMOO/CMOO problems focusing on different objectives, solved by using various RL approaches, are able to: increase the mean percentage of feasible TTIs, minimize the number of TTIs in which the RL approaches punish the actions taken TTI-by-TTI, and minimize the variation of the performance indicators when different simulations are launched in parallel. In this way, the obtained scheduling policies, being focused on multi-objective criteria, are sustainable.

    Keywords: LTE, packet scheduling, scheduling rules, multi-objective optimization, reinforcement learning, channel aggregation, scheduling policies, sustainable
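The core mechanism of the thesis can be sketched as a policy that maps a scheduler state to a probability distribution over candidate scheduling rules. This is a minimal stand-in: a linear scoring function plus softmax replaces the neural-network approximator, and the rule names, state features and weights are all illustrative assumptions.

```python
import math
import random

# Candidate scheduling rules (illustrative names; the thesis uses its own pool).
RULES = ["PF", "MLWDF", "EXP-PF"]


def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def rule_probabilities(state, weights):
    """Map a state vector to rule-selection probabilities.

    `weights` holds one weight vector per rule; a linear score is a
    simplistic stand-in for the thesis's neural-network approximator.
    """
    scores = [sum(w * s for w, s in zip(w_rule, state)) for w_rule in weights]
    return softmax(scores)


def pick_rule(state, weights, rng):
    """Sample a scheduling rule for this TTI from the policy distribution."""
    probs = rule_probabilities(state, weights)
    r, acc = rng.random(), 0.0
    for rule, p in zip(RULES, probs):
        acc += p
        if r < acc:
            return rule
    return RULES[-1]
```

In the thesis, RL would refine the approximator so that, per state, the distribution concentrates on the rule that keeps the scheduler feasible; here the weights are fixed for illustration.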

    360° mulsemedia experience over next generation wireless networks - a reinforcement learning approach

    The next generation of wireless networks targets ambitious key performance indicators, such as very low latency, higher data rates and greater capacity, paving the way for new generations of video streaming technologies, such as 360° or omnidirectional videos. One possible application that could revolutionize streaming technology is 360° MULtiple SEnsorial MEDIA (MULSEMEDIA), which enriches 360° video content with other media objects, such as olfactory, haptic or even thermoceptic ones. However, the adoption of 360° Mulsemedia applications might be hindered by strict Quality of Service (QoS) requirements, such as very large bandwidth and low latency for fast responsiveness to user inputs, which could impact their Quality of Experience (QoE). To this extent, this paper introduces the new concept of 360° Mulsemedia and proposes the use of Reinforcement Learning to enable QoS provisioning over next generation wireless networks, which influences the QoE of end-users.

    5MART: A 5G SMART scheduling framework for optimizing QoS through reinforcement learning

    The massive growth in mobile data traffic, together with the heterogeneity and stringency of the Quality of Service (QoS) requirements of various applications, has put significant pressure on the underlying network infrastructure and represents an important challenge even for the much-anticipated 5G networks. In this context, the solution is to employ smart Radio Resource Management (RRM) in general, and innovative packet scheduling in particular, in order to offer high flexibility and cope with both current and upcoming QoS challenges. Given the increasing demand for bandwidth-hungry applications, conventional scheduling strategies face significant problems in meeting the heterogeneous QoS requirements of various application classes under dynamic network conditions. This paper proposes 5MART, a 5G smart scheduling framework that manages QoS provisioning for heterogeneous traffic. Reinforcement learning and neural networks are jointly used to find the most suitable scheduling decisions based on current networking conditions. Simulation results show that the proposed 5MART framework can achieve up to 50% improvement in the fraction of time (in sub-frames) in which the heterogeneous QoS constraints are met, compared with other state-of-the-art scheduling solutions.

    Towards 5G: A reinforcement learning-based scheduling solution for data traffic management

    Dominated by delay-sensitive and massive data applications, radio resource management in 5G access networks is expected to satisfy very stringent delay and packet loss requirements. In this context, the packet scheduler plays a central role by allocating user data packets in the frequency domain at each predefined time interval. Standard scheduling rules are known to be limited in satisfying higher quality of service (QoS) demands when facing unpredictable network conditions and dynamic traffic circumstances. This paper proposes an innovative scheduling framework able to select different scheduling rules according to instantaneous scheduler states, in order to minimize packet delays and packet drop rates for applications with strict QoS requirements. To deal with real-time scheduling, reinforcement learning (RL) principles are used to map the scheduling rules to each state and to learn when to apply each. Additionally, neural networks are used as function approximators to cope with the RL complexity and very large representations of the scheduler state space. Simulation results demonstrate that the proposed framework outperforms conventional scheduling strategies in terms of delay and packet drop rate requirements.
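The state-to-rule mapping described above can be sketched with tabular Q-learning over a discretized scheduler state: the agent picks a scheduling rule per interval and is rewarded when delay and drop-rate targets are met. The rule names, discretization, and reward shape are assumptions for illustration; the paper itself uses neural-network function approximation rather than a table.

```python
import random
from collections import defaultdict

RULES = ("PF", "MLWDF", "EXP")   # illustrative rule pool
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration

# (discretized_state, rule) -> action value
q = defaultdict(float)


def choose(state, rng):
    """Epsilon-greedy rule selection for the current scheduler state."""
    if rng.random() < EPS:
        return rng.choice(RULES)
    return max(RULES, key=lambda r: q[(state, r)])


def update(state, rule, reward, next_state):
    """One Q-learning step; reward would encode delay/drop-rate satisfaction."""
    best_next = max(q[(next_state, r)] for r in RULES)
    q[(state, rule)] += ALPHA * (reward + GAMMA * best_next - q[(state, rule)])
```

After e.g. `update("s0", "PF", 1.0, "s1")`, the value of choosing "PF" in state "s0" rises, so the greedy policy starts preferring that rule there; a continuous, high-dimensional state space is what pushes the paper toward neural approximators instead.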