25 research outputs found

    A Novel HWRR-SJF Scheduling Algorithm for Optimal Performance Improvement in LTE System

    The current revolution in high-speed broadband networking is driven by the ever-growing demand for high data rates and mobility. To meet this demand, the 3rd Generation Partnership Project (3GPP) established Long Term Evolution (LTE) and subsequently an improved radio interface named LTE-Advanced (LTE-A), a promising technology for providing broadband mobile Internet access. However, providing better Quality of Service (QoS) to customers remains the main issue in LTE-A. Packet scheduling, one of the most significant radio resource management functions, addresses this issue by largely determining the throughput performance of the system. Existing schemes do not address the low throughput experienced by users with a poor Channel Quality Indicator (CQI). In this paper, a Hybrid Weighted Round Robin with Shortest Job First (HWRR-SJF) scheduling technique is proposed to improve throughput and fairness in the LTE system for stationary and mobile users. The proposed scheduler selects users according to different criteria, such as fairness and CQI, and it produces increased throughput for various SNR values, simulated with Pedestrian and Vehicular mobility models. The proposed method also uses a 4G-LTE filter, or Digital Dividend (DD), in order to align the incoming signal; the digital dividend is used to remove white spaces, which refer to frequencies assigned to a broadcasting service but not used locally. The proposed model is shown to be effective in terms of performance metrics such as packet loss, throughput, packet delay, spectral efficiency and fairness, and it has been verified through MATLAB simulations.
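    The abstract above does not spell out how the hybrid scheduler combines its two disciplines, so the following is a minimal sketch of one plausible reading: Shortest Job First orders the backlogged users, while Weighted Round Robin caps the share each user may take per TTI. All names (UserQueue, weight, rb_budget) and the tie-breaking on CQI are assumptions for illustration, not details taken from the paper.

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class UserQueue:
    user_id: int
    cqi: int          # reported Channel Quality Indicator (higher is better)
    weight: int       # WRR weight, assumed to be derived from the user's CQI class
    backlog: deque = field(default_factory=deque)   # pending packet sizes in bytes

def hwrr_sjf_schedule(users, rb_budget):
    """Hypothetical HWRR-SJF pass for one TTI: order backlogged users by the
    size of their pending work (Shortest Job First), then grant each one a
    number of resource blocks capped by its Weighted Round Robin weight."""
    backlogged = [u for u in users if u.backlog]
    # SJF ordering: smallest total backlog first; better CQI breaks ties.
    backlogged.sort(key=lambda u: (sum(u.backlog), -u.cqi))
    grants, remaining = [], rb_budget
    for u in backlogged:
        if remaining == 0:
            break
        granted = min(u.weight, remaining)   # WRR cap per user per TTI
        grants.append((u.user_id, granted))
        remaining -= granted
    return grants
```

    A call such as hwrr_sjf_schedule(active_users, rb_budget=50) would return a list of (user, resource-block) grants for the current TTI; users with small backlogs and good channels are served first, which is one way such a combination could raise both throughput and fairness.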

    Resource and power management in next generation networks

    The limits of today’s cellular communication systems are constantly being tested by the exponential increase in mobile data traffic, a trend which is poised to continue well into the next decade. Densification of cellular networks, by overlaying smaller cells, i.e., micro, pico and femtocells, over the traditional macrocell, is seen as an inevitable step in enabling future networks to support the expected increases in data rate demand. Next generation networks will most certainly be more heterogeneous, as services will be offered via various types of points of access (PoAs). Indeed, besides the traditional macro base station, it is expected that users will also be able to access the network through a wide range of other PoAs: WiFi access points, remote radio heads (RRHs), small cell (i.e., micro, pico and femto) base stations or even other users, when device-to-device (D2D) communications are supported, thus creating a multi-tiered network architecture. This approach is expected to enhance the capacity of current cellular networks, while patching up potential coverage gaps. However, since available radio resources will be fully shared, the inter-cell interference as well as the interference between the different tiers will pose a significant challenge. To avoid severe degradation of network performance, properly managing the interference is essential. In particular, techniques that mitigate interference, such as Inter Cell Interference Coordination (ICIC) and enhanced ICIC (eICIC), have been proposed in the literature to address the issue. In this thesis, we argue that interference may also be addressed during radio resource scheduling, by enabling the network to make interference-aware resource allocation decisions. Carrier aggregation technology, which allows the simultaneous use of several component carriers, on the other hand targets the lack of sufficiently large portions of frequency spectrum, a problem that severely limits the capacity of wireless networks. The aggregated carriers may, in general, belong to different frequency bands and have different bandwidths, thus they may also have very different signal propagation characteristics. Integration of carrier aggregation in the network introduces additional tasks and further complicates interference management, but it also opens up a range of possibilities for improving spectrum efficiency in addition to enhancing capacity, which we aim to exploit. In this thesis, we first look at the resource allocation problem in dense multi-tiered networks with support for advanced features such as carrier aggregation and device-to-device communications. For two-tiered networks with D2D support, we propose a centralised, near-optimal algorithm, based on dynamic programming principles, that allows a central scheduler to make interference- and traffic-aware scheduling decisions, while taking into consideration the short-lived nature of D2D links. As the complexity of the central scheduler increases exponentially with the number of component carriers, we further propose a distributed heuristic algorithm to tackle the resource allocation problem in carrier aggregation enabled dense networks. We show that the solutions we propose perform significantly better than standard solutions adopted in cellular networks, such as eICIC coupled with Proportional Fair scheduling, in several key metrics such as user throughput, timely delivery of content, and spectrum and energy efficiency, while ensuring fairness for backward compatible devices.
Next, we investigate the potential to enhance network performance by enabling the different nodes of the network to reduce and dynamically adjust the transmit power of the different carriers to mitigate interference. Considering that the different carriers may have different coverage areas, we propose to leverage this diversity to obtain high-performing network configurations. Thus, we model the problem of carrier downlink transmit power setting as a competitive game between teams of PoAs, which enables us to derive distributed dynamic power setting algorithms. Using these algorithms we reach stable configurations in the network, known as Nash equilibria, which we show perform significantly better than fixed power strategies coupled with eICIC.
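    To illustrate the kind of distributed, game-based power setting sketched in this abstract, the snippet below runs best-response dynamics over a small discrete set of per-carrier power levels until no point of access can improve its own utility, i.e., an approximate Nash equilibrium of this toy game. The utility function, the power levels and the channel-gain matrix are illustrative assumptions, not the formulation used in the thesis.

```python
POWER_LEVELS = [0.0, 0.2, 0.5, 1.0]   # assumed normalised per-carrier power choices

def utility(i, powers, gains, noise=1e-3):
    """Toy utility for PoA i: SINR of its own carrier under interference from
    every other PoA, discounted by the power it spends."""
    signal = gains[i][i] * powers[i]
    interference = sum(gains[j][i] * powers[j]
                       for j in range(len(powers)) if j != i)
    return (signal / (noise + interference)) / (1.0 + powers[i])

def best_response_dynamics(gains, max_rounds=100):
    """Each PoA in turn picks the power level that maximises its own utility
    given the others' current choices; stop when a full round changes nothing."""
    n = len(gains)
    powers = [max(POWER_LEVELS)] * n          # start from full power
    for _ in range(max_rounds):
        changed = False
        for i in range(n):
            best = max(POWER_LEVELS,
                       key=lambda p: utility(i, powers[:i] + [p] + powers[i + 1:], gains))
            if best != powers[i]:
                powers[i], changed = best, True
        if not changed:                        # no PoA wants to deviate
            break
    return powers
```

    With gains supplied as an n-by-n matrix (gains[j][i] being the assumed gain from transmitter j towards the users of PoA i), the returned vector is a stable power configuration of this simplified game.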

    Adaptive scheduling in cellular access, wireless mesh and IP networks

    Networking scenarios in the future will be complex and will include fixed networks and hybrid Fourth Generation (4G) networks, consisting of both infrastructure-based and infrastructureless wireless parts. In such scenarios, adaptive provisioning and management of network resources becomes of critical importance. Adaptive mechanisms are desirable since they enable a self-configurable network that is able to adjust itself to varying traffic and channel conditions. The operation of adaptive mechanisms is heavily based on measurements. The aim of this thesis is to investigate how measurement-based, adaptive packet scheduling algorithms can be utilized in different networking environments. The first part of this thesis proposes a new delay-based scheduling algorithm, known as Delay-Bounded Hybrid Proportional Delay (DBHPD), for delay-adaptive provisioning in DiffServ-based fixed IP networks. This DBHPD algorithm is thoroughly evaluated by ns2 simulations and measurements in a FreeBSD prototype router network. It is shown that DBHPD results in considerably more controllable differentiation than basic static bandwidth sharing algorithms. The prototype router measurements also prove that a DBHPD algorithm can be easily implemented in practice, causing less processing overhead than the well-known CBQ algorithm. The second part of this thesis discusses specific scheduling requirements set by hybrid 4G networking scenarios. Firstly, methods for joint scheduling and transmit beamforming in 3.9G or 4G networks are described and quantitatively analyzed using statistical methods. The analysis reveals that the combined gain of channel-adaptive scheduling and transmit beamforming is substantial, and that an On-off strategy can achieve the performance of an ideal Max SNR strategy if the feedback threshold is optimized. Finally, a novel cross-layer energy-adaptive scheduling and queue management framework, EAED (Energy Aware Early Detection), for preserving delay bounds and minimizing energy consumption in WLAN mesh networks, is proposed and evaluated with simulations. The simulations show that our scheme can save considerable amounts of transmission energy without violating application-level QoS requirements when the traffic load and distances are reasonable.
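    The abstract names the DBHPD discipline but not its decision rule, so the sketch below shows one plausible delay-bounded hybrid proportional-delay selection: a class whose head-of-line packet approaches its absolute delay bound is served immediately, otherwise the class with the largest delay normalised by its differentiation weight is chosen. The field names (delta, bound) and the 0.9 safety margin are assumptions for illustration, not taken from the thesis.

```python
import time
from dataclasses import dataclass, field
from collections import deque

@dataclass
class TrafficClass:
    name: str
    delta: float            # proportional-delay differentiation weight
    bound: float            # absolute delay bound in seconds
    queue: deque = field(default_factory=deque)   # (enqueue_time, packet) pairs

def dbhpd_pick(classes, now=None):
    """Sketch of a delay-bounded hybrid proportional-delay decision for one
    dequeue: bound protection first, proportional delay differentiation second."""
    now = time.time() if now is None else now
    backlogged = [c for c in classes if c.queue]
    if not backlogged:
        return None
    for c in backlogged:
        hol_delay = now - c.queue[0][0]
        if hol_delay >= 0.9 * c.bound:        # imminent deadline wins outright
            return c
    # Otherwise serve the class with the largest normalised head-of-line delay.
    return max(backlogged, key=lambda c: (now - c.queue[0][0]) / c.delta)
```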

    Sustainable scheduling policies for radio access networks based on LTE technology

    A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy. In LTE access networks, Radio Resource Management (RRM) is one of the most important modules, responsible for the overall management of radio resources. The packet scheduler is a particular sub-module which assigns the existing radio resources to each user in order to deliver the requested services in the most efficient manner. Data packets are scheduled dynamically at every Transmission Time Interval (TTI), a time window used to take the users’ requests and to respond to them accordingly. The scheduling procedure is conducted by using scheduling rules which select different users to be scheduled at each TTI based on some priority metrics. Various scheduling rules exist, and they behave differently by balancing the scheduler performance in the direction imposed by one of the following objectives: increasing the system throughput, maintaining user fairness, and respecting the Guaranteed Bit Rate (GBR), Head of Line (HoL) packet delay, packet loss rate and queue stability requirements. Most static scheduling rules follow sequential multi-objective optimization in the sense that, once the first targeted objective is satisfied, other objectives can be prioritized. When the targeted scheduling objective(s) can be satisfied at each TTI, the LTE scheduler is considered to be optimal or feasible. Thus, the scheduling performance depends on the exploited rule and the particular objectives it focuses on. This study aims to increase the percentage of feasible TTIs for a given downlink transmission by applying a mixture of scheduling rules instead of using one discipline adopted across the entire scheduling session. Two types of optimization problems are proposed in this sense: Dynamic Scheduling Rule based Sequential Multi-Objective Optimization (DSR-SMOO), when the applied scheduling rules address the same objective, and Dynamic Scheduling Rule based Concurrent Multi-Objective Optimization (DSR-CMOO), if the pool of rules addresses different scheduling objectives. The best way of solving such complex optimization problems is to adapt and refine scheduling policies which are able to call different rules at each TTI based on the best-matching scheduler conditions (states). The idea is to develop a set of non-linear functions which map the scheduler state at each TTI into optimal probability distributions for selecting the best scheduling rule. Due to the multi-dimensional and continuous characteristics of the scheduler state space, the scheduling functions have to be approximated. Moreover, the function approximations are learned through interaction with the RRM environment. Reinforcement Learning (RL) algorithms are used in this sense in order to evaluate and refine the scheduling policies for the considered DSR-SMOO/CMOO optimization problems. Neural networks are used to train the non-linear mapping functions based on the interaction among the intelligent controller, the LTE packet scheduler and the RRM environment. In order to enhance convergence to the feasible state and to reduce the dimension of the scheduler state space, meta-heuristic approaches are used for channel state aggregation. Simulation results show that the proposed aggregation scheme is able to outperform other heuristic methods.
When the channel state aggregation scheme is exploited, the proposed DSR-SMOO/CMOO problems focusing on different objectives, solved by using various RL approaches, are able to: increase the mean percentage of feasible TTIs, minimize the number of TTIs in which the RL approaches punish the actions taken TTI-by-TTI, and minimize the variation of the performance indicators when different simulations are launched in parallel. In this way, the obtained scheduling policies, being focused on the multi-objective criteria, are sustainable.
Keywords: LTE, packet scheduling, scheduling rules, multi-objective optimization, reinforcement learning, channel state aggregation, scheduling policies, sustainable
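    To make the mapping from scheduler state to a probability distribution over scheduling rules concrete, here is a minimal sketch of a linear softmax policy with a REINFORCE-style update; the candidate rule names, the state features and the reward convention (positive when the TTI is feasible) are assumptions for illustration, and a linear model stands in for the neural-network approximator described above.

```python
import numpy as np

RULES = ["proportional_fair", "max_throughput", "exp_rule", "mlwdf"]   # assumed rule pool

class SoftmaxRulePolicy:
    """Maps a scheduler-state feature vector to selection probabilities over
    scheduling rules and picks one rule per TTI."""

    def __init__(self, n_features, lr=0.01, seed=0):
        self.theta = np.zeros((len(RULES), n_features))
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def probabilities(self, state):
        logits = self.theta @ state
        logits -= logits.max()                # numerical stability
        exp = np.exp(logits)
        return exp / exp.sum()

    def select_rule(self, state):
        probs = self.probabilities(state)
        idx = self.rng.choice(len(RULES), p=probs)
        return idx, RULES[idx]

    def update(self, state, rule_idx, reward):
        """REINFORCE update: push probability mass toward rules whose choice
        led to a feasible TTI (positive reward) and away otherwise."""
        probs = self.probabilities(state)
        grad = -np.outer(probs, state)        # -pi(a|s) * s for every rule a
        grad[rule_idx] += state               # +s for the rule actually taken
        self.theta += self.lr * reward * grad
```

    In such a setup, the scheduler would call select_rule once per TTI with the aggregated channel-state features, apply the chosen discipline, and then call update with a reward reflecting whether the targeted objectives were met in that TTI.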

    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years, from a fixed to a mobile Internet, then to the Internet of Things, and now to a Tactile Internet. The Tactile Internet goes far beyond data, audio and video delivery over fixed and mobile networks, and even beyond allowing communication and collaboration among things. It is expected to enable haptic communication and allow skill set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality and industrial process automation. Several papers already cover many of the Tactile Internet-related concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms. Furthermore, none of them provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them using a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

    Radio Resource Management for Uplink Grant-Free Ultra-Reliable Low-Latency Communications


    Quality of service differentiation for multimedia delivery in wireless LANs

    Delivering multimedia content to heterogeneous devices over a variable networking environment while maintaining high quality levels involves many technical challenges. The research reported in this thesis presents a solution for Quality of Service (QoS)-based service differentiation when delivering multimedia content over wireless LANs. This thesis has three major contributions, outlined below:
    1. A Model-based Bandwidth Estimation algorithm (MBE), which estimates the available bandwidth based on novel TCP and UDP throughput models over IEEE 802.11 WLANs. MBE has been modelled, implemented, and tested through simulations and real-life testing. In comparison with other bandwidth estimation techniques, MBE shows better performance in terms of error rate, overhead, and loss.
    2. An intelligent Prioritized Adaptive Scheme (iPAS), which provides QoS service differentiation for multimedia delivery in wireless networks. iPAS assigns dynamic priorities to various streams and determines their bandwidth share by employing a probabilistic approach which makes use of stereotypes. The total bandwidth to be allocated is estimated using MBE. The priority level of an individual stream is variable and depends on stream-related characteristics and delivery QoS parameters. iPAS can be deployed seamlessly over the original IEEE 802.11 protocols and can be included in the IEEE 802.21 framework in order to optimize the control signal communication. iPAS has been modelled, implemented, and evaluated via simulations. The results demonstrate that iPAS achieves better performance than the equal channel access mechanism over IEEE 802.11 DCF and a service differentiation scheme on top of IEEE 802.11e EDCA, in terms of fairness, throughput, delay, loss, and estimated PSNR. Additionally, both objective and subjective video quality assessments have been performed using a prototype system.
    3. A QoS-based Downlink/Uplink Fairness Scheme, which uses the stereotype-based structure to balance the QoS parameters (i.e. throughput, delay, and loss) between downlink and uplink VoIP traffic. The proposed scheme has been modelled and tested through simulations. The results show that, in comparison with other downlink/uplink fairness-oriented solutions, the proposed scheme performs better in terms of VoIP capacity and fairness level between downlink and uplink traffic.
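    As a rough illustration of how dynamic per-stream priorities could be turned into shares of an MBE-estimated total bandwidth (the role iPAS plays above), here is a minimal sketch; the stereotype weights, the QoS pressure formula and the example numbers are invented for the illustration and are not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Stream:
    name: str
    stereotype_weight: float   # assumed base priority given by the stream's stereotype
    delay_ms: float            # measured delivery delay
    loss_rate: float           # measured loss ratio in [0, 1]

def allocate_bandwidth(streams, estimated_total_kbps):
    """Toy iPAS-style allocation: each stream's dynamic priority combines its
    stereotype weight with how much its QoS metrics are suffering, and the
    estimated total bandwidth is split in proportion to those priorities."""
    def priority(s):
        qos_pressure = 1.0 + s.loss_rate + s.delay_ms / 1000.0
        return s.stereotype_weight * qos_pressure
    total = sum(priority(s) for s in streams) or 1.0
    return {s.name: estimated_total_kbps * priority(s) / total for s in streams}

# Example: a video stream under stress receives a larger share than a healthy audio stream.
shares = allocate_bandwidth(
    [Stream("video", 2.0, delay_ms=120, loss_rate=0.02),
     Stream("audio", 1.0, delay_ms=40, loss_rate=0.0)],
    estimated_total_kbps=8000)
```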