
    An intelligent fuzzy logic-based content and channel aware downlink scheduler for scalable video over OFDMA wireless systems

    The recent advancement of wireless technology and applications makes downlink scheduling and resource allocation an important research topic. In this paper, we consider the problem of downlink scheduling for multi-user scalable video streaming over OFDMA channels. The video streams are precoded using a scalable video coding (SVC) scheme. We propose a fuzzy logic-based scheduling algorithm, which prioritises transmission to different users by considering both video content and channel conditions. Furthermore, a novel analytical model and a new performance metric have been developed for the performance analysis of the proposed scheduling algorithm. The obtained results show that the proposed algorithm outperforms content-blind/channel-aware scheduling algorithms with a gain of as much as 19% in the number of supported users. The proposed algorithm allows for a fairer allocation of resources among users across the entire sector coverage, enhancing video quality at the edges of the cell while minimising the degradation for users closer to the base station.
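The content- and channel-aware prioritisation described above can be illustrated with a small fuzzy inference step. The membership functions, rule base, and output levels below are illustrative assumptions, not the ones used in the paper:

```python
def mu_high(x):
    """Membership of x (expected in [0, 1]) in the fuzzy set 'high'."""
    return min(max(x, 0.0), 1.0)

def mu_low(x):
    """Membership in 'low' is the complement of 'high'."""
    return 1.0 - mu_high(x)

# Illustrative rule base: (content term, channel term, crisp output priority).
RULES = [
    (mu_high, mu_high, 1.0),  # important frame, good channel -> highest priority
    (mu_high, mu_low,  0.6),  # important frame, poor channel -> medium
    (mu_low,  mu_high, 0.4),  # less important frame, good channel -> low-medium
    (mu_low,  mu_low,  0.1),  # less important frame, poor channel -> lowest
]

def fuzzy_priority(content_importance, channel_quality):
    """Sugeno-style inference: rule strength via AND = min,
    defuzzification by weighted average of the crisp outputs."""
    num = den = 0.0
    for m_content, m_channel, out in RULES:
        w = min(m_content(content_importance), m_channel(channel_quality))
        num += w * out
        den += w
    return num / den if den > 0 else 0.0
```

A scheduler would evaluate `fuzzy_priority` per user in each scheduling interval and allocate OFDMA resources in decreasing order of priority.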

    Quality of service differentiation for multimedia delivery in wireless LANs

    Delivering multimedia content to heterogeneous devices over a variable networking environment while maintaining high quality levels involves many technical challenges. The research reported in this thesis presents a solution for Quality of Service (QoS)-based service differentiation when delivering multimedia content over wireless LANs. This thesis has three major contributions, outlined below:
    1. A Model-based Bandwidth Estimation algorithm (MBE), which estimates the available bandwidth based on novel TCP and UDP throughput models over IEEE 802.11 WLANs. MBE has been modelled, implemented, and tested through simulations and real-life testing. In comparison with other bandwidth estimation techniques, MBE shows better performance in terms of error rate, overhead, and loss.
    2. An intelligent Prioritized Adaptive Scheme (iPAS), which provides QoS service differentiation for multimedia delivery in wireless networks. iPAS assigns dynamic priorities to various streams and determines their bandwidth share by employing a probabilistic approach which makes use of stereotypes. The total bandwidth to be allocated is estimated using MBE. The priority level of an individual stream is variable and depends on stream-related characteristics and delivery QoS parameters. iPAS can be deployed seamlessly over the original IEEE 802.11 protocols and can be included in the IEEE 802.21 framework in order to optimize the control signal communication. iPAS has been modelled, implemented, and evaluated via simulations. The results demonstrate that iPAS achieves better performance than the equal channel access mechanism over IEEE 802.11 DCF and a service differentiation scheme on top of IEEE 802.11e EDCA, in terms of fairness, throughput, delay, loss, and estimated PSNR. Additionally, both objective and subjective video quality assessments have been performed using a prototype system.
    3. A QoS-based Downlink/Uplink Fairness Scheme, which uses the stereotypes-based structure to balance the QoS parameters (i.e. throughput, delay, and loss) between downlink and uplink VoIP traffic. The proposed scheme has been modelled and tested through simulations. The results show that, in comparison with other downlink/uplink fairness-oriented solutions, the proposed scheme performs better in terms of VoIP capacity and fairness level between downlink and uplink traffic.
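The bandwidth sharing step in a scheme like iPAS can be sketched as a priority-proportional split of the MBE-estimated capacity. This is a minimal sketch under the assumption that each stream's dynamic priority has already been reduced to a single positive weight; the function name and example weights are hypothetical, not taken from the thesis:

```python
def allocate_bandwidth(total_kbps, stream_weights):
    """Split an estimated total bandwidth (kbps) among streams in
    proportion to their dynamic priority weights (all positive)."""
    total_weight = sum(stream_weights.values())
    return {name: total_kbps * w / total_weight
            for name, w in stream_weights.items()}

# Hypothetical example: video weighted 3, VoIP 2, background data 1.
shares = allocate_bandwidth(6000, {"video": 3.0, "voip": 2.0, "data": 1.0})
# shares == {"video": 3000.0, "voip": 2000.0, "data": 1000.0}
```

In the full scheme the weights themselves would be recomputed dynamically from stream characteristics and QoS parameters rather than fixed as here.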

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, namely eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with the highlights on the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios.
Finally, we discuss some open research challenges and promising future research directions. (Comment: 37 pages, 8 figures, 7 tables; submitted for possible future publication in IEEE Communications Surveys and Tutorials.)
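As a concrete illustration of the low-complexity Q-learning approach mentioned above, an MTC device can learn a preferred random-access slot from collision feedback using one Q-value per slot. The class interface, learning rate and reward values are illustrative assumptions, not the survey's exact formulation:

```python
import random

class QLearningRA:
    """Stateless Q-learning for random-access slot selection: one Q-value
    per RA slot, epsilon-greedy choice, reward +1 on a successful attempt
    and -1 on a collision."""

    def __init__(self, n_slots, alpha=0.1, epsilon=0.1):
        self.q = [0.0] * n_slots   # one value per RA slot
        self.alpha = alpha         # learning rate
        self.epsilon = epsilon     # exploration probability

    def choose_slot(self):
        if random.random() < self.epsilon:            # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, slot, success):
        reward = 1.0 if success else -1.0
        self.q[slot] += self.alpha * (reward - self.q[slot])
```

With many devices running this independently, collisions push their Q-values apart so devices tend to settle on distinct slots, reducing RAN congestion without any central coordination.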

    The Performance Evaluation of Adaptive Guard Channel Scheme in Wireless Network

    Dynamic Guard Channel (DGC) schemes reduce the dropping and blocking rates in a network. However, most existing DGC allocations are inefficient because they consider only Handoff (HO) calls while New Calls (NC) are neglected, which leads to poor Quality of Service (QoS) for NCs. Although it is reasonable to give priority to HO calls over NCs, since breaking the connection of an established communication is more annoying than blocking a NC, there is a need for an alternative approach that guarantees acceptable QoS for both HO and NC. This paper presents the performance evaluation of an adaptive guard channel allocation scheme that uses two different models: (1) a guard channel with fuzzy logic and (2) a guard channel without fuzzy logic. Priority is given to handoff calls due to the scarcity of radio spectrum. When all the guard channels have been allocated and the arrival rate of handoff calls keeps increasing, a new set of threshold values is estimated by the fuzzy logic model. The performance metrics are Call Blocking Rate (CBR), Call Dropping Rate (CDR) and throughput. Results showed that the guard channel with fuzzy logic has CBR values ranging from 24.02% to 69.01% and CDR values ranging from 12.02% to 18.90%, while the guard channel without fuzzy logic has CBR values ranging from 28.22% to 75.65% and CDR values ranging from 19.06% to 36.50%. The scheme proved to be more efficient for congestion control in wireless networks.
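The core of any guard channel scheme is the admission rule: handoff calls may occupy any free channel, while new calls are confined to the non-guard portion. A minimal sketch with a fixed `guard_channels` threshold; in the paper's adaptive scheme that threshold would instead be re-estimated by the fuzzy logic model as handoff arrivals grow:

```python
def admit_call(call_type, channels_busy, total_channels, guard_channels):
    """Guard-channel admission control.

    Handoff calls ("handoff") may use all channels; new calls ("new")
    are admitted only while the non-guard channels are not exhausted.
    """
    if call_type == "handoff":
        return channels_busy < total_channels
    return channels_busy < total_channels - guard_channels
```

For example, with 10 channels and 2 reserved as guard channels, a new call is blocked once 8 channels are busy, while a handoff call is still admitted until all 10 are occupied.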

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to be different in edge architectures compared to classic cloud solutions.
Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. (Comment: Accepted in the Special Issue on Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.)

    Optimization of Mobility Parameters using Fuzzy Logic and Reinforcement Learning in Self-Organizing Networks

    In this thesis, several optimization techniques for next-generation wireless networks are proposed to solve different problems in the fields of Self-Organizing Networks and heterogeneous networks. The common basis of these problems is that network parameters are automatically tuned to deal with the specific problem. As the set of network parameters is extremely large, this work mainly focuses on parameters involved in mobility management. The proposed self-tuning schemes are based on Fuzzy Logic Controllers (FLCs), whose strength lies in their capability to express knowledge in a way similar to human perception and reasoning. Moreover, in those cases in which a mathematical approach has been required to optimize the behavior of the FLC, the selected solution has been Reinforcement Learning, since this methodology is especially appropriate for learning from interaction, which is essential in complex systems such as wireless networks. Taking this into account, firstly, a new Mobility Load Balancing (MLB) scheme is proposed to solve persistent congestion problems in next-generation wireless networks, in particular those due to an uneven spatial traffic distribution, which typically leads to an inefficient usage of resources. A key feature of the proposed algorithm is that not only the parameters but also the parameter-tuning strategy are optimized. Secondly, a novel MLB algorithm for enterprise femtocell scenarios is proposed. Such scenarios are characterized by the lack of a thoroughly planned deployment of these low-cost nodes, meaning that a more efficient use of radio resources can be achieved by applying effective MLB schemes. As in the previous problem, the optimization of the self-tuning process is also studied in this case. Thirdly, a new self-tuning algorithm for Mobility Robustness Optimization (MRO) is proposed. This study includes the impact of context factors such as system load and user speed, as well as a proposal for coordination between the designed MLB and MRO functions. Fourthly, a novel self-tuning algorithm for Traffic Steering (TS) in heterogeneous networks is proposed. The main features of the proposed algorithm are its flexibility to support different operator policies and its capability to adapt to network variations. Finally, with the aim of validating the proposed techniques, a dynamic system-level simulator for Long-Term Evolution (LTE) networks has been designed.
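One self-tuning step of an FLC for Mobility Load Balancing can be sketched as follows: the normalised load difference between a cell and its neighbour is mapped to a handover-margin adjustment in dB. The membership functions, rule base and output levels are illustrative assumptions, not taken from the thesis; in the proposed approach, Reinforcement Learning would tune such output levels online:

```python
def mu_underloaded(d):
    """Membership of load difference d (in [-1, 1]) in 'neighbour more loaded'."""
    return max(0.0, min(1.0, -d))

def mu_balanced(d):
    """Membership in 'roughly balanced'."""
    return max(0.0, 1.0 - abs(d))

def mu_overloaded(d):
    """Membership in 'this cell more loaded'."""
    return max(0.0, min(1.0, d))

def margin_step_db(load_diff):
    """One FLC step: positive load_diff means this cell is more loaded,
    so the handover margin is lowered to push users to the neighbour.
    Weighted-average (Sugeno-style) defuzzification over three rules."""
    rules = [(mu_underloaded, +1.0),  # neighbour loaded: raise margin
             (mu_balanced,     0.0),  # balanced: leave margin unchanged
             (mu_overloaded,  -1.0)]  # this cell loaded: lower margin
    num = sum(m(load_diff) * out for m, out in rules)
    den = sum(m(load_diff) for m, _ in rules)
    return num / den if den > 0 else 0.0
```

Applied iteratively, this controller drives the margin toward the value that equalises load between neighbouring cells, which is the essence of the MLB self-tuning loop described above.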