Reinforcement Learning-based Access Schemes in Cognitive Radio Networks
In this thesis, we propose MAC protocols based on three Reinforcement Learning (RL) approaches, namely Q-Learning, Deep Q-Network (DQN), and Deep Deterministic Policy Gradient (DDPG). We exploit primary user (PU) feedback, in the form of ARQ and CQI bits, to enhance the performance of the secondary user (SU) MAC protocols; exploiting this feedback can be applied on top of any sensing-based SU MAC protocol. Our model rests on two pillars: an infinite-state Partially Observable Markov Decision Process (POMDP) that captures the system dynamics, and a queuing-theoretic model for the PU queue; the states represent whether a packet has been delivered from the PU's queue, together with the PU channel state. The proposed RL access schemes are designed to learn the best SU access probabilities without prior knowledge of the environment, by exploring and exploiting discrete and continuous action spaces based on the last observed PU feedback. The proposed schemes outperform conventional methods under more realistic assumptions, which is one major advantage of our proposed MAC protocols.
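The core idea of learning an access policy from the last observed PU feedback bit can be illustrated with tabular Q-Learning, the simplest of the three approaches above. Everything in the sketch below is an invented toy: the two-state environment, the collision probabilities and the rewards are assumptions for illustration, not the thesis's POMDP model.

```python
import random

# Toy sketch: an SU learns when to access from the last PU feedback bit.
# State 0 = PU sent an ACK, state 1 = PU sent a NACK (retransmission likely).
# Action 0 = stay idle, action 1 = access the channel.
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1
ACTIONS = [0, 1]
Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}

def step(state, action):
    """Hypothetical dynamics: accessing after a NACK usually collides
    with a PU retransmission; after an ACK it usually succeeds."""
    if action == 0:
        return 0, 0.0                      # idle: PU succeeds, sends an ACK
    p_collide = 0.9 if state == 1 else 0.1
    if random.random() < p_collide:
        return 1, -1.0                     # collision triggers a PU NACK
    return 0, 1.0                          # successful SU transmission

random.seed(0)
s = 0
for _ in range(5000):
    if random.random() < EPS:
        a = random.choice(ACTIONS)         # explore
    else:
        a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
    s2, r = step(s, a)
    Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
    s = s2

policy = {st: max(ACTIONS, key=lambda a: Q[(st, a)]) for st in (0, 1)}
print(policy)   # learns: access after an ACK, stay idle after a NACK
```

DQN and DDPG generalise this table to function approximators, with DDPG additionally handling continuous actions such as an access probability.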
Self-organising network management for heterogeneous LTE-advanced networks
This thesis was submitted for the award of Doctor of Philosophy and awarded by Brunel University London.
Since the Long Term Evolution (LTE) standard was first proposed in 2004 and made publicly available in 2009, a plethora of new characteristics, techniques and applications have continually enhanced it over the past decade. As a result, research aims for LTE-Advanced (LTE-A) have been released to create a ubiquitous and supportive network for mobile users. The incorporation of heterogeneous networks (HetNets) has been proposed as one of the main enhancements of LTE-A systems over the existing LTE releases, through the deployment of small-cell applications such as femtocells, to provide more coverage and quality of service (QoS) within the network whilst also reducing capital expenditure. These principal advantages come at the cost of new challenges such as inter-cell interference, which occurs when different network applications share the same frequency channel. In this thesis, the main challenges of HetNets on the LTE-A platform are addressed and novel solutions are proposed using self-organising network (SON) management approaches, which allow cooperative cellular systems to observe, decide and amend their ongoing operation based on network conditions. The novel SON algorithms are modelled and simulated in OPNET Modeler simulation software for the three processes of resource allocation, mobility management and interference coordination in multi-tier macro-femto networks. Different channel allocation methods based on cooperative transmission, frequency reuse and dynamic spectrum access are investigated, and a novel SON sub-channel allocation method is proposed based on a hybrid fractional frequency reuse (HFFR) scheme to provide dynamic resource allocation between macrocells and femtocells, while avoiding co-tier and cross-tier interference.
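The cross-tier side of such a frequency-reuse split can be sketched in a few lines: femtocells are kept off the sub-band their host macrocell reserves for cell-edge users. The band size, the reservation and the first-fit allocation rule below are all illustrative assumptions, not the thesis's HFFR scheme.

```python
# Illustrative sketch: femtocells are allocated sub-channels outside the
# macrocell's cell-edge band, avoiding cross-tier interference.
# The 12-sub-channel band and the demand figures are invented.
SUBCHANNELS = set(range(12))

def allocate(macro_edge_band, femto_demand):
    """First-fit allocation of non-edge sub-channels to femtocells."""
    available = sorted(SUBCHANNELS - macro_edge_band)
    alloc, i = {}, 0
    for femto, need in femto_demand.items():
        alloc[femto] = available[i:i + need]
        i += need
    return alloc

macro_edge = {0, 1, 2, 3}                  # reserved for macro edge users
print(allocate(macro_edge, {"femto_A": 3, "femto_B": 2}))
# femto_A gets [4, 5, 6], femto_B gets [7, 8]
```

A dynamic scheme would additionally re-partition the bands as load shifts and coordinate neighbouring femtocells to limit co-tier interference.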
Mobility management is also addressed as another important issue in HetNets, especially in hand-ins from macrocell to femtocell base stations. The existing research considers a limited number of methods for handover optimisation, such as signal strength and call admission control (CAC) to avoid unnecessary handovers, while our novel SON handover management method implements a comprehensive algorithm that performs a sensing process, as well as resource availability and user residence checks, to initiate the handover process at the optimal time. In addition, the novel femto over macro priority (FoMP) check in this process gives femtocell target nodes priority over congested macrocells in order to improve the QoS at both network tiers. Inter-cell interference, as the key challenge of HetNets, is also investigated through research on the existing time-domain, frequency-domain and power control methods. A novel SON interference mitigation algorithm is proposed, based on enhanced inter-cell interference coordination (eICIC) with a power control process. The 3-phase power control algorithm comprises signal to interference plus noise ratio (SINR) measurements, channel quality indicator (CQI) mapping and transmission power amendments, to avoid interference caused by excessive transmission power. The results of this research confirm that, if heterogeneous systems are backed up with SON management strategies, not only can network capacity and QoS be improved, but new challenges such as inter-cell interference can also be mitigated in new releases of the LTE-A network.
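The shape of such a 3-phase loop (SINR measurement, CQI mapping, power amendment) can be sketched as below. The CQI thresholds, target CQI and 1 dB step size are illustrative assumptions, not values from the thesis or the 3GPP mapping tables.

```python
# Hedged sketch of a three-phase power-control loop:
#   phase 1: a measured SINR arrives (here, passed in as an argument),
#   phase 2: the SINR is mapped to a CQI index,
#   phase 3: transmit power is trimmed or raised toward a target CQI.
# Thresholds, target and step are invented for illustration.
CQI_THRESHOLDS_DB = [-6, -4, -2, 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22]

def sinr_to_cqi(sinr_db):
    """Phase 2: map a measured SINR (dB) to a CQI index (0..15)."""
    cqi = 0
    for i, t in enumerate(CQI_THRESHOLDS_DB, start=1):
        if sinr_db >= t:
            cqi = i
    return cqi

def adjust_power(tx_power_dbm, sinr_db, target_cqi=7, step_db=1.0):
    """Phase 3: back off when the link is better than needed, so high
    transmit power does not create unnecessary interference."""
    cqi = sinr_to_cqi(sinr_db)
    if cqi > target_cqi:
        return tx_power_dbm - step_db
    if cqi < target_cqi:
        return tx_power_dbm + step_db
    return tx_power_dbm

print(adjust_power(20.0, sinr_db=15.0))    # strong link -> 19.0 dBm
print(adjust_power(20.0, sinr_db=-5.0))    # weak link   -> 21.0 dBm
```

In an eICIC setting this adjustment would run alongside time-domain muting (almost blank subframes) rather than replace it.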
Unmanned Aerial Vehicle (UAV)-Enabled Wireless Communications and Networking
The emerging massive density of human-held and machine-type nodes implies larger traffic deviations in the future than we face today. The future network will be characterized by a high degree of flexibility, allowing it to adapt smoothly, autonomously, and efficiently to quickly changing traffic demands in both time and space. This flexibility cannot be achieved when the network's infrastructure remains static. To this end, the topic of unmanned aerial vehicle (UAV)-enabled wireless communications and networking has received increased attention. As mentioned above, the network must serve a massive density of nodes that can be either human-held (user devices) or machine-type nodes (sensors). If we wish to serve these nodes properly and handle their data optimally, a proper wireless connection is fundamental. This can be achieved using UAV-enabled communications and networks. This Special Issue addresses the many issues that still exist before UAV-enabled wireless communications and networking can be properly rolled out.
Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks
Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Considering that the radio spectrum is a limited natural resource, supporting the ever-increasing demands for higher capacity and higher data rates for diverse sets of users, services and applications is a challenging task that requires innovative technologies capable of providing new ways of efficiently exploiting the available radio spectrum. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay and underlay [1].
The key enabling technology for DSA is cognitive radio (CR), which is among the most prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating in designated spectrum bands, a CR has the capability to operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experiences, and proactively change its transmission parameters as needed. These features of a CR are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities, and allocates and adapts the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. Therefore, this thesis explores the application of two machine learning techniques in enabling the cognition capability of the CE: neural network-based supervised learning and reinforcement learning. Specifically, this thesis develops resource allocation algorithms that leverage machine learning techniques to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated under extensive simulation runs.
The first resource allocation algorithm uses a neural network-based learning paradigm to present a fully autonomous and distributed underlay DSA scheme where each CR operates based on predicting its transmission effect on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration for the primary link nearest to a transmitting CR, without exchanging information between primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique maintains the relative average throughput change in the primary network within a prescribed maximum value, while also finding transmit settings for the CRs that result in throughput as large as allowed by the primary network interference limit.
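The thesis uses an artificial neural network for this prediction; as a compact stand-in, the sketch below uses a 1-nearest-neighbour lookup to show the same idea, namely inferring the primary link's adaptive modulation and coding (AMC) class from a locally observable quantity with no feedback exchange between the networks. The SINR values and AMC classes are invented for illustration.

```python
# Stand-in for the thesis's neural-network predictor: a 1-nearest-neighbour
# lookup from observed primary-link SINR (dB) to an AMC class.
# All training pairs below are invented illustrative data.
TRAINING = [
    (2.0, "QPSK-1/2"), (7.0, "QPSK-3/4"),
    (12.0, "16QAM-1/2"), (18.0, "64QAM-3/4"),
]

def predict_amc(sinr_db):
    """Return the AMC class of the closest training example."""
    return min(TRAINING, key=lambda ex: abs(ex[0] - sinr_db))[1]

print(predict_amc(11.0))    # -> 16QAM-1/2
print(predict_amc(3.5))     # -> QPSK-1/2
```

A secondary transmitter can then watch how its own activity shifts the predicted AMC class of the nearest primary link and cap its power before the primary throughput degrades beyond the prescribed limit.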
The second resource allocation algorithm uses reinforcement learning and aims to distributively maximize the average quality of experience (QoE) across transmissions of CRs with different types of traffic, while satisfying a primary network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints. Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e. base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents. This exchange of learned information is used to improve the learning performance of a distributed CR network. The thesis further identifies the best practices for transferring knowledge between CRs so as to reduce the communication overhead.
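One common way to realise this expert-to-naive teaching, sketched below, is to hand the expert's learned Q-table to the new agent as an initial estimate, discounted by a trust factor so the newcomer can still unlearn values that do not fit its own environment. The blending rule and the trust factor are illustrative assumptions, not the thesis's transfer protocol.

```python
# Sketch of transfer learning between cognitive agents: an expert CR's
# Q-table seeds a newly activated agent's table, scaled by a trust factor.
# The trust weighting and the example Q-values are invented.
def transfer_q(expert_q, trust=0.8):
    """Blend an expert's Q-table into a fresh agent's initial table."""
    return {sa: trust * q for sa, q in expert_q.items()}

expert = {("ACK", "access"): 5.1, ("ACK", "idle"): 4.6,
          ("NACK", "access"): 3.6, ("NACK", "idle"): 4.6}
novice_q = transfer_q(expert)
print(novice_q[("ACK", "access")])   # ~4.08 (0.8 * 5.1)
```

Transferring only a compact table (or a subset of high-value entries) rather than raw experience is also one way the communication overhead between agents can be kept small.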
The investigations in this thesis propose a novel technique that accurately predicts the modulation scheme and channel coding rate used on a primary link without any exchange of information between the two networks (e.g. access to feedback channels), while achieving the main goal of setting the transmit power of the CRs such that the interference they create remains below the maximum threshold the primary network can sustain, with minimal effect on its average throughput. The investigations also provide physical-layer as well as cross-layer machine learning-based algorithms to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.