    Cross-Layer QoE Improvement with Dynamic Spectrum Allocation in OFDM-Based Cognitive Radio.

    Get PDF
    PhD
    Rapid development of devices and applications results in dramatic growth of wireless traffic, which leads to increasing demand for wireless spectrum resources. The current spectrum resource allocation policy causes low efficiency in licensed spectrum bands. Cognitive Radio techniques are a promising solution to the problem of spectrum scarcity and low spectrum utilisation. In particular, OFDM-based Cognitive Radio has received much research interest due to its flexibility in enabling dynamic resource allocation. Extensive research has shown how to optimise Cognitive Radio networks in many ways, but there has been little consideration of the real-time packet-level performance of the network. In such a situation, the Quality of Service metrics of the Secondary Network are difficult to guarantee due to fluctuating resource availability; nevertheless, QoS metric evaluation is a very important factor for the success of Cognitive Radio. Quality of Experience is also gaining interest due to its focus on users' perceived quality, and this opens up a new perspective on evaluating and improving wireless network performance. The main contributions of this thesis are as follows: it focuses on the real-time packet-level QoS (packet delay and loss) performance of Cognitive Radio networks, and evaluates the effects on QoS of several typical non-configurable factors, including secondary user service types, primary user activity patterns and user distance from the base station. Furthermore, the evaluation results are unified and represented using QoE through existing mapping techniques. Based on the QoE evaluation, a novel cross-layer RA scheme is proposed to dynamically compensate user experience, and this is shown to significantly improve QoE in scenarios where traditional RA schemes fail to provide good user experience.
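The QoS-to-QoE mapping mentioned in the abstract can be illustrated with a minimal sketch. The exponential form and all constants below are illustrative assumptions in the spirit of the IQX hypothesis (QoE decays exponentially with growing impairment), not the mapping used in the thesis:

```python
import math

def mos_from_qos(delay_ms: float, loss_rate: float,
                 alpha: float = 4.0, beta: float = 0.005,
                 gamma: float = 30.0) -> float:
    """Map packet-level QoS metrics (delay, loss) to a 1-5 MOS score.

    alpha, beta and gamma are illustrative constants; in practice they
    would be fitted per service type from subjective test data.
    """
    impairment = beta * delay_ms + gamma * loss_rate
    mos = 1.0 + alpha * math.exp(-impairment)
    return max(1.0, min(5.0, mos))  # clamp to the MOS scale

print(round(mos_from_qos(50.0, 0.01), 2))   # prints 3.31 (good link)
print(round(mos_from_qos(400.0, 0.05), 2))  # prints 1.12 (degraded link)
```

A cross-layer RA scheme of the kind proposed could then allocate extra resources to users whose predicted MOS falls below a target.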

    EVM as generic QoS trigger for heterogeneous wireless overlay network

    Full text link
    Fourth Generation (4G) Wireless System will integrate heterogeneous wireless overlay systems i.e. interworking of WLAN/ GSM/ CDMA/ WiMAX/ LTE/ etc with guaranteed Quality of Service (QoS) and Experience (QoE).QoS(E) vary from network to network and is application sensitive. User needs an optimal mobility solution while roaming in Overlaid wireless environment i.e. user could seamlessly transfer his session/ call to a best available network bearing guaranteed Quality of Experience. And If this Seamless transfer of session is executed between two networks having different access standards then it is called Vertical Handover (VHO). Contemporary VHO decision algorithms are based on generic QoS metrics viz. SNR, bandwidth, jitter, BER and delay. In this paper, Error Vector Magnitude (EVM) is proposed to be a generic QoS trigger for VHO execution. EVM is defined as the deviation of inphase/ quadrature (I/Q) values from ideal signal states and thus provides a measure of signal quality. In 4G Interoperable environment, OFDM is the leading Modulation scheme (more prone to multi-path fading). EVM (modulation error) properly characterises the wireless link/ channel for accurate VHO decision. EVM depends on the inherent transmission impairments viz. frequency offset, phase noise, non-linear-impairment, skewness etc. for a given wireless link. Paper provides an insight to the analytical aspect of EVM & measures EVM (%) for key management subframes like association/re-association/disassociation/ probe request/response frames. EVM relation is explored for different possible NAV-Network Allocation Vectors (frame duration). Finally EVM is compared with SNR, BER and investigation concludes EVM as a promising QoS trigger for OFDM based emerging wireless standards.Comment: 12 pages, 7 figures, IJWMN 2010 august issue vol. 2, no.

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Full text link
    Future wireless networks have substantial potential in terms of supporting a broad range of complex, compelling applications in both military and civilian fields, where users are able to enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making because of the complex heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in the compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks. Comment: 46 pages, 22 fig

    Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks

    Get PDF
    Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Considering that the radio spectrum is a naturally limited resource, supporting the ever increasing demands for higher capacity and higher data rates for diverse sets of users, services and applications is a challenging task which requires innovative technologies capable of providing new ways of efficiently exploiting the available radio spectrum. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay and underlay [1]. The key enabling technology for DSA is cognitive radio (CR), which is among the core prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating only in designated spectrum bands, a CR has the capability to operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experiences and proactively change its transmission parameters as needed. These features of a CR are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities, and allocates and adapts the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. Therefore, this thesis explores the application of two machine learning techniques in enabling the cognition capability of the CE: neural network-based supervised learning and reinforcement learning.
Specifically, this thesis develops resource allocation algorithms that leverage machine learning techniques to find the solution to the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated under extensive simulation runs. The first resource allocation algorithm uses a neural network-based learning paradigm to present a fully autonomous and distributed underlay DSA scheme where each CR operates based on predicting its transmission effect on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration for the primary link nearest to a transmitting CR, without exchanging information between primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique maintains the relative average throughput change in the primary network within a prescribed maximum value, while also finding transmit settings for the CRs that result in throughput as large as allowed by the primary network interference limit. The second resource allocation algorithm uses reinforcement learning and aims at distributively maximizing the average quality of experience (QoE) across transmissions of CRs with different types of traffic while satisfying a primary network interference constraint. To best satisfy the QoE requirements of delay-sensitive types of traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints. Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established and expert cognitive agents (i.e. base stations or mobile stations in the context of wireless communications) to teach newly activated and naive agents. The exchange of learned information is used to improve the learning performance of a distributed CR network. This thesis further identifies the best practices to transfer knowledge between CRs so as to reduce the communication overhead. The investigations in this thesis propose a novel technique which is able to accurately predict the modulation scheme and channel coding rate used in a primary link without the need to exchange information between the two networks (e.g. access to feedback channels), while succeeding in the main goal of determining the transmit power of the CRs such that the interference they create remains below the maximum threshold that the primary network can sustain with minimal effect on the average throughput. The investigations in this thesis also provide physical-layer as well as cross-layer machine learning-based algorithms to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
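The reinforcement-learning-based transmit-power selection described above can be sketched as a stateless Q-learning (bandit-style) loop. The power levels, toy reward model and interference threshold below are illustrative assumptions, not values or models from the thesis:

```python
import random

# Hypothetical action set and constraint for one CR transmitter.
POWER_LEVELS = [0.1, 0.5, 1.0, 2.0]   # candidate transmit powers (illustrative)
INTERFERENCE_LIMIT = 1.0              # primary-network threshold (illustrative)

def reward(p: float) -> float:
    """Toy reward: diminishing-returns throughput, heavily penalised
    when the power would violate the primary interference limit."""
    throughput = p / (p + 1.0)
    penalty = 5.0 if p > INTERFERENCE_LIMIT else 0.0
    return throughput - penalty

def train(episodes: int = 2000, alpha: float = 0.1,
          eps: float = 0.1, seed: int = 0) -> list:
    """Epsilon-greedy Q-learning over a single state (a bandit)."""
    random.seed(seed)
    q = [0.0] * len(POWER_LEVELS)     # one Q-value per power level
    for _ in range(episodes):
        if random.random() < eps:     # explore
            a = random.randrange(len(POWER_LEVELS))
        else:                         # exploit current best estimate
            a = max(range(len(q)), key=q.__getitem__)
        q[a] += alpha * (reward(POWER_LEVELS[a]) - q[a])
    return q

q = train()
best = POWER_LEVELS[max(range(len(q)), key=q.__getitem__)]
print(best)  # converges to the highest power that respects the limit
```

In a distributed CRN, each CR would run such a learner on its own (much richer) state, and transfer learning amounts to initialising a new agent's Q-values from an expert neighbour's instead of from zeros.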