Power adaptation for cognitive radio systems under an average SINR loss constraint in the absence of path loss information
An upper bound is derived on the capacity of a cognitive radio system by considering the effects of path loss and log-normal shadowing simultaneously for a single-cell network. Assuming that the cognitive radio is informed only of the shadow fading between the secondary (cognitive) transmitter and the primary receiver, the capacity is achieved via a water-filling power allocation strategy under an average primary signal to secondary interference-plus-noise ratio loss constraint. In contrast to schemes that require perfect channel state information at the secondary system (SS), the transmit power control of the SS is accomplished without any path loss estimates. To this end, a method for estimating the instantaneous value of the shadow fading is also presented. A detailed analysis of the proposed power adaptation strategy is conducted through various numerical simulations. © 2013 Springer Science+Business Media New York
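As a rough illustration of the water-filling idea the paper builds on, the sketch below allocates power across parallel channels under a simple total-power budget (the gains, noise levels and budget are made-up values, and the paper's actual constraint is an average SINR loss, not a sum-power budget):

```python
import numpy as np

def water_filling(gains, noise, budget):
    """Classic water-filling: p_i = max(0, mu - noise_i/gains_i), with the
    water level mu chosen by bisection so that sum(p_i) == budget."""
    inv = noise / gains                  # per-channel "floor" height
    lo, hi = 0.0, inv.max() + budget     # bracket for the water level
    for _ in range(100):                 # bisection on the water level
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - inv)
        if p.sum() > budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

gains = np.array([1.0, 0.5, 0.1])        # illustrative channel gains
noise = np.array([1.0, 1.0, 1.0])        # illustrative noise powers
p = water_filling(gains, noise, budget=3.0)
print(p)  # the strongest channel receives the most power
```

Channels whose floor `noise_i/gains_i` sits above the water level receive zero power, which is why weak channels are switched off entirely.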
Permutation Trellis Coded Multi-level FSK Signaling to Mitigate Primary User Interference in Cognitive Radio Networks
We employ Permutation Trellis Code (PTC) based multi-level Frequency Shift Keying signaling to mitigate the impact of Primary Users (PUs) on the performance of Secondary Users (SUs) in Cognitive Radio Networks (CRNs). The PUs are assumed to be dynamic in that they appear intermittently and stay active for an unknown duration. Our approach is based on the use of PTC combined with multi-level FSK modulation so that an SU can improve its data rate by increasing its transmission bandwidth while operating at low power and without creating destructive interference for PUs. We evaluate system performance by obtaining an approximation of the actual Bit Error Rate (BER) using properties of the Viterbi decoder, and carry out a thorough performance analysis in terms of BER and throughput. The results show that the proposed coded system achieves i) robustness, by ensuring that SUs have stable throughput in the presence of heavy PU interference, and ii) improved resiliency of SU links to interference in the presence of multiple dynamic PUs.
Comment: 30 pages, 12 figures
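To give a flavor of the permutation-codebook idea underlying PTC, the sketch below enumerates all permutations of M = 4 FSK tones (one tone per time slot) and checks the codebook's minimum Hamming distance; the actual scheme maps convolutional-encoder outputs onto such codewords and decodes them with the Viterbi algorithm, which this toy example does not attempt:

```python
from itertools import permutations

# Full codebook: every permutation of M = 4 FSK tones, sent one tone
# per time slot. Any two distinct permutations differ in at least two
# positions, which is the distance the trellis code then builds on.
M = 4
codebook = list(permutations(range(M)))

def hamming(a, b):
    """Number of time slots in which two codewords use different tones."""
    return sum(x != y for x, y in zip(a, b))

dmin = min(hamming(a, b)
           for i, a in enumerate(codebook)
           for b in codebook[i + 1:])
print(len(codebook), dmin)  # 24 2
```

Because every symbol of a codeword occupies a different tone, a narrowband PU hitting one tone corrupts at most one slot per codeword, which is what makes the scheme resilient to dynamic PU interference.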
Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks
Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Since the radio spectrum is a limited natural resource, supporting the ever-increasing demand for higher capacity and higher data rates across diverse sets of users, services and applications is a challenging task that requires innovative technologies capable of exploiting the available radio spectrum more efficiently. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay and underlay [1].
The key enabling technology for DSA is cognitive radio (CR), which is among the most prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating in designated spectrum bands, a CR can operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experience and proactively change its transmission parameters as needed. These capabilities are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities, allocating and adapting the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. Therefore, this thesis explores the application of two machine learning techniques in enabling the cognitive capability of the CE: neural network-based supervised learning and reinforcement learning. Specifically, this thesis develops resource allocation algorithms that leverage machine learning to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated through extensive simulation runs.
The first resource allocation algorithm uses a neural network-based learning paradigm to realize a fully autonomous and distributed underlay DSA scheme in which each CR operates by predicting the effect of its transmissions on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration used by the primary link nearest to a transmitting CR, without any information exchange between the primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique keeps the relative average throughput change in the primary network within a prescribed maximum value, while also finding transmit settings for the CRs that yield throughput as large as the primary network's interference limit allows.
The second resource allocation algorithm uses reinforcement learning and aims at distributively maximizing the average quality of experience (QoE) across CR transmissions with different traffic types while satisfying a primary network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints. Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e. base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents. The exchange of learned information is used to improve the learning performance of a distributed CR network. This thesis further identifies best practices for transferring knowledge between CRs so as to reduce the communication overhead.
The investigations in this thesis propose a novel technique that accurately predicts the modulation scheme and channel coding rate used in a primary link without the need to exchange information between the two networks (e.g. access to feedback channels), while succeeding in the main goal of determining the transmit power of the CRs such that the interference they create remains below the maximum threshold the primary network can sustain with minimal effect on its average throughput. The investigations also provide physical-layer as well as cross-layer machine learning-based algorithms to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
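The reinforcement-learning approach described above can be caricatured with a stateless (bandit-style) Q-learning loop in which a CR learns which transmit power maximizes its own rate without violating the primary interference limit. All numbers below (gains, noise, threshold, candidate powers) are made-up illustrative values, not parameters from the thesis:

```python
import math, random

random.seed(0)

# Hypothetical setup: a CR agent chooses among discrete transmit power
# levels. Reward is its own Shannon rate; powers whose interference at
# the primary receiver exceeds a threshold are penalized.
POWERS = [0.1, 0.5, 1.0, 2.0]      # candidate transmit powers (W)
G_SU, G_PU, NOISE = 0.8, 0.3, 0.1  # assumed link gains and noise power
I_MAX = 0.2                        # primary interference threshold (W)

def reward(p):
    if G_PU * p > I_MAX:           # interference constraint violated
        return -1.0
    return math.log2(1.0 + G_SU * p / NOISE)

# Stateless Q-learning with epsilon-greedy exploration.
Q = [0.0] * len(POWERS)
alpha, eps = 0.1, 0.1
for _ in range(5000):
    if random.random() < eps:
        a = random.randrange(len(POWERS))
    else:
        a = max(range(len(POWERS)), key=Q.__getitem__)
    Q[a] += alpha * (reward(POWERS[a]) - Q[a])

best = max(range(len(POWERS)), key=Q.__getitem__)
print(POWERS[best])  # converges to the largest power that respects I_MAX
```

The transfer-learning idea mentioned in the abstract would amount to initializing a new agent's `Q` table from an expert agent's learned values instead of zeros, shortening the exploration phase.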
Optimization of the interoperability and dynamic spectrum management in mobile communications systems beyond 3G
The future wireless ecosystem will heterogeneously integrate a number of overlapping Radio Access Technologies (RATs) through a common platform. A major challenge arising from such a heterogeneous network is the Radio Resource Management (RRM) strategy. A Common RRM (CRRM) module is needed to provide a step toward network convergence. This work aims at implementing HSDPA and IEEE 802.11e CRRM evaluation tools.
Innovative enhancements to IEEE 802.11e have been pursued through the application of cross-layer signaling to improve Quality of Service (QoS) delivery and to provide more efficient usage of radio resources by adapting parameters such as the arbitration interframe spacing, a differentiated backoff procedure, transmission opportunities and acknowledgment policies (where the most advisable block size was found to be 12). The proposed cross-layer algorithm dynamically changes the size of the Arbitration Interframe Space (AIFS) and the Contention Window (CW) duration according to a periodically obtained fairness measure based on the Signal to Interference-plus-Noise Ratio (SINR) and transmission time, a delay constraint, and the collision rate of a given station. Throughput was increased by 2 Mb/s for all tested load values while satisfying more users than the original standard. For the ad hoc mode, an analytical model was proposed that allows investigating collision-free communications in a distributed environment.
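A heavily simplified sketch of such dynamic contention-window adaptation is shown below. It keeps only a collision-rate feedback term, whereas the algorithm described above also weighs an SINR-based fairness measure, transmission time and a delay constraint; the target collision probability and the update rule are assumptions for illustration:

```python
CW_MIN, CW_MAX = 15, 1023   # 802.11-style contention window bounds
TARGET = 0.1                # acceptable collision probability (assumed)

def adapt_cw(cw, collision_rate):
    """Grow CW when collisions exceed the target, relax toward CW_MIN
    when the channel is calm (simplified stand-in for the cross-layer
    rule, which also uses SINR fairness and delay terms)."""
    if collision_rate > TARGET:
        return min(CW_MAX, 2 * cw + 1)   # binary exponential growth
    return max(CW_MIN, (cw - 1) // 2)

cw = CW_MIN
history = []
for rate in [0.3, 0.25, 0.05, 0.02]:     # measured collision rates
    cw = adapt_cw(cw, rate)
    history.append(cw)
print(history)  # [31, 63, 31, 15]
```

The point of making the window adaptive, rather than fixed per access category as in baseline EDCA, is that stations back off harder only while contention is actually measured to be high.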
The addition of extra frequency spectrum bands, together with an integrated CRRM that enables spectrum aggregation, was also addressed. RAT selection algorithms allow for determining the gains obtained by using WiFi as a backup network for HSDPA. The proposed RAT selection algorithm is based on the load of each system, without the need for a complex management system. Simulation results show that, in such a scenario, for high system loads, exploiting localization while applying a load-suitability-based optimization algorithm can provide a gain of up to 450 kb/s in goodput. HSDPA was also studied in the context of cognitive radio by considering two co-located BSs operating at different frequencies (in the 2 and 5 GHz bands) in the same cell. The system automatically chooses the frequency to serve each user with an optimal General Multi-Band Scheduling (GMBS) algorithm. It was shown that, by enabling access to a secondary band through the proposed Integrated CRRM (iCRRM), an almost constant throughput gain near 30% was obtained with the proposed optimal solution, compared to a system where users are first allocated to one of the two bands and are later unable to hand over between them. In this context, future cognitive radio scenarios where IEEE 802.11e ad hoc modes will be essential for giving access to mobile users have been proposed.
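The gain from cross-band scheduling can be illustrated with a toy per-user band choice in the spirit of the GMBS idea: serve each user on whichever of the two co-located bands currently offers the higher achievable rate, versus a baseline where every user stays on a fixed band. The user names and rates below are made-up example values, not results from the thesis:

```python
# Achievable rates (Mb/s) per user on each co-located band (invented).
rates = {"u1": {"2GHz": 3.2, "5GHz": 1.1},
         "u2": {"2GHz": 0.9, "5GHz": 2.4},
         "u3": {"2GHz": 1.5, "5GHz": 1.4}}

# GMBS-style choice: each user gets its best band every scheduling round.
assignment = {u: max(r, key=r.get) for u, r in rates.items()}

fixed = sum(r["2GHz"] for r in rates.values())           # no-handover baseline
gmbs = sum(rates[u][b] for u, b in assignment.items())   # per-user best band
print(assignment)
print(gmbs, fixed)  # cross-band scheduling beats the fixed allocation
```

The thesis's near-constant ~30% gain arises for the same structural reason: whenever user channel conditions differ across bands, letting the scheduler hand users over between bands recovers rate that a fixed allocation leaves on the table.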