9 research outputs found
Machine Learning in Digital Signal Processing for Optical Transmission Systems
The future demand for digital information will exceed the capabilities of current optical communication systems, which are approaching their limits due to component and fiber intrinsic non-linear effects. Machine learning methods are promising for finding new ways of leveraging the available resources and for exploring new solutions. Although some machine learning methods, such as adaptive non-linear filtering and probabilistic modeling, are not novel in the field of telecommunications, more powerful architecture designs together with increasing computing power make it possible to tackle more complex problems today. The methods presented in this work apply machine learning to optical communication systems, with two main contributions. First, an unsupervised learning algorithm with an embedded additive white Gaussian noise (AWGN) channel and an appropriate power constraint is trained end-to-end, learning a geometric constellation shape that yields the lowest bit-error rates over amplified and unamplified links. Second, supervised machine learning methods, especially deep neural networks with and without internal cyclical connections, are investigated to combat linear and non-linear inter-symbol interference (ISI) as well as colored noise effects introduced by the components and the fiber. On high-bandwidth coherent optical transmission setups, their performance and complexity are experimentally evaluated and benchmarked against conventional digital signal processing (DSP) approaches. This thesis shows how machine learning can be applied to optical communication systems. In particular, it is demonstrated that machine learning is a viable design and DSP tool for increasing the capabilities of optical communication systems.
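As a rough illustration of the first contribution (hypothetical code, not the thesis's actual architecture), the sketch below shows the two ingredients such end-to-end training is built around: a power-normalisation constraint on the constellation and an embedded AWGN channel. Here they are used to evaluate a conventional 16-QAM shape with nearest-neighbour decisions; a learned geometric constellation would be optimised against exactly this kind of channel model.

```python
import numpy as np

def normalize_power(constellation):
    """Enforce the average-power constraint E[|x|^2] = 1."""
    p = np.mean(np.abs(constellation) ** 2)
    return constellation / np.sqrt(p)

def awgn(symbols, snr_db, rng):
    """Embedded AWGN channel as used inside end-to-end training."""
    noise_var = 10 ** (-snr_db / 10)  # valid because E[|x|^2] = 1 by construction
    noise = rng.normal(scale=np.sqrt(noise_var / 2), size=(len(symbols), 2))
    return symbols + noise[:, 0] + 1j * noise[:, 1]

def symbol_error_rate(constellation, snr_db, n=20000, seed=0):
    """Monte-Carlo SER with nearest-neighbour (minimum-distance) decisions."""
    rng = np.random.default_rng(seed)
    c = normalize_power(constellation)
    tx = rng.integers(len(c), size=n)
    rx = awgn(c[tx], snr_db, rng)
    decisions = np.argmin(np.abs(rx[:, None] - c[None, :]), axis=1)
    return np.mean(decisions != tx)

# 16-QAM as the baseline shape a learned geometric constellation competes with
qam16 = np.array([x + 1j * y for x in (-3, -1, 1, 3) for y in (-3, -1, 1, 3)],
                 dtype=complex)
print(symbol_error_rate(qam16, snr_db=15))
```

In the actual end-to-end approach, the constellation points are trainable parameters and the decision rule is a learned decoder, so gradients flow through the (differentiable) channel model; the normalisation step is what keeps the learned shape comparable at a fixed SNR.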
Machine learning techniques for self-interference cancellation in full-duplex systems
Full-duplex (FD), enabling remote parties to transfer information simultaneously in
both directions and in the same bandwidth, has been envisioned as an important
technology for the next-generation wireless networks. This is due to the ability to
leverage both time and frequency resources and theoretically double the spectral efficiency. Enabling FD communications is, however, highly challenging due to the
self-interference (SI), a leakage signal from the FD transmitter (Tx) to its own receiver
(Rx). The power of the SI is significantly higher when compared with the signal of
interest (SoI) from a remote node due to the proximity of the Tx to its co-located Rx.
The SI signal thus swamps the SoI and degrades the FD system's performance.
Traditional self-interference cancellation (SIC) approaches, spanning the propagation,
analog, and/or digital domains, have been explored to cancel the SI in FD
transceivers. Particularly, digital domain cancellation is typically performed using
model-driven approaches, which have proven to be effective for SIC; however, they
could impose additional cost, hardware, memory, and/or computational requirements.
Motivated by the aforementioned, this thesis aims to apply data-driven machine
learning (ML)-assisted SIC approaches to cancel the SI in FD transceivers in the digital
domain and to address the extra requirements imposed by the traditional methods.
Specifically, in Chapter 2, two grid-based neural network (NN) structures, referred
to as ladder-wise grid structure and moving-window grid structure, are proposed to
model the SI in FD transceivers with lower memory and computational requirements
than the literature benchmarks. Further reduction in the computational complexity
is provided in Chapter 3, where two hybrid-layers NN structures, referred to as
hybrid-convolutional recurrent NN and hybrid-convolutional recurrent dense NN, are
proposed to model the FD SI. These hybrid NN structures exhibit lower computational
requirements than the grid-based structures, without degradation in the
SIC performance. In Chapter 4, an output-feedback NN structure, referred to as the
dual neurons-ℓ hidden layers NN, is designed to model the SI in FD transceivers with
less memory and computational requirements than the grid-based and hybrid-layers
NN structures and without any additional deterioration to the SIC performance.
In Chapter 5, support vector regressors (SVRs), variants of support vector machines,
are proposed to cancel the SI in FD transceivers. A case study to assess the
performance of SVR-based approaches compared to the classical and other ML-based
approaches, using different performance metrics and two different test setups, is also
provided in this chapter. The SVR-based SIC approaches reduce the training
time compared to the NN-based approaches, which are, in contrast, shown to be
more efficient in terms of SIC, especially at high transmit power levels.
To further enhance the performance/complexity of the ML approaches provided
in Chapter 5, two learning techniques are investigated in Chapters 6 and 7. Specifically,
in Chapter 6, the concept of residual learning is exploited to develop an NN
structure, referred to as residual real-valued time-delay NN, to model the FD SI with
lower computational requirements than the benchmarks of Chapter 5. In Chapter 7,
a fast and accurate learning algorithm, namely extreme learning machine, is proposed
to suppress the SI in FD transceivers with a higher SIC performance and lower training
overhead than the benchmarks of Chapter 5. Finally, in Chapter 8, the thesis
conclusions are provided and the directions for future research are highlighted.
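The thesis's cancellers are NN structures; as a minimal stand-in illustrating the same data-driven digital SIC idea (an assumed linear memory model fitted by least squares, not any of the structures above), the sketch below reconstructs the SI from the known transmit samples and subtracts it from the received signal.

```python
import numpy as np

def build_memory_matrix(x, taps):
    """Stack delayed copies of the known Tx signal x (memory taps)."""
    n = len(x)
    cols = [np.concatenate([np.zeros(d, dtype=complex), x[: n - d]])
            for d in range(taps)]
    return np.stack(cols, axis=1)

def train_canceller(x, y, taps=4):
    """Least-squares fit of the SI channel from Tx samples x to Rx samples y."""
    A = build_memory_matrix(x, taps)
    h, *_ = np.linalg.lstsq(A, y, rcond=None)
    return h

def cancel(x, y, h):
    """Subtract the reconstructed SI; return residual and achieved SIC in dB."""
    A = build_memory_matrix(x, len(h))
    residual = y - A @ h
    sic_db = 10 * np.log10(np.mean(np.abs(y) ** 2)
                           / np.mean(np.abs(residual) ** 2))
    return residual, sic_db

rng = np.random.default_rng(1)
x = (rng.normal(size=2000) + 1j * rng.normal(size=2000)) / np.sqrt(2)
h_true = np.array([1.0, 0.5 - 0.2j, 0.1j])   # assumed 3-tap SI channel
y = build_memory_matrix(x, 3) @ h_true \
    + 0.01 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
h_hat = train_canceller(x, y, taps=3)
_, sic_db = cancel(x, y, h_hat)
print(f"{sic_db:.1f} dB of SIC")
```

The NN-based structures in Chapters 2 to 4 replace this linear model with learned non-linear mappings, which is what lets them capture transceiver non-linearities that a purely linear canceller misses.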
Linear Operation of Switch-Mode Outphasing Power Amplifiers
Radio transceivers are playing an increasingly important role in modern society. The
"connected" lifestyle has been enabled by modern wireless communications. The demand
placed on current wireless and cellular infrastructure requires increased spectral
efficiency; however, this has come at the cost of power efficiency. This work investigates
methods of improving wireless transceiver efficiency by enabling more efficient power
amplifier architectures, specifically examining the role of switch-mode power amplifiers in
macro cell scenarios. Our research focuses on the mechanisms within outphasing power
amplifiers which prevent linear amplification. From the analysis it was clear that high-power
non-linear effects are correctable with currently available techniques; however, non-linear effects
around the zero-crossing point are not. As a result, signal processing techniques for suppressing
and avoiding non-linear operation in low-power regions are explored. A novel method of digital
pre-distortion is presented, and conventional techniques for linearisation are adapted for the
particular needs of the outphasing power amplifier. More unconventional signal processing
techniques are presented to aid linearisation of the outphasing power amplifier, both zero
crossing and bandwidth expansion reduction methods are designed to avoid operation in nonlinear
regions of the amplifiers. In combination with digital pre-distortion the techniques
will improve linearisation efforts on outphasing systems with dynamic range and bandwidth
constraints respectively.
Our collaboration with NXP provided access to a digital outphasing power amplifier,
enabling empirical analysis of non-linear behaviour and comparative analysis of behavioural
modelling and linearisation efforts. The collaboration resulted in a benchmark for linear
wideband operation of a digital outphasing power amplifier. The complementary linearisation
techniques, bandwidth expansion reduction and zero crossing reduction have been evaluated in
both simulated and practical outphasing test benches. Initial results are promising and indicate
that the benefits they provide are not limited to the outphasing amplifier architecture alone.
Overall this thesis presents innovative analysis of the distortion mechanisms of the
outphasing power amplifier, highlighting the sensitivity of the system to environmental effects.
Practical and novel linearisation techniques are presented, with a focus on enabling wideband
operation for modern communications standards.
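The zero-crossing problem described above follows directly from the outphasing (LINC) decomposition, in which an amplitude-modulated signal is split into two constant-envelope branches that are summed after amplification. The sketch below (an illustrative textbook decomposition, not the NXP amplifier's implementation) shows the construction; note that as the amplitude approaches zero the outphasing angle approaches 90 degrees, so tiny amplitude changes produce large branch-phase swings, which is the source of the low-power non-linearity and bandwidth expansion.

```python
import numpy as np

def outphase(s, r_max):
    """LINC/outphasing decomposition of s into two constant-envelope branches."""
    r = np.abs(s)
    phi = np.angle(s)
    # Outphasing angle: theta -> pi/2 as r -> 0 (the zero-crossing problem)
    theta = np.arccos(np.clip(r / r_max, 0.0, 1.0))
    s1 = 0.5 * r_max * np.exp(1j * (phi + theta))
    s2 = 0.5 * r_max * np.exp(1j * (phi - theta))
    return s1, s2

rng = np.random.default_rng(0)
s = 0.8 * (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
r_max = np.max(np.abs(s))
s1, s2 = outphase(s, r_max)
# Both branches are constant-envelope, and their sum reconstructs s exactly:
# s1 + s2 = r_max * cos(theta) * exp(j*phi) = r * exp(j*phi) = s
print(np.max(np.abs(s1 + s2 - s)))
```

The zero-crossing reduction techniques in the thesis can be read as reshaping the signal trajectory so it avoids the region where theta is near 90 degrees, at the cost of some controlled distortion handled by the pre-distorter.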
Machine Learning-Enabled Resource Allocation for Underlay Cognitive Radio Networks
Due to the rapid growth of new wireless communication services and applications, much attention has been directed to frequency spectrum resources and the way they are regulated. Considering that the radio spectrum is a limited natural resource, supporting the ever-increasing demands for higher capacity and higher data rates for diverse sets of users, services and applications is a challenging task which requires innovative technologies capable of providing new ways of efficiently exploiting the available radio spectrum. Consequently, dynamic spectrum access (DSA) has been proposed as a replacement for static spectrum allocation policies. DSA is implemented in three modes: interweave, overlay and underlay [1].
The key enabling technology for DSA is cognitive radio (CR), which is among the core prominent technologies for the next generation of wireless communication systems. Unlike a conventional radio, which is restricted to operating only in designated spectrum bands, a CR has the capability to operate in different spectrum bands owing to its ability to sense and understand its wireless environment, learn from past experience, and proactively change its transmission parameters as needed. These features of a CR are provided by an intelligent software package called the cognitive engine (CE). In general, the CE manages radio resources to accomplish cognitive functionalities and allocates and adapts the radio resources to optimize the performance of the network. The cognitive functionality of the CE can be achieved by leveraging machine learning techniques. Therefore, this thesis explores the application of two machine learning techniques in enabling the cognition capability of the CE: neural network-based supervised learning and reinforcement learning. Specifically, this thesis develops resource allocation algorithms that leverage machine learning techniques to solve the resource allocation problem for heterogeneous underlay cognitive radio networks (CRNs). The proposed algorithms are evaluated through extensive simulation runs.
The first resource allocation algorithm uses a neural network-based learning paradigm to present a fully autonomous and distributed underlay DSA scheme where each CR operates based on predicting its transmission effect on a primary network (PN). The scheme is based on a CE with an artificial neural network that predicts the adaptive modulation and coding configuration for the primary link nearest to a transmitting CR, without exchanging information between primary and secondary networks. By managing the effect of the secondary network (SN) on the primary network, the presented technique maintains the relative average throughput change in the primary network within a prescribed maximum value, while also finding transmit settings for the CRs that result in throughput as large as allowed by the primary network interference limit.
The second resource allocation algorithm uses reinforcement learning and aims at distributively maximizing the average quality of experience (QoE) across transmissions of CRs with different types of traffic while satisfying a primary network interference constraint. To best satisfy the QoE requirements of delay-sensitive traffic, a cross-layer resource allocation algorithm is derived and its performance is compared against a physical-layer algorithm in terms of meeting end-to-end traffic delay constraints. Moreover, to accelerate the learning performance of the presented algorithms, the idea of transfer learning is integrated. The philosophy behind transfer learning is to allow well-established, expert cognitive agents (i.e., base stations or mobile stations in the context of wireless communications) to teach newly activated, naive agents. The exchange of learned information is used to improve the learning performance of a distributed CR network. This thesis further identifies best practices for transferring knowledge between CRs so as to reduce the communication overhead.
The investigations in this thesis propose a novel technique which is able to accurately predict the modulation scheme and channel coding rate used in a primary link without the need to exchange information between the two networks (e.g. access to feedback channels), while succeeding in the main goal of determining the transmit power of the CRs such that the interference they create remains below the maximum threshold that the primary network can sustain with minimal effect on the average throughput. The investigations in this thesis also provide physical-layer as well as cross-layer machine learning-based algorithms to address the challenge of resource allocation in underlay cognitive radio networks, resulting in better learning performance and reduced communication overhead.
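The underlay power-control idea behind the reinforcement learning algorithm can be sketched in a toy form. The model below is entirely hypothetical (invented power levels, channel gains, and interference cap, and a single-state Q-learning "bandit" rather than the thesis's full distributed cross-layer scheme): the agent learns to pick the largest transmit power whose interference at the primary receiver stays below the threshold.

```python
import numpy as np

# Hypothetical toy model: a CR picks one of several transmit powers; the reward
# favours throughput but penalises exceeding the primary interference cap.
POWERS = np.array([0.1, 0.5, 1.0, 2.0])   # candidate transmit powers (assumed)
GAIN_CR, GAIN_PN = 1.0, 0.4               # assumed gains to own Rx / primary Rx
I_MAX = 0.5                               # assumed primary interference threshold

def reward(p):
    interference = GAIN_PN * p
    if interference > I_MAX:
        return -1.0                       # interference constraint violated
    return np.log2(1.0 + GAIN_CR * p)     # Shannon-style throughput reward

def q_learning(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning, reduced to one state for brevity."""
    rng = np.random.default_rng(seed)
    q = np.zeros(len(POWERS))
    for _ in range(episodes):
        a = rng.integers(len(POWERS)) if rng.random() < eps else int(np.argmax(q))
        q[a] += alpha * (reward(POWERS[a]) - q[a])
    return q

q = q_learning()
best = POWERS[int(np.argmax(q))]
print(best)  # largest power whose interference stays below I_MAX
```

In the thesis's setting, the state would additionally encode channel and traffic conditions, the reward would reflect QoE and delay constraints, and transfer learning would let an expert agent's Q-values initialise a newly activated agent instead of starting from zeros.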
Synchronization in OFDM communication systems
EThOS - Electronic Theses Online Service, United Kingdom.