Machine learning techniques applied to dimensionality reduction for digital predistortion linearizers

Abstract

Over the past half century, the improvements in spacecraft technology have been primarily in the areas of microelectronics for on-board processing, high-frequency electronic devices and integrated circuits for communications and navigation, and solar cells and batteries for on-board power generation and storage, among many others. Despite the fact that energy-storage technologies have advanced dramatically over the past years, the power consumption of on-board communications, sensors and digital signal processing systems is of paramount importance in battery- or solar-powered systems such as small satellites, HAPs or UAVs (drones). There are multiple applications that involve the use of these systems, e.g., Earth observation, surveillance, broadcast communications, scientific research, etc. In wireless communications, the power amplifier (PA) is a critical subsystem in the transmitter chain, not only because it is one of the most power-hungry devices and accounts for most of the direct-current power consumption, but also because it is the main source of nonlinear distortion in the transmitter. Amplitude- and phase-modulated communications signals presenting a high peak-to-average power ratio have a negative impact on the transmitter's power efficiency, because the PA has to be operated at high power back-off levels to avoid introducing nonlinear distortion. Digital predistortion (DPD) linearization is the most common and widespread solution to cope with the PA's inherent linearity-versus-efficiency trade-off. When considering wide-bandwidth signals and highly efficient amplification architectures, such as Doherty PAs, envelope tracking PAs or outphasing transmitters, the number of parameters required in the DPD model to compensate for both nonlinearities and memory effects can be very high. This has a negative impact on the DPD coefficient extraction, because it increases the computational complexity and leads to over-fitting and uncertainty. However, by applying dimensionality reduction techniques we can both avoid the numerical ill-conditioning of the estimation and reduce the number of coefficients of the DPD function, which ultimately impacts the baseband processing computational complexity and power consumption. In this project, several dimensionality reduction techniques will be described and compared in terms of model order reduction capabilities and evaluation performance. In particular, some of the machine learning techniques for dimensionality reduction will be studied.
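As an illustrative sketch only (not the method developed in this project), the following Python snippet shows how a PCA-style truncated SVD of a memory-polynomial regressor matrix can both reduce the number of DPD coefficients and improve the conditioning of the least-squares extraction. The toy PA model, the synthetic signal, the model orders K and Q, and the number of retained components M are all assumptions made for demonstration purposes.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic complex baseband signal (stands in for a high-PAPR waveform).
    N = 4096
    x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

    # Toy PA with mild nonlinearity and one tap of memory (assumed model).
    y = x + 0.05 * np.abs(x) ** 2 * x - 0.01 * np.roll(x, 1)

    def memory_polynomial(u, K=7, Q=4):
        """Regressor matrix of a memory polynomial:
        columns are u[n-q] * |u[n-q]|**(k-1) for k = 1..K, q = 0..Q-1."""
        cols = []
        for q in range(Q):
            uq = np.roll(u, q)
            for k in range(1, K + 1):
                cols.append(uq * np.abs(uq) ** (k - 1))
        return np.column_stack(cols)

    # Indirect-learning style extraction: regress the PA input x on its output y.
    U = memory_polynomial(y)

    # Full least-squares solution: K*Q coefficients, possibly ill-conditioned.
    w_full, *_ = np.linalg.lstsq(U, x, rcond=None)

    # Truncated SVD of the regressors: keep the M dominant components and
    # solve the least-squares problem in the reduced subspace.
    M = 10
    _, s, Vh = np.linalg.svd(U, full_matrices=False)
    V = Vh[:M].conj().T                      # basis of the reduced subspace
    w_red, *_ = np.linalg.lstsq(U @ V, x, rcond=None)

    print(f"full model:    {U.shape[1]} coefficients, cond = {np.linalg.cond(U):.1e}")
    print(f"reduced model: {M} coefficients, cond = {np.linalg.cond(U @ V):.1e}")

The printed condition numbers illustrate the point made above: since the reduced design matrix U @ V only spans the M dominant singular directions of U, its condition number is s[0]/s[M-1] rather than s[0]/s[-1], so discarding the weakest directions directly removes the source of numerical ill-conditioning while shrinking the coefficient vector that the baseband processor must apply.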
