A Master of Science thesis in Electrical Engineering by Mohammad Rabih Aziz entitled, “Low-Complexity Machine Learning-based Behavioral Modeling of Power Amplifiers”, submitted in July 2025. Thesis advisor is Dr. Oualid Hammi. Soft copy is available (Thesis, Completion Certificate, Approval Signatures, and AUS Archives Consent Form).

Linearization is the process of countering the distortions introduced by power amplifiers when they are driven close to saturation. Digital Predistortion is a popular linearization technique in which a predistorter distorts the input to the power amplifier by applying the inverse function of its behavioral model, yielding a distortion-free output from the transmitter. Behavioral modeling is therefore an important aspect of the linearization process. Among the various behavioral models that have been studied over the years, Neural Networks, or Multilayer Perceptrons, have gained popularity for their ability to capture intricate and dynamic details of the power amplifier’s behavior. It is of great interest to reduce the complexity of these models, as predistorters are often constrained by computational power and storage limitations. With this motivation, clever preprocessing, optimal model selection, unstructured pruning, and quantization are investigated in this work. Specifically, networks with two different input basis functions – RVTDNN and ARVTDNN – are trained on selectively sampled data, and optimal models among the pool of implemented models are selected using the Bayesian Information and Akaike Information Criteria. Then, pruning and quantization are applied to the set of optimally selected memory models. Additionally, three more metrics – the normalized mean square error (NMSE), mean square error (MSE), and storage size – are used for a comprehensive quantitative analysis of the complexity-performance trade-off.
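The NMSE figure of merit referenced above is conventionally computed as the total error power between measured and modeled outputs, normalized by the measured output power and expressed in dB. A minimal sketch (assuming complex baseband waveforms as NumPy arrays; function name is illustrative, not from the thesis):

```python
import numpy as np

def nmse_db(y_measured, y_modeled):
    """NMSE in dB: error power normalized by the measured signal power."""
    err_power = np.sum(np.abs(y_measured - y_modeled) ** 2)
    ref_power = np.sum(np.abs(y_measured) ** 2)
    return 10.0 * np.log10(err_power / ref_power)
```

A more negative NMSE (e.g., -38 dB vs. -30 dB) indicates a more accurate behavioral model.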
As per the findings, pruning achieved significant model compression, in terms of the number of parameters, with little impact on performance for up to 30% sparsity in both models. Further model compression, in terms of storage size, was also observed for both models and their sparse versions after quantization. Moreover, this work introduces Kolmogorov-Arnold Networks for the first time in the discourse on power amplifier behavioral modeling. The results show that the two implemented models – RVTDKAN and ARVTDKAN – achieved an NMSE of -37.78 dB and -38.03 dB, respectively, outperforming their Multilayer Perceptron counterparts and demonstrating superior modeling capabilities with a lower parameter count.

College of Engineering
Department of Electrical Engineering
Master of Science in Electrical Engineering (MSEE)
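The unstructured pruning studied in the thesis can be illustrated by global magnitude pruning, where the smallest-magnitude weights are zeroed until a target sparsity is reached. A minimal sketch, assuming weights are held in a NumPy array (the thesis's actual pruning procedure and framework are not specified here):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.30):
    """Zero out (approximately) the smallest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to prune
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

At 30% sparsity, roughly a third of the parameters are removed; per the findings above, this level of pruning costs little in NMSE for both RVTDNN and ARVTDNN models.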