
Data-driven virtual reference feedback tuning and reinforcement Q-learning for model-free position control of an aerodynamic system

By Mircea B. Radac, Radu-Emil Precup and Raul C. Roman

Abstract

This paper compares a linear Virtual Reference Feedback Tuning (VRFT) model-free technique, which tunes a feedback controller from input-output data, with two reinforcement Q-learning model-free nonlinear state-feedback controllers tuned from input-state experimental data (ED); the two approaches are treated as two separate learning techniques. The state-feedback controllers are tuned in a model reference setting that aims at linearizing the control system (CS) over a wide operating range. Both learning techniques are validated on a position control case study for an open-loop stable aerodynamic system, and their performance is compared in terms of structural complexity, CS performance, and the amount of ED needed for learning.
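For illustration only, the Python sketch below outlines the two model-free ideas the abstract contrasts: a VRFT least-squares fit of a controller from a single input-output record, and one tabular Q-learning temporal-difference update driven by a model-reference tracking reward. The first-order reference model M(z) = (1 - a)/(1 - a z^-1), the PI controller structure, the synthetic plant in the demo, and all gains are assumptions made for this sketch; they are not the controllers, reference model, or aerodynamic plant used in the paper. The VRFT part shows the "model-free" point: no plant model is identified, and the controller parameters come directly from data.

    # Illustrative sketches only; all models and parameters are assumptions,
    # not the authors' experimental setup.
    import numpy as np
    from scipy.signal import lfilter

    # --- Virtual Reference Feedback Tuning (VRFT), linear PI controller ---
    # Given one open-loop record (u, y) and an assumed reference model
    # M(z) = (1 - a) / (1 - a z^-1), VRFT filters y through M^-1 to get the
    # virtual reference r_bar, forms the virtual error e_bar = r_bar - y,
    # and fits u ~ kp*e_bar + ki*Ts*cumsum(e_bar) by least squares.
    def vrft_pi(u, y, a=0.8, Ts=0.01):
        r_bar = lfilter([1.0, -a], [1.0 - a], y)  # inverse reference model
        e_bar = r_bar - y
        phi = np.column_stack([e_bar, Ts * np.cumsum(e_bar)])  # PI regressor
        theta, *_ = np.linalg.lstsq(phi, u, rcond=None)
        return theta  # [kp, ki]

    # --- Reinforcement Q-learning, one tabular TD update ---
    # State s: a discretized tracking state (e.g. error and error rate
    # w.r.t. the reference model output); reward r penalizes deviation
    # from the model-reference trajectory.
    def q_update(Q, s, act, r, s_next, alpha=0.1, gamma=0.95):
        td_target = r + gamma * np.max(Q[s_next])
        Q[s, act] += alpha * (td_target - Q[s, act])
        return Q

    if __name__ == "__main__":
        # Demo on synthetic data: the plant below is "unknown" to VRFT and
        # is used only to generate the input-output record.
        rng = np.random.default_rng(0)
        u = rng.uniform(-1.0, 1.0, 2000)          # PRBS-like excitation
        y = lfilter([0.05], [1.0, -0.95], u)      # hypothetical plant
        print("PI gains [kp, ki]:", vrft_pi(u, y))

In this sketch the controller is fitted in one batch from a single experiment, whereas the Q-learning update must be applied repeatedly along closed-loop trajectories; this difference is one concrete source of the gap in the amount of ED needed for learning that the paper discusses.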

Topics: aerodynamics, feedback, feedback control, learning algorithms, learning systems, nonlinear control systems, position control, reinforcement learning, state feedback, aerodynamic systems, feedback controller, learning techniques, nonlinear state feedbacks, performance comparison, state feedback controller, structural complexity, virtual reference feedback tuning, Aerodynamics and Fluid Mechanics
Publisher: Edith Cowan University, Research Online, Perth, Western Australia
Year: 2016
DOI identifier: 10.1109/MED.2016.7535876
OAI identifier: oai:ro.ecu.edu.au:ecuworkspost2013-3234
Provided by: Research Online @ ECU
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://dx.doi.org/10.1109/MED.... (external link)
  • http://ro.ecu.edu.au/ecuworksp... (external link)