    Using Actor-Critic Reinforcement Learning for Control of a Quadrotor Dynamics

    This paper presents a quadrotor controller that uses reinforcement learning to generate near-optimal control signals. Two actor-critic algorithms are trained to control the quadrotor dynamics, which are simplified using the small-angle approximation. The actor-critic algorithm's control policy is derived from Bellman's equation, which provides a sufficient condition for optimality. Additionally, a smoothing converter is applied to the trajectory, yielding more reliable results. This paper provides derivations of the quadrotor's dynamics and explains the control design using the actor-critic algorithm. The results and simulations are compared to solutions from a commercial optimal control solver called DIDO.
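The critic/actor updates driven by Bellman's equation can be sketched as below. This is a minimal illustration on a scalar toy system under assumed dynamics, features, and gains; it is not the paper's quadrotor model or its exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x, u, dt=0.05):
    # Stand-in scalar dynamics (e.g., one linearized axis); clipping keeps
    # the toy simulation bounded.
    return float(np.clip(x + dt * u, -1.5, 1.5))

def features(x):
    # Linear-in-the-weights critic features: V(x) ~ w . [x, x^2]
    return np.array([x, x * x])

w_critic = np.zeros(2)   # critic weights
k_actor = 0.0            # actor gain: u = k_actor * x
gamma, alpha_c, alpha_a = 0.95, 0.02, 0.005

for episode in range(200):
    x = rng.uniform(-1.0, 1.0)
    for t in range(50):
        noise = 0.1 * rng.standard_normal()       # exploration
        u = k_actor * x + noise
        x_next = step(x, u)
        cost = x * x + 0.1 * u * u                # quadratic stage cost
        # Temporal-difference error from Bellman's equation (cost-minimizing form)
        delta = cost + gamma * features(x_next) @ w_critic - features(x) @ w_critic
        delta = float(np.clip(delta, -10.0, 10.0))  # keep the sketch numerically tame
        w_critic += alpha_c * delta * features(x)   # critic: reduce TD error
        k_actor -= alpha_a * delta * noise * x      # actor: descend along exploration
        x = x_next
```

The critic learns an approximate cost-to-go while the actor adjusts its gain against the same temporal-difference signal, which is the core interplay the abstract describes.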

    Approximate dynamic programming based solutions for fixed-final-time optimal control and optimal switching

    Optimal solutions with neural networks (NN) based on an approximate dynamic programming (ADP) framework for new classes of engineering and non-engineering problems, along with the associated difficulties and challenges, are investigated in this dissertation. In the enclosed eight papers, the ADP framework is utilized for solving fixed-final-time problems (also called terminal control problems) and problems with a switching nature. An ADP-based algorithm is proposed in Paper 1 for solving fixed-final-time problems with soft terminal constraints, in which a single neural network with a single set of weights is utilized. Paper 2 investigates fixed-final-time problems with hard terminal constraints. The optimality analysis of the ADP-based algorithm for fixed-final-time problems is the subject of Paper 3, in which it is shown that the proposed algorithm leads to the globally optimal solution provided certain conditions hold. Afterwards, the developments in Papers 1 to 3 are used to tackle a more challenging class of problems, namely, optimal control of switching systems. This class of problems is divided into problems with a fixed mode sequence (Papers 4 and 5) and problems with a free mode sequence (Papers 6 and 7). Each of these two classes is further divided into problems with autonomous subsystems (Papers 4 and 6) and problems with controlled subsystems (Papers 5 and 7). Different ADP-based algorithms are developed, and proofs of convergence of the proposed iterative algorithms are presented. Moreover, an extension to these developments is provided in Paper 8 for online learning of the optimal switching solution for problems with modeling uncertainty. Each of the theoretical developments is numerically analyzed using different real-world or benchmark problems. --Abstract, page v
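The backward-in-time structure of a fixed-final-time (terminal control) problem can be sketched with an exact tabular recursion. The dissertation approximates the time-varying value function with neural networks; the grid, dynamics, and costs below are illustrative assumptions chosen so the recursion is easy to follow.

```python
import numpy as np

xs = np.linspace(-2, 2, 41)      # state grid
us = np.linspace(-1, 1, 21)      # control grid (includes u = 0)
N, dt = 20, 0.1                  # horizon: N steps of length dt

V = 10.0 * xs ** 2               # soft terminal cost at the final time
for k in range(N):               # sweep backward from t_N toward t_0
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        x_next = x + dt * us     # candidate successor states for each u
        # Interpolate the cost-to-go at the successors (np.interp clamps
        # at the grid edges), add the running cost, and minimize over u.
        q = dt * (x * x + us ** 2) + np.interp(x_next, xs, V)
        V_new[i] = q.min()
    V = V_new

# V now holds the optimal cost-to-go from t = 0 over the horizon N * dt.
```

The key feature, mirrored by the NN-based algorithms, is that the value function at each time step is built from the value function one step later, starting from the terminal cost.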

    Fixed-Final-Time Optimal Control of Nonlinear Systems with Terminal Constraints

    A model-based reinforcement learning algorithm is developed in this paper for fixed-final-time optimal control of nonlinear systems with soft and hard terminal constraints. Convergence of the algorithm, for linear-in-the-weights neural networks, is proved by showing that the training algorithm is a contraction mapping. Once trained, the developed neurocontroller is capable of solving this class of optimal control problems for different initial conditions, different final times, and different terminal constraint surfaces, provided some mild conditions hold. Three examples are provided, and the numerical results demonstrate the versatility and the potential of the developed technique.
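The contraction-mapping argument can be illustrated in miniature. For a scalar linear system under a fixed stabilizing policy, the Bellman backup of a single linear-in-the-weights value parameter w (with V(x) = w x^2) reduces to an affine map T(w) = c + rho * w with |rho| < 1, so successive weight iterates converge to a unique fixed point from any initialization. The constants below are illustrative assumptions, not the paper's setup.

```python
a_cl = 0.8                # closed-loop transition: x_next = a_cl * x
c = 1.0                   # effective per-step cost coefficient
rho = a_cl ** 2           # contraction factor of the weight update, |rho| < 1

def T(w):
    # Bellman backup acting on the single value-function weight
    return c + rho * w

w = 50.0                  # deliberately poor initialization
for _ in range(100):
    w = T(w)              # successive approximation shrinks the error by rho

w_star = c / (1.0 - rho)  # analytic fixed point of the contraction
```

By the Banach fixed-point theorem, the iteration error shrinks geometrically at rate rho, which is the same mechanism behind the convergence proof the abstract mentions, applied there to the full training algorithm.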