163 research outputs found
Stabilization of Continuous-Time Adaptive Control Systems with Possible Input Saturation through a Controllable Modified Estimation Model
This paper presents an indirect adaptive control scheme for linear continuous-time systems. The estimated plant model is guaranteed to be controllable, so the adaptive scheme is free from singularities. Such singularities are avoided through a modification of the estimated plant parameter vector so that its associated Sylvester matrix is guaranteed to be nonsingular. That property is achieved by ensuring that the absolute value of its determinant does not fall below a positive threshold. An alternative modification scheme, based on achieving a modified diagonally dominant Sylvester matrix of the parameter estimates, is also proposed. This diagonal dominance is achieved by modifying the estimates so as to guarantee the controllability of the modified estimated model whenever a controllability measure of the unmodified estimation model fails. In both schemes, the use of a hysteresis switching function for the modification of the estimates is not required to ensure the controllability of the modified estimated model. Both schemes ensure that chattering due to switches associated with the modification is not present. The results are extended to the first-order case when the input is subject to saturation, with the saturation modeled as a sigmoid function. In this case, a hysteresis-type switching law is used to implement the estimates modification.
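The determinant-threshold test described in the abstract can be sketched numerically: two polynomials (the estimated plant numerator and denominator) are coprime, and the estimated model therefore controllable, exactly when the determinant of their Sylvester matrix (the resultant) is nonzero. The sketch below is illustrative only; the coefficient ordering, threshold value, and function names are assumptions, not the paper's construction.

```python
import numpy as np

def sylvester(a, b):
    """Sylvester matrix of polynomials a(s), b(s).

    a, b: coefficient sequences, highest-degree term first.
    The determinant (the resultant) is nonzero iff a and b are coprime.
    """
    n, m = len(a) - 1, len(b) - 1
    S = np.zeros((n + m, n + m))
    for i in range(m):                    # m shifted copies of a's coefficients
        S[i, i:i + n + 1] = a
    for i in range(n):                    # n shifted copies of b's coefficients
        S[m + i, i:i + m + 1] = b
    return S

def is_controllable_estimate(a, b, eps=1e-3):
    """Controllability proxy: |det(Sylvester)| must stay above a threshold
    (eps is an arbitrary illustrative value)."""
    return abs(np.linalg.det(sylvester(a, b))) >= eps

# Coprime pair: a(s) = s^2 + 3s + 2, b(s) = s + 3  -> nonzero resultant
print(is_controllable_estimate([1, 3, 2], [1, 3]))   # True
# Common root at s = -1: a(s) = s^2 + 3s + 2, b(s) = s + 1 -> resultant 0
print(is_controllable_estimate([1, 3, 2], [1, 1]))   # False
```

When the check fails, the paper's schemes modify the parameter estimates until the determinant (or a diagonal-dominance condition) is restored.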
Incorporating Risk into Control Design for Emergency Operation of Turbo-Fan Engines
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/90650/1/AIAA-2011-1591-262.pd
Model reference adaptive control of a two axes hydraulic manipulator
Dissertation submitted to obtain the degree of Doctor at the University of Bat
Recurrent neural networks and adaptive motor control
This thesis is concerned with the use of neural networks for motor control tasks. The main goal of the thesis is to investigate ways in which the biological notions of motor programs and Central Pattern Generators (CPGs) may be implemented in a neural network framework. Biological CPGs can be seen as components within a larger control scheme, which is basically modular in design. In this thesis, these ideas are investigated through the use of modular recurrent networks, which are used in a variety of control tasks.
The first experimental chapter deals with learning in recurrent networks, and it is shown that CPGs may be easily implemented using the machinery of backpropagation. The use of these CPGs can aid the learning of pattern generation tasks; they can also mean that the other components in the system can be reduced in complexity, say, to a purely feedforward network. It is also shown that incremental learning, or 'shaping', is
an effective method for building CPGs. Genetic algorithms are also used to build CPGs; although computational effort prevents this from being a practical method, it does show that GAs are capable of optimising systems that operate in the context of a larger scheme. One interesting result from the GA is that optimal CPGs tend to have unstable dynamics, which may have implications for building modular neural controllers.
The next chapter applies these ideas to some simple control tasks involving a highly redundant simulated robot arm. It is shown that it is relatively straightforward to build CPGs that represent elements of pattern generation, constraint satisfaction, and local feedback. This is indirect control, in which errors are backpropagated through a plant model, as well as the CPG itself, to give errors for the controller.
Finally, the third experimental chapter takes an alternative approach and uses direct control methods, such as reinforcement learning. In reinforcement learning, controller outputs have unmodelled effects; this allows us to build complex control systems, where outputs modulate the couplings between sets of dynamic systems. This is shown for a simple case involving a system of coupled oscillators. A second set of experiments investigates the use of simplified models of behaviour; this is a reduced form of supervised learning, and the use of such models in control is discussed.
Active Learning of Discrete-Time Dynamics for Uncertainty-Aware Model Predictive Control
Model-based control requires an accurate model of the system dynamics for
precisely and safely controlling the robot in complex and dynamic environments.
Moreover, in the presence of variations in the operating conditions, the model
should be continuously refined to compensate for dynamics changes. In this
paper, we present a self-supervised learning approach that actively models the
dynamics of nonlinear robotic systems. We combine offline learning from past
experience and online learning from current robot interaction with the unknown
environment. These two ingredients enable a highly sample-efficient and
adaptive learning process, capable of accurately inferring model dynamics in
real-time even in operating regimes that greatly differ from the training
distribution. Moreover, we design an uncertainty-aware model predictive
controller that is heuristically conditioned to the aleatoric (data)
uncertainty of the learned dynamics. This controller actively chooses the
optimal control actions that (i) optimize the control performance and (ii)
improve the efficiency of online learning sample collection. We demonstrate the
effectiveness of our method through a series of challenging real-world
experiments using a quadrotor system. Our approach showcases high resilience
and generalization capabilities by consistently adapting to unseen flight
conditions, while it significantly outperforms classical and adaptive control
baselines
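One plausible reading of conditioning the controller on aleatoric uncertainty is to add a variance penalty to the planning cost, so that the optimizer trades tracking performance against collecting noisy, uninformative data. The sketch below uses random-shooting over a one-dimensional stand-in dynamics model; the dynamics, variance model, `beta` weight, and function names are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(state, action):
    """Stand-in learned dynamics: mean next state plus an aleatoric
    variance that grows with action magnitude (hypothetical; a real
    system would query an ensemble or heteroscedastic network)."""
    mean = state + 0.1 * action
    var = 0.01 + 0.1 * action**2
    return mean, var

def uncertainty_aware_mpc(state, target, beta=1.0, n_samples=256):
    """Random-shooting MPC: sample candidate actions, score each by
    tracking error plus beta times predicted aleatoric variance, and
    return the minimizer. beta trades performance against caution."""
    actions = rng.uniform(-1.0, 1.0, n_samples)
    means, vars_ = predict(state, actions)
    cost = (means - target)**2 + beta * vars_
    return actions[np.argmin(cost)]

a = uncertainty_aware_mpc(state=0.0, target=0.05)
```

With this toy cost the penalized optimum sits below the pure tracking optimum: the variance term pulls the chosen action toward zero, which is the qualitative effect the abstract describes.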
- …