On the Selection of Tuning Methodology of FOPID Controllers for the Control of Higher Order Processes
In this paper, a comparative study is done on the time and frequency domain
tuning strategies for fractional order (FO) PID controllers to handle higher
order processes. A new fractional order template for reduced parameter modeling
of stable minimum/non-minimum phase higher order processes is introduced and
its advantage in frequency domain tuning of FOPID controllers is also
presented. The time domain optimal tuning of FOPID controllers has also been
carried out to handle these higher order processes by performing optimization
with various integral performance indices. The paper highlights practical
control system implementation issues, such as flexibility of online
autotuning, reduced control signal and actuator size, capability of measurement
noise filtration, load disturbance suppression, and robustness against parameter
uncertainties, in light of the above tuning methodologies.
Comment: 27 pages, 10 figures
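The integral performance indices mentioned above (ISE, IAE, ITAE, ITSE are the usual choices) can be sketched as follows; this is an illustrative computation on a synthetic error signal, not the paper's actual optimization loop or plant model:

```python
import math

def performance_indices(t, e):
    """Approximate the standard integral performance indices for a
    sampled closed-loop error signal e(t) on a uniform time grid t."""
    dt = t[1] - t[0]
    return {
        "ISE":  sum(ei ** 2 for ei in e) * dt,                      # integral of squared error
        "IAE":  sum(abs(ei) for ei in e) * dt,                      # integral of absolute error
        "ITAE": sum(ti * abs(ei) for ti, ei in zip(t, e)) * dt,     # time-weighted absolute error
        "ITSE": sum(ti * ei ** 2 for ti, ei in zip(t, e)) * dt,     # time-weighted squared error
    }

# Illustrative error trace: an exponentially decaying error, as from a
# stable step response (assumption; the paper tunes real higher order plants).
dt = 0.01
t = [i * dt for i in range(1001)]          # 0 .. 10 s
e = [math.exp(-ti) for ti in t]
idx = performance_indices(t, e)
```

A time-domain tuner would evaluate one such index on the simulated closed loop and minimize it over the five FOPID parameters; time-weighted indices such as ITAE penalize slowly decaying errors more heavily.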
Control Regularization for Reduced Variance Reinforcement Learning
Dealing with high variance is a significant challenge in model-free
reinforcement learning (RL). Existing methods are unreliable, exhibiting high
variance in performance from run to run using different initializations/seeds.
Focusing on problems arising in continuous control, we propose a functional
regularization approach to augmenting model-free RL. In particular, we
regularize the behavior of the deep policy to be similar to a policy prior,
i.e., we regularize in function space. We show that functional regularization
yields a bias-variance trade-off, and propose an adaptive tuning strategy to
optimize this trade-off. When the policy prior has control-theoretic stability
guarantees, we further show that this regularization approximately preserves
those stability guarantees throughout learning. We validate our approach
empirically on a range of settings, and demonstrate significantly reduced
variance, guaranteed dynamic stability, and more efficient learning than deep
RL alone.
Comment: Appearing in ICML 201
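The idea of regularizing a learned policy toward a control prior in function space can be sketched with a simple convex blend of the two actions; the weighting λ below is an assumed form of the regularization strength (λ = 0 recovers pure RL, λ → ∞ recovers the prior), not necessarily the paper's exact parameterization:

```python
def regularized_action(u_rl, u_prior, lam):
    """Blend the learned action toward the control prior.

    lam >= 0 sets the strength of the functional regularization:
    the result interpolates between u_rl (lam = 0) and u_prior
    (lam large). Shrinking toward a fixed prior reduces the variance
    contributed by the learned policy at the cost of bias toward the
    prior -- the bias-variance trade-off the abstract refers to.
    """
    return (u_rl + lam * u_prior) / (1.0 + lam)
```

An adaptive tuning strategy would then adjust λ online, e.g. lowering it as confidence in the learned policy grows, to trade the prior's stability guarantees against the RL policy's performance.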
Stability and Performance Verification of Optimization-based Controllers
This paper presents a method to verify closed-loop properties of
optimization-based controllers for deterministic and stochastic constrained
polynomial discrete-time dynamical systems. The closed-loop properties amenable
to the proposed technique include global and local stability, performance with
respect to a given cost function (both in a deterministic and stochastic
setting) and the ℓ2 gain. The method applies to a wide range of
practical control problems: For instance, a dynamical controller (e.g., a PID)
plus input saturation, model predictive control with state estimation, inexact
model and soft constraints, or a general optimization-based controller where
the underlying problem is solved with a fixed number of iterations of a
first-order method are all amenable to the proposed approach.
The approach is based on the observation that the control input generated by
an optimization-based controller satisfies the associated Karush-Kuhn-Tucker
(KKT) conditions which, provided all data is polynomial, are a system of
polynomial equalities and inequalities. The closed-loop properties can then be
analyzed using sum-of-squares (SOS) programming
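The key observation, that a polynomial optimization-based controller's output satisfies polynomial KKT conditions, can be illustrated on a toy one-dimensional problem (my example, not the paper's): for min x² subject to x ≥ 1, the optimizer x* = 1 with multiplier μ* = 2 makes every KKT residual a polynomial identity that evaluates to zero:

```python
def kkt_residuals(x, mu):
    """KKT residuals for min x^2 s.t. x >= 1, written as g(x) = 1 - x <= 0.

    All three conditions are polynomial in (x, mu), which is what lets
    the closed loop be analyzed with sum-of-squares programming."""
    stationarity = 2 * x - mu        # grad of x^2 minus mu * grad of (x - 1)
    primal = 1 - x                   # feasibility: must be <= 0
    complementarity = mu * (x - 1)   # mu * g(x) must equal 0
    return stationarity, primal, complementarity

s, p, c = kkt_residuals(1.0, 2.0)    # evaluate at the claimed optimum
```

In the paper's setting these polynomial equalities and inequalities are adjoined to the closed-loop dynamics, and stability or performance certificates are then searched for with SOS programming.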
Time-varying partitioning for predictive control design: density-games approach
The design of distributed optimization-based controllers for large-scale systems (LSSs) continually poses new challenges. The fact that LSSs are generally spread over large geographical areas makes the collection of measurements and their transmission difficult. In this regard, the communication network required for a centralized control approach might have high associated economic costs. Furthermore, the large amount of data implies a high computational burden to manage, process, and use it in order to make decisions about the system operation. A plausible solution to mitigate the aforementioned issues associated with the control of LSSs consists in dividing these systems into smaller sub-systems that can be handled by independent local controllers. This paper studies two fundamental components of the design of distributed optimization-based controllers for LSSs, i.e., the system partitioning and distributed optimization algorithms. The design of distributed model predictive control (DMPC) strategies with a system partitioning and by using density-dependent population games (DDPG) is presented.
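The core dynamic behind population-game approaches of this kind is the replicator equation, which reallocates shares of a resource toward higher-fitness options while staying on the simplex. The toy below (names and fitness values are illustrative, not the paper's DMPC partitioning scheme) distributes a shared resource fraction across subsystem partitions:

```python
def replicator_step(x, f, step=0.1):
    """One discrete-time replicator update: share x_i grows when its
    fitness f_i exceeds the population-average fitness. The update
    preserves sum(x), so x stays a valid distribution over partitions."""
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + step * xi * (fi - avg) for xi, fi in zip(x, f)]

x = [0.5, 0.3, 0.2]   # current resource shares across three partitions
f = [1.0, 2.0, 4.0]   # fitness: here, how under-served each partition is
for _ in range(200):
    x = replicator_step(x, f)
# With constant fitness, shares concentrate on the fittest partition.
```

In a density-dependent setting the fitness f would itself depend on the current shares x, steering the allocation toward a balanced equilibrium rather than a single partition.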