4,165 research outputs found

    Method And System For Dynamic Stochastic Optimal Electric Power Flow Control

    A dynamic stochastic optimal power flow (DSOPF) control system is described for providing multi-objective optimal control capability in complex electrical power systems. The DSOPF system and method, based on adaptive critic designs (ACDs), replaces the traditional automatic generation control and secondary voltage control, and provides a coordinated AC power flow control solution for smart grid operation in an environment with high short-term uncertainty and variability. The DSOPF system and method provides nonlinear optimal control, where the control objective is explicitly formulated to incorporate power system economy, stability and security considerations. The system and method dynamically drives a power system to its optimal operating point by continuously adjusting the steady-state set points sent by a traditional optimal power flow algorithm.
    Clemson University; Georgia Tech Research Corporation; The Curators of the University of Missouri
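    The last sentence describes the mechanism: an outer control loop that keeps adjusting the steady-state set points issued by a conventional optimal power flow solver. A minimal sketch of that idea is given below; the names (dsopf_step, critic_gradient), the step size and the toy cost gradient are illustrative assumptions, not the patented method.

```python
import numpy as np

# Hypothetical sketch of the outer adjustment loop described in the abstract.
# The grid interface, gradient and gains are placeholders, not the patented design.

def dsopf_step(setpoints, state, critic_gradient, rate=0.05):
    """Nudge the steady-state set points along the critic's estimated cost gradient."""
    return setpoints - rate * critic_gradient(state, setpoints)

setpoints = np.array([1.00, 0.98, 1.02])   # e.g. voltage set points from a traditional OPF run (p.u.)
state = np.zeros(3)                        # measured system state (placeholder)
grad = lambda s, u: 0.1 * (u - 1.0)        # toy gradient of an economy/stability/security cost surrogate
for _ in range(10):                        # continuous adjustment between OPF runs
    setpoints = dsopf_step(setpoints, state, grad)
```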

    Control Regularization for Reduced Variance Reinforcement Learning

    Dealing with high variance is a significant challenge in model-free reinforcement learning (RL). Existing methods are unreliable, exhibiting high variance in performance from run to run with different initializations/seeds. Focusing on problems arising in continuous control, we propose a functional regularization approach to augmenting model-free RL. In particular, we regularize the behavior of the deep policy to be similar to a policy prior, i.e., we regularize in function space. We show that functional regularization yields a bias-variance trade-off, and propose an adaptive tuning strategy to optimize this trade-off. When the policy prior has control-theoretic stability guarantees, we further show that this regularization approximately preserves those stability guarantees throughout learning. We validate our approach empirically on a range of settings, and demonstrate significantly reduced variance, guaranteed dynamic stability, and more efficient learning than deep RL alone.
    Comment: Appearing in ICML 2019
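    The deployed behaviour can be pictured as a convex combination of the learned action and the prior's action, with the mixing weight setting the bias-variance trade-off. The sketch below illustrates that reading; the blending rule, the weight lam and the helper names (policy, prior) are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch of functional regularization toward a control prior.
# Assumption: the executed action is a weighted average of the RL policy's
# action and the prior controller's action; lam tunes the bias-variance trade-off.

def regularized_action(policy, prior, state, lam=1.0):
    u_rl = policy(state)      # action proposed by the deep RL policy
    u_prior = prior(state)    # action from the (possibly stability-certified) prior
    return (u_rl + lam * u_prior) / (1.0 + lam)

prior = lambda x: -0.5 * x                              # stabilizing linear feedback (toy)
policy = lambda x: np.tanh(np.random.randn(*x.shape))   # stand-in for a deep policy
x = np.array([0.2, -0.1])
u = regularized_action(policy, prior, x, lam=2.0)       # larger lam -> closer to the prior
```

    With a large weight the behaviour stays close to the stabilizing prior (low variance, possible bias); with a small weight it approaches unregularized deep RL, which is the trade-off the abstract refers to.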

    Enhancing the performance of intelligent control systems in the face of higher levels of complexity and uncertainty

    Modern advances in technology have led to more complex manufacturing processes whose success centres on the ability to control them with a very high level of accuracy. Plant complexity inevitably leads to poor models that exhibit a high degree of parametric or functional uncertainty. The situation becomes even more complex if the plant to be controlled is characterised by a multivalued function, or if it exhibits a number of modes of behaviour during its operation. Since an intelligent controller is expected to operate and guarantee the best performance where complexity and uncertainty coexist and interact, control engineers and theorists have recently developed new control techniques, under the framework of intelligent control, to enhance controller performance for more complex and uncertain plants. These techniques are based on incorporating model uncertainty, and the resulting control algorithms are shown to give more accurate control results under uncertain conditions. In this paper, we survey some approaches that appear to be promising for enhancing the performance of intelligent control systems in the face of higher levels of complexity and uncertainty.

    Dual adaptive dynamic control of mobile robots using neural networks

    This paper proposes two novel dual adaptive neural control schemes for the dynamic control of nonholonomic mobile robots. The two schemes are developed in discrete time, and the robot's nonlinear dynamic functions are assumed to be unknown. Gaussian radial basis function and sigmoidal multilayer perceptron neural networks are used for function approximation. In each scheme, the unknown network parameters are estimated stochastically in real time, and no preliminary offline neural network training is used. In contrast to other adaptive techniques hitherto proposed in the literature on mobile robots, the dual control laws presented in this paper do not rely on the heuristic certainty equivalence property but account for the uncertainty in the estimates. This results in a major improvement in tracking performance, despite the plant uncertainty and unmodeled dynamics. Monte Carlo simulation and statistical hypothesis testing are used to illustrate the effectiveness of the two proposed stochastic controllers as applied to the trajectory-tracking problem of a differentially driven wheeled mobile robot.
    Peer reviewed
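    The distinguishing feature here is that the control law uses the uncertainty of the parameter estimates rather than treating them as exact. The sketch below (my own construction, not the paper's scheme) shows a Gaussian RBF approximator whose output weights and their covariance are estimated recursively, so a dual controller could weigh the predictive variance when choosing the control; the centres, widths and noise level are arbitrary placeholders.

```python
import numpy as np

# Sketch: recursive (Kalman/RLS-style) estimation of RBF output weights,
# keeping a covariance P so predictions come with an uncertainty estimate.

class RBFEstimator:
    def __init__(self, centres, width=1.0, p0=10.0, meas_var=0.1):
        self.c = centres                      # RBF centres, shape (m, n)
        self.width = width
        self.w = np.zeros(len(centres))       # weight estimates
        self.P = p0 * np.eye(len(centres))    # weight covariance
        self.r = meas_var                     # measurement noise variance

    def phi(self, x):
        d2 = np.sum((self.c - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        """Mean prediction and its variance (parameter uncertainty + noise)."""
        p = self.phi(x)
        return p @ self.w, p @ self.P @ p + self.r

    def update(self, x, y):
        """Recursive least-squares update of weights and covariance."""
        p = self.phi(x)
        s = p @ self.P @ p + self.r
        k = self.P @ p / s
        self.w = self.w + k * (y - p @ self.w)
        self.P = self.P - np.outer(k, p @ self.P)

est = RBFEstimator(centres=np.linspace(-1, 1, 9).reshape(-1, 1))
for _ in range(50):
    x = np.random.uniform(-1, 1, size=1)
    est.update(x, np.sin(3 * x[0]) + 0.1 * np.random.randn())
mean, var = est.predict(np.array([0.3]))
```

    A certainty-equivalent controller would use only the mean; a dual controller also uses the variance, acting cautiously where the estimate is poor and probing where extra information would help.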

    Adaptive, cautious, predictive control with Gaussian process priors

    Nonparametric Gaussian Process models, a Bayesian statistics approach, are used to implement a nonlinear adaptive control law. Predictions, including propagation of the state uncertainty, are made over a k-step horizon. The expected value of a quadratic cost function is minimised over this prediction horizon, without ignoring the variance of the model predictions. The general method and its main features are illustrated on a simulation example.
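    For a quadratic cost the "cautious" part has a simple form: E[(y − r)²] = (μ − r)² + σ², so the model's predictive variance enters the cost rather than being discarded. A one-step sketch under that assumption is shown below (the training data, the candidate-control grid and the use of scikit-learn's GP regressor are illustrative choices; a real k-step horizon would also require propagating the state uncertainty).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy one-step "cautious" controller: pick the input minimising the expected
# quadratic cost, which equals squared mean error plus predictive variance.

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 2))                 # columns: [state, control]
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(40)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3).fit(X, y)

def cautious_control(x, reference, candidates):
    inputs = np.column_stack([np.full_like(candidates, x), candidates])
    mu, std = gp.predict(inputs, return_std=True)
    expected_cost = (mu - reference) ** 2 + std ** 2  # mean error + variance penalty
    return candidates[np.argmin(expected_cost)]

u = cautious_control(x=0.3, reference=1.0, candidates=np.linspace(-1, 1, 101))
```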

    PMAC: Probabilistic Multimodality Adaptive Control

    This paper develops a probabilistic multimodal adaptive control approach for systems that are characterised by temporal multimodality, where the system dynamics are subject to abrupt mode switching at arbitrary times. In this framework, the control objective is redefined such that it utilises the complete probability distribution of the system dynamics. The derived probabilistic control law is thus of a dual type that incorporates the functional uncertainty of the controlled system. A multi-modal density model with prediction error-dependent mixing coefficients is introduced to effect the mode switching. This approach can deal with arbitrary noise distributions, nonlinear plant dynamics and arbitrary mode switching. For the affine systems focussed upon for illustration in this paper, the approach has global stability. The theoretical constructs of the architecture are verified by validation on a simulation example.
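    One way to picture the mode switching is that each mode has its own predictor and controller, and the mixing coefficients are driven by recent prediction errors, so weight shifts toward whichever mode currently explains the data. The sketch below is only that picture in code; the softmax rule, the temperature and the toy affine modes are assumptions, not the PMAC derivation.

```python
import numpy as np

# Illustrative mixing of mode-specific controllers with prediction
# error-dependent coefficients (smaller error -> larger weight).

def mixing_coefficients(pred_errors, temperature=0.5):
    scores = -np.asarray(pred_errors, dtype=float) ** 2 / temperature
    scores -= scores.max()            # numerical stability
    w = np.exp(scores)
    return w / w.sum()

def multimodal_control(x, mode_predictors, mode_controllers, y_observed):
    errors = [y_observed - f(x) for f in mode_predictors]
    w = mixing_coefficients(errors)
    return sum(wi * g(x) for wi, g in zip(w, mode_controllers))

predictors = [lambda x: 0.9 * x, lambda x: -0.5 * x]    # two toy affine modes
controllers = [lambda x: -0.8 * x, lambda x: 0.6 * x]   # their mode-specific controls
u = multimodal_control(x=1.2, mode_predictors=predictors,
                       mode_controllers=controllers, y_observed=1.0)
```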

    Fully probabilistic control for stochastic nonlinear control systems with input dependent noise

    Robust controllers for nonlinear stochastic systems with functional uncertainties can be consistently designed using probabilistic control methods. In this paper, a generalised probabilistic controller design is presented that minimises the Kullback-Leibler divergence between the actual joint probability density function (pdf) of the closed-loop control system and an ideal joint pdf, emphasising how the uncertainty can be systematically incorporated in the absence of reliable system models. To achieve this objective, all probabilistic models of the system are estimated from process data using mixture density networks (MDNs), where all the parameters of the estimated pdfs are taken to be state and control-input dependent. Based on this dependency of the density parameters on the input values, explicit formulations for the construction of optimal generalised probabilistic controllers are obtained through the techniques of dynamic programming and adaptive critic methods. Using the proposed generalised probabilistic controller, the conditional joint pdfs can be made to follow the ideal ones. A simulation example is used to demonstrate the implementation of the algorithm, and encouraging results are obtained.
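    To make the objective concrete under a simplifying Gaussian assumption (mine, not the paper's MDN construction): if the model predicts the next state as N(μ(x,u), σ²(x,u)) with input-dependent variance, a greedy version of the design picks the input whose predicted pdf is closest in Kullback-Leibler divergence to the ideal pdf. The sketch below shows just that one-step idea; the full design optimises over the whole horizon via dynamic programming and adaptive critics.

```python
import numpy as np

# One-step sketch of a KL-based (fully probabilistic) control choice with
# input-dependent noise. Model, ideal pdf and candidate grid are placeholders.

def kl_gauss(mu, var, mu_ideal, var_ideal):
    """KL( N(mu, var) || N(mu_ideal, var_ideal) ) for scalar Gaussians."""
    return 0.5 * (np.log(var_ideal / var)
                  + (var + (mu - mu_ideal) ** 2) / var_ideal - 1.0)

def greedy_kl_input(x, model, ideal_mu, ideal_var, candidates):
    costs = [kl_gauss(*model(x, u), ideal_mu, ideal_var) for u in candidates]
    return candidates[int(np.argmin(costs))]

model = lambda x, u: (0.8 * x + u, 0.05 + 0.1 * u ** 2)   # mean, input-dependent variance
u = greedy_kl_input(x=0.5, model=model, ideal_mu=0.0, ideal_var=0.05,
                    candidates=np.linspace(-1.0, 1.0, 201))
```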

    Fully probabilistic control design in an adaptive critic framework

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description from the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem, in particular the very hard multivariate integration and the approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this short paper.
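    The computational saving comes from replacing exact value-function evaluation with a learned critic. A rough sketch of that ingredient, under assumptions of my own (a linear-in-features critic, a scalar state, a TD(0)-style update, and a quadratic stage cost standing in for the per-step FPD term), is given below.

```python
import numpy as np

# Toy adaptive critic: learn an approximate value function from data so that
# the multivariate integrations of exact dynamic programming are avoided.

def features(x):
    return np.array([1.0, x, x ** 2])          # simple polynomial features

def critic_update(theta, x, stage_cost, x_next, lr=0.05, gamma=0.95):
    """Temporal-difference update of the critic weights."""
    td_error = stage_cost + gamma * features(x_next) @ theta - features(x) @ theta
    return theta + lr * td_error * features(x)

theta = np.zeros(3)
x = 1.0
for _ in range(200):
    u = -0.5 * x                               # placeholder actor / control law
    x_next = 0.9 * x + u + 0.01 * np.random.randn()
    theta = critic_update(theta, x, stage_cost=x ** 2, x_next=x_next)
    x = x_next
```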