93,695 research outputs found

    Neurodynamic approaches to model predictive control.

    Get PDF
    Pan, Yunpeng. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (p. 98-107). Abstract also in Chinese.
    Contents:
    Abstract --- p.i
    Abstract (in Chinese) --- p.iii
    Acknowledgement --- p.iv
    Chapter 1 --- Introduction --- p.2
    Chapter 1.1 --- Model Predictive Control --- p.2
    Chapter 1.2 --- Neural Networks --- p.3
    Chapter 1.3 --- Existing studies --- p.6
    Chapter 1.4 --- Thesis structure --- p.7
    Chapter 2 --- Two Recurrent Neural Network Approaches to Linear Model Predictive Control --- p.9
    Chapter 2.1 --- Problem Formulation --- p.9
    Chapter 2.1.1 --- Quadratic Programming Formulation --- p.10
    Chapter 2.1.2 --- Linear Programming Formulation --- p.13
    Chapter 2.2 --- Neural Network Approaches --- p.15
    Chapter 2.2.1 --- Neural Network Model 1 --- p.15
    Chapter 2.2.2 --- Neural Network Model 2 --- p.16
    Chapter 2.2.3 --- Control Scheme --- p.17
    Chapter 2.3 --- Simulation Results --- p.18
    Chapter 3 --- Model Predictive Control for Nonlinear Affine Systems Based on the Simplified Dual Neural Network --- p.22
    Chapter 3.1 --- Problem Formulation --- p.22
    Chapter 3.2 --- A Neural Network Approach --- p.25
    Chapter 3.2.1 --- The Simplified Dual Network --- p.26
    Chapter 3.2.2 --- RNN-based MPC Scheme --- p.28
    Chapter 3.3 --- Simulation Results --- p.28
    Chapter 3.3.1 --- Example 1 --- p.28
    Chapter 3.3.2 --- Example 2 --- p.29
    Chapter 3.3.3 --- Example 3 --- p.33
    Chapter 4 --- Nonlinear Model Predictive Control Using a Recurrent Neural Network --- p.36
    Chapter 4.1 --- Problem Formulation --- p.36
    Chapter 4.2 --- A Recurrent Neural Network Approach --- p.40
    Chapter 4.2.1 --- Neural Network Model --- p.40
    Chapter 4.2.2 --- Learning Algorithm --- p.41
    Chapter 4.2.3 --- Control Scheme --- p.41
    Chapter 4.3 --- Application to Mobile Robot Tracking --- p.42
    Chapter 4.3.1 --- Example 1 --- p.44
    Chapter 4.3.2 --- Example 2 --- p.44
    Chapter 4.3.3 --- Example 3 --- p.46
    Chapter 4.3.4 --- Example 4 --- p.48
    Chapter 5 --- Model Predictive Control of Unknown Nonlinear Dynamic Systems Based on Recurrent Neural Networks --- p.50
    Chapter 5.1 --- MPC System Description --- p.51
    Chapter 5.1.1 --- Model Predictive Control --- p.51
    Chapter 5.1.2 --- Dynamical System Identification --- p.52
    Chapter 5.2 --- Problem Formulation --- p.54
    Chapter 5.3 --- Dynamic Optimization --- p.58
    Chapter 5.3.1 --- The Simplified Dual Neural Network --- p.59
    Chapter 5.3.2 --- A Recursive Learning Algorithm --- p.60
    Chapter 5.3.3 --- Convergence Analysis --- p.61
    Chapter 5.4 --- RNN-based MPC Scheme --- p.65
    Chapter 5.5 --- Simulation Results --- p.67
    Chapter 5.5.1 --- Example 1 --- p.67
    Chapter 5.5.2 --- Example 2 --- p.68
    Chapter 5.5.3 --- Example 3 --- p.76
    Chapter 6 --- Model Predictive Control for Systems With Bounded Uncertainties Using a Discrete-Time Recurrent Neural Network --- p.81
    Chapter 6.1 --- Problem Formulation --- p.82
    Chapter 6.1.1 --- Process Model --- p.82
    Chapter 6.1.2 --- Robust MPC Design --- p.82
    Chapter 6.2 --- Recurrent Neural Network Approach --- p.86
    Chapter 6.2.1 --- Neural Network Model --- p.86
    Chapter 6.2.2 --- Convergence Analysis --- p.88
    Chapter 6.2.3 --- Control Scheme --- p.90
    Chapter 6.3 --- Simulation Results --- p.91
    Chapter 7 --- Summary and future works --- p.95
    Chapter 7.1 --- Summary --- p.95
    Chapter 7.2 --- Future works --- p.96
    Bibliography --- p.98
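
    The neurodynamic idea running through Chapters 2-3 is to recast the MPC optimization (a quadratic program over the box of admissible inputs) as the equilibrium of a recurrent network's continuous-time dynamics and to integrate those dynamics instead of calling a numerical QP solver. The sketch below is a minimal, generic projection-type network for a toy box-constrained QP; the W, c, bounds, and step sizes are illustrative only, and the thesis's specific models (Model 1, Model 2, the simplified dual network) differ in their exact dynamics.

```python
# Hedged sketch: solve a box-constrained QP, of the kind that arises in
# linear MPC, by integrating a projection-type recurrent network to its
# equilibrium. All numerical values here are illustrative.

import numpy as np

def solve_qp_projection_network(W, c, lb, ub, steps=5000, dt=0.01):
    """Euler-integrate du/dt = P(u - (W u + c)) - u, where P projects onto
    the box [lb, ub]; an equilibrium satisfies the QP's optimality
    (variational inequality) condition."""
    u = np.clip(np.zeros_like(c), lb, ub)
    for _ in range(steps):
        u = u + dt * (np.clip(u - (W @ u + c), lb, ub) - u)
    return u

# Toy QP: minimize 0.5 u^T W u + c^T u subject to input bounds.
W = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian
c = np.array([-1.0, -2.0])
lb, ub = np.full(2, -0.5), np.full(2, 0.5)
u_star = solve_qp_projection_network(W, c, lb, ub)
print("neurodynamic QP solution:", u_star)
```

    In an MPC loop, a network of this kind would be driven to (near) equilibrium at every sampling instant, which is what makes parallel, circuit-style implementations attractive for fast processes.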

    Learning High-Level Policies for Model Predictive Control

    Full text link
    The combination of policy search and deep neural networks holds the promise of automating a variety of decision-making tasks. Model Predictive Control (MPC) provides robust solutions to robot control tasks by making use of a dynamical model of the system and solving an optimization problem online over a short planning horizon. In this work, we leverage probabilistic decision-making approaches and the generalization capability of artificial neural networks to complement this powerful online optimization by learning a deep high-level policy for the MPC (High-MPC). Conditioned on the robot's local observations, the trained neural network policy adaptively selects high-level decision variables for the low-level MPC controller, which then generates optimal control commands for the robot. First, we formulate the search for high-level decision variables for MPC as a policy search problem, specifically a probabilistic inference problem that admits a closed-form solution. Second, we propose a self-supervised learning algorithm for learning a neural network high-level policy, which is useful for online hyperparameter adaptation in highly dynamic environments. We demonstrate the importance of incorporating online adaptation into autonomous robots by using the proposed method to solve a challenging control problem in which a simulated quadrotor must fly through a swinging gate. We show that our approach can handle situations that are difficult for standard MPC.
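
    To illustrate the two ingredients described in the abstract, the sketch below performs a sampling-based, weighted-maximum-likelihood search for a scalar high-level decision variable and then fits a simple policy to the resulting (observation, variable) pairs in a self-supervised way. `mpc_cost`, the Gaussian parameterization, and the linear least-squares fit are illustrative stand-ins for the paper's actual MPC, inference formulation, and neural-network policy, not the authors' implementation.

```python
# Hedged sketch: closed-form weighted update for a high-level MPC decision
# variable, plus self-supervised fitting of a simple observation-to-variable
# policy. `mpc_cost` is a hypothetical placeholder for running the low-level
# MPC with decision variable z and returning the trajectory cost.

import numpy as np

def mpc_cost(z, obs):
    # Placeholder surrogate: pretend the best z depends linearly on obs.
    z_star = 1.0 + 0.5 * obs[0]
    return (z - z_star) ** 2

def search_high_level_variable(obs, n_iters=10, n_samples=64, rng=None):
    """Probabilistic-inference-style search: sample z from a Gaussian,
    weight samples by exp(-cost), and update mean/std in closed form."""
    rng = np.random.default_rng() if rng is None else rng
    mu, sigma = 1.0, 1.0
    for _ in range(n_iters):
        z = rng.normal(mu, sigma, size=n_samples)
        costs = np.array([mpc_cost(zi, obs) for zi in z])
        w = np.exp(-(costs - costs.min()))        # exponentiated-cost weights
        w /= w.sum()
        mu = float(np.dot(w, z))                  # closed-form weighted mean
        sigma = float(np.sqrt(np.dot(w, (z - mu) ** 2)) + 1e-6)
    return mu

# Self-supervised data collection: label each observation with the z found
# by the search above, then fit a policy by least squares (a stand-in for
# the neural-network policy used in the paper).
rng = np.random.default_rng(0)
observations = rng.uniform(-1.0, 1.0, size=(200, 3))
targets = np.array([search_high_level_variable(o, rng=rng) for o in observations])
X = np.hstack([observations, np.ones((len(observations), 1))])  # bias term
theta, *_ = np.linalg.lstsq(X, targets, rcond=None)
pred = X @ theta
print("policy fit RMSE:", float(np.sqrt(np.mean((pred - targets) ** 2))))
```

    At deployment, the fitted policy replaces the expensive search: the robot's observation is mapped directly to the decision variable that is handed to the low-level MPC.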

    Neural Network Based Min-Max Predictive Control. Application to a Heat Exchanger

    Get PDF
    IFAC Adaptation and Learning in Control and Signal Processing. Cernobbio-Como, Italy, 2001. Min-max model predictive controllers (MMMPC) have been proposed for the control of linear plants subject to bounded uncertainties. The implementation of MMMPC suffers from a large computational burden due to the numerical optimization problem that has to be solved at every sampling time. This fact severely limits the class of processes for which this control strategy is suitable. In this paper the use of a Neural Network (NN) to approximate the solution of the min-max problem is proposed. The number of inputs of the NN is determined by the order and time delay of the model together with the control horizon. For large time delays the number of inputs can be prohibitive, and a modification of the basic formulation is proposed to avoid this latter problem. Simulation and experimental results are given for a heat exchanger.
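
    A minimal sketch of the general scheme described here, assuming a NARX-style regressor and scikit-learn's MLPRegressor as the function approximator: `minmax_mpc_control`, the model orders `n_a` and `n_b`, the delay `d`, the horizon `N_u`, and the network size are illustrative placeholders, not the paper's plant model or training data.

```python
# Hedged sketch: approximate a min-max MPC control law with a neural
# network trained offline, so the expensive min-max optimization is not
# solved at every sampling time.

import numpy as np
from sklearn.neural_network import MLPRegressor

n_a, n_b, d, N_u = 2, 2, 1, 3   # assumed model order, time delay, control horizon

def minmax_mpc_control(x):
    # Placeholder for the offline min-max MPC solution at regressor x.
    return np.tanh(x @ np.linspace(-0.5, 0.5, x.size))

def build_regressor(y_past, u_past, ref_horizon):
    """NN inputs: past outputs and inputs (fixed by model order and time
    delay) plus the reference over the control horizon. For large delays
    this vector grows, which is the problem the paper's modified
    formulation addresses."""
    return np.concatenate([y_past[-n_a:], u_past[-(n_b + d):], ref_horizon[:N_u]])

# Generate training data offline by solving the min-max problem on samples.
rng = np.random.default_rng(0)
X = np.array([build_regressor(rng.normal(size=4), rng.normal(size=4),
                              rng.uniform(-1, 1, size=N_u))
              for _ in range(2000)])
y = np.array([minmax_mpc_control(x) for x in X])

# Fit the approximate controller and use it in place of the online optimizer.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
net.fit(X, y)
u_now = net.predict(build_regressor(np.zeros(4), np.zeros(4),
                                    0.8 * np.ones(N_u)).reshape(1, -1))[0]
print("approximate MMMPC control action:", u_now)
```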