
    Neurodynamic approaches to model predictive control.

    Get PDF
    Pan, Yunpeng. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (p. 98-107). Abstract also in Chinese.
    Contents: Abstract --- p.i; Abstract (Chinese) --- p.iii; Acknowledgement --- p.iv
    Chapter 1 --- Introduction --- p.2: 1.1 Model Predictive Control (p.2); 1.2 Neural Networks (p.3); 1.3 Existing studies (p.6); 1.4 Thesis structure (p.7)
    Chapter 2 --- Two Recurrent Neural Networks Approaches to Linear Model Predictive Control --- p.9: 2.1 Problem Formulation (p.9); 2.1.1 Quadratic Programming Formulation (p.10); 2.1.2 Linear Programming Formulation (p.13); 2.2 Neural Network Approaches (p.15); 2.2.1 Neural Network Model 1 (p.15); 2.2.2 Neural Network Model 2 (p.16); 2.2.3 Control Scheme (p.17); 2.3 Simulation Results (p.18)
    Chapter 3 --- Model Predictive Control for Nonlinear Affine Systems Based on the Simplified Dual Neural Network --- p.22: 3.1 Problem Formulation (p.22); 3.2 A Neural Network Approach (p.25); 3.2.1 The Simplified Dual Network (p.26); 3.2.2 RNN-based MPC Scheme (p.28); 3.3 Simulation Results (p.28); 3.3.1 Example 1 (p.28); 3.3.2 Example 2 (p.29); 3.3.3 Example 3 (p.33)
    Chapter 4 --- Nonlinear Model Predictive Control Using a Recurrent Neural Network --- p.36: 4.1 Problem Formulation (p.36); 4.2 A Recurrent Neural Network Approach (p.40); 4.2.1 Neural Network Model (p.40); 4.2.2 Learning Algorithm (p.41); 4.2.3 Control Scheme (p.41); 4.3 Application to Mobile Robot Tracking (p.42); 4.3.1 Example 1 (p.44); 4.3.2 Example 2 (p.44); 4.3.3 Example 3 (p.46); 4.3.4 Example 4 (p.48)
    Chapter 5 --- Model Predictive Control of Unknown Nonlinear Dynamic Systems Based on Recurrent Neural Networks --- p.50: 5.1 MPC System Description (p.51); 5.1.1 Model Predictive Control (p.51); 5.1.2 Dynamical System Identification (p.52); 5.2 Problem Formulation (p.54); 5.3 Dynamic Optimization (p.58); 5.3.1 The Simplified Dual Neural Network (p.59); 5.3.2 A Recursive Learning Algorithm (p.60); 5.3.3 Convergence Analysis (p.61); 5.4 RNN-based MPC Scheme (p.65); 5.5 Simulation Results (p.67); 5.5.1 Example 1 (p.67); 5.5.2 Example 2 (p.68); 5.5.3 Example 3 (p.76)
    Chapter 6 --- Model Predictive Control for Systems With Bounded Uncertainties Using a Discrete-Time Recurrent Neural Network --- p.81: 6.1 Problem Formulation (p.82); 6.1.1 Process Model (p.82); 6.1.2 Robust MPC Design (p.82); 6.2 Recurrent Neural Network Approach (p.86); 6.2.1 Neural Network Model (p.86); 6.2.2 Convergence Analysis (p.88); 6.2.3 Control Scheme (p.90); 6.3 Simulation Results (p.91)
    Chapter 7 --- Summary and future works --- p.95: 7.1 Summary (p.95); 7.2 Future works (p.96)
    Bibliography --- p.98
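    The thesis centres on projection/dual-type recurrent neural networks whose dynamics settle at the solution of the QP (or LP) behind each MPC step. A minimal sketch of that general idea for a box-constrained QP, with illustrative problem data rather than anything taken from the thesis, could look like this:

```python
import numpy as np

# Sketch of a projection-type recurrent neural network solving a
# box-constrained QP of the kind arising in linear MPC:
#     min  0.5 * u' W u + c' u   s.t.  lo <= u <= hi
# The state evolves as du/dt = lam * (proj(u - (W u + c)) - u); its
# equilibria are the KKT points of the QP. Data below are illustrative.

W = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian
c = np.array([-1.0, -2.0])
lo, hi = np.array([-0.5, -0.5]), np.array([0.5, 0.5])

def project(u):
    """Projection onto the box [lo, hi]."""
    return np.clip(u, lo, hi)

def solve_qp_rnn(u0, lam=5.0, dt=1e-3, steps=20000):
    """Euler-integrate the neurodynamic model until it settles."""
    u = u0.copy()
    for _ in range(steps):
        u += dt * lam * (project(u - (W @ u + c)) - u)
    return u

u_star = solve_qp_rnn(np.zeros(2))
print("RNN solution:", u_star)           # approx. the constrained minimizer
```

    In an MPC loop, this integration would be repeated at every sampling instant with updated problem data, which is what makes the neurodynamic approach attractive for real-time optimization.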

    Fast Non-Parametric Learning to Accelerate Mixed-Integer Programming for Online Hybrid Model Predictive Control

    Full text link
    Today's fast linear algebra and numerical optimization tools have pushed the frontier of model predictive control (MPC) forward, to the efficient control of highly nonlinear and hybrid systems. The field of hybrid MPC has demonstrated that the exact optimal control law can be computed, e.g., by mixed-integer programming (MIP) under piecewise-affine (PWA) system models. Despite the elegant theory, solving hybrid MPC online is still out of reach for many applications. We aim to speed up MIP by combining geometric insights from hybrid MPC, a simple yet effective learning algorithm, and MIP warm-start techniques. Following a line of work on approximate explicit MPC, the proposed learning-control algorithm, LNMS, gains a computational advantage over MIP at little cost and is straightforward for practitioners to implement.
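    The core idea, non-parametric learning that warm starts the MIP, can be illustrated with a toy nearest-neighbour store of previously optimal binary mode sequences keyed on the measured state. This is a sketch of the general warm-starting idea only, not the LNMS algorithm itself; all names and data below are placeholders.

```python
import numpy as np

# Toy sketch of learning-based MIP warm starting for hybrid MPC: store the
# binary mode sequences that were optimal for previously solved states, and
# for a new state retrieve the nearest stored state's binaries, either as an
# initial integer guess for the MIP solver or to fix them and solve only the
# remaining convex QP.

class BinaryWarmStarter:
    def __init__(self):
        self.states = []        # parameter/state vectors x seen so far
        self.binaries = []      # optimal binary sequences delta*(x)

    def add(self, x, delta_opt):
        self.states.append(np.asarray(x, float))
        self.binaries.append(np.asarray(delta_opt, int))

    def warm_start(self, x_new):
        """Return the binary sequence of the nearest previously solved state."""
        if not self.states:
            return None
        d = [np.linalg.norm(x_new - x) for x in self.states]
        return self.binaries[int(np.argmin(d))]

# Usage sketch: after each MIP solve, record (x, delta*); at the next time
# step pass warm_start(x) to the solver as its integer starting point.
ws = BinaryWarmStarter()
ws.add([0.1, -0.2], [1, 0, 0, 1])
print(ws.warm_start(np.array([0.12, -0.18])))
```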

    Constrained Deep Learning-based Model Predictive Control with Improved Constraint Satisfaction

    Full text link
    Machine learning techniques can help reduce the computational cost of model predictive control (MPC). In this paper, a constrained deep neural network design is proposed to learn and construct MPC policies for nonlinear input-affine dynamic systems. Constrained training of the neural networks helps enforce the MPC constraints effectively. We show the asymptotic stability of the learned policies. Additionally, different data sampling strategies are compared in terms of the generalization errors of the learned policy. Furthermore, probabilistic feasibility and optimality guarantees are provided for the learned control policy. The proposed algorithm is implemented experimentally on a rotary inverted pendulum, and its control performance is demonstrated and compared with exact MPC and a normally trained learning-based MPC. The results show that the proposed algorithm improves constraint satisfaction while preserving the computational efficiency of the learned policy.
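    One common way to realize such a constrained learned policy is to hard-enforce the input bounds in the network's output layer and to penalize predicted state-constraint violations during imitation training on MPC data. The PyTorch sketch below illustrates that pattern under assumed bounds and dimensions; it is not the paper's exact architecture or training procedure.

```python
import torch
import torch.nn as nn

# Sketch of a learned MPC policy that respects input constraints by
# construction: a scaled tanh output layer keeps u within [-u_max, u_max],
# while a penalty term discourages state-constraint violations during
# imitation training on MPC samples. Bounds and sizes are illustrative.

u_max = 2.0            # assumed input bound
x_max = 1.0            # assumed state bound used in the penalty

class ConstrainedPolicy(nn.Module):
    def __init__(self, n_x=4, n_u=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_x, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_u),
        )

    def forward(self, x):
        return u_max * torch.tanh(self.net(x))   # input bound by construction

def training_loss(policy, dynamics, x_batch, u_mpc, rho=10.0):
    """Imitate MPC actions and penalize state-constraint violations predicted
    by a differentiable (known or identified) one-step dynamics model."""
    u = policy(x_batch)
    imitation = ((u - u_mpc) ** 2).mean()
    x_next = dynamics(x_batch, u)                 # hypothetical model callable
    violation = torch.relu(x_next.abs() - x_max).mean()
    return imitation + rho * violation
```

    The penalty only encourages state-constraint satisfaction; the hard guarantee in such designs typically comes from the saturating output layer on the inputs plus the probabilistic certificates derived afterwards.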

    Constrained Reinforcement Learning using Distributional Representation for Trustworthy Quadrotor UAV Tracking Control

    Full text link
    Simultaneously accurate and reliable tracking control for quadrotors in complex dynamic environments is challenging. Because the aerodynamics arising from drag forces and moment variations are chaotic and difficult to identify precisely, most current quadrotor tracking systems treat them as simple `disturbances' in conventional control approaches. We propose a novel, interpretable trajectory tracker that integrates a distributional reinforcement learning disturbance estimator for unknown aerodynamic effects with a stochastic model predictive controller (SMPC). The proposed estimator, the `Constrained Distributional Reinforced disturbance estimator' (ConsDRED), accurately identifies uncertainties between the true and estimated values of the aerodynamic effects. Simplified affine disturbance feedback is used for the control parameterization to guarantee convexity, and is then integrated with the SMPC. We theoretically guarantee that ConsDRED achieves at least an optimal global convergence rate, and a certain sublinear rate when constraints are violated, with an error that decreases as the width and depth of the neural network increase. To demonstrate practicality, we show convergent training in simulation and real-world experiments, and empirically verify that ConsDRED is less sensitive to hyperparameter settings than canonical constrained RL approaches. We demonstrate that our system improves accumulative tracking errors by at least 70% compared with the recent state of the art. Importantly, the proposed framework, ConsDRED-SMPC, balances the tradeoff between pursuing high performance and obeying conservative constraints for practical implementations.
    Comment: 16 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:2205.0715
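    The affine disturbance feedback parameterization that keeps the stochastic MPC problem convex can be sketched as follows; the horizon, dimensions and the stand-in disturbance sequence are assumptions for illustration, not values or code from the paper.

```python
import numpy as np

# Sketch of affine disturbance feedback (ADF): over a horizon N the inputs are
#   u_k = v_k + sum_{j<k} M[k, j] @ w_j,
# where v is a nominal input sequence, M is a strictly lower block-triangular
# gain (both decision variables of the SMPC), and w_j are estimated
# disturbances. Dimensions below are illustrative.

N, n_u, n_w = 5, 2, 3                     # horizon, input dim, disturbance dim
v = np.zeros((N, n_u))                    # nominal inputs
M = np.zeros((N, N, n_u, n_w))            # feedback gains

def adf_inputs(v, M, w):
    """Evaluate u_0..u_{N-1} for a given disturbance sequence w."""
    u = v.copy()
    for k in range(v.shape[0]):
        for j in range(k):                # strictly lower triangular: j < k
            u[k] += M[k, j] @ w[j]
    return u

w = 0.05 * np.random.randn(N, n_w)        # stand-in for the estimator's output
print(adf_inputs(v, M, w))
```

    Because the inputs depend affinely on the disturbances, chance or robust constraints on states and inputs remain convex in (v, M), which is what allows the SMPC to be solved reliably online.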

    A brief review of neural networks based learning and control and their applications for robots

    Get PDF
    As imitations of biological nervous systems, neural networks (NNs), which are characterized by a powerful learning ability, have been employed in a wide range of applications, such as control of complex nonlinear systems, optimization, system identification and pattern recognition. This article aims to provide a brief review of the state of the art in NNs for complex nonlinear systems. Recent progress in NNs, in both theoretical developments and practical applications, is investigated and surveyed. Specifically, NN-based robot learning and control applications are further reviewed, including NN-based robot manipulator control, NN-based human-robot interaction, and NN-based behavior recognition and generation.

    Stability Verification of Neural Network Controllers using Mixed-Integer Programming

    Full text link
    We propose a framework for the stability verification of Mixed-Integer Linear Programming (MILP) representable control policies. This framework compares a fixed candidate policy, which admits an efficient parameterization and can be evaluated at a low computational cost, against a fixed baseline policy, which is known to be stable but expensive to evaluate. We provide sufficient conditions for the closed-loop stability of the candidate policy in terms of the worst-case approximation error with respect to the baseline policy, and we show that these conditions can be checked by solving a Mixed-Integer Quadratic Program (MIQP). Additionally, we demonstrate that an outer and inner approximation of the stability region of the candidate policy can be computed by solving an MILP. The proposed framework is sufficiently general to accommodate a broad range of candidate policies, including ReLU Neural Networks (NNs), optimal solution maps of parametric quadratic programs, and Model Predictive Control (MPC) policies. We also present an open-source toolbox in Python based on the proposed framework, which allows for the easy verification of custom NN architectures and MPC formulations. We showcase the flexibility and reliability of our framework in the context of a DC-DC power converter case study and investigate its computational complexity.
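    The MILP representability the framework relies on comes from the standard big-M mixed-integer encoding of ReLU units. The sketch below shows that encoding for a single layer using cvxpy; the weights, bounds and toy objective are assumptions (the paper's verification objective would instead be the candidate/baseline policy mismatch), and a mixed-integer-capable solver must be installed to run it.

```python
import cvxpy as cp
import numpy as np

# Big-M mixed-integer encoding of one ReLU layer y = max(W x + b, 0), the
# building block that makes ReLU NN policies MILP-representable. Data are
# illustrative; solving requires a MIP-capable solver (e.g. GLPK_MI, CBC).

n_in, n_out = 2, 3
W = np.random.randn(n_out, n_in)
b = np.random.randn(n_out)
M = 100.0                                  # big-M bound on |W x + b| over the box

x = cp.Variable(n_in)
y = cp.Variable(n_out)                     # post-activation output
z = cp.Variable(n_out, boolean=True)       # 1 if the unit is active

pre = W @ x + b
constraints = [
    x >= -1, x <= 1,                       # input (state) box
    y >= 0, y >= pre,                      # lower bounds of y = relu(pre)
    y <= pre + M * (1 - z),                # tight when z = 1 (active unit)
    y <= M * z,                            # forces y = 0 when z = 0
]

# Toy verification-style query: worst case of a linear function of y over the
# input box.
prob = cp.Problem(cp.Maximize(cp.sum(y)), constraints)
prob.solve()                               # pass solver=cp.GLPK_MI etc. if needed
print(prob.value)
```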