4 research outputs found

    Constructing Parsimonious Analytic Models for Dynamic Systems via Symbolic Regression

    Developing mathematical models of dynamic systems is central to many disciplines of engineering and science. Models facilitate simulations, analysis of the system's behavior, decision making, and the design of automatic control algorithms. Even inherently model-free control techniques such as reinforcement learning (RL) have been shown to benefit from the use of models, typically learned online. Any model construction method must strike the difficult tradeoff between the accuracy of the model and its complexity. In this paper, we propose to employ symbolic regression (SR) to construct parsimonious process models described by analytic equations. We have equipped our method with two different state-of-the-art SR algorithms that automatically search for equations fitting the measured data: Single Node Genetic Programming (SNGP) and Multi-Gene Genetic Programming (MGGP). In addition to the standard problem formulation in the state-space domain, we show how the method can also be applied to input-output models of the NARX (nonlinear autoregressive with exogenous input) type. We present the approach on three simulated examples with up to 14-dimensional state space: an inverted pendulum, a mobile robot, and a bipedal walking robot. A comparison with deep neural networks and local linear regression shows that SR in most cases outperforms these commonly used alternative methods. We demonstrate on a real pendulum system that the analytic model found enables an RL controller to successfully perform the swing-up task, based on a model constructed from only 100 data samples.
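    As an illustration of the workflow the abstract describes, the sketch below fits an analytic pendulum model from 100 random samples using the gplearn library as a stand-in for the paper's SNGP/MGGP algorithms; the plant parameters, function set, and parsimony coefficient are illustrative assumptions, not the authors' setup.

```python
# Sketch: fitting a parsimonious pendulum model with symbolic regression.
# gplearn stands in for the SNGP/MGGP algorithms of the paper; all
# constants below are illustrative assumptions.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)

# Assumed pendulum dynamics to generate training data:
# theta_ddot = (u - b*theta_dot - m*g*l*sin(theta)) / (m*l**2)
m, g, l, b = 1.0, 9.81, 0.5, 0.1
theta = rng.uniform(-np.pi, np.pi, 100)
omega = rng.uniform(-8.0, 8.0, 100)
u = rng.uniform(-2.0, 2.0, 100)
alpha = (u - b * omega - m * g * l * np.sin(theta)) / (m * l**2)

X = np.column_stack([theta, omega, u])

# The parsimony coefficient penalizes long expressions, enforcing the
# accuracy-versus-complexity tradeoff discussed in the abstract.
sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=('add', 'sub', 'mul', 'sin', 'cos'),
                       parsimony_coefficient=0.01, random_state=0)
sr.fit(X, alpha)
print(sr._program)  # best analytic expression found
```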

    Stability guarantees for nonlinear discrete-time systems controlled by approximate value iteration

    Extended version of the article with the same title and authors in the proceedings of the IEEE Conference on Decision and Control 2019, Nice, France. Value iteration is a method to generate optimal control inputs for generic nonlinear systems and cost functions. Its implementation typically leads to approximation errors, which may have a major impact on the closed-loop system performance; this setting is referred to as approximate value iteration (AVI). In this paper, we investigate the stability of systems for which the inputs are obtained by AVI. We consider deterministic discrete-time nonlinear plants and a class of general, possibly discounted, costs. We model the closed-loop system as a family of systems parameterized by tunable parameters, which are used for the approximation of the value function at different iterations, the discount factor, and the iteration step at which we stop running the algorithm. It is shown, under natural stabilizability and detectability properties as well as mild conditions on the approximation errors, that the family of closed-loop systems exhibits local practical stability properties. The analysis is based on the construction of a Lyapunov function given by the sum of the approximate value function and the Lyapunov-like function that characterizes the detectability of the system. By strengthening our conditions, asymptotic and exponential stability properties are guaranteed.
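    A minimal sketch of AVI, assuming a toy scalar plant and quadratic stage cost (none of this is the paper's setup): the grid resolution, the discount factor, and the stopping iteration are exactly the kind of tunable parameters whose approximation errors the paper's stability analysis accounts for.

```python
# Sketch of approximate value iteration (AVI) for a scalar nonlinear
# discrete-time plant with a gridded value-function approximator.
import numpy as np

gamma = 0.95                      # discount factor (tunable parameter)
xs = np.linspace(-2.0, 2.0, 101)  # approximation grid (coarser = more error)
us = np.linspace(-1.0, 1.0, 21)   # discretized inputs

f = lambda x, u: 0.9 * x + 0.2 * np.sin(x) + u  # plant x+ = f(x, u)
cost = lambda x, u: x**2 + 0.1 * u**2           # stage cost

V = np.zeros_like(xs)
for k in range(200):              # stopping iteration (tunable parameter)
    # Bellman backup with linear interpolation of V between grid points;
    # the interpolation error is one source of AVI approximation error.
    Q = np.empty((xs.size, us.size))
    for j, u in enumerate(us):
        xn = np.clip(f(xs, u), xs[0], xs[-1])
        Q[:, j] = cost(xs, u) + gamma * np.interp(xn, xs, V)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-6:
        break
    V = V_new

policy = us[Q.argmin(axis=1)]     # greedy inputs induced by the approximate V
```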

    Volume-weighted Bellman error method for adaptive meshing in approximate dynamic programming

    Optimal control and reinforcement learning have an associated "value function" that must be suitably approximated. Value-function approximation problems usually have different precision requirements in different regions of the state space: a uniform grid wastes resources in regions where the value function is smooth while lacking resolution in zones where it changes abruptly. The present work proposes an adaptive meshing methodology that adapts to these changing requirements without excessively increasing the number of parameters of the approximator. The proposal is based on simplicial meshes and the Bellman error, with criteria for adding and removing points from the mesh: criteria from the earlier literature are modified to include the volume of the affected simplices, and the required manipulations of the triangulation are detailed.
    This article was funded by the Spanish Research Agency through National Plan project PID2020-116585GB-I00.
    Armesto, L.; Sala, A. (2021). Método de error de Bellman con ponderación de volumen para mallado adaptativo en programación dinámica aproximada. Revista Iberoamericana de Automática e Informática industrial 19(1), 37-47. https://doi.org/10.4995/riai.2021.15698
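    The sketch below illustrates one plausible reading of the volume-weighted refinement criterion, using scipy's Delaunay triangulation in place of the MATLAB tooling the authors cite; the toy dynamics, cost, action set, and placeholder value estimates are all assumptions, and only the add-point step is shown (the paper also removes points and manipulates the triangulation).

```python
# Sketch of volume-weighted Bellman-error mesh refinement on a simplicial
# mesh over a 2-D state space. Only the refinement criterion is shown.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

gamma = 0.95
f = lambda x, u: 0.9 * x + u                         # toy 2-D plant x+ = f(x, u)
cost = lambda x, u: (x**2).sum(-1) + 0.1 * (u**2).sum(-1)
actions = [np.array(a) for a in
           ((0., 0.), (.2, 0.), (-.2, 0.), (0., .2), (0., -.2))]

# Initial coarse mesh over the state-space box [-1, 1]^2.
pts = np.array([[x, y] for x in np.linspace(-1, 1, 5)
                       for y in np.linspace(-1, 1, 5)])
V = (pts**2).sum(1)                                  # placeholder value estimates

for _ in range(10):                                  # refinement sweeps
    tri = Delaunay(pts)
    Vh = LinearNDInterpolator(pts, V, fill_value=V.max())
    centroids = pts[tri.simplices].mean(axis=1)
    # Bellman residual |T V - V| evaluated at each simplex centroid.
    TV = np.min([cost(centroids, u)
                 + gamma * Vh(np.clip(f(centroids, u), -1, 1))
                 for u in actions], axis=0)
    resid = np.abs(TV - Vh(centroids))
    # Simplex areas via the determinant formula |det(edges)| / d!.
    edges = pts[tri.simplices[:, 1:]] - pts[tri.simplices[:, :1]]
    vol = np.abs(np.linalg.det(edges)) / 2.0
    score = vol * resid                              # volume-weighted criterion
    worst = np.argmax(score)
    pts = np.vstack([pts, centroids[worst]])         # insert point, retriangulate
    V = np.append(V, TV[worst])
```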

    Approximate dynamic programming with a fuzzy parameterization

    Dynamic programming (DP) is a powerful paradigm for general, nonlinear optimal control. Computing exact DP solutions is in general only possible when the process states and the control actions take values in a small discrete set. In practice, it is necessary to approximate the solutions. Therefore, we propose an algorithm for approximate DP that relies on a fuzzy partition of the state space and on a discretization of the action space. This fuzzy Q-iteration algorithm works for deterministic processes under the discounted return criterion. We prove that fuzzy Q-iteration asymptotically converges to a solution that lies within a bound of the optimal solution. A bound on the suboptimality of the solution obtained in a finite number of iterations is also derived. Under continuity assumptions on the dynamics and on the reward function, we show that fuzzy Q-iteration is consistent, i.e., that it asymptotically obtains the optimal solution as the approximation accuracy increases. These properties hold both when the parameters of the approximator are updated synchronously and when they are updated asynchronously. The asynchronous algorithm is proven to converge at least as fast as the synchronous one. The performance of fuzzy Q-iteration is illustrated on a two-link manipulator control problem.
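    The sketch below implements synchronous fuzzy Q-iteration for a scalar deterministic system, with triangular membership functions on a uniform grid playing the role of the fuzzy partition and a discretized action set; the dynamics, reward, and constants are illustrative assumptions rather than the paper's two-link manipulator benchmark.

```python
# Sketch of synchronous fuzzy Q-iteration: one parameter per
# (membership function, discrete action) pair, updated by Bellman backups.
import numpy as np

gamma = 0.9
xs = np.linspace(-1.0, 1.0, 41)    # centers of the fuzzy partition
us = np.linspace(-0.5, 0.5, 11)    # discretized action space

f = lambda x, u: np.clip(0.95 * x + u, xs[0], xs[-1])
reward = lambda x, u: -(x**2 + 0.05 * u**2)

def memberships(x):
    """Triangular MFs on a uniform grid = linear-interpolation weights."""
    i = np.clip(np.searchsorted(xs, x) - 1, 0, xs.size - 2)
    w = (x - xs[i]) / (xs[i + 1] - xs[i])
    mu = np.zeros(xs.size)
    mu[i], mu[i + 1] = 1.0 - w, w
    return mu

theta = np.zeros((xs.size, us.size))
for it in range(500):
    new = np.empty_like(theta)
    for i, x in enumerate(xs):
        for j, u in enumerate(us):
            mu = memberships(f(x, u))
            # Synchronous backup: all parameters computed from the old theta
            # (an asynchronous variant would write into theta immediately).
            new[i, j] = reward(x, u) + gamma * np.max(mu @ theta)
    if np.max(np.abs(new - theta)) < 1e-8:   # contraction => convergence
        break
    theta = new

greedy_u = us[np.argmax(theta, axis=1)]      # resulting policy at the centers
```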