353 research outputs found

    A Survey of the Probability Density Function Control for Stochastic Dynamic Systems

    Get PDF
    The probability density function (PDF) control strategy investigates controller design approaches that realise a desirable distribution shape for the random variables of stochastic processes. Different from existing stochastic optimisation and control methods, the central problem of PDF control is to establish the evolution of the PDF expressions of the system variables. Once the relationship between the control input and the output PDF is formulated, the control objective can be described as obtaining the control input signals that adjust the system output PDFs to follow pre-specified target PDFs. This paper summarises recent research results on PDF control, where the controller design approaches can be categorised into three groups: 1) system model-based direct evolution PDF control; 2) model-based distribution-transformation PDF control; and 3) data-based PDF control. In addition, minimum entropy control, PDF-based filter design, fault diagnosis and probabilistic decoupling design are also introduced briefly as extended applications from a theoretical perspective.
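
    As an illustration of the tracking objective described above, a minimal sketch of one common formulation is given below; the notation (output PDF \gamma(y,u) on the interval [a,b], target PDF g(y)) is an assumption for illustration and is not taken from the paper.

        % Sketch of an output-PDF tracking performance index (assumed notation):
        % \gamma(y,u) is the output PDF under control input u, g(y) the target PDF.
        \begin{align}
          J(u) &= \int_{a}^{b} \big(\gamma(y,u) - g(y)\big)^{2}\,\mathrm{d}y, \\
          u^{*} &= \arg\min_{u} J(u),
        \end{align}
        % i.e. the control input is chosen so that the output PDF follows the
        % pre-specified target PDF; a control-effort penalty is often added to J(u).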

    An introductory survey of probability density function control

    Get PDF
    The probability density function (PDF) control strategy investigates controller design approaches in which the random variables of stochastic processes are adjusted to follow desirable distributions; in other words, the shape of the system PDF can be regulated by controller design. Different from existing stochastic optimization and control methods, the central problem of PDF control is to establish the evolution of the PDF expressions of the system variables. Once the relationship between the control input and the output PDF is formulated, the control objective can be described as obtaining the control input signals that adjust the system output PDFs to follow pre-specified target PDFs. Motivated by the development of data-driven control and state-of-the-art PDF-based applications, this paper summarizes recent research results on PDF control, where the controller design approaches can be categorized into three groups: (1) system model-based direct evolution PDF control; (2) model-based distribution-transformation PDF control; and (3) data-based PDF control. In addition, minimum entropy control, PDF-based filter design, fault diagnosis and probabilistic decoupling design are also introduced briefly as extended applications from a theoretical perspective. Funding: De Montfort University DMU HEIF’18 project; Natural Science Foundation of Shanxi Province [grant number 201701D221112]; National Natural Science Foundation of China [grant numbers 61503271 and 61603136].
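
    Minimum entropy control, mentioned above as an extended application, can be sketched as follows; the notation (error PDF p_e and the mean constraint) is an assumption for illustration rather than the survey's own formulation.

        % Sketch of a minimum entropy control criterion (assumed notation):
        % p_e(\cdot;u) is the PDF of the closed-loop tracking error e under input u.
        \begin{equation}
          u^{*} = \arg\min_{u} H(e)
                = \arg\min_{u} \left( -\int p_{e}(x;u)\,\ln p_{e}(x;u)\,\mathrm{d}x \right),
          \quad \text{subject to } \mathbb{E}[e] = 0,
        \end{equation}
        % so that the error distribution is centred and made as narrow as possible,
        % which matters for non-Gaussian errors where variance alone is not sufficient.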

    Soft-bound interval control system and its robust fault-tolerant controller design

    Get PDF
    A soft-bound interval control problem is proposed for general non-Gaussian systems, with the aim of keeping the output variable within a bounded region at a specified probability level. To find a feasible solution to this challenging task, the initial soft-bound interval control problem is transformed into an output probability density function (PDF) tracking control problem with constrained tracking errors, so that the controller can be designed within the established framework of stochastic distribution control. Fault-tolerant control (FTC) is then investigated for soft-bound interval control systems in the presence of faults. Three fault detection methods are proposed, based on criteria extracted from the initial soft-bound control problem and the recast PDF tracking problem. An integrated design for fault estimation and FTC is proposed based on a double proportional-integral structure, and is developed through linear matrix inequalities. Extensive simulation studies have been conducted to examine the key design factors, the implementation issues and the effectiveness of the proposed approach.
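
    A minimal sketch of how the soft-bound requirement can be written in terms of the output PDF is given below; the symbols (output PDF \gamma_y, interval [a,b], probability level \beta, target PDF g) are assumptions for illustration.

        % Sketch of the soft-bound interval requirement (assumed notation):
        % \gamma_y(\cdot;u) is the output PDF, [a,b] the bounded region,
        % and \beta the specified probability level.
        \begin{equation}
          \Pr\{a \le y \le b\} \;=\; \int_{a}^{b} \gamma_{y}(\tau; u)\,\mathrm{d}\tau \;\ge\; \beta,
        \end{equation}
        % which can be recast as tracking a target PDF g(y) concentrated on [a,b]
        % while keeping the tracking error \gamma_y(\cdot;u) - g(\cdot) within bounds.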

    Discrete Time Systems

    Get PDF
    Discrete-time systems constitute an important and broad research field. The present consolidation of digital computational means pushes this technological tool into the field, with tremendous impact in areas such as control, signal processing, communications, system modelling and related applications. This book attempts to give an overview of the wide area of discrete-time systems. Its contents are grouped into sections according to significant areas, namely filtering, fixed and adaptive control systems, stability problems and miscellaneous applications. We think that the contribution of the book enlarges the field of discrete-time systems, with significance for the present state of the art. Despite the rapid advances in the field, we also believe that the topics described here allow us to discern some of the main research trends for the coming years.

    Nonlinear Systems

    Get PDF
    This Special Issue of Open Mathematics addresses a challenging subject for theoretical modeling, technical analysis, and numerical simulation in physics and mathematics, as well as in many other fields, where highly correlated nonlinear phenomena, evolving over a large range of time and length scales, control the underlying systems and processes in their spatiotemporal evolution. Indeed, available data, be they physical, biological, or financial, and technologically complex systems and stochastic systems, such as mechanical or electronic devices, can be treated with the same conceptual approach, both analytically and through computer simulation, using effective nonlinear dynamics methods. The aim of this Special Issue is to highlight papers that show the dynamics, control, optimization and applications of nonlinear systems. This has recently become an increasingly popular subject, with impressive growth in applications in engineering, economics, biology, and medicine. Original papers relating to the objective presented above are especially welcome. Potential topics include, but are not limited to: stability analysis of discrete and continuous dynamical systems; nonlinear dynamics in biological complex systems; stability and stabilization of stochastic systems; mathematical models in statistics and probability; synchronization of oscillators and chaotic systems; optimization methods of complex systems; reliability modeling and system optimization; computation and control over networked systems.

    Advances in Evolutionary Algorithms

    Get PDF
    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas and concepts from a part of the huge field of evolutionary algorithms.

    Advances in Robotics, Automation and Control

    Get PDF
    The book presents an excellent overview of recent developments in the different areas of robotics, automation and control. Through its 24 chapters, it presents topics related to control and robot design; it also introduces new mathematical tools and techniques devoted to improving system modeling and control. An important point is the use of rational agents and heuristic techniques to cope with the computational complexity required for controlling complex systems. Through this book, we also find navigation and vision algorithms, and automatic handwriting comprehension and speech recognition systems, that will be included in the next generation of production systems.

    Value Function Estimation in Optimal Control via Takagi-Sugeno Models and Linear Programming

    Full text link
    The present Thesis employs dynamic programming and reinforcement learning techniques in order to obtain optimal policies for controlling nonlinear systems with discrete and continuous states and actions. Initially, a review of the basic concepts of dynamic programming and reinforcement learning is carried out for systems with a finite number of states. After that, the extension of these techniques to systems with a large number of states, or with continuous states, is analysed using approximation functions. The contributions of the Thesis are:
    - A combined identification/Q-function fitting methodology, which involves identification of a Takagi-Sugeno model, computation of (sub)optimal controllers from linear matrix inequalities, and the subsequent data-based fitting of the Q-function via monotonic optimisation.
    - A methodology for learning controllers using approximate dynamic programming via linear programming (ADP-LP). The methodology makes the ADP-LP approach work in practical control applications with continuous state and input spaces. It estimates lower and upper bounds of the optimal value function through functional approximators, and guidelines are provided for the data and for regressor regularisation in order to obtain satisfactory results while avoiding unbounded or ill-conditioned solutions.
    - A methodology based on the linear-programming approach to approximate dynamic programming that obtains a better approximation of the optimal value function in a specific region of the state space. The methodology gradually learns a policy using data available only in the exploration region; the exploration progressively enlarges the learning region until a converged policy is obtained.
    This work was supported by the National Department of Higher Education, Science, Technology and Innovation of Ecuador (SENESCYT), and by the Spanish Ministry of Economy and the European Union, grant DPI2016-81002-R (AEI/FEDER, UE). The author also received a grant for a predoctoral stay under the Programa de Becas Iberoamérica-Santander Investigación 2018 of Santander Bank.
    Díaz Iza, HP. (2020). Value Function Estimation in Optimal Control via Takagi-Sugeno Models and Linear Programming [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/139135
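
    To make the ADP-LP idea concrete, the following is a minimal sketch of the linear-programming lower bound on the optimal value function for a small discrete MDP; the toy problem, the variable names and the use of scipy are assumptions for illustration, whereas the thesis itself works with continuous states and functional approximators.

        import numpy as np
        from scipy.optimize import linprog

        # Toy ADP-LP sketch (assumed setting): a discrete MDP with n_s states and
        # n_a actions, transition tensor P[a, s, :] and stage cost cost[s, a].
        n_s, n_a, gamma = 3, 2, 0.9
        rng = np.random.default_rng(0)
        P = rng.dirichlet(np.ones(n_s), size=(n_a, n_s))   # each P[a, s, :] sums to 1
        cost = rng.uniform(0.0, 1.0, size=(n_s, n_a))      # stage costs c(s, a)

        # Lower-bound LP: maximise sum_s V(s) subject to the Bellman inequalities
        #   V(s) <= c(s, a) + gamma * sum_{s'} P(s'|s, a) V(s')   for all (s, a).
        # linprog minimises, so the objective coefficients are -1.
        A_ub, b_ub = [], []
        for s in range(n_s):
            for a in range(n_a):
                row = -gamma * P[a, s, :]
                row[s] += 1.0                      # coefficient of V(s)
                A_ub.append(row)
                b_ub.append(cost[s, a])

        res = linprog(c=-np.ones(n_s), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(None, None)] * n_s, method="highs")
        V_lower = res.x   # for this exact discrete case the LP recovers V* itself
        print("LP estimate of the optimal value function:", V_lower)

    In the continuous-state setting the thesis addresses, V would be replaced by a parametrised approximator and the Bellman inequalities would be imposed only on sampled data, which is where the guidelines on data and regressor regularisation mentioned above come into play.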