
    Lyapunov Criterion for Stochastic Systems and Its Applications in Distributed Computation

    This paper presents new sufficient conditions for convergence and for asymptotic or exponential stability of stochastic discrete-time systems, under which the constructed Lyapunov function must decrease in expectation along the system's solutions only after a finite number of steps, rather than strictly at every step as required in classical stochastic Lyapunov theory. As a first application of this new Lyapunov criterion, we consider the product of an arbitrary random sequence of stochastic matrices, including matrices with zero diagonal entries, and obtain sufficient conditions ensuring that the product almost surely converges to a matrix with identical rows; we also show that the rate of convergence is exponential under additional conditions. As a second application, we study a distributed network algorithm for solving linear algebraic equations, relaxing existing conditions on the network structure while still guaranteeing that the equations are solved asymptotically. (Comment: 14 pages, 1 figure.)
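    As a schematic rendering of the relaxed criterion (the notation below is assumed for illustration and is not taken verbatim from the paper), the classical theory asks for a strict expected decrease at every step, whereas the criterion described here asks only for a decrease over some finite horizon:

```latex
% Classical stochastic Lyapunov condition: strict expected decrease at every step
\[
  \mathbb{E}\bigl[V(x_{k+1}) \mid x_k\bigr] \le V(x_k) - \alpha(\lVert x_k\rVert)
  \quad \text{for all } k.
\]
% Relaxed finite-step condition (illustrative): for some finite horizon $T$,
\[
  \mathbb{E}\bigl[V(x_{k+T}) \mid x_k\bigr] \le V(x_k) - \alpha(\lVert x_k\rVert),
\]
% with no decrease required at the intermediate steps $k+1, \dots, k+T-1$.
```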

    Norms and stability for stochastic models in which control and state variations increase uncertainty

    Advisor: João Bosco Ribeiro do Val. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Abstract: This work discusses a new approach to the control of uncertain systems. Uncertain systems and their representation are a recurrent theme in control theory: approximate mathematical models, unmodeled dynamics, and external disturbances are all sources of uncertainty in automated systems, and the topic has been studied extensively in the control literature, particularly within the stochastic and robust control research areas. Within the stochastic framework, a recent approach, named CVIU (control variation increases uncertainty), was proposed. It differs from previous models in assuming that a control action might actually increase the uncertainty about an unknown system, a notion represented by stochastic noise whose intensity depends on the absolute value of the control input. The solution of the corresponding stochastic optimal control problem, obtained via dynamic programming, shows the existence of a region around the equilibrium point in which the optimal action is to keep the equilibrium control action unchanged, a result that appears particular to the CVIU approach but can be related to cautious management policies in areas such as economics and biology. The CVIU control problem was previously solved under a discounted quadratic cost formulation over an infinite horizon; in this work we build on that solution to attack the corresponding long-run average cost problem. Given a certain similarity between the noise structure of the CVIU approach and models used in robust control theory, we also discuss possible relations between the proposed approach and robust controllers, and present some potential applications of the model. (Master's degree in Electrical Engineering, Automation; FAPESP grants 2016/02208-6 and 2017/10340-4.)
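    A minimal scalar sketch of the kind of model described, with assumed notation (the dissertation's actual formulation may differ):

```latex
% Scalar CVIU-style dynamics (illustrative notation): the noise intensity
% grows with the magnitude of the control action $u_k$,
\[
  x_{k+1} = a\,x_k + b\,u_k
    + \bigl(\bar\sigma + \sigma_u \lvert u_k\rvert\bigr)\,\varepsilon_k,
  \qquad \varepsilon_k \sim \mathcal{N}(0, 1),
\]
% so that aggressive control moves inject extra uncertainty; this is what
% produces the region around the equilibrium where the optimal policy keeps
% the control action unchanged.
```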

    Extended generator and almost sure stability for degenerate diffusion processes

    Advisor: João Bosco Ribeiro do Val. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.

    Abstract: The thesis develops an extended generator for diffusion processes in explicit form, associated with the existence of a viscosity solution of the Hamilton-Jacobi-Bellman (HJB) equation in the sense that such a solution belongs to the generator's domain. This is of direct interest for Markovian diffusion processes, insofar as Dynkin's formula then allows expected-value calculus for the evolution of nonsmooth functionals, which cannot be handled adequately by Itô's calculus. The characterization applies in a novel approach to almost sure (a.s.) stability, in the form of finite recurrence, for long-run time-invariant diffusion problems affected by persistent noise. The approach builds on the Kushner-Khasminskii method for classical stability to the origin and on Meyn's approach to characterizing the extended generator. For semilinear systems with a convex diffusion coefficient, the thesis shows that the solution of the HJB equation is a convex function whenever the running cost is convex; a Lyapunov function is thus obtained directly from the optimal solution, bridging optimality and stability. Notions of stability associated with convergence to a compact set are explored, and suboptimal stable solutions can be obtained by imposing nonsmooth Lyapunov functions, a setting little explored in the literature where possible applications arise and are presented as examples in the thesis. (Doctorate in Electrical Engineering, Automation; CAPES grant 1408610.)
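    For context, these are the standard definitions behind the abstract (assumed here rather than quoted from the thesis): the generator of a diffusion dX_t = f(X_t) dt + σ(X_t) dW_t, and Dynkin's formula.

```latex
% Standard infinitesimal generator of the diffusion, acting on a smooth
% test function V:
\[
  \mathcal{L}V(x) = f(x)^{\top}\nabla V(x)
    + \tfrac{1}{2}\,\operatorname{tr}\!\bigl(\sigma(x)\sigma(x)^{\top}\nabla^{2}V(x)\bigr).
\]
% Dynkin's formula, the expected-value calculus referred to in the abstract:
\[
  \mathbb{E}_x\bigl[V(X_t)\bigr] = V(x)
    + \mathbb{E}_x\Bigl[\int_0^t \mathcal{L}V(X_s)\,\mathrm{d}s\Bigr].
\]
% The extended generator enlarges the domain of \mathcal{L} so that this
% identity also holds for suitable nonsmooth V, beyond the reach of Itô's rule.
```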

    A Partial History of the Early Development of Continuous-Time Nonlinear Stochastic Systems Theory

    The late 1950s through the mid-1970s were a period of renaissance in control theory. The classical theory, largely based on linear systems, Fourier and Laplace transform methods, and stability analysis via Bode and Nyquist plots and the Routh-Hurwitz criteria, was very successful, and provided the foundation

    Robust stability theory for stochastic dynamical systems

    In this work, we focus on developing analysis tools related to stability theory for certain classes of stochastic dynamical systems that permit non-unique solutions. The non-unique nature of solutions arises primarily because the system dynamics are modeled by set-valued mappings. There are two main motivations for studying such classes of systems: first, understanding them is crucial to developing a robust stability theory; second, such system models allow flexibility in control design problems. We begin by developing analysis tools for a simple class of discrete-time stochastic systems modeled by set-valued maps and then extend the results to a larger class of stochastic hybrid systems. Stochastic hybrid systems are dynamical systems that combine continuous-time dynamics, discrete-time dynamics, and randomness. The analysis tools are established for properties like global asymptotic stability in probability and global recurrence. We focus on establishing results related to sufficient conditions for stability, weak sufficient conditions for stability, robust stability conditions, and converse Lyapunov theorems. A primary assumption throughout is that the stochastic system satisfies mild regularity properties with respect to the state variable and the random input; these regularity properties are needed to establish the existence of random solutions and sequential compactness results for the solution set of the stochastic system.

    We now briefly explain the four main types of analysis tools studied in this work. Sufficient conditions for stability involve Lyapunov-like functions satisfying strict decrease properties along solutions. Weak sufficient conditions relax the strict decrease of the Lyapunov-like function along solutions and rely either on knowledge of the behavior of solutions on certain level sets of the Lyapunov-like function or on multiple nested non-strict Lyapunov-like functions; the invariance principle and Matrosov function theory fall into this category. Robust stability conditions determine when stability properties are preserved under sufficiently small perturbations of the nominal system data. Robustness of stability is an important concept in the presence of measurement errors, disturbances, and parametric uncertainty in the nominal system. We study two approaches to verifying robustness: the first relies on the regularity properties of the system data, and the second uses Lyapunov functions. Robustness analysis is an area where set-valued dynamical systems arise naturally, which underscores our reason for studying such systems. Finally, we focus on developing converse Lyapunov theorems for stochastic systems. Converse Lyapunov theorems establish the equivalence between asymptotic properties of a system and the existence of a function that satisfies a decrease condition along solutions; strong forms of the converse theorem imply the existence of smooth Lyapunov functions. A fundamental way in which our results differ from the literature on converse theorems for stochastic systems is that we exploit robustness of the stability property to establish the existence of a smooth Lyapunov function.
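    As a toy, hedged illustration of the strict decrease condition named above (the system, the function V, and the noise here are invented for the example; the thesis treats far more general set-valued and hybrid dynamics), one can numerically probe the drift E[V(x+)] - V(x) along one selection of a stochastic recursion:

```python
import numpy as np

# A toy numerical probe (invented for illustration): estimate the drift
# E[V(x+)] - V(x) along one selection x+ = 0.5*x + w of a stochastic
# recursion, with candidate Lyapunov-like function V(x) = x^2.
rng = np.random.default_rng(0)

def V(x):
    return x ** 2  # candidate Lyapunov-like function

def step(x, w):
    return 0.5 * x + w  # one selection of the (set-valued) dynamics

def estimated_drift(x, n=100_000):
    w = 0.1 * rng.standard_normal(n)    # zero-mean random input
    return V(step(x, w)).mean() - V(x)  # Monte Carlo estimate of the drift

# A strict decrease condition asks this estimate to be <= -rho(|x|) < 0
# away from the attractor; here the drift is negative for the x tested.
for x in (0.5, 1.0, 2.0):
    print(f"x = {x}: estimated drift = {estimated_drift(x):+.4f}")
```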