
    Gossip and Distributed Kalman Filtering: Weak Consensus under Weak Detectability

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time scale. Communication among sensors is random: each sensor occasionally exchanges its filtering state information with a neighbor, depending on the availability of the appropriate network link. We show that, under a weak distributed detectability condition, (1) the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics, and (2) the network achieves \emph{weak consensus}, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node of the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, with the switching dictated by a non-stationary Markov chain on the network graph.
    Comment: Submitted to the IEEE Transactions, 30 pages
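    As a reading aid, here is a minimal, hypothetical simulation of a gossip-style distributed Kalman filter in Python/NumPy: two sensors each run a local Kalman filter and occasionally swap their filter states over a randomly available link. The network size, system matrices, and swap rule below are illustrative assumptions, not the paper's exact GIKF protocol or its detectability conditions.

```python
# Minimal sketch of a gossip-style distributed Kalman filter (illustrative
# only; the exact GIKF exchange rule and assumptions are in the paper).
import numpy as np

rng = np.random.default_rng(0)

# Linear system: x_{k+1} = A x_k + w_k,  sensor i observes y_i = H_i x_k + v_i
A = np.array([[1.05, 0.1], [0.0, 0.95]])   # mildly unstable dynamics
Q = 0.1 * np.eye(2)                        # process noise covariance
H = [np.array([[1.0, 0.0]]),               # sensor 0 sees the first state
     np.array([[0.0, 1.0]])]               # sensor 1 sees the second state
R = [np.array([[0.5]]), np.array([[0.5]])]
neighbors = {0: [1], 1: [0]}               # a two-node network (one link)

N, T = 2, 200
x = np.zeros(2)
est = [np.zeros(2) for _ in range(N)]      # local state estimates
P = [10.0 * np.eye(2) for _ in range(N)]   # local error covariances

for k in range(T):
    # true state evolution and local observations
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    y = [H[i] @ x + rng.multivariate_normal(np.zeros(1), R[i]) for i in range(N)]

    # gossip step: with some probability, a node swaps its filter state
    # (estimate and covariance) with a randomly chosen neighbor
    if rng.random() < 0.5:
        i = rng.integers(N)
        j = rng.choice(neighbors[i])
        est[i], est[j] = est[j], est[i]
        P[i], P[j] = P[j], P[i]

    # local Kalman prediction + update with each sensor's own observation
    for i in range(N):
        est[i] = A @ est[i]
        P[i] = A @ P[i] @ A.T + Q
        S = H[i] @ P[i] @ H[i].T + R[i]
        K = P[i] @ H[i].T @ np.linalg.inv(S)
        est[i] = est[i] + (K @ (y[i] - H[i] @ est[i])).ravel()
        P[i] = (np.eye(2) - K @ H[i]) @ P[i]

print("final squared errors per sensor:",
      [float(np.sum((est[i] - x) ** 2)) for i in range(N)])
```

    Each sensor alone cannot observe the full state, which is the kind of situation where the occasional exchange of filter states across the network is what keeps the local errors bounded.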

    Stability estimating in optimal stopping problem

    We consider the optimal stopping problem for a discrete-time Markov process on a Borel state space $X$. It is supposed that an unknown transition probability $p(\cdot\mid x)$, $x\in X$, is approximated by a transition probability $\widetilde{p}(\cdot\mid x)$, $x\in X$, and that the stopping rule $\widetilde{\tau}_*$, optimal for $\widetilde{p}$, is applied to the process governed by $p$. We obtain an upper bound for the difference between the total expected cost resulting from applying $\widetilde{\tau}_*$ and the minimal total expected cost. The bound is a constant times $\sup_{x\in X}\Vert p(\cdot\mid x)-\widetilde{p}(\cdot\mid x)\Vert$, where $\Vert\cdot\Vert$ is the total variation norm.
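    Schematically, the stability estimate described above has the following form, where $V(\tau, p)$ denotes the total expected cost of stopping rule $\tau$ under the true transition probability $p$ and $C$ is a model-dependent constant; this notation is assumed here only as a sketch, and the precise constant and hypotheses are those of the paper.

```latex
\[
  0 \;\le\; V\!\left(\widetilde{\tau}_*,\, p\right) \;-\; \inf_{\tau} V(\tau, p)
  \;\le\; C \,\sup_{x\in X} \bigl\Vert p(\cdot\mid x) - \widetilde{p}(\cdot\mid x) \bigr\Vert .
\]
```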

    Stochastic stability research for complex power systems

    Bibliography: p. 302-311. "November 1980." "Midterm report ...." U.S. Dept. of Energy Contract ET-76-A-01-2295. Tobias A. Trygar.

    Estimation and control of non-linear and hybrid systems with applications to air-to-air guidance

    Issued as Progress report and Final report, Project no. E-21-67.

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Formal Methods for Autonomous Systems

    Formal methods are rigorous, mathematical approaches to system development and have played a key role in establishing the correctness of safety-critical systems. The main building blocks of formal methods are models and specifications, which are analogous to behaviors and requirements in system design and give us the means to verify and synthesize system behaviors with formal guarantees. This monograph surveys the current state of the art in applying formal methods to the autonomous systems domain. We consider correct-by-construction synthesis under various formulations, including closed-system, reactive, and probabilistic settings. Beyond synthesizing systems in known environments, we address uncertainty and use formal methods to bound the behavior of systems that employ learning. Further, we examine the synthesis of systems with monitoring, a mitigation technique that ensures a system that deviates from its expected behavior knows a way of returning to normalcy. We also show how learning can overcome some limitations of formal methods themselves. We conclude with future directions for formal methods in reinforcement learning, uncertainty, privacy, explainability of formal methods, and regulation and certification.
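    As a toy illustration of the "models and specifications" viewpoint (not an example from the monograph), the Python sketch below checks a safety specification, "the error state is never reachable", on a small hand-made transition system by exhaustive reachability. The state names and the faulty transition are made up for illustration; real model checkers handle far richer models and temporal-logic specifications.

```python
# Toy safety check: is a bad state reachable in a finite transition system?
from collections import deque

# Model: states and nondeterministic transitions of a tiny controller.
transitions = {
    "idle":     ["sensing"],
    "sensing":  ["planning", "idle"],
    "planning": ["acting", "error"],   # a faulty edge into "error"
    "acting":   ["idle"],
    "error":    ["error"],
}
initial = "idle"

# Specification: a safety property, encoded as a set of bad states.
bad_states = {"error"}

def check_safety(transitions, initial, bad_states):
    """Breadth-first reachability: return a counterexample path to a bad
    state if one exists, otherwise None (the property holds)."""
    queue = deque([[initial]])
    visited = {initial}
    while queue:
        path = queue.popleft()
        state = path[-1]
        if state in bad_states:
            return path
        for nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None

counterexample = check_safety(transitions, initial, bad_states)
if counterexample:
    print("property violated, counterexample:", counterexample)
else:
    print("property holds")
```

    The exhaustive search either proves the property over all behaviors of the model or returns a concrete counterexample trace, which is the formal-guarantee flavor the abstract refers to.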

    A Bayesian Approach to Learning Hidden Markov Model Topology with Applications to Biological Sequence Analysis

    Hidden Markov models (HMMs) are a widely and successfully used tool in statistical modeling and statistical pattern recognition. One fundamental problem in applying HMMs is finding the underlying architecture, or topology, particularly when there is no strong evidence from the application domain, e.g., when doing black-box modeling. Topology matters both for obtaining good parameter estimates and for performance: a model with “too many” states, and hence too many parameters, requires too much training data, while a model with “not enough” states prevents the HMM from capturing subtle statistical patterns. We have developed a novel algorithm that, given sequence data originating from an ergodic process, infers an HMM, its topology, and its parameters. We introduce a Bayesian approach
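    To see why topology drives the amount of training data needed, the snippet below counts the free parameters of a fully connected discrete HMM with N hidden states and M output symbols; a sparser topology removes transition parameters and shrinks this count. This is only an illustration of the parameter-counting argument, not the paper's Bayesian algorithm.

```python
# Free parameters of a fully connected (ergodic) discrete HMM:
#   transitions: N*(N-1), emissions: N*(M-1), initial distribution: N-1.
def hmm_free_parameters(n_states: int, n_symbols: int) -> int:
    transitions = n_states * (n_states - 1)
    emissions = n_states * (n_symbols - 1)
    initial = n_states - 1
    return transitions + emissions + initial

for n in (2, 5, 10, 20):
    print(f"{n:2d} states, 20 symbols -> {hmm_free_parameters(n, 20):4d} free parameters")
```

    The transition part grows roughly quadratically with the number of states, which is why choosing the topology, rather than simply adding states, is central to getting reliable estimates from limited data.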

    Normas e estabilidade para modelos estocásticos cuja variação do controle e do estado aumentam a incerteza (Norms and stability for stochastic models in which control and state variations increase uncertainty)

    Advisor: João Bosco Ribeiro do Val. Master's dissertation (mestrado), Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Summary (translated from the Portuguese): This master's dissertation revolves around the control of uncertain systems. The mathematical models used as the basis for designing automatic controllers are naturally an approximate representation of the real system, which, together with external disturbances and unmodeled dynamics, creates uncertainty about the systems under study. This theme is discussed frequently in the control literature, in particular in the sub-areas of stochastic control and robust control. Among the techniques developed within stochastic control theory, a recent proposal differs from the others in being based on the idea that abrupt variations in the control policy may lead to greater uncertainty about the system. Mathematically, this notion is represented by a stochastic noise that depends on the magnitude of the control action, and the technique was named VCAI, the Portuguese acronym for "control variation increases uncertainty" (CVIU). The corresponding optimal control policy, obtained by dynamic programming, exhibits a region around the equilibrium point in which the optimal policy is to keep the equilibrium control action unchanged, a result that appears particular to the CVIU approach but can be related to cautious management policies in fields such as economics and biology. The CVIU optimal control problem had previously been solved under a discounted quadratic cost criterion with an infinite optimization horizon, and in this dissertation we use that solution to attack the long-run average cost problem. Given a certain similarity between the structure of the stochastic noise in the CVIU approach and models used in robust control theory, we also discuss possible relations between the proposed approach and robust controllers, as well as some potential applications of the proposed model.
    Abstract: This work discusses a new approach to the control of uncertain systems. Uncertain systems and their representation are a recurrent theme in control theory: approximate mathematical models, unmodeled dynamics, and external disturbances are all sources of uncertainty in automated systems, and the topic has been studied extensively in the control literature, particularly within the stochastic and robust control research areas. Within the stochastic framework, a recent approach, named CVIU (control variation increases uncertainty), was proposed. The approach differs from previous models in assuming that a control action might actually increase the uncertainty about an unknown system, a notion represented by a stochastic noise that depends on the absolute value of the control input. Moreover, the solution of the corresponding stochastic optimal control problem shows the existence of a region around the equilibrium point in which the optimal action is to keep the equilibrium control action unchanged. The CVIU control problem was previously solved by adopting a discounted quadratic cost formulation, and in this work we extend that result and study the corresponding long-run average cost control problem. We also discuss possible relations between the CVIU approach and models from robust control theory, and present some potential applications of the theory developed here.
    Master's degree (mestrado) in Electrical Engineering, Automation concentration. Grants 2016/02208-6 and 2017/10340-4, FAPESP.
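    The CVIU noise structure described in the abstracts can be written schematically as below. This is an illustrative sketch under assumed notation ($f$ a nominal dynamics, $\bar{\sigma}$, $\sigma_x$, $\sigma_u$ illustrative noise coefficients), not the dissertation's exact model.

```latex
\[
  x_{k+1} \;=\; f(x_k, u_k)
  \;+\; \bigl(\bar{\sigma} + \sigma_x\,\lvert x_k\rvert + \sigma_u\,\lvert u_k\rvert\bigr)\,\varepsilon_k,
  \qquad \varepsilon_k \sim \mathcal{N}(0, 1).
\]
```

    Because the noise intensity grows with the magnitudes of the state and the control, large control corrections inject additional uncertainty, which is what produces the region around equilibrium where the optimal policy keeps the control action unchanged.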