    Approximate Kalman-Bucy filter for continuous-time semi-Markov jump linear systems

    The aim of this paper is to propose a new numerical approximation of the Kalman-Bucy filter for semi-Markov jump linear systems. This approximation is based on the selection of typical trajectories of the driving semi-Markov chain of the process by using an optimal quantization technique. The main advantage of this approach is that it makes pre-computations possible. We derive a Lipschitz property for the solution of the Riccati equation and a general result on the convergence of perturbed solutions of semi-Markov switching Riccati equations when the perturbation comes from the driving semi-Markov chain. Based on these results, we prove the convergence of our approximation scheme in a general infinite countable state space framework and derive an error bound in terms of the quantization error and the time discretization step. We employ the proposed filter in a magnetic levitation example with Markovian failures and compare its performance with both the Kalman-Bucy filter and the Markovian linear minimum mean squares estimator.
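
    As a rough illustration of the object at the heart of the paper, the sketch below integrates a mode-switching Kalman-Bucy Riccati equation along one fixed mode trajectory. The two modes, their system matrices and the Euler step size are invented for illustration; the paper's quantization of semi-Markov trajectories and its error bounds are not reproduced here.

```python
# Sketch: Euler integration of the mode-switching Riccati equation that drives
# the Kalman-Bucy gain. The matrices A, C, Q, R per mode are hypothetical; the
# paper's quantization of semi-Markov trajectories is not reproduced.
import numpy as np

# Two hypothetical modes of a jump linear system dx = A x dt + dw, dy = C x dt + dv
modes = {
    0: dict(A=np.array([[0.0, 1.0], [-2.0, -0.5]]), C=np.array([[1.0, 0.0]]),
            Q=0.1 * np.eye(2), R=np.array([[0.05]])),
    1: dict(A=np.array([[0.0, 1.0], [-0.5, -2.0]]), C=np.array([[1.0, 0.0]]),
            Q=0.2 * np.eye(2), R=np.array([[0.05]])),
}

def riccati_rhs(P, m):
    """dP/dt = A P + P A' + Q - P C' R^{-1} C P for the active mode m."""
    A, C, Q, R = m["A"], m["C"], m["Q"], m["R"]
    return A @ P + P @ A.T + Q - P @ C.T @ np.linalg.solve(R, C @ P)

def propagate(P0, mode_path, dt=1e-3):
    """Integrate the Riccati ODE along a given mode trajectory (one mode per step)."""
    P = P0.copy()
    for k in mode_path:
        P = P + dt * riccati_rhs(P, modes[k])
    return P

# Example: hold mode 0 for 2s, then mode 1 for 2s (one 'typical trajectory')
path = [0] * 2000 + [1] * 2000
P_final = propagate(np.eye(2), path)
print(P_final)  # error covariance after following this trajectory
```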

    Hybrid Behaviour of Markov Population Models

    We investigate the behaviour of population models written in Stochastic Concurrent Constraint Programming (sCCP), a stochastic extension of Concurrent Constraint Programming. In particular, we focus on models for which we can define a semantics of sCCP both in terms of Continuous Time Markov Chains (CTMCs) and in terms of Stochastic Hybrid Systems, in which some populations are approximated continuously, while others are kept discrete. We prove the correctness of the hybrid semantics from the point of view of the limiting behaviour of a sequence of models for increasing population size. More specifically, we prove that, under suitable regularity conditions, the sequence of CTMCs constructed from sCCP programs for increasing population size converges to the hybrid system constructed by means of the hybrid semantics. We also investigate what happens for sCCP models in which some transitions are guarded by Boolean predicates or in the presence of instantaneous transitions.
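
    The limit result the abstract refers to can be illustrated, very loosely, on a plain birth-death population model rather than an sCCP program: as the system size grows, the scaled CTMC trajectory approaches its fluid ODE. The rates and parameters below are made up for illustration.

```python
# Sketch: the kind of limit the hybrid semantics relies on, shown on a plain
# birth-death population model (not an sCCP program). As the system size N
# grows, the scaled CTMC trajectory approaches the deterministic ODE.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_density(N, birth=1.0, death=0.8, t_end=10.0):
    """Simulate X(t) with rates birth*N and death*X; return the final density X/N."""
    x, t = 0, 0.0
    while t < t_end:
        rates = np.array([birth * N, death * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
    return x / N

def ode_density(birth=1.0, death=0.8, t_end=10.0, dt=1e-3):
    """Euler integration of the fluid limit dz/dt = birth - death*z."""
    z = 0.0
    for _ in range(int(t_end / dt)):
        z += dt * (birth - death * z)
    return z

for N in (10, 100, 1000):
    print(N, gillespie_density(N), ode_density())
```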

    Comparing hitting time behaviour of Markov jump processes and their diffusion approximations

    Markov jump processes can provide accurate models in many applications, notably chemical and biochemical kinetics, and population dynamics. Stochastic differential equations offer a computationally efficient way to approximate these processes. It is therefore of interest to establish results that shed light on the extent to which the jump and diffusion models agree. In this work we focus on mean hitting time behavior in a thermodynamic limit. We study three simple types of reactions where analytical results can be derived, and we find that how well the mean hitting times of the two models agree differs greatly from case to case. In particular, for a degradation reaction we find that the relative discrepancy decays extremely slowly, namely, as the inverse of the logarithm of the system size. After giving some further computational results, we conclude by pointing out that studying hitting times allows the Markov jump and stochastic differential equation regimes to be compared in a manner that avoids pitfalls that may invalidate other approaches.
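
    For the degradation reaction mentioned above, the mean hitting time of zero under the Markov jump model has a closed form (a harmonic sum), while the chemical Langevin approximation can be estimated by simulation. The sketch below compares the two under invented parameters; it does not reproduce the paper's analysis or its inverse-logarithm rate.

```python
# Sketch: mean time for a pure degradation reaction X -> 0 (propensity k*X) to
# hit zero, comparing the exact Markov jump value H_N/k with an Euler-Maruyama
# estimate for the chemical Langevin approximation. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
k, N = 1.0, 100

# Exact mean hitting time of the jump process: sum of expected exponential waits.
t_jump = sum(1.0 / (k * i) for i in range(1, N + 1))

def cle_hitting_time(dt=1e-3, max_t=50.0):
    """One Euler-Maruyama path of dX = -kX dt + sqrt(kX) dW, stopped at X <= 0."""
    x, t = float(N), 0.0
    while x > 0 and t < max_t:
        x += -k * x * dt + np.sqrt(max(k * x, 0.0) * dt) * rng.normal()
        t += dt
    return t

t_sde = np.mean([cle_hitting_time() for _ in range(200)])
print(t_jump, t_sde, abs(t_jump - t_sde) / t_jump)  # relative discrepancy
```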

    The stability of a graph partition: A dynamics-based framework for community detection

    Recent years have seen a surge of interest in the analysis of complex networks, facilitated by the availability of relational data and the increasingly powerful computational resources that can be employed for their analysis. Naturally, the study of real-world systems leads to highly complex networks and a current challenge is to extract intelligible, simplified descriptions from the network in terms of relevant subgraphs, which can provide insight into the structure and function of the overall system. Sparked by seminal work by Newman and Girvan, an interesting line of research has been devoted to investigating modular community structure in networks, revitalising the classic problem of graph partitioning. However, modular or community structure in networks has notoriously evaded rigorous definition. The most accepted notion of community is perhaps that of a group of elements which exhibit a stronger level of interaction within themselves than with the elements outside the community. This concept has resulted in a plethora of computational methods and heuristics for community detection. Nevertheless, a firm theoretical understanding of most of these methods, in terms of how they operate and what they are supposed to detect, is still lacking to date. Here, we develop a dynamical perspective towards community detection enabling us to define a measure named the stability of a graph partition. It will be shown that a number of previously ad hoc heuristics for community detection can be seen as particular cases of our method, providing us with a dynamical reinterpretation of those measures. Our dynamics-based approach thus serves as a unifying framework to gain a deeper understanding of different aspects and problems associated with community detection and allows us to propose new dynamically-inspired criteria for community structure.
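
    One common form of the stability measure is the trace of the clustered autocovariance of a continuous-time random walk, r(t, H) = trace[H^T (Pi P(t) - pi pi^T) H]. The sketch below evaluates this quantity for a made-up six-node graph and a two-way partition; it follows one standard variant of the definition and is not taken from the chapter itself.

```python
# Sketch of the partition-stability measure r(t, H) for a continuous-time
# unbiased random walk on an undirected graph. The small graph and the
# partition below are made up for illustration.
import numpy as np
from scipy.linalg import expm

# Adjacency of a tiny graph: two triangles joined by one edge
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
])
d = A.sum(axis=1)
pi = d / d.sum()                        # stationary distribution of the walk
L_rw = np.eye(len(A)) - A / d[:, None]  # random-walk Laplacian I - D^{-1} A

def stability(H, t):
    """Trace of the clustered autocovariance of the walk at Markov time t."""
    P_t = expm(-t * L_rw)                              # transition kernel exp(-t L_rw)
    R = H.T @ (np.diag(pi) @ P_t - np.outer(pi, pi)) @ H
    return np.trace(R)

# Indicator matrix H: community {0,1,2} vs {3,4,5}
H = np.zeros((6, 2))
H[:3, 0] = 1
H[3:, 1] = 1
for t in (0.5, 1.0, 5.0, 20.0):
    print(t, stability(H, t))
```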

    Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics

    Automated synthesis of provably correct controllers for cyber-physical systems is crucial for deployment in safety-critical scenarios. However, hybrid features and stochastic or unknown behaviours make this problem challenging. We propose a method for synthesising controllers for Markov jump linear systems (MJLSs), a class of discrete-time models for cyber-physical systems, so that they certifiably satisfy probabilistic computation tree logic (PCTL) formulae. An MJLS consists of a finite set of stochastic linear dynamics and discrete jumps between these dynamics that are governed by a Markov decision process (MDP). We consider the cases where the transition probabilities of this MDP are either known up to an interval or completely unknown. Our approach is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS. We formalise this abstraction as an interval MDP (iMDP) for which we compute intervals of transition probabilities using sampling techniques from the so-called 'scenario approach', resulting in a probabilistically sound approximation. We apply our method to multiple realistic benchmark problems, in particular a temperature control problem and an aerial vehicle delivery problem.
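
    The abstraction step hinges on turning samples of the stochastic dynamics into probability intervals for transitions between abstract regions. The sketch below does this with a plain Hoeffding bound rather than the scenario-approach bounds used in the paper, and the mode dynamics, noise model and region are invented for illustration.

```python
# Sketch: sample-based probability intervals for one abstract transition of a
# jump linear system. A simple Hoeffding bound stands in for the paper's
# scenario-approach bounds; dynamics and regions are made up.
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[0.9, 0.2], [0.0, 0.8]])   # one hypothetical mode of the MJLS
Sigma = 0.05 * np.eye(2)                  # process-noise covariance

def in_region(x, lo, hi):
    """Membership test for a rectangular abstract region [lo, hi]^2."""
    return bool(np.all(x >= lo) and np.all(x <= hi))

def transition_interval(x0, lo, hi, n=1000, beta=1e-3):
    """Frequency of landing in the region plus a Hoeffding confidence margin."""
    hits = 0
    for _ in range(n):
        x1 = A @ x0 + rng.multivariate_normal(np.zeros(2), Sigma)
        hits += in_region(x1, lo, hi)
    p_hat = hits / n
    eps = np.sqrt(np.log(2.0 / beta) / (2.0 * n))   # two-sided Hoeffding margin
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

print(transition_interval(np.array([1.0, 1.0]), lo=0.5, hi=1.5))
```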

    Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior

    This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic causal model for predicting the behavior generated by modern percept-driven robot plans. PHAMs represent aspects of robot behavior that cannot be represented by most action models used in AI planning: the temporal structure of continuous control processes, their non-deterministic effects, several modes of their interferences, and the achievement of triggering conditions in closed-loop robot plans. The main contributions of this article are: (1) PHAMs, a model of concurrent percept-driven behavior, its formalization, and proofs that the model generates probably, qualitatively accurate predictions; and (2) a resource-efficient inference method for PHAMs based on sampling projections from probabilistic action models and state descriptions. We show how PHAMs can be applied to planning the course of action of an autonomous robot office courier based on analytical and experimental results.
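
    The flavour of sampling-based projection can be conveyed by a toy example: estimate the probability that a short sequential courier plan meets a deadline by sampling from simple probabilistic action models. The actions, durations and success rates below are invented, and the PHAM machinery for concurrency and triggering conditions is not modelled.

```python
# Sketch: estimate the probability that a simple sequential plan meets a
# deadline by sampling projections from invented probabilistic action models.
# The PHAM formalism itself (concurrency, triggering conditions) is not modelled.
import random

random.seed(3)

# Each hypothetical action: (name, mean duration in seconds, success probability)
plan = [("goto_office", 40.0, 0.95), ("pick_up_letter", 10.0, 0.9),
        ("deliver_letter", 45.0, 0.95)]

def project_once(deadline=120.0):
    """Sample one projection: exponential durations, Bernoulli success per action."""
    t = 0.0
    for _, mean_dur, p_success in plan:
        if random.random() > p_success:
            return False                       # action failed, plan aborted
        t += random.expovariate(1.0 / mean_dur)
    return t <= deadline

samples = [project_once() for _ in range(10_000)]
print(sum(samples) / len(samples))             # estimated success probability
```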