15 research outputs found

    Aggregation and Control of Populations of Thermostatically Controlled Loads by Formal Abstractions

    This work discusses a two-step procedure, based on formal abstractions, to generate a finite-space stochastic dynamical model as an aggregation of the continuous temperature dynamics of a homogeneous population of Thermostatically Controlled Loads (TCLs). The temperature of a single TCL is described by a stochastic difference equation and the TCL status (ON, OFF) by a deterministic switching mechanism. The procedure is formal in that it allows the exact quantification of the error introduced by the abstraction; as such, it builds on and improves a known, earlier approximation technique in the literature. Further, the contribution discusses the extension to the case of a heterogeneous population of TCLs by means of two approaches, resulting in the notion of approximate abstractions. It moreover investigates the problem of global (population-level) regulation and load balancing for the case of TCLs that depend on a control input. The procedure is tested on a case study and benchmarked against the mentioned alternative approach in the literature.
    Comment: 40 pages, 21 figures; the paper generalizes the result of the conference publication: S. Esmaeil Zadeh Soudjani and A. Abate, "Aggregation of Thermostatically Controlled Loads by Formal Abstractions," Proceedings of the European Control Conference 2013, pp. 4232-4237. Version 2: added references for section
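    The two-step construction above can be illustrated with a small simulation sketch: a homogeneous population of cooling TCLs follows a stochastic difference equation with deterministic ON/OFF switching, the temperature deadband is partitioned into bins per mode, and an aggregate finite-state Markov chain is built over those bins. The parameter values and the Monte Carlo estimation of the transition matrix below are illustrative assumptions; the paper derives the chain formally and quantifies the abstraction error exactly.

```python
import numpy as np

# Illustrative cooling-TCL parameters (assumptions, not the paper's case study)
theta_s, delta = 20.0, 0.5           # setpoint and deadband width [deg C]
theta_a, R, P, C = 32.0, 2.0, 14.0, 10.0
h = 10.0 / 3600.0                    # sampling interval [h]
a = np.exp(-h / (R * C))
sigma = 0.01                         # std of the additive process noise

def tcl_step(theta, m, rng):
    """One step of the stochastic difference equation with deterministic switching."""
    theta_next = (a * theta + (1 - a) * (theta_a - m * R * P)
                  + sigma * rng.standard_normal(theta.shape))
    m_next = np.where(theta_next >= theta_s + delta / 2, 1.0,
             np.where(theta_next <= theta_s - delta / 2, 0.0, m))
    return theta_next, m_next

# Step 1: partition the deadband into bins per mode -> finite states (mode, bin).
n_bins = 20
edges = np.linspace(theta_s - delta / 2, theta_s + delta / 2, n_bins + 1)

def abstract_state(theta, m):
    b = np.clip(np.digitize(theta, edges) - 1, 0, n_bins - 1)
    return m.astype(int) * n_bins + b

# Step 2: build the aggregate Markov chain over the finite states; here the
# transition matrix is estimated empirically from samples, whereas the paper
# computes it formally with an exact error quantification.
rng = np.random.default_rng(0)
N = 100_000
theta = rng.uniform(theta_s - delta / 2, theta_s + delta / 2, N)
m = rng.integers(0, 2, N).astype(float)
T = np.zeros((2 * n_bins, 2 * n_bins))
s = abstract_state(theta, m)
theta, m = tcl_step(theta, m, rng)
np.add.at(T, (s, abstract_state(theta, m)), 1.0)
rows = T.sum(axis=1, keepdims=True)
T = T / np.where(rows > 0, rows, 1.0)   # row-stochastic aggregate model
```

    Propagating a population histogram through T tracks, in particular, the fraction of loads that are ON, which is proportional to the aggregate power consumption targeted by the population-level regulation.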

    Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions

    The goal of this work is to formally abstract a Markov process evolving in discrete time over a general state space as a finite-state Markov chain, with the objective of precisely approximating its state probability distribution in time; this distribution can then be computed, approximately but faster, through that of the Markov chain. The approach is based on formal abstractions and employs an arbitrary finite partition of the state space of the Markov process and the computation of average transition probabilities between partition sets. The abstraction technique is formal, in that it comes with guarantees on the introduced approximation that depend on the diameters of the partitions: as such, they can be tuned at will. Further, in the case of Markov processes with unbounded state spaces, a procedure for precisely truncating the state space within a compact set is provided, together with an error bound that depends on the asymptotic properties of the transition kernel of the original process. The overall abstraction algorithm, which practically hinges on piecewise constant approximations of the density functions of the Markov process, is extended to higher-order function approximations: these can lead to improved error bounds and lower associated computational requirements. The approach is practically tested to compute probabilistic invariance of the Markov process under study, and is compared to a known alternative approach from the literature.
    Comment: 29 pages, Journal of Logical Methods in Computer Science
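    A minimal one-dimensional sketch of the idea, under an assumed affine-Gaussian kernel and a piecewise constant approximation: the truncated domain is partitioned uniformly, average transition probabilities are approximated by evaluating the kernel at partition representatives, and the state distribution is then propagated through the resulting finite-state chain. Dynamics, domain, and partition size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Assumed dynamics: x_{k+1} = 0.8 * x_k + w_k, w_k ~ N(0, 0.5^2),
# abstracted over the truncated domain [-3, 3] with a uniform partition.
a_coef, sigma = 0.8, 0.5
lo, hi, n = -3.0, 3.0, 60
edges = np.linspace(lo, hi, n + 1)
centers = 0.5 * (edges[:-1] + edges[1:])

# Piecewise-constant approximation: P[i, j] ~ Pr(x_{k+1} in cell j | x_k = center_i),
# obtained by integrating the Gaussian kernel over each target cell.
P = (norm.cdf(edges[1:], loc=a_coef * centers[:, None], scale=sigma)
     - norm.cdf(edges[:-1], loc=a_coef * centers[:, None], scale=sigma))

# Rows are sub-stochastic: the missing mass is what leaks out of the truncated
# domain, which is the truncation error the abstraction bounds separately.
p = np.zeros(n)
p[np.argmin(np.abs(centers - 1.5))] = 1.0   # point mass near x_0 = 1.5
for _ in range(10):
    p = p @ P                                # approximate distribution at step k
```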

    Probabilistic Reach-Avoid Computation for Partially Degenerate Stochastic Processes


    Dynamic Bayesian networks for formal verification of structured stochastic processes

    We study the problem of finite-horizon probabilistic invariance for discrete-time Markov processes over general (uncountable) state spaces. We compute discrete-time, finite-state Markov chains as formal abstractions of the given Markov processes. Our abstraction differs from existing approaches in two ways: first, we exploit the structure of the underlying Markov process to compute the abstraction separately for each dimension; second, we employ dynamic Bayesian networks (DBNs) as compact representations of the abstraction. In contrast, approaches which represent and store the (exponentially large) Markov chain explicitly incur significantly higher memory requirements. In our experiments, explicit representations only scaled to models of less than half the dimension of those analyzable through DBN representations. We show how to construct a DBN abstraction of a Markov process satisfying an independence assumption on the driving process noise. We compute a guaranteed bound on the error in the abstraction w.r.t. the probabilistic invariance property; the dimension-dependent abstraction makes the error bounds more precise than existing approaches. Additionally, we show how factor graphs and the sum-product algorithm for DBNs can be used to solve the finite-horizon probabilistic invariance problem. Together, DBN-based representations and algorithms can be significantly more efficient than explicit representations of Markov chains for abstracting and model checking structured Markov processes.
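    The per-dimension construction can be sketched for an assumed two-dimensional process with independent noise terms, where the kernel factorizes as T(x1', x2' | x1, x2) = T1(x1' | x1) * T2(x2' | x1, x2): each factor is abstracted and stored separately, which is where the memory saving over an explicit joint chain comes from. Dynamics, grid, and noise level are assumptions; a sum-product style evaluation over such factors is sketched after the next entry.

```python
import numpy as np
from scipy.stats import norm

# Assumed 2-D structured dynamics with independent per-dimension noise:
#   x1' = 0.9*x1 + w1,   x2' = 0.4*x1 + 0.7*x2 + w2,   w1, w2 ~ N(0, 0.3^2)
sigma, n = 0.3, 40
edges = np.linspace(-2.0, 2.0, n + 1)
c = 0.5 * (edges[:-1] + edges[1:])            # cell representatives

def cell_probs(mean):
    """Probability of landing in each cell for Gaussians with the given mean(s)."""
    return (norm.cdf(edges[1:], loc=mean[..., None], scale=sigma)
            - norm.cdf(edges[:-1], loc=mean[..., None], scale=sigma))

T1 = cell_probs(0.9 * c)                      # shape (n, n):    x1 -> x1'
T2 = cell_probs(0.4 * c[:, None] + 0.7 * c)   # shape (n, n, n): (x1, x2) -> x2'

# Two factors with n^2 + n^3 entries replace an explicit chain over n^2 joint
# states, whose transition matrix would need n^4 entries.
print(T1.size + T2.size, "entries vs", n ** 4)
```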

    Dynamic Bayesian Networks as Formal Abstractions of Structured Stochastic Processes

    We study the problem of finite-horizon probabilistic invariance for discrete-time Markov processes over general (uncountable) state spaces. We compute discrete-time, finite-state Markov chains as formal abstractions of general Markov processes. Our abstraction differs from existing approaches in two ways. First, we exploit the structure of the underlying Markov process to compute the abstraction separately for each dimension. Second, we employ dynamic Bayesian networks (DBNs) as compact representations of the abstraction. In contrast, existing approaches represent and store the (exponentially large) Markov chain explicitly, which leads to heavy memory requirements; in our experiments, this limited their application to models of less than half the dimension handled by the DBN-based approach. We show how to construct a DBN abstraction of a Markov process satisfying an independence assumption on the driving process noise. We compute a guaranteed bound on the error in the abstraction w.r.t. the probabilistic invariance property; the dimension-dependent abstraction makes the error bounds more precise than existing approaches. Additionally, we show how factor graphs and the sum-product algorithm for DBNs can be used to solve the finite-horizon probabilistic invariance problem. Together, DBN-based representations and algorithms can be significantly more efficient than explicit representations of Markov chains for abstracting and model checking structured Markov processes.
    Comment: Accepted in the 26th Conference on Concurrency Theory
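    Given a factored abstraction, the finite-horizon probabilistic invariance problem can be solved by a backward recursion that sums out one dimension at a time, in the spirit of the sum-product algorithm mentioned above. The sketch below uses randomly generated factors and an assumed safe-set mask purely to show the shape of the computation; it is not the paper's factor-graph implementation.

```python
import numpy as np

# Assumed factored abstraction: T1[i1, j1] = P(x1'=j1 | x1=i1) and
# T2[i1, i2, j2] = P(x2'=j2 | x1=i1, x2=i2), here filled with random rows.
rng = np.random.default_rng(1)
n, horizon = 40, 8
T1 = rng.random((n, n)); T1 /= T1.sum(axis=1, keepdims=True)
T2 = rng.random((n, n, n)); T2 /= T2.sum(axis=2, keepdims=True)
safe = np.zeros((n, n), dtype=bool)
safe[5:-5, 5:-5] = True                        # assumed safe set: interior cells only

# Backward recursion
#   V_k(i1, i2) = 1{safe}(i1, i2) * sum_{j1, j2} T1[i1, j1] T2[i1, i2, j2] V_{k+1}(j1, j2),
# evaluated factor by factor instead of over the exponentially large joint chain.
V = safe.astype(float)
for _ in range(horizon):
    W = np.einsum('xyj,kj->xyk', T2, V)        # sum out x2': W[i1, i2, j1]
    V = safe * np.einsum('xk,xyk->xy', T1, W)  # sum out x1', then mask unsafe states
# V[i1, i2] approximates the probability of remaining safe for `horizon` steps.
```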

    Safety Verification of Continuous-Space Pure Jump Markov Processes

    We study the probabilistic safety verification problem for pure jump Markov processes, a class of models that generalizes continuous-time Markov chains over continuous (uncountable) state spaces. Solutions of these processes are piecewise constant, right-continuous functions from time to states. Their jump (or reset) times are realizations of a Poisson process, characterized by a jump rate function that can be both time- and state-dependent. Upon jumping in time, the new state of the solution process is specified according to a (continuous) stochastic conditional kernel. After providing a full characterization of safety properties of these processes, we describe a formal method to abstract the process as a finite-state discrete-time Markov chain; this approach is formal in that it provides a priori error bounds on the precision of the abstraction, based on the continuity properties of the stochastic kernel of the process and of its jump rate function. We illustrate the approach on a case study of thermostatically controlled loads.
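    Because solutions are piecewise constant between jumps, such a process can be simulated by thinning a Poisson process whose rate dominates the time- and state-dependent jump rate, resetting the state through the conditional kernel at accepted jump times; safety over a horizon then only needs to be checked at jump times. The rate, kernel, safe set, and horizon below are illustrative assumptions, and the sketch only gives a crude Monte Carlo estimate rather than the formal abstraction with a priori error bounds.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_max = 3.0                                          # upper bound on the jump rate
lam = lambda t, x: 1.0 + 0.5 * abs(np.sin(t)) + 1.0 / (1 + x ** 2)   # <= lam_max

def simulate(x0, t_end):
    """Piecewise-constant, right-continuous path as a list of (jump time, state)."""
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        t += rng.exponential(1.0 / lam_max)            # candidate jump time
        if t > t_end:
            return path
        if rng.random() < lam(t, x) / lam_max:         # thinning: accept w.p. lam/lam_max
            x = 0.5 * x + 0.2 * rng.standard_normal()  # draw next state from the reset kernel
            path.append((t, x))

# Monte Carlo estimate of P( x(t) stays in [-2, 2] for all t <= 10 ).
safe_runs = [all(abs(x) <= 2.0 for _, x in simulate(1.0, 10.0)) for _ in range(2000)]
print("estimated safety probability:", np.mean(safe_runs))
```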

    Compositional Abstractions of Interconnected Discrete-Time Stochastic Control Systems

    This paper is concerned with a compositional approach for constructing abstractions of interconnected discrete-time stochastic control systems. The abstraction framework is based on new notions of so-called stochastic simulation functions, with which one can quantify the distance between the original interconnected stochastic control systems and their abstractions in a probabilistic setting. Accordingly, one can leverage the proposed results to perform analysis and synthesis over the abstract interconnected systems, and then carry the results over to the concrete ones. In the first part of the paper, we derive sufficient small-gain type conditions for the compositional quantification of the distance in probability between the interconnection of stochastic control subsystems and that of their abstractions. In the second part of the paper, we focus on the class of discrete-time linear stochastic control systems with independent noises in the abstract and concrete subsystems. For this class of systems, we propose a computational scheme to construct abstractions together with their corresponding stochastic simulation functions. We demonstrate the effectiveness of the proposed results by constructing an abstraction (4 dimensions in total) of the interconnection of four discrete-time linear stochastic control subsystems (100 dimensions in total) in a compositional fashion.
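    The guarantee delivered by a stochastic simulation function is, roughly, a probabilistic bound on how far the abstract trajectories can drift from the concrete ones over a finite horizon, quantified compositionally from the subsystem functions. The toy sketch below does not construct such a function; it only estimates that closeness probability empirically for an assumed two-subsystem linear interconnection and a deterministic abstraction of it, to make the kind of guarantee concrete. All matrices, noise levels, and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.6, 0.2],        # subsystem 1, driven by subsystem 2's state
              [0.1, 0.7]])       # subsystem 2, driven by subsystem 1's state
sigma = np.array([0.05, 0.05])   # noise intensities of the two subsystems
T, eps, runs = 50, 0.5, 5000

def closeness_violated():
    """Simulate the interconnection and its noise-free abstraction side by side."""
    x = np.array([1.0, -1.0])
    xhat = x.copy()
    for _ in range(T):
        x = A @ x + sigma * rng.standard_normal(2)   # concrete stochastic interconnection
        xhat = A @ xhat                              # abstraction: same coupling, no noise
        if np.max(np.abs(x - xhat)) >= eps:
            return True
    return False

delta_hat = np.mean([closeness_violated() for _ in range(runs)])
print(f"empirical P(max error >= {eps}) ~ {delta_hat:.3f}")
```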

    Compositional Construction of Finite State Abstractions for Stochastic Control Systems

    Controller synthesis techniques for continuous systems with respect to temporal logic specifications typically use a finite-state symbolic abstraction of the system. Constructing this abstraction for the entire system is computationally expensive and does not exploit the natural decomposition of many systems into interacting components. We have recently introduced a new relation, called (approximate) disturbance bisimulation, for compositional symbolic abstraction, to help scale controller synthesis for temporal logic to larger systems. In this paper, we extend the results to stochastic control systems modeled by stochastic differential equations. Given any stochastic control system satisfying a stochastic version of the incremental input-to-state stability property and a positive error bound, we show how to construct a finite-state transition system (if one exists) which is disturbance bisimilar to the given stochastic control system. Given a network of stochastic control systems, we give conditions for the simultaneous existence of disturbance-bisimilar abstractions of every component, allowing for compositional abstraction of the network system.
    Comment: This paper has been accepted for publication in CDC 201
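    A generic grid-based construction conveys the flavor of the abstraction step: state and input spaces are gridded, and a symbolic transition is added whenever the over-approximated set of successors of a cell intersects another cell. The sketch below treats the stochastic disturbance as a bounded one and does not check the incremental input-to-state stability or disturbance-bisimulation conditions of the paper; dynamics, bounds, and grid parameters are assumptions.

```python
import numpy as np
from itertools import product

# Assumed scalar dynamics x' = 0.8*x + 0.5*u + w with |w| <= w_bar.
eta_x, eta_u, w_bar = 0.2, 0.5, 0.1
xs = np.arange(-2.0, 2.0 + 1e-9, eta_x)      # cell centers of the state grid
us = np.arange(-1.0, 1.0 + 1e-9, eta_u)      # quantized input values

# Radius covering all successors of any point in a cell: contraction of the
# source cell half-width, plus the disturbance bound, plus the target cell half-width.
r = 0.8 * (eta_x / 2) + w_bar + eta_x / 2

transitions = {}                             # (state cell, input) -> successor cells
for (i, x), (j, u) in product(enumerate(xs), enumerate(us)):
    nominal = 0.8 * x + 0.5 * u
    succ = {k for k, xc in enumerate(xs) if abs(xc - nominal) <= r}
    if succ:                                 # successors outside the grid are dropped here
        transitions[(i, j)] = succ
```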

    Robust model predictive control with signal temporal logic constraints for Barcelona wastewater system

    Paper presented at the 20th IFAC (International Federation of Automatic Control) World Congress, held in Toulouse (France), 9-14 July 2017.
    We propose a tractable approach for the control of the Barcelona wastewater system, which is subject to sudden weather-change events within the Mediterranean climate. Due to the unpredictable weather changes, the lack of appropriate control methodologies may result in overflow in the sewage system, which causes environmental contamination (pollution). In order to improve the management of the wastewater system and to reduce the contamination, we propose robust model predictive control, an online control approach that designs the control actions (i.e., flows through network actuators) under the worst-case scenario while minimizing the associated operational costs. We employ signal temporal logic to specify the desired behavior of the controlled system once an overflow occurs and encode this behavior as constraints, so that the synthesized controller reacts in time to decrease and eliminate the overflow. We apply our proposed technique to a representative catchment of the Barcelona wastewater system to illustrate its effectiveness.
    The work of C. Ocampo-Martinez is partially supported by the project ECOCIS (Ref. DPI2013-48243-C2-1-R) from the Spanish MINECO/FEDER.
    Peer Reviewed
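    A much-simplified sketch of the control idea on an assumed single-tank model: the worst-case rain inflow is substituted into the volume dynamics, the (degenerate) STL safety formula G_[0,H](v <= v_max) is encoded directly as linear constraints, and the operational cost is minimized over the horizon with cvxpy. The paper's catchment model, its reactive STL formulas about recovering from overflow (which require a mixed-integer encoding), and its cost function are all richer than this; every parameter below is an assumption.

```python
import cvxpy as cp
import numpy as np

H, v_max, q_max, d_max = 12, 10.0, 2.0, 1.5   # horizon, tank capacity, pump limit, worst-case rain
v0, cost_flow = 6.0, 1.0                      # initial volume and per-unit pumping cost

v = cp.Variable(H + 1)                        # tank volume trajectory
q = cp.Variable(H)                            # controlled outflow to treatment
constraints = [v[0] == v0, q >= 0, q <= q_max]
for k in range(H):
    constraints += [
        v[k + 1] == v[k] + d_max - q[k],      # robust: assume maximal rain inflow
        v[k + 1] >= 0,
        v[k + 1] <= v_max,                    # STL safety G_[0,H](v <= v_max) as linear constraints
    ]

problem = cp.Problem(cp.Minimize(cost_flow * cp.sum(q)), constraints)
problem.solve()
print("planned outflows:", np.round(q.value, 2))
```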