
    Verified integration of differential equations with discrete delay

    Many dynamic system models in population dynamics, physics, and control involve temporally delayed state information, so that the evolution of future state trajectories depends not only on the current state as the initial condition but also on some previous states. In technical systems, such phenomena result, for example, from the mass transport of incompressible fluids through finitely long pipelines, the transport of combustible material such as coal in power plants via conveyor belts, or information processing delays. Under the assumption of continuous dynamics, the corresponding delays can be treated either as constant and fixed, as uncertain but bounded and fixed, or even as state-dependent. In this paper, we restrict the discussion to the first two classes and provide suggestions on how interval-based verified approaches to solving ordinary differential equations can be extended to encompass such delay differential equations. Three close-to-life examples illustrate the theory.
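As a point of contrast with the verified interval techniques the abstract describes, the delayed state dependence itself can be sketched with a plain (non-verified) floating-point "method of steps" integrator; the test equation x'(t) = -x(t - 1) and all numbers below are illustrative, not taken from the paper:

```python
import numpy as np

def solve_dde_euler(f, history, tau, t_end, dt=1e-3):
    """Forward-Euler 'method of steps' for x'(t) = f(x(t), x(t - tau)).

    history(t) supplies x(t) on [-tau, 0]; delayed values are read back
    from the stored trajectory, which works because dt divides tau exactly.
    """
    lag = int(round(tau / dt))                             # delay in steps
    xs = [history(-tau + k * dt) for k in range(lag + 1)]  # fill [-tau, 0]
    for _ in range(int(round(t_end / dt))):
        xs.append(xs[-1] + dt * f(xs[-1], xs[-1 - lag]))   # x(t-tau) = xs[-1-lag]
    return np.array(xs[lag:])                              # samples on [0, t_end]

# Illustrative linear delay equation: x'(t) = -x(t - 1), constant history x = 1.
traj = solve_dde_euler(lambda x, xd: -xd, lambda t: 1.0, tau=1.0, t_end=2.0)
```

On [0, 1] the delayed value is the constant history, so the solution decays linearly to 0 at t = 1; a verified method would instead propagate interval enclosures of the state and the history segment.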

    Remote State Estimation with Smart Sensors over Markov Fading Channels

    We consider a fundamental remote state estimation problem for discrete-time linear time-invariant (LTI) systems. A smart sensor forwards its local state estimate to a remote estimator over a time-correlated M-state Markov fading channel, where the packet drop probability is time-varying and depends on the current fading channel state. We establish a necessary and sufficient condition for mean-square stability of the remote estimation error covariance, ρ²(A)ρ(DM) < 1, where ρ(·) denotes the spectral radius, A is the state transition matrix of the LTI system, D is a diagonal matrix containing the packet drop probabilities in the different channel states, and M is the transition probability matrix of the Markov channel states. To derive this result, we propose a novel estimation-cycle-based approach and provide new element-wise bounds on matrix powers. The stability condition is verified by numerical results and is shown to be more effective than existing sufficient conditions in the literature. We observe that the stability region in terms of the packet drop probabilities in the different channel states can be either convex or concave depending on the transition probability matrix M. Our numerical results suggest that the stability conditions for remote estimation may coincide for setups with a smart sensor and with a conventional one (which sends raw measurements to the remote estimator), though the smart sensor setup achieves better estimation performance.
    Comment: The paper has been accepted by IEEE Transactions on Automatic Control. Copyright may be transferred without notice, after which this version may no longer be accessible.
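The stability test ρ²(A)ρ(DM) < 1 is straightforward to evaluate numerically. The sketch below checks it for a hypothetical unstable plant, drop-probability matrix D, and two-state channel chain M; all numbers are made up for illustration and are not from the paper:

```python
import numpy as np

# Hypothetical setup: rho^2(A) * rho(D M) < 1 decides mean-square stability.
A = np.array([[1.2, 0.1],
              [0.0, 0.9]])        # unstable LTI plant, rho(A) = 1.2
M = np.array([[0.7, 0.3],         # 2-state Markov channel transition matrix
              [0.4, 0.6]])
D = np.diag([0.1, 0.5])           # packet drop probability per channel state

def rho(X):
    """Spectral radius of a square matrix."""
    return max(abs(np.linalg.eigvals(X)))

stable = rho(A) ** 2 * rho(D @ M) < 1
```

Here even though the plant is unstable, the drop probabilities are small enough in the good channel state that the condition holds.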

    Parameterized Dataflow Scenarios


    Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics

    Automated synthesis of provably correct controllers for cyber-physical systems is crucial for deployment in safety-critical scenarios. However, hybrid features and stochastic or unknown behaviours make this problem challenging. We propose a method for synthesising controllers for Markov jump linear systems (MJLSs), a class of discrete-time models for cyber-physical systems, so that they certifiably satisfy probabilistic computation tree logic (PCTL) formulae. An MJLS consists of a finite set of stochastic linear dynamics and discrete jumps between these dynamics that are governed by a Markov decision process (MDP). We consider the cases where the transition probabilities of this MDP are either known up to an interval or completely unknown. Our approach is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS. We formalise this abstraction as an interval MDP (iMDP) for which we compute intervals of transition probabilities using sampling techniques from the so-called 'scenario approach', resulting in a probabilistically sound approximation. We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
    Comment: 14 pages, 6 figures, under review at QES
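As a rough illustration of how sampled transitions yield a probability interval for an iMDP edge, the sketch below uses a simple Hoeffding-style confidence bound. The paper's scenario-approach bounds are tighter and carry a different probabilistic guarantee; the function name and numbers here are assumptions for illustration only:

```python
import math

def hoeffding_interval(successes, n, delta=0.01):
    """Two-sided (1 - delta)-confidence interval for a transition
    probability estimated from n sampled transitions.

    A simple stand-in for the tighter scenario-approach bounds the
    paper uses when building the interval MDP abstraction.
    """
    p_hat = successes / n
    eps = math.sqrt(math.log(2 / delta) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)

# 730 of 1000 sampled successor states landed in the target abstract region.
lo, hi = hoeffding_interval(successes=730, n=1000)
```

The interval [lo, hi] then labels the corresponding iMDP transition, so any controller verified against all probabilities in the interval is sound with the stated confidence.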

    Distributed H∞ Controller Design and Robustness Analysis for Vehicle Platooning Under Random Packet Drop

    This paper presents the design of a robust distributed state-feedback controller in the discrete-time domain for homogeneous vehicle platoons with undirected topologies, whose dynamics are subject to external disturbances and random single packet drops. A linear matrix inequality (LMI) approach is used to devise the control gains such that a bounded H∞ norm is guaranteed. Furthermore, a lower bound on the robustness measure, denoted as the γ gain, is derived analytically for two platoon communication topologies, i.e., the bidirectional predecessor following (BPF) and the bidirectional predecessor leader following (BPLF). It is shown that the γ gain is highly affected by the communication topology and drastically reduces when the information of the leader is sent to all followers. Finally, numerical results demonstrate the ability of the proposed methodology to impose the platoon control objective for the BPF and BPLF topologies under random single packet drops.
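To illustrate what a bounded H∞ norm means for such a closed loop, the sketch below estimates the disturbance-to-state gain of a hypothetical single-vehicle error model by a dense frequency sweep. The matrices and the feedback gain K are invented for illustration; the paper's LMI approach would instead compute the gains and certify the γ bound directly:

```python
import numpy as np

# Hypothetical spacing-error model (discretised double integrator); not
# the paper's platoon model or LMI-designed gains.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[4.0, 3.0]])              # an illustrative stabilising gain
Acl = A - B @ K                         # closed loop: x+ = Acl x + B w

def hinf_norm(A, B, C, n_freq=4000):
    """H-infinity norm of C (zI - A)^{-1} B on the unit circle, by a
    dense grid sweep (an estimate, not an exact LMI certificate)."""
    n = A.shape[0]
    return max(
        np.linalg.norm(C @ np.linalg.solve(np.exp(1j * w) * np.eye(n) - A, B), 2)
        for w in np.linspace(0.0, np.pi, n_freq)
    )

gamma = hinf_norm(Acl, B, np.eye(2))    # disturbance-to-state gain bound
```

A guaranteed bound γ on this norm is exactly the quantity the LMI synthesis constrains; here the sweep simply measures it after the fact for a fixed gain.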

    Multi-agent persistent monitoring of a finite set of targets

    The general problem of multi-agent persistent monitoring finds applications in a variety of domains, ranging from meter- to kilometer-scale systems, such as surveillance or environmental monitoring, down to nano-scale systems, such as tracking biological macromolecules for studying basic biology and disease. The problem can be cast as moving the agents between targets, acquiring information from or in some fashion controlling the states of the targets. Under this formulation, at least two questions need to be addressed. The first is the design of motion trajectories for the agents as they move among the spatially distributed targets and jointly optimize a given cost function that describes some desired application. The second is the design of the controller that an agent will use at a target to steer the target's state as desired. The first question can be viewed in at least two ways: first, as an optimal control problem with the domain of the targets described as a continuous space, and second, as a discrete scheduling task. In this work we focus on the second approach, which formulates the target dynamics as a hybrid automaton and the geometry of the targets as a graph. We show how to find solutions by translating the scheduling problem into a search for the optimal route. With a route specifying the visiting sequence in place, we derive the optimal time the agent spends at each target analytically. The second question, namely that of steering the target's state, can be formulated from the perspective of the target rather than the agent. The mobile nature of the agents leads to intermittent control, such that the controller is assumed to be disconnected when no agent is at the target. The design of the visiting schedule of agents to one target can affect the reachability (controllability) of this target's control system and the design of any specific controller.
    Existing test techniques for reachability are combined with the idea of lifting to provide conditions on systems such that reachability is maintained in the presence of periodic disconnections from the controller. While considering an intermittently connected control with constraints on the control authority and in the presence of a disturbance, the concept of 'degree of controllability' is introduced. The degree is measured by a region of states that can be brought back to the origin in a given finite time. The size of this region is estimated to evaluate the performance of a given sequence.
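The lifting idea for reachability under periodic disconnection can be sketched as follows: one period of the agent's presence pattern is collapsed into a single lifted step whose input matrix stacks the columns for the connected instants, and the standard reachability rank test is applied to the lifted system. The function name, presence pattern, and example systems below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def lifted_reachability_rank(A, B, pattern):
    """Reachability rank under a periodic connection pattern.

    pattern[k] = True means an agent is at the target (input active) at
    step k of the period; False means the controller is disconnected.
    One period is lifted into a single step x+ = A^T x + B_lift u, and
    the standard reachability matrix of the lifted system is ranked.
    """
    n, T = A.shape[0], len(pattern)
    blocks, Phi = [], np.eye(n)
    for on in reversed(pattern):          # columns A^{T-1-k} B, connected k only
        if on:
            blocks.append(Phi @ B)
        Phi = A @ Phi
    if not blocks:
        return 0                          # never connected: nothing reachable
    B_lift = np.hstack(blocks)
    A_lift = np.linalg.matrix_power(A, T)
    R = np.hstack([np.linalg.matrix_power(A_lift, i) @ B_lift for i in range(n)])
    return np.linalg.matrix_rank(R)

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # discrete double integrator
B = np.array([[0.0],
              [1.0]])
rank = lifted_reachability_rank(A, B, [True, False])  # visited every other step
```

For this example the rank is full (2), so reachability survives the periodic disconnection; a pattern with no connected steps yields rank 0.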