
    Remote State Estimation with Smart Sensors over Markov Fading Channels

    We consider a fundamental remote state estimation problem for discrete-time linear time-invariant (LTI) systems. A smart sensor forwards its local state estimate to a remote estimator over a time-correlated $M$-state Markov fading channel, where the packet drop probability is time-varying and depends on the current fading channel state. We establish a necessary and sufficient condition for mean-square stability of the remote estimation error covariance, namely $\rho^2(\mathbf{A})\rho(\mathbf{DM})<1$, where $\rho(\cdot)$ denotes the spectral radius, $\mathbf{A}$ is the state transition matrix of the LTI system, $\mathbf{D}$ is a diagonal matrix containing the packet drop probabilities in the different channel states, and $\mathbf{M}$ is the transition probability matrix of the Markov channel states. To derive this result, we propose a novel estimation-cycle based approach and provide new element-wise bounds on matrix powers. The stability condition is verified by numerical results and is shown to be more effective than existing sufficient conditions in the literature. We observe that the stability region in terms of the packet drop probabilities in different channel states can be either convex or concave depending on the transition probability matrix $\mathbf{M}$. Our numerical results suggest that the stability conditions for remote estimation may coincide for setups with a smart sensor and with a conventional one (which sends raw measurements to the remote estimator), though the smart sensor setup achieves better estimation performance. Comment: The paper has been accepted by IEEE Transactions on Automatic Control. Copyright may be transferred without notice, after which this version may no longer be accessible.
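    As a quick illustration of the stability test stated in this abstract, the sketch below numerically evaluates $\rho^2(\mathbf{A})\rho(\mathbf{DM})$ for an assumed two-state channel and a small example plant; the matrices A, M and D are illustrative values, not taken from the paper.

```python
# A minimal sketch (assumed example values, not from the paper) that checks
# the stability condition rho^2(A) * rho(DM) < 1 numerically.
import numpy as np

def spectral_radius(X):
    """Largest absolute eigenvalue of a square matrix."""
    return np.max(np.abs(np.linalg.eigvals(X)))

# Hypothetical unstable LTI plant matrix A.
A = np.array([[1.1, 0.2],
              [0.0, 0.9]])

# Hypothetical 2-state Markov fading channel: transition matrix M and
# per-state packet drop probabilities on the diagonal of D.
M = np.array([[0.8, 0.2],
              [0.3, 0.7]])
D = np.diag([0.1, 0.6])   # drop probabilities in "good" and "bad" states

value = spectral_radius(A) ** 2 * spectral_radius(D @ M)
print(f"rho^2(A) * rho(DM) = {value:.4f} -> "
      f"{'mean-square stable' if value < 1 else 'not mean-square stable'}")
```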

    Consensus and Products of Random Stochastic Matrices: Exact Rate for Convergence in Probability

    Distributed consensus and other linear systems with stochastic system matrices $W_k$ emerge in various settings, such as opinion formation in social networks, rendezvous of robots, and distributed inference in sensor networks. The matrices $W_k$ are often random, due to, e.g., random packet dropouts in wireless sensor networks. Key to analyzing the performance of such systems is studying the convergence of the matrix products $W_k W_{k-1} \cdots W_1$. In this paper, we find the exact exponential rate $I$ for the convergence in probability of the product of such matrices as time $k$ grows large, under the assumption that the $W_k$'s are symmetric and independent identically distributed in time. Further, for commonly used random models such as gossip and link failure, we show that the rate $I$ is found by solving a min-cut problem and is hence easily computable. Finally, we apply our results to optimally allocate the sensors' transmission power in consensus+innovations distributed detection.
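    The sketch below illustrates the object studied in this abstract: a running product of i.i.d. symmetric stochastic matrices approaching the averaging matrix $J = \frac{1}{n}\mathbf{1}\mathbf{1}^T$ under a simple pairwise-gossip model. The ring topology and gossip construction are assumptions for illustration; the sketch does not compute the paper's exact rate $I$ or its min-cut characterization.

```python
# A minimal sketch (assumed 5-node ring, uniformly chosen gossip edges) of how
# products W_k ... W_1 of i.i.d. symmetric stochastic matrices approach
# J = (1/n) 1 1^T. This does not compute the paper's exact rate I.
import numpy as np

rng = np.random.default_rng(0)
n = 5
edges = [(i, (i + 1) % n) for i in range(n)]  # ring topology
J = np.full((n, n), 1.0 / n)

def gossip_matrix(edge):
    """Symmetric doubly stochastic matrix averaging the two nodes of `edge`."""
    i, j = edge
    W = np.eye(n)
    W[i, i] = W[j, j] = 0.5
    W[i, j] = W[j, i] = 0.5
    return W

P = np.eye(n)
for k in range(1, 201):
    W_k = gossip_matrix(edges[rng.integers(len(edges))])
    P = W_k @ P                      # running product W_k ... W_1
    if k % 50 == 0:
        print(f"k={k:3d}  ||W_k...W_1 - J|| = {np.linalg.norm(P - J):.2e}")
```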

    Sparse and Constrained Stochastic Predictive Control for Networked Systems

    This article presents a novel class of control policies for networked control of Lyapunov-stable linear systems with bounded inputs. The control channel is assumed to have i.i.d. Bernoulli packet dropouts and the system is assumed to be affected by additive stochastic noise. Our proposed class of policies is affine in the past dropouts and saturated values of the past disturbances. We further consider a regularization term in a quadratic performance index to promote sparsity in the control. We demonstrate how to augment the underlying optimization problem with a constant negative drift constraint to ensure mean-square boundedness of the closed-loop states, yielding a convex quadratic program to be solved periodically online. The states of the closed-loop plant under the receding-horizon implementation of the proposed class of policies are mean-square bounded for any positive bound on the control and any non-zero probability of successful transmission.
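    As a rough illustration of the kind of optimization this abstract describes, the sketch below sets up a finite-horizon quadratic cost with an $\ell_1$ term promoting sparse control and hard input bounds, to be re-solved in a receding-horizon fashion. It is not the paper's policy class (the dropout-affine and saturated-disturbance feedback structure and the drift constraint are omitted); the plant matrices, horizon, weights, and the use of cvxpy are all assumptions.

```python
# A minimal sketch (not the paper's exact formulation) of a sparsity-promoting
# finite-horizon QP with bounded inputs for a Lyapunov-stable example plant.
import cvxpy as cp
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.95]])   # assumed stable plant
B = np.array([[0.0], [0.1]])
N, umax, lam = 10, 1.0, 0.5               # horizon, input bound, sparsity weight

x0 = np.array([2.0, -1.0])
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))

cost = lam * cp.sum(cp.abs(u))            # l1 regularizer promoting sparse control
constraints = [x[:, 0] == x0]
for t in range(N):
    cost += cp.sum_squares(x[:, t]) + 0.1 * cp.sum_squares(u[:, t])
    constraints += [x[:, t + 1] == A @ x[:, t] + B @ u[:, t],
                    cp.abs(u[:, t]) <= umax]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", u.value[:, 0])
```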

    Kalman Filtering Over a Packet-Dropping Network: A Probabilistic Perspective

    We consider the problem of state estimation of a discrete-time process over a packet-dropping network. Previous work on Kalman filtering with intermittent observations is concerned with the asymptotic behavior of E[P_k], i.e., the expected value of the error covariance, for a given packet arrival rate. We consider a different performance metric, Pr[P_k ≤ M], i.e., the probability that P_k is bounded by a given M. We consider two scenarios in the paper. In the first scenario, when the sensor sends its measurement data to the remote estimator via a packet-dropping network, we derive lower and upper bounds on Pr[P_k ≤ M]. In the second scenario, when the sensor preprocesses the measurement data and sends its local state estimate to the estimator, we show that the previously derived lower and upper bounds are equal to each other, hence we are able to provide a closed-form expression for Pr[P_k ≤ M]. We also recover the results in the literature when using Pr[P_k ≤ M] as a metric for scalar systems. Examples are provided to illustrate the theory developed in the paper.
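    The sketch below gives a Monte Carlo feel for the metric Pr[P_k ≤ M] in the second (smart-sensor) scenario for a scalar system: on a successful transmission the remote error covariance resets to the local steady-state value, otherwise it propagates through the open-loop dynamics. All numerical values are illustrative assumptions, not from the paper.

```python
# A minimal sketch (scalar system, smart-sensor scenario, assumed parameters)
# estimating Pr[P_k <= M] by Monte Carlo.
import numpy as np

rng = np.random.default_rng(1)
a, q = 1.2, 1.0          # scalar dynamics and process-noise variance
Pbar = 0.5               # local (sensor-side) steady-state error covariance
gamma = 0.7              # packet arrival probability
M_bound = 5.0            # threshold M in Pr[P_k <= M]
k, trials = 50, 20_000

hits = 0
for _ in range(trials):
    P = Pbar
    for _ in range(k):
        # received: reset to Pbar; dropped: open-loop covariance growth
        P = Pbar if rng.random() < gamma else a * a * P + q
    hits += P <= M_bound
print(f"Monte Carlo estimate of Pr[P_{k} <= {M_bound}]: {hits / trials:.4f}")
```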

    Distributed averaging over communication networks: Fragility, robustness and opportunities

    Distributed averaging, a canonical operation among many natural interconnected systems, has found applications in a tremendous variety of fields, including statistical physics, signal processing, systems and control, communication and social science. As information exchange is a central part of distributed averaging systems, it is of practical as well as theoretical importance to understand the properties and limitations of those systems in the presence of communication constraints and to devise new algorithms that alleviate those limitations.
    We study the fragility of a popular distributed averaging algorithm when the information exchange among the nodes is limited by communication delays, fading connections and additive noise. We show that the otherwise well studied and benign multi-agent system can generate a collective global complex behavior. We characterize this behavior, common to many natural and human-made interconnected systems, as a collective hyper-jump diffusion process and, in a special case, as a Lévy flights process. We further describe the mechanism for its emergence and predict its occurrence, under standard assumptions, by checking the mean-square (MS) instability of a certain part of the system. We show that the strong connectivity property of the network topology guarantees that the complex behavior is global and manifested by all the agents in the network, even though the source of uncertainty is localized. We provide a novel computational analysis of the MS stability index under spatially invariant structures and gain qualitative as well as quantitative insights into the system.
    We then focus on the design of agents' dynamics to increase the robustness of distributed averaging systems to topology variations. We provide a general structure of distributed averaging systems where individual agents are modeled by LTI systems, and show that the problem of designing agents' dynamics for distributed averaging is equivalent to an $\mathcal{H}_{\infty}$ minimization problem. In this way, we can use tools from robust control theory to build a distributed averaging system where the design is fully distributed and scalable with the size of the network. It is also shown that the agents can be used in different fixed networks and in networks with special time-varying interconnections.
    We develop new iterative distributed averaging algorithms which allow agents to compute the average quantity in the presence of additive noise and randomly changing interconnections. The algorithm relaxes several restrictive assumptions made in previous work on distributed averaging under uncertainties, such as a diminishing step-size rule, doubly stochastic weights, and symmetric link switching, and introduces a novel mechanism of network feedback to mitigate the effects of communication uncertainties on information aggregation. Based on the robust distributed averaging algorithm, we propose continuous- as well as discrete-time computation models to solve the distributed optimization problem where the objective function is the sum of convex functions of the same variable. The algorithm shows faster convergence than existing ones and exhibits robustness to additive noise, which is the main source of limitation for algorithms based on convex mixing. It is shown that agents with simple dynamics and gradient sensing abilities can collectively solve complicated convex optimization problems.
    Finally, we generalize this algorithm to build a general framework for constrained convex optimization problems. This framework is shown to be particularly effective for deriving solutions to distributed decision-making problems and leads to a systems perspective on convex optimization.
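    For a concrete picture of the basic operation this thesis studies, the sketch below runs a generic distributed-averaging iteration over a ring with i.i.d. link failures and additive noise in the exchanged values. It is not the thesis's specific robust algorithm; the topology, failure probability, step size and noise level are all assumptions for illustration.

```python
# A minimal sketch (not the thesis's algorithm) of noisy distributed averaging
# over a ring with i.i.d. link failures; all parameters are assumed values.
import numpy as np

rng = np.random.default_rng(2)
n, steps, alpha = 6, 200, 0.3       # nodes, iterations, step size
p_fail, sigma = 0.3, 0.01           # link failure probability, noise std
x = rng.normal(size=n)              # initial values
target = x.mean()                   # the quantity the nodes try to agree on

for _ in range(steps):
    x_new = x.copy()
    for i in range(n):
        j = (i + 1) % n             # ring neighbour
        if rng.random() > p_fail:   # link (i, j) is up this round
            diff = x[j] - x[i] + rng.normal(scale=sigma)  # noisy exchange
            x_new[i] += alpha * diff
            x_new[j] -= alpha * diff
    x = x_new

print(f"initial average: {target:+.4f}   final spread: {x.max() - x.min():.4f}")
```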

    Statistical Learning for Analysis of Networked Control Systems over Unknown Channels

    Recent control trends increasingly rely on communication networks and wireless channels to close the loop for Internet-of-Things applications. Traditionally these approaches are model-based: assuming a network or channel model, they focus on stability analysis and appropriate controller designs. However, such wireless channel models are fundamentally difficult to obtain in practice, as channels are typically unknown a priori and only available through data samples. In this work we aim to develop algorithms that rely on channel sample data to determine the stability and performance of networked control tasks. In this regard, our work is the first to characterize the amount of channel modeling that is required to answer such a question. Specifically, we examine how many channel data samples are required in order to answer, with high confidence, whether a given networked control system is stable or not. This analysis is based on the notion of sample complexity from the learning literature and is facilitated by concentration inequalities. Moreover, we establish a direct relation between the sample complexity and the networked system stability margin, i.e., the underlying packet success rate of the channel and the spectral radius of the dynamics of the control system. This illustrates that it becomes impractical to verify stability for a large range of plant and channel configurations. We validate our theoretical results in numerical simulations.
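    The sketch below captures the flavor of this sample-based approach: estimate the packet success rate from i.i.d. channel samples, apply a Hoeffding confidence bound, and test a standard mean-square stability condition for i.i.d. Bernoulli drops, $(1-p)\,\rho(\mathbf{A})^2 < 1$, with the pessimistic estimate. It is not the paper's exact algorithm, and the true success rate, sample size, confidence level and plant spectral radius are assumed values.

```python
# A minimal sketch (not the paper's algorithm) of certifying stability from
# channel samples with high confidence via a Hoeffding bound.
import numpy as np

rng = np.random.default_rng(3)
rho_A = 1.05                      # spectral radius of the (unstable) plant
p_true = 0.9                      # unknown underlying packet success rate
N, delta = 5000, 0.01             # number of channel samples, failure probability

samples = rng.random(N) < p_true  # observed successes/failures
p_hat = samples.mean()
eps = np.sqrt(np.log(2 / delta) / (2 * N))   # Hoeffding deviation bound

p_low = max(p_hat - eps, 0.0)                # pessimistic success-rate estimate
stable_whp = (1 - p_low) * rho_A ** 2 < 1
print(f"p_hat={p_hat:.3f}, eps={eps:.3f} -> "
      f"{'stable' if stable_whp else 'cannot certify stability'} "
      f"with confidence {1 - delta:.0%}")
```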

    On the Value of Linear Quadratic Zero-sum Difference Games with Multiplicative Randomness: Existence and Achievability

    We consider a wireless networked control system (WNCS) with multiple controllers and multiple attackers. The dynamic interaction between the controllers and the attackers is modeled as a linear quadratic (LQ) zero-sum difference game with multiplicative randomness induced by the multiple-input multiple-output (MIMO) wireless fading channels of the controllers and attackers. We focus on analyzing the existence and achievability of the value of the zero-sum game. We first establish a general necessary and sufficient condition for the existence of the game value. This condition relies on the solvability of a modified game algebraic Riccati equation (MGARE) under an implicit concavity constraint, which is generally difficult to verify due to the intermittent controllability or almost sure uncontrollability caused by the multiplicative randomness. We then introduce a new positive semidefinite (PSD) kernel decomposition method induced by the multiplicative randomness, through which we obtain a closed-form, tight, and verifiable sufficient condition. Under the existence condition, we finally construct a saddle-point policy that achieves the game value within a certain class of admissible policies. We demonstrate that the proposed saddle-point policy is backward compatible with the existing strictly feedback stabilizing saddle-point policy. Comment: 32 pages, 3 figures.
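    To give a sense of the backward recursion and implicit concavity constraint mentioned above, the sketch below iterates a scalar LQ zero-sum game Riccati recursion and checks the one-step concavity condition g > e^2 P at every step. It omits the multiplicative channel randomness that the paper's MGARE handles, and all numbers (a, b, e, q, r, g) are assumed values used only for illustration.

```python
# A minimal sketch of a deterministic scalar LQ zero-sum game Riccati
# recursion with a concavity check; not the paper's MGARE (the multiplicative
# fading-channel randomness is omitted).
a, b, e = 1.2, 1.0, 0.5      # scalar dynamics, control gain, attack gain
q, r, g = 1.0, 1.0, 4.0      # state weight, control weight, attack penalty

P = 0.0                      # terminal value P_T = 0
for _ in range(200):         # backward-in-time iteration
    if g <= e * e * P:
        raise ValueError("concavity violated: game value does not exist here")
    # saddle-point value of the one-step scalar game
    P_new = q + a * a * P / (1 + b * b * P / r - e * e * P / g)
    converged = abs(P_new - P) < 1e-10
    P = P_new
    if converged:
        break
print(f"converged game Riccati fixed point P = {P:.6f}")
```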