58 research outputs found

    Energy Efficient Spectrum Sensing for State Estimation over A Wireless Channel

    The performance of remote estimation over a wireless channel is strongly affected by sensor data losses due to interference. Although the impact of interference can be alleviated by performing spectrum sensing and then transmitting only when the channel is clear, the introduction of spectrum sensing also incurs extra energy expenditure. In this paper, we investigate the problem of energy-efficient spectrum sensing for state estimation of a general linear dynamic system, and formulate an optimization problem which minimizes the total sensor energy consumption while guaranteeing a desired level of estimation performance. The optimal solution is evaluated through both analytical and simulation results. Comment: 4 pages, 6 figures, accepted to IEEE GlobalSIP 201
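    The sensing-versus-energy trade-off described above can be illustrated with a toy scalar Kalman filter in which spectrum sensing lowers the effective packet-loss probability at a fixed extra per-slot energy cost. This is only a sketch of the trade-off, not the paper's formulation; all numbers (dynamics, noise variances, energy costs, loss rates) are hypothetical placeholders:

```python
import numpy as np

# Toy scalar system x_{k+1} = a*x_k + w_k, y_k = x_k + v_k (hypothetical values).
a, q, r = 1.2, 1.0, 1.0          # dynamics, process/measurement noise variances
E_TX, E_SENSE = 1.0, 0.3         # per-slot transmit and sensing energy (assumed)
P_LOSS_PLAIN, P_LOSS_SENSED = 0.4, 0.05   # loss probability without/with sensing

def avg_error_and_energy(p_loss, sense, n=20000, seed=0):
    """Average error covariance and per-slot energy under i.i.d. packet losses."""
    rng = np.random.default_rng(seed)
    P, err_acc, energy = 1.0, 0.0, 0.0   # P = estimator error covariance
    for _ in range(n):
        P = a * a * P + q                          # time update
        energy += E_TX + (E_SENSE if sense else 0.0)
        if rng.random() > p_loss:                  # packet received
            P = P * r / (P + r)                    # measurement update
        err_acc += P
    return err_acc / n, energy / n

err_plain, en_plain = avg_error_and_energy(P_LOSS_PLAIN, sense=False)
err_sense, en_sense = avg_error_and_energy(P_LOSS_SENSED, sense=True)
```

    Sensing buys a lower average error covariance at a higher per-slot energy cost; the optimization in the paper balances exactly this kind of trade-off.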

    Kalman Filtering Over a Packet-Dropping Network: A Probabilistic Perspective

    We consider the problem of state estimation of a discrete-time process over a packet-dropping network. Previous work on Kalman filtering with intermittent observations is concerned with the asymptotic behavior of E[P_k], i.e., the expected value of the error covariance, for a given packet arrival rate. We consider a different performance metric, Pr[P_k ≤ M], i.e., the probability that P_k is bounded by a given M. We consider two scenarios in the paper. In the first scenario, when the sensor sends its measurement data to the remote estimator via a packet-dropping network, we derive lower and upper bounds on Pr[P_k ≤ M]. In the second scenario, when the sensor preprocesses the measurement data and sends its local state estimate to the estimator, we show that the previously derived lower and upper bounds are equal to each other, hence we are able to provide a closed-form expression for Pr[P_k ≤ M]. We also recover the results in the literature when using Pr[P_k ≤ M] as a metric for scalar systems. Examples are provided to illustrate the theory developed in the paper.
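    For a scalar system with i.i.d. Bernoulli packet arrivals, the metric Pr[P_k ≤ M] can be estimated directly by Monte Carlo. The sketch below only illustrates the metric itself; it does not implement the paper's lower and upper bounds, and every parameter value is hypothetical:

```python
import numpy as np

# Scalar system with unstable pole a; packets arrive i.i.d. with rate LAM.
a, q, r = 1.1, 1.0, 1.0
LAM, M, K_STEPS = 0.5, 5.0, 50   # arrival rate, bound M, horizon (all assumed)

def estimate_prob_bounded(n_runs=5000, seed=1):
    """Monte Carlo estimate of Pr[P_k <= M] at k = K_STEPS."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_runs):
        P = 1.0
        for _ in range(K_STEPS):
            P = a * a * P + q            # prediction step (always runs)
            if rng.random() < LAM:       # measurement packet arrives
                P = P * r / (P + r)      # Riccati measurement update
        hits += (P <= M)
    return hits / n_runs

p_hat = estimate_prob_bounded()
```

    Since the covariance resets toward a small value on every arrival and only grows during runs of consecutive drops, Pr[P_k ≤ M] is governed by the tail probability of long drop runs, which is what makes a probabilistic metric informative where E[P_k] alone is not.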

    Static output-feedback stabilization of discrete-time Markovian jump linear systems: a system augmentation approach

    This paper studies the static output-feedback (SOF) stabilization problem for discrete-time Markovian jump systems from a novel perspective. The closed-loop system is represented in a system augmentation form, in which input and gain-output matrices are separated. By virtue of the system augmentation, a novel necessary and sufficient condition for the existence of desired controllers is established in terms of a set of nonlinear matrix inequalities, which possess a monotonic structure for a linearized computation, and a convergent iteration algorithm is given to solve such inequalities. In addition, a special property of the feasible solutions enables one to further improve the solvability via a simple D-K type optimization on the initial values. An extension to mode-independent SOF stabilization is provided as well. Compared with some existing approaches to SOF synthesis, the proposed one has several advantages that make it specific for Markovian jump systems. The effectiveness and merit of the theoretical results are shown through some numerical examples.
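    Once a candidate SOF gain is in hand, its mean-square stabilization of the jump system can be verified with the classical second-moment spectral-radius test for Markov jump linear systems. This is an analysis check, not the paper's augmentation-based synthesis; all matrices and the gain below are hypothetical:

```python
import numpy as np

# Verify mean-square stability of x_{k+1} = (A_i + B_i K C_i) x_k, where i is
# the Markov mode, under a candidate mode-independent SOF gain K.
Pi = np.array([[0.9, 0.1],          # transition probabilities p_ij (assumed)
               [0.3, 0.7]])
A = [np.array([[0.6, 0.1], [0.2, 0.5]]),
     np.array([[0.4, 0.0], [0.4, 0.7]])]
B = [np.array([[0.0], [1.0]])] * 2
C = [np.array([[1.0, 0.0]])] * 2
K = np.array([[-0.2]])              # candidate SOF gain (hypothetical)

Acl = [A[i] + B[i] @ K @ C[i] for i in range(2)]

# Second-moment recursion: Q_j(k+1) = sum_i p_ij * Acl_i Q_i(k) Acl_i^T.
# The closed loop is mean-square stable iff this linear operator, written as a
# block matrix via Kronecker products, has spectral radius < 1.
n = A[0].shape[0]
L = np.zeros((2 * n * n, 2 * n * n))
for j in range(2):
    for i in range(2):
        L[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = Pi[i, j] * np.kron(Acl[i], Acl[i])
rho = max(abs(np.linalg.eigvals(L)))   # rho < 1  =>  mean-square stable
```

    A test like this is a natural sanity check on the output of any iterative synthesis procedure, including the monotonic iteration the paper proposes.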

    Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber-Physical Systems

    In many Cyber-Physical Systems, we encounter the problem of remote state estimation of geographically distributed and remote physical processes. This paper studies the scheduling of sensor transmissions to estimate the states of multiple remote, dynamic processes. Information from the different sensors has to be transmitted to a central gateway over a wireless network for monitoring purposes, where typically fewer wireless channels are available than there are processes to be monitored. For effective estimation at the gateway, the sensors need to be scheduled appropriately, i.e., at each time instant one needs to decide which sensors have network access and which ones do not. To address this scheduling problem, we formulate an associated Markov decision process (MDP). This MDP is then solved using a Deep Q-Network, a recent deep reinforcement learning algorithm that is at once scalable and model-free. We compare our scheduling algorithm to popular scheduling algorithms such as round-robin and reduced-waiting-time, among others. Our algorithm is shown to significantly outperform these algorithms for many example scenarios.
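    The scheduling MDP can be made concrete with a small simulation. As a lightweight stand-in for the paper's Deep Q-Network, the sketch compares the round-robin baseline against a myopic "largest predicted error" policy; the per-process dynamics, noise level, and ideal-link assumption are all hypothetical:

```python
import numpy as np

# N processes share one channel.  Between transmissions each process's error
# covariance grows open-loop; a scheduled transmission resets it (ideal link).
A = np.array([1.0, 1.0, 1.3])    # hypothetical per-process dynamics
Q = 1.0                          # process-noise variance
N, T = 3, 1000

def run(policy):
    """Average sum of error covariances under a scheduling policy."""
    P = np.ones(N)               # per-process error covariances
    total = 0.0
    for t in range(T):
        i = policy(t, P)         # pick one sensor for the single channel
        P = A * A * P + Q        # all covariances grow one step
        P[i] = Q                 # scheduled sensor's covariance resets
        total += P.sum()
    return total / T

round_robin = lambda t, P: t % N
greedy = lambda t, P: int(np.argmax(A * A * P + Q))  # largest predicted error

cost_rr = run(round_robin)
cost_greedy = run(greedy)
```

    With heterogeneous dynamics, the error-aware policy schedules the fastest-diverging process more often and beats round-robin; a learned DQN policy targets the same structure without needing the model.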