12,228 research outputs found

    Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber-Physical Systems

    In many Cyber-Physical Systems, we encounter the problem of remote state estimation of geographically distributed physical processes. This paper studies the scheduling of sensor transmissions to estimate the states of multiple remote, dynamic processes. Information from the different sensors has to be transmitted to a central gateway over a wireless network for monitoring purposes, where typically fewer wireless channels are available than there are processes to be monitored. For effective estimation at the gateway, the sensors need to be scheduled appropriately, i.e., at each time instant one needs to decide which sensors have network access and which ones do not. To address this scheduling problem, we formulate an associated Markov decision process (MDP). This MDP is then solved using a Deep Q-Network, a recent deep reinforcement learning algorithm that is both scalable and model-free. We compare our scheduling algorithm to popular scheduling algorithms such as round-robin and reduced-waiting-time, among others. Our algorithm is shown to significantly outperform these algorithms in many example scenarios.
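    A minimal sketch of such a scheduling MDP, with hypothetical scalar processes and a greedy baseline standing in for the paper's Deep Q-Network (the dynamics, noise levels, and channel count below are illustrative assumptions, not taken from the paper): the scheduler's state is the vector of estimation error variances, granting a sensor channel access resets its variance, and unscheduled variances grow by the open-loop recursion.

```python
import numpy as np

class SchedulingEnv:
    """Hypothetical scheduling MDP: N scalar processes share c < N channels."""

    def __init__(self, a, w_var, n_channels, reset_var=1.0):
        self.a = np.asarray(a, dtype=float)      # per-process dynamics x_{k+1} = a*x_k + w_k
        self.w = np.asarray(w_var, dtype=float)  # process noise variances
        self.c = n_channels                      # available wireless channels
        self.reset_var = reset_var               # variance after a successful transmission
        self.p = np.full(self.a.shape, reset_var)

    def step(self, scheduled):
        # scheduled: indices of the <= c sensors granted network access this instant
        self.p = self.a**2 * self.p + self.w     # open-loop variance growth
        self.p[scheduled] = self.reset_var       # scheduled sensors deliver their estimates
        return self.p.copy(), -self.p.sum()      # (next state, reward = negative total error)

env = SchedulingEnv(a=[1.2, 1.1, 1.05], w_var=[1.0, 1.0, 1.0], n_channels=1)
state = env.p.copy()
for _ in range(5):
    # greedy baseline: schedule the sensors with the largest error variances
    action = np.argsort(state)[-env.c:]
    state, reward = env.step(action)
    print(action, np.round(state, 2), round(float(reward), 2))
```

    A DQN-based scheduler would replace the greedy rule with an argmax over a Q-network's outputs for the current variance vector, learned from such simulated transitions.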

    Remote State Estimation with Smart Sensors over Markov Fading Channels

    We consider a fundamental remote state estimation problem for discrete-time linear time-invariant (LTI) systems. A smart sensor forwards its local state estimate to a remote estimator over a time-correlated $M$-state Markov fading channel, where the packet drop probability is time-varying and depends on the current fading channel state. We establish a necessary and sufficient condition for mean-square stability of the remote estimation error covariance, namely $\rho^2(\mathbf{A})\rho(\mathbf{D}\mathbf{M})<1$, where $\rho(\cdot)$ denotes the spectral radius, $\mathbf{A}$ is the state transition matrix of the LTI system, $\mathbf{D}$ is a diagonal matrix containing the packet drop probabilities in the different channel states, and $\mathbf{M}$ is the transition probability matrix of the Markov channel states. To derive this result, we propose a novel estimation-cycle-based approach and provide new element-wise bounds on matrix powers. The stability condition is verified by numerical results and is shown to be more effective than existing sufficient conditions in the literature. We observe that the stability region in terms of the packet drop probabilities in the different channel states can be either convex or concave depending on the transition probability matrix $\mathbf{M}$. Our numerical results suggest that the stability conditions for remote estimation may coincide for setups with a smart sensor and with a conventional one (which sends raw measurements to the remote estimator), though the smart sensor setup achieves better estimation performance.
    Comment: The paper has been accepted by IEEE Transactions on Automatic Control. Copyright may be transferred without notice, after which this version may no longer be accessible.
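    As a quick illustration of how the stability condition can be checked numerically (the matrices below are made-up examples, not values from the paper), one can compute the two spectral radii directly:

```python
import numpy as np

def spectral_radius(X):
    """Largest eigenvalue magnitude of a square matrix."""
    return np.max(np.abs(np.linalg.eigvals(X)))

A = np.array([[1.1, 0.2],       # illustrative unstable LTI state transition matrix
              [0.0, 0.9]])
M = np.array([[0.8, 0.2],       # illustrative Markov channel transition matrix
              [0.3, 0.7]])
D = np.diag([0.5, 0.1])         # illustrative drop probabilities per channel state

value = spectral_radius(A)**2 * spectral_radius(D @ M)
print(value, value < 1)         # ~0.49 < 1: this example satisfies the condition
```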

    Transmission Power Scheduling for Energy Harvesting Sensor in Remote State Estimation

    We study remote estimation in a wireless sensor network. Instead of a conventional battery-powered sensor, we use a sensor equipped with an energy harvester, which can obtain energy from the external environment. We formulate this problem as an infinite time-horizon Markov decision process and provide the optimal sensor transmission power control strategy. In addition, a sub-optimal strategy which is easier to implement and requires less computation is presented. A numerical example is provided to illustrate the implementation of the sub-optimal policy and to evaluate its estimation performance.
    Comment: Extended version of an article to be published in the Proceedings of the 19th IFAC World Congress, 2014.
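    A minimal value-iteration sketch of such a power-scheduling MDP (the battery model, drop probabilities, harvesting probability, and cost below are illustrative assumptions, not the paper's model): the state is (battery level, time since last successful delivery), and each power level trades stored energy for a lower packet drop probability.

```python
import numpy as np

B, T = 5, 6                       # battery levels 0..B-1, holding times 0..T-1
powers = [0, 1, 2]                # transmit energy spent per slot
drop = {0: 1.0, 1: 0.5, 2: 0.2}   # packet drop probability per power level
p_harvest = 0.6                   # chance of harvesting one energy unit per slot
gamma = 0.95                      # discount factor
err = lambda tau: 1.5**tau        # estimation error cost grows with holding time

V = np.zeros((B, T))
for _ in range(500):              # value iteration to (approximate) convergence
    Vn = np.empty_like(V)
    for b in range(B):
        for tau in range(T):
            q = []
            for e in [p for p in powers if p <= b]:   # only affordable power levels
                nb1 = min(b - e + 1, B - 1)           # battery if one unit is harvested
                nb0 = b - e                           # battery if nothing is harvested
                nt = min(tau + 1, T - 1)              # holding time if the packet drops

                def nxt(t):                           # expectation over harvesting
                    return p_harvest * V[nb1, t] + (1 - p_harvest) * V[nb0, t]

                # success resets the holding time to 0; a drop increments it
                q.append(err(tau) + gamma * ((1 - drop[e]) * nxt(0) + drop[e] * nxt(nt)))
            Vn[b, tau] = min(q)                       # minimize expected discounted cost
    V = Vn
print(np.round(V, 1))
```

    A sub-optimal policy in this spirit could, for instance, apply a simple threshold rule on the battery level instead of the full minimization, trading some performance for less computation.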