
    Computation-Communication Trade-offs and Sensor Selection in Real-time Estimation for Processing Networks

    Recent advances in electronics are enabling substantial processing to be performed at each node (robots, sensors) of a networked system. Local processing enables data compression and may mitigate measurement noise, but it is slower than processing at a central computer (it entails a larger computational delay). However, while the nodes can process their data in parallel, centralized computation is sequential in nature. On the other hand, if a node sends raw data to the central computer for processing, it incurs a communication delay. This leads to a fundamental communication-computation trade-off, in which each node has to decide on the optimal amount of preprocessing in order to maximize the network performance. We consider a network in charge of estimating the state of a dynamical system and provide three contributions. First, we give a rigorous problem formulation for optimal real-time estimation in processing networks in the presence of delays. Second, we show that, in the case of a homogeneous network (where all sensors have the same computational capabilities) monitoring a continuous-time scalar linear system, the optimal amount of local preprocessing that maximizes the network estimation performance can be computed analytically. Third, we consider the realistic case of a heterogeneous network monitoring a discrete-time multivariate linear system and provide algorithms to decide on suitable preprocessing at each node, and to select a subset of sensors when computational constraints make using all sensors suboptimal. Numerical simulations show that sensor selection is crucial and that, if the nodes apply the preprocessing policy suggested by our algorithms, they can substantially improve the network estimation performance. Comment: 15 pages, 16 figures. Accepted journal version.
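    To make the trade-off concrete, the following sketch sweeps a hypothetical preprocessing fraction for a single node monitoring a scalar system; the delay model, the constants, and the open-loop prediction variance used as an error proxy are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Illustrative (hypothetical) delay model for one node:
#   p in [0, 1] is the fraction of preprocessing done locally.
#   Local processing is slower per unit of work than the central computer,
#   but sending raw data costs more communication time than sending
#   compressed, preprocessed data.
D_LOCAL = 2.0      # time to do ALL the processing locally (slow node)
D_CENTRAL = 0.5    # time to do ALL the processing centrally (fast computer)
D_COMM_RAW = 1.5   # time to transmit fully raw data
COMPRESSION = 0.8  # fraction of communication saved by full preprocessing

A = 0.3            # scalar system:  dx = A x dt + dw  (unstable for A > 0)
Q = 1.0            # process-noise intensity

def total_delay(p):
    """Computation + communication delay seen by the estimator."""
    d_comp = p * D_LOCAL + (1.0 - p) * D_CENTRAL
    d_comm = (1.0 - COMPRESSION * p) * D_COMM_RAW
    return d_comp + d_comm

def error_proxy(p):
    """Open-loop prediction variance accumulated over the total delay,
    used here as a simple stand-in for estimation performance."""
    tau = total_delay(p)
    return Q * (np.exp(2.0 * A * tau) - 1.0) / (2.0 * A)

ps = np.linspace(0.0, 1.0, 101)
errors = np.array([error_proxy(p) for p in ps])
best = ps[np.argmin(errors)]
print(f"best preprocessing fraction p* = {best:.2f}, "
      f"delay = {total_delay(best):.2f}, error proxy = {errors.min():.3f}")
```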

    Deep Reinforcement Learning for Wireless Sensor Scheduling in Cyber-Physical Systems

    In many cyber-physical systems, we encounter the problem of remote state estimation of geographically distributed and remote physical processes. This paper studies the scheduling of sensor transmissions to estimate the states of multiple remote, dynamic processes. Information from the different sensors has to be transmitted to a central gateway over a wireless network for monitoring purposes, where typically fewer wireless channels are available than there are processes to be monitored. For effective estimation at the gateway, the sensors need to be scheduled appropriately, i.e., at each time instant one needs to decide which sensors have network access and which ones do not. To address this scheduling problem, we formulate an associated Markov decision process (MDP). This MDP is then solved using a Deep Q-Network, a recent deep reinforcement learning algorithm that is at once scalable and model-free. We compare our scheduling algorithm to popular scheduling algorithms such as round-robin and reduced-waiting-time, among others. Our algorithm is shown to significantly outperform these algorithms in many example scenarios.
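    As a rough illustration of the scheduling idea, the sketch below trains a tabular Q-learning agent (a small stand-in for the paper's Deep Q-Network) on an assumed toy problem with two processes, one channel, and an age-of-information state; the reward, dynamics, and dimensions are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy remote-estimation scheduling problem (assumed setup, not the paper's):
# 2 processes, 1 wireless channel. The state is the "age" of the latest
# measurement received from each process (capped at MAX_AGE). At each step we
# pick one sensor to transmit; on success its age resets to 1, the others grow.
N_PROC, MAX_AGE, P_SUCCESS = 2, 5, 0.9

def step(ages, action):
    new = np.minimum(ages + 1, MAX_AGE)
    if rng.random() < P_SUCCESS:          # successful transmission
        new[action] = 1
    reward = -float(new.sum())            # fresher estimates -> higher reward
    return new, reward

def encode(ages):
    return int(ages[0] - 1) * MAX_AGE + int(ages[1] - 1)

# Tabular Q-learning in place of the Deep Q-Network.
Q = np.zeros((MAX_AGE ** N_PROC, N_PROC))
alpha, gamma, eps = 0.1, 0.95, 0.1
ages = np.ones(N_PROC)
for t in range(50_000):
    s = encode(ages)
    a = rng.integers(N_PROC) if rng.random() < eps else int(np.argmax(Q[s]))
    ages, r = step(ages, a)
    s2 = encode(ages)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])

# Greedy policy: which sensor to schedule for each (age_1, age_2) pair.
policy = Q.argmax(axis=1).reshape(MAX_AGE, MAX_AGE)
print(policy)
```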

    On Multi-Step Sensor Scheduling via Convex Optimization

    Effective sensor scheduling requires the consideration of long-term effects and thus optimization over long time horizons. Determining the optimal sensor schedule, however, is equivalent to solving a binary integer program, which is computationally demanding for long time horizons and many sensors. For linear Gaussian systems, two efficient multi-step sensor scheduling approaches are proposed in this paper. The first approach determines approximate but close-to-optimal sensor schedules via convex optimization. The second approach combines convex optimization with a branch-and-bound search to efficiently determine the optimal sensor schedule. Comment: 6 pages, appeared in the proceedings of the 2nd International Workshop on Cognitive Information Processing (CIP), Elba, Italy, June 2010.
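    The relaxation idea can be sketched on a simplified single-step sensor selection problem (not the paper's multi-step formulation): the 0/1 selection variables are relaxed to the interval [0, 1], the resulting problem is convex, and the relaxed solution is rounded back to a schedule. The example below assumes cvxpy is available and uses a log-det information objective as a stand-in performance measure.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Choosing k out of m linear sensors y_i = a_i^T x + noise is a binary
# integer program; relaxing the 0/1 selection variables to [0, 1] yields a
# convex problem whose solution can be rounded (or refined by branch-and-bound).
n, m, k = 4, 20, 5                      # state dim, candidate sensors, budget
A = rng.standard_normal((m, n))         # row i is the measurement vector a_i

z = cp.Variable(m)                      # relaxed selection variables
info = sum(z[i] * np.outer(A[i], A[i]) for i in range(m))
problem = cp.Problem(cp.Maximize(cp.log_det(info)),
                     [cp.sum(z) == k, z >= 0, z <= 1])
problem.solve()

# Simple rounding: keep the k sensors with the largest relaxed weights.
chosen = np.argsort(z.value)[-k:]
print("relaxed optimum:", problem.value)
print("selected sensors:", sorted(chosen.tolist()))
```

    In a multi-step setting the same relaxation is applied jointly over the horizon, and its optimal value also serves as a bound that a branch-and-bound search can use to prune candidate schedules.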

    Active Classification for POMDPs: a Kalman-like State Estimator

    The problem of state tracking with active observation control is considered for a system modeled by a discrete-time, finite-state Markov chain observed through conditionally Gaussian measurement vectors. The measurement model statistics are shaped by the underlying state and an exogenous control input, which influence the observations' quality. Exploiting an innovations approach, an approximate minimum mean-squared error (MMSE) filter is derived to estimate the Markov chain state. To optimize the control strategy, the associated mean-squared error is used as the optimization criterion in a partially observable Markov decision process formulation, and a stochastic dynamic programming algorithm is proposed to compute the optimal solution. To enhance the quality of the state estimates, approximate MMSE smoothing estimators are also derived. Finally, the performance of the proposed framework is illustrated on the problem of physical-activity detection in wireless body sensing networks. The power of the proposed framework lies in its ability to accommodate a broad spectrum of active classification applications, including sensor management for object classification and tracking, estimation of sparse signals, and radar scheduling. Comment: 38 pages, 6 figures.
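    A minimal sketch of the filtering part, under assumed toy parameters: an exact recursive Bayes filter for a finite-state Markov chain with conditionally Gaussian scalar observations whose quality depends on a control input. The paper's innovations-based approximate MMSE filter, the smoothers, and the dynamic-programming control optimization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup (assumed, not the paper's model): a 3-state Markov chain observed
# through a scalar Gaussian measurement whose mean depends on the state and on
# a control input u that scales the signal-to-noise ratio.
P = np.array([[0.90, 0.08, 0.02],       # row-stochastic transition matrix
              [0.05, 0.90, 0.05],
              [0.02, 0.08, 0.90]])
levels = np.array([0.0, 1.0, 2.0])      # state-dependent observation mean
sigma = 0.7                              # measurement noise std

def filter_step(belief, y, u):
    """One recursive Bayes update: predict through the chain, then weight by
    the conditionally Gaussian likelihood of observation y under control u."""
    predicted = P.T @ belief
    lik = np.exp(-0.5 * ((y - u * levels) / sigma) ** 2)
    posterior = predicted * lik
    return posterior / posterior.sum()

# Simulate and filter; the MMSE estimate of the level is the posterior mean.
state, belief = 0, np.ones(3) / 3
for t in range(20):
    state = rng.choice(3, p=P[state])
    u = 1.5                              # fixed control; the paper optimizes it
    y = u * levels[state] + sigma * rng.normal()
    belief = filter_step(belief, y, u)
    mmse = belief @ levels
    print(f"t={t:2d} true state={state} MMSE level estimate={mmse:5.2f}")
```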