
    An Optimal Transmission Strategy for Kalman Filtering over Packet Dropping Links with Imperfect Acknowledgements

    This paper presents a novel design methodology for optimal transmission policies at a smart sensor that remotely estimates the state of a stable linear stochastic dynamical system. The sensor measures the process and forms state estimates using a local Kalman filter, then transmits quantized information over a packet-dropping link to the remote receiver. The receiver sends packet-receipt acknowledgements back to the sensor via an error-prone feedback communication channel that is itself packet dropping. The key novelty of this formulation is that the smart sensor decides, at each discrete time instant, whether to transmit a quantized version of either its local state estimate or its local innovation. The objective is to design optimal transmission policies that minimize a long-term average cost, defined as a convex combination of the receiver's expected estimation error covariance and the energy needed to transmit the packets. The optimal transmission policy is obtained using dynamic programming techniques. Using the concept of submodularity, the optimality of a threshold policy is proved for scalar systems with perfect packet-receipt acknowledgements. Suboptimal solutions and their structural results are also discussed. Numerical results illustrate the performance of the optimal and suboptimal transmission policies.
    Comment: Conditionally accepted in IEEE Transactions on Control of Network Systems
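As a rough illustration of how such a policy can be computed, the sketch below runs discounted value iteration on a toy scalar version of the remote-estimation trade-off. Every parameter (system gain, drop probability, energy price, cost weight, discount factor) is an invented assumption, and the paper itself optimizes a long-run average cost rather than a discounted one.

```python
import numpy as np

# Toy scalar remote-estimation model (all parameters are assumptions,
# not taken from the paper): x' = a*x + w with Var(w) = q, |a| < 1.
a, q = 0.9, 1.0
p_drop = 0.3        # forward-link packet drop probability
energy = 5.0        # per-transmission energy cost
lam = 0.5           # weight: lam*covariance + (1 - lam)*energy
beta = 0.95         # discount (stand-in for the paper's average cost)
P_local = q / (1 - a**2)                # sensor's local steady-state covariance

grid = np.linspace(P_local, 50.0, 200)  # discretized receiver covariance
grow = lambda P: a**2 * P + q           # covariance growth with no packet
nearest = lambda P: int(np.abs(grid - np.clip(P, grid[0], grid[-1])).argmin())
idx_grow = np.array([nearest(grow(P)) for P in grid])  # precomputed successors
i_reset = nearest(grow(P_local))        # successor after a successful packet

V = np.zeros(len(grid))
for _ in range(400):                    # value iteration to (near) convergence
    idle = lam * grid + beta * V[idx_grow]
    tx = (lam * grid + (1 - lam) * energy
          + beta * ((1 - p_drop) * V[i_reset] + p_drop * V[idx_grow]))
    V = np.minimum(idle, tx)

transmit = tx < idle   # boolean policy over the covariance grid
```

Because the value function is nondecreasing in the covariance, the gain from transmitting grows with it, so under these assumptions `transmit` flips from False to True exactly once along the grid, matching the threshold structure proved in the paper.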

    Event-Driven Optimal Feedback Control for Multi-Antenna Beamforming

    Transmit beamforming is a simple multi-antenna technique for increasing the throughput and transmission range of a wireless communication system. The required feedback of channel state information (CSI) can incur excessive overhead, especially under high mobility or with many antennas. This work concerns efficient feedback for transmit beamforming and establishes a new approach of controlling feedback to maximize net throughput, defined as throughput minus average feedback cost. The feedback controller, using a stationary policy, turns CSI feedback on or off according to the system state, which comprises the channel state and the transmit beamformer. Assuming channel isotropy and Markovity, the controller's state reduces to two scalars, which allows the optimal control policy to be computed efficiently using dynamic programming. Consider first a perfect, error-free feedback channel where each feedback instant pays a fixed price. The corresponding optimal feedback control policy is proved to be of the threshold type, regardless of whether the controller's state space is discretized or continuous. Under the threshold-type policy, feedback is performed whenever a state variable indicating the accuracy of transmit CSI falls below a threshold that varies with the channel power. The practical finite-rate feedback channel is also considered: the optimal policy for quantized feedback is proved to be of the threshold type as well, and the effect of CSI quantization is shown to be equivalent to an increment on the feedback price. Moreover, the increment is upper bounded by the expected logarithm of one minus the quantization error. Finally, simulation shows that feedback control increases the net throughput of conventional periodic feedback by up to 0.5 bit/s/Hz without requiring additional bandwidth or antennas.
    Comment: 29 pages; submitted for publication
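To make the threshold structure concrete, here is a hedged sketch of a stationary on/off feedback controller of the type described: feedback fires whenever a CSI-accuracy scalar drops below a threshold that depends on the current channel power. The threshold shape, the aging factor, the feedback price, and the fading model are all illustrative assumptions, not the policy derived in the paper.

```python
import math
import random

def threshold(g, price=0.2):
    # invented monotone threshold: stronger channels justify fresher CSI
    return min(1.0, price + 0.1 * math.log1p(g))

def feed_back(g, s):
    """Feed back iff the CSI-accuracy scalar s is below the threshold."""
    return s < threshold(g)

# Crude simulation of net throughput under the threshold policy.
random.seed(0)
rho, snr, price, T = 0.9, 10.0, 0.2, 10_000
s, fb_count, rate = 1.0, 0, 0.0
for _ in range(T):
    g = random.expovariate(1.0)          # Rayleigh-fading channel power
    if feed_back(g, s):
        s, fb_count = 1.0, fb_count + 1  # feedback refreshes the beamformer
    rate += math.log2(1 + snr * g * s)   # throughput proxy for this slot
    s *= rho                             # CSI accuracy ages between updates
net = rate / T - price * fb_count / T    # throughput minus feedback cost
```

The controller state here is exactly the two scalars the abstract mentions, channel power `g` and CSI accuracy `s`; in the paper the threshold itself would come out of the dynamic program rather than being hand-picked.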

    Energy Sharing for Multiple Sensor Nodes with Finite Buffers

    We consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. Sensor nodes periodically sense the random field and generate data, which is stored in the corresponding data queues. The EH source harvests energy from ambient sources and stores it in an energy buffer. Sensor nodes receive energy for data transmission from the EH source, which must share the stored energy among the nodes efficiently in order to minimize the long-run average delay in data transmission. We formulate the energy sharing problem in the framework of average-cost infinite-horizon Markov decision processes (MDPs). We develop efficient energy sharing algorithms, namely Q-learning algorithms with exploration mechanisms based on the ε-greedy method as well as the upper confidence bound (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle the state-action space explosion in the MDP. We also develop a cross-entropy-based method that incorporates policy parameterization in order to find near-optimal energy sharing policies. Through simulations, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method.
    Comment: 38 pages, 10 figures
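The Q-learning approach described above can be sketched on a toy version of the problem: two data queues, one shared energy buffer, and an action choosing which node receives an energy unit each slot, with the total backlog as the per-slot cost. All arrival rates, buffer caps, and learning parameters are invented for illustration, and this discounted sketch stands in for the paper's average-cost formulation.

```python
import random

random.seed(1)
QMAX, EMAX = 5, 3                   # queue and energy-buffer caps (assumed)
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

Q = {}                              # Q[(q1, q2, e)] -> [cost-to-go per action]
def qvals(s):
    return Q.setdefault(s, [0.0, 0.0])

def step(s, a):
    """Spend one energy unit on node a (if available), then arrivals/harvest."""
    q1, q2, e = s
    if e > 0:
        e -= 1
        if a == 0: q1 = max(0, q1 - 1)
        else:      q2 = max(0, q2 - 1)
    q1 = min(QMAX, q1 + (random.random() < 0.4))  # packet arrival at node 1
    q2 = min(QMAX, q2 + (random.random() < 0.4))  # packet arrival at node 2
    e = min(EMAX, e + (random.random() < 0.7))    # energy harvested
    return (q1, q2, e), q1 + q2      # next state, cost = total backlog

s = (0, 0, EMAX)
for _ in range(100_000):
    # eps-greedy: explore with probability eps, else pick the cheaper action
    a = random.randrange(2) if random.random() < eps \
        else min((0, 1), key=lambda x: qvals(s)[x])
    s2, cost = step(s, a)
    qvals(s)[a] += alpha * (cost + gamma * min(qvals(s2)) - qvals(s)[a])
    s = s2

# greedy policy implied by the learned Q-values
policy = {st: min((0, 1), key=lambda x: qvals(st)[x]) for st in Q}
```

The paper's UCB exploration, state/action aggregation, and cross-entropy policy search would replace the ε-greedy choice and the tabular `Q` dictionary in this sketch.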