4,292 research outputs found

    Sequential Detection with Mutual Information Stopping Cost

    This paper formulates and solves a sequential detection problem that involves the mutual information (stochastic observability) of a Gaussian process observed in noise with missing measurements. The main result is that the optimal decision is characterized by a monotone policy on the partially ordered set of positive definite covariance matrices. This monotone structure implies that numerically efficient algorithms can be designed to estimate and implement monotone parametrized decision policies. The sequential detection problem is motivated by applications in radar scheduling where the aim is to maintain the mutual information of all targets within a specified bound. We illustrate the problem formulation and the performance of monotone parametrized policies via numerical examples in fly-by and persistent-surveillance applications involving a GMTI (Ground Moving Target Indicator) radar.
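    A rough illustration of the covariance recursion behind this kind of formulation (a minimal sketch under assumed model matrices, detection probability, and information bound, not the paper's algorithm): a Kalman covariance update with Bernoulli-missing measurements, where the accumulated mutual information is the log-determinant reduction of the covariance and a simple threshold rule stops once a specified bound is reached.

```python
# Minimal sketch (assumed model, not the paper's algorithm): Kalman covariance
# recursion for a Gaussian process with Bernoulli-missing measurements.
# The per-step mutual information is the log-det reduction of the covariance;
# a simple threshold stopping rule fires once a specified bound is reached.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (illustrative)
C = np.array([[1.0, 0.0]])               # position-only measurement
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance
p_detect = 0.8                           # probability a measurement is received
info_bound = 3.0                         # stop once this much information is gathered

def logdet(M):
    return np.linalg.slogdet(M)[1]

rng = np.random.default_rng(0)
P = np.eye(2)                            # current error covariance
total_info = 0.0

for k in range(200):
    P_pred = A @ P @ A.T + Q                          # time update
    if rng.random() < p_detect:                       # measurement available
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ C) @ P_pred              # measurement update
    else:                                             # missing measurement
        P = P_pred
    total_info += 0.5 * (logdet(P_pred) - logdet(P))  # mutual information of step k
    if total_info >= info_bound:
        print(f"stop at step {k}, accumulated information {total_info:.3f}")
        break
```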

    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
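    As a concrete, entirely hypothetical instance of the framework this survey reviews, the sketch below runs value iteration for a sensor node that chooses between sleeping and sensing over discretized battery levels; the transition model, rewards, and discount factor are made-up assumptions, not values taken from the survey.

```python
# Minimal value-iteration sketch for a hypothetical sensor-node MDP.
# States are battery levels 0..4; actions are "sleep" or "sense".
# All probabilities and rewards below are illustrative assumptions.
import numpy as np

n_states, actions = 5, ("sleep", "sense")
gamma = 0.95                              # discount factor (assumed)

def transition(s, a):
    """Return a list of (probability, next_state, reward) triples (assumed model)."""
    if a == "sleep":
        # battery recovers one level with prob 0.5; small reward for staying available
        return [(0.5, min(s + 1, n_states - 1), 0.1), (0.5, s, 0.1)]
    if s == 0:
        return [(1.0, 0, 0.0)]            # empty battery: sensing achieves nothing
    # sensing drains one battery level and yields a monitoring reward
    return [(1.0, s - 1, 1.0)]

V = np.zeros(n_states)
for _ in range(500):                      # value iteration until (near) convergence
    V_new = np.array([max(sum(p * (r + gamma * V[s2]) for p, s2, r in transition(s, a))
                          for a in actions)
                      for s in range(n_states)])
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = [max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in transition(s, a)))
          for s in range(n_states)]
print("optimal values:", np.round(V, 3))
print("optimal policy:", policy)
```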

    A Distributed ADMM Approach to Non-Myopic Path Planning for Multi-Target Tracking

    This paper investigates non-myopic path planning of mobile sensors for multi-target tracking. Such problems pose high computational complexity and/or require high-level decision making. Existing works tackle these issues by heuristically assigning targets to each sensing agent and solving the split problem for each agent. However, such heuristic methods degrade target estimation performance because they do not account for how the target state estimates evolve over time. In this work, we bypass the task-assignment problem by reformulating the general non-myopic planning problem as a distributed optimization problem with respect to the targets. By combining the alternating direction method of multipliers (ADMM) with a local trajectory optimization method, we solve the problem and automatically induce consensus (i.e., high-level decisions) among the targets. In addition, we propose a modified receding-horizon control (RHC) scheme and an edge-cutting method for efficient real-time operation. The proposed algorithm is validated through simulations in various scenarios.
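    The consensus mechanism referred to here can be illustrated with a toy version of consensus ADMM. The sketch below is not the paper's planner: quadratic local costs stand in for the per-target tracking objectives, and each target's local copy of a shared decision variable (e.g., a sensor trajectory parameter) is driven to agreement by the standard scaled-form ADMM updates.

```python
# Toy consensus-ADMM sketch (illustrative, not the paper's algorithm).
# Local costs f_i(x) = 0.5 * ||x - a_i||^2 stand in for per-target objectives.
import numpy as np

rng = np.random.default_rng(1)
n_targets, dim, rho = 4, 2, 1.0
a = rng.normal(size=(n_targets, dim))     # per-target data defining the local costs

x = np.zeros((n_targets, dim))            # local copies of the shared variable
z = np.zeros(dim)                         # consensus variable (the shared plan)
u = np.zeros((n_targets, dim))            # scaled dual variables

for it in range(100):
    # local step: each target solves argmin_x f_i(x) + (rho/2)||x - z + u_i||^2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # consensus step: average the local copies (plus duals)
    z = np.mean(x + u, axis=0)
    # dual step: penalize disagreement with the consensus value
    u = u + x - z

print("consensus plan z:", np.round(z, 3))           # converges to the mean of a_i
print("max disagreement:", np.max(np.abs(x - z)))
```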

    Stochastic Sensor Scheduling via Distributed Convex Optimization

    In this paper, we propose a stochastic scheduling strategy for estimating the states of N discrete-time linear time-invariant (DTLTI) dynamic systems, where only one system can be observed by the sensor at each time instant due to practical resource constraints. The idea of our stochastic strategy is that a system is randomly selected for observation at each time instant according to a pre-assigned probability distribution. We aim to find the optimal pre-assigned probabilities in order to minimize the maximal estimation error covariance among the dynamic systems. We first show that under mild conditions, the stochastic scheduling problem gives an upper bound on the performance of the optimal sensor selection problem, which is notoriously difficult to solve. We next relax the stochastic scheduling problem into a tractable suboptimal quasi-convex form. We then show that the new problem can be decomposed into coupled small convex optimization problems, and that it can be solved in a distributed fashion. Finally, for scheduling implementation, we propose centralized and distributed deterministic scheduling strategies based on the optimal stochastic solution and provide simulation examples. Comment: Proof errors and typos are fixed. One section is removed from the last version.
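    A minimal simulation of the randomized scheduling idea (illustrative parameters, not values from the paper): at each instant one of N scalar systems is selected for observation according to a pre-assigned probability vector, the remaining systems propagate open loop, and the worst-case error variance is tracked empirically.

```python
# Minimal sketch of randomized sensor scheduling over N scalar systems.
# Dynamics, noise variances, and the probability vector are assumptions.
import numpy as np

rng = np.random.default_rng(2)
a = np.array([1.02, 1.05, 1.10])         # scalar dynamics x_{k+1} = a x_k + w
q, r = 0.1, 0.2                          # process / measurement noise variances
prob = np.array([0.2, 0.3, 0.5])         # pre-assigned selection probabilities

P = np.ones(3)                           # error variances of the three estimators
worst = []
for k in range(1000):
    P = a**2 * P + q                     # time update for every system
    i = rng.choice(3, p=prob)            # the sensor observes exactly one system
    P[i] = P[i] - P[i]**2 / (P[i] + r)   # measurement update only for system i
    worst.append(P.max())

print("empirical worst-case error variance:", np.round(np.mean(worst[-200:]), 3))
```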

    Policy Rollout Action Selection in Continuous Domains for Sensor Path Planning
