
    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
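    As a hedged illustration of the MDP framework this survey reviews, the sketch below runs plain value iteration on a tiny, hypothetical sensor-node energy model; the states, actions, transition probabilities, and rewards are invented for the example and are not taken from the survey.

```python
# Minimal value iteration for a finite MDP, shown on a hypothetical
# sensor-node energy model (states, actions, and rewards are illustrative only).

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "high_battery": {
        "transmit": [(0.7, "high_battery", 5.0), (0.3, "low_battery", 5.0)],
        "sleep":    [(1.0, "high_battery", 0.0)],
    },
    "low_battery": {
        "transmit": [(0.6, "low_battery", 5.0), (0.4, "dead", -50.0)],
        "sleep":    [(0.5, "high_battery", 0.0), (0.5, "low_battery", 0.0)],
    },
    "dead": {
        "sleep": [(1.0, "dead", 0.0)],
    },
}

def value_iteration(transitions, gamma=0.9, eps=1e-6):
    """Return converged state values and a greedy policy for a finite MDP."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            best = max(
                sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            break
    policy = {
        s: max(actions, key=lambda a: sum(p * (r + gamma * values[s2])
                                          for p, s2, r in actions[a]))
        for s, actions in transitions.items()
    }
    return values, policy

if __name__ == "__main__":
    v, pi = value_iteration(transitions)
    print(v)    # converged values per state
    print(pi)   # greedy action per state
```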

    Lex-Partitioning: A New Option for BDD Search

    For the exploration of large state spaces, symbolic search using binary decision diagrams (BDDs) can save huge amounts of memory and computation time. State sets are represented and modified by accessing and manipulating their characteristic functions. BDD partitioning is used to compute the image as the disjunction of smaller subimages. In this paper, we propose a novel BDD partitioning option. The partitioning is lexicographical in the binary representation of the states contained in the set that is represented by a BDD and uniform with respect to the number of states represented. The motivation for controlling the state set sizes in the partitioning is to eventually bridge the gap between explicit and symbolic search. Let n be the size of the binary state vector. We propose an O(n) ranking and unranking scheme that supports negated edges and operates on top of precomputed satcount values. For the uniform split of a BDD, we then use unranking to provide paths along which we partition the BDDs. In a shared BDD representation the effort is O(n). The algorithms are fully integrated in the CUDD library and evaluated in strongly solving general game playing benchmarks. Comment: In Proceedings GRAPHITE 2012, arXiv:1210.611
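    To make the ranking/unranking idea concrete, here is a hedged, self-contained sketch over a simplified ordered BDD with precomputed satcounts. It omits negated edges and the shared CUDD representation that the paper handles; the node layout, the memoised counter, and the choice of lexicographic order with variable 0 as the most significant bit are assumptions of this sketch, not details taken from the paper.

```python
# Sketch of O(n) rank/unrank over a simplified ordered BDD using memoised
# satcounts.  Variable 0 is the most significant bit, so ranks follow the
# lexicographic order of the binary state vector.
from functools import lru_cache

class Node:
    __slots__ = ("var", "low", "high")
    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

ZERO, ONE = "0", "1"          # terminal markers

def make_counter(n):
    """Return a memoised satcount(node, level) for an n-variable BDD."""
    @lru_cache(maxsize=None)
    def count(node, level):
        if node == ZERO:
            return 0
        if node == ONE:
            return 1 << (n - level)          # remaining variables are free
        skipped = node.var - level           # variables jumped over are free
        return (1 << skipped) * (count(node.low, node.var + 1)
                                 + count(node.high, node.var + 1))
    return count

def unrank(root, n, r, count):
    """Return the r-th satisfying assignment (as a bit list) of `root`."""
    bits, node, level = [], root, 0
    while level < n:
        if node in (ZERO, ONE) or node.var > level:
            half = count(node, level + 1)    # this variable is unconstrained
            bit = 0 if r < half else 1
            r -= half * bit
        else:
            c0 = count(node.low, level + 1)
            bit = 0 if r < c0 else 1
            r -= c0 * bit
            node = node.high if bit else node.low
        bits.append(bit)
        level += 1
    return bits

def rank(root, n, bits, count):
    """Inverse of unrank: position of `bits` among all satisfying assignments."""
    r, node = 0, root
    for level, bit in enumerate(bits):
        if node in (ZERO, ONE) or node.var > level:
            if bit:
                r += count(node, level + 1)
        else:
            if bit:
                r += count(node.low, level + 1)
            node = node.high if bit else node.low
    return r

if __name__ == "__main__":
    # Tiny example: f = x0 AND x2 over n = 3 variables (x1 is free).
    n = 3
    root = Node(0, ZERO, Node(2, ZERO, ONE))
    count = make_counter(n)
    assert count(root, 0) == 2                       # assignments 101 and 111
    assert unrank(root, n, 0, count) == [1, 0, 1]
    assert rank(root, n, [1, 1, 1], count) == 1
```

    Unranking the evenly spaced ranks of a BDD's satisfying set yields the cut paths along which a uniform, lexicographic partition like the one described above can be made.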

    Algorithms and Conditional Lower Bounds for Planning Problems

    We consider planning problems for graphs, Markov decision processes (MDPs), and games on graphs. While graphs represent the most basic planning model, MDPs represent interaction with nature and games on graphs represent interaction with an adversarial environment. We consider two planning problems where there are k different target sets, and the problems are as follows: (a) the coverage problem asks whether there is a plan for each individual target set, and (b) the sequential target reachability problem asks whether the targets can be reached in sequence. For the coverage problem, we present a linear-time algorithm for graphs and a quadratic conditional lower bound for MDPs and games on graphs. For the sequential target problem, we present a linear-time algorithm for graphs, a sub-quadratic algorithm for MDPs, and a quadratic conditional lower bound for games on graphs. Our results with conditional lower bounds establish (i) model-separation results showing that for the coverage problem MDPs and games on graphs are harder than graphs and for the sequential reachability problem games on graphs are harder than MDPs and graphs; (ii) objective-separation results showing that for MDPs the coverage problem is harder than the sequential target problem. Comment: Accepted at ICAPS'1
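    As a hedged illustration of the easiest of these cases, the sketch below checks the coverage problem on a plain graph with a single forward reachability pass, which is linear in the size of the graph plus the total size of the target sets. The adjacency-list encoding and function names are choices of this sketch rather than of the paper, and the MDP/game variants and the conditional lower bounds are not touched here.

```python
# Coverage on a plain graph: every target set T_1..T_k must contain at
# least one vertex reachable from the start.  One BFS suffices, so the
# check runs in O(|V| + |E|) plus the total size of the target sets.
from collections import deque

def coverage(graph, start, target_sets):
    """graph: dict vertex -> iterable of successors; target_sets: list of sets."""
    reachable = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph.get(v, ()):
            if w not in reachable:
                reachable.add(w)
                queue.append(w)
    # A plan exists for target set T_i iff T_i intersects the reachable set.
    return all(not reachable.isdisjoint(t) for t in target_sets)

if __name__ == "__main__":
    g = {0: [1, 2], 1: [3], 2: [], 3: []}
    print(coverage(g, 0, [{3}, {2}]))   # True: both target sets are reachable
    print(coverage(g, 1, [{3}, {2}]))   # False: vertex 2 is not reachable from 1
```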