
    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous, resource-limited devices that cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. To achieve long service times and low maintenance costs, WSNs require adaptive and robust methods to address data exchange, topology formation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
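    As a concrete illustration of the kind of MDP solution method the survey compares, the following is a minimal value-iteration sketch for a toy MDP; the transition and reward tables are invented for illustration and are not taken from the survey.

        import numpy as np

        # Toy MDP: 3 states, 2 actions. P[s, a, s'] are transition probabilities and
        # R[s, a] are expected one-step rewards; all values are made up for illustration.
        P = np.array([
            [[0.8, 0.2, 0.0], [0.1, 0.6, 0.3]],
            [[0.5, 0.5, 0.0], [0.0, 0.4, 0.6]],
            [[0.2, 0.3, 0.5], [0.0, 0.1, 0.9]],
        ])
        R = np.array([
            [1.0, 0.0],
            [0.5, 1.5],
            [0.0, 2.0],
        ])
        gamma = 0.95           # discount factor
        V = np.zeros(3)        # state-value estimates

        # Value iteration: repeatedly apply the Bellman optimality backup until convergence.
        for _ in range(1000):
            Q = R + gamma * P.dot(V)   # Q[s, a] = R[s, a] + gamma * sum_s' P[s, a, s'] * V[s']
            V_new = Q.max(axis=1)
            if np.abs(V_new - V).max() < 1e-6:
                V = V_new
                break
            V = V_new

        policy = Q.argmax(axis=1)      # greedy policy w.r.t. the converged values
        print("values:", V, "policy:", policy)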

    Multiple Timescale Energy Scheduling for Wireless Communication with Energy Harvesting Devices

    The primary challenge in wireless communication with energy harvesting devices is to efficiently utilize the harvested energy so that data packet transmission can be supported. This challenge stems not only from the QoS requirements imposed by the wireless communication application, but also from the energy harvesting dynamics and the limited battery capacity. Traditional predictable solar energy harvesting models are perturbed by prediction errors, which can degrade the energy management algorithms built on these models. To cope with these issues, we first propose in this paper a non-homogeneous Markov chain model based on experimental data, which describes the solar energy harvesting process more accurately than traditional predictable energy models. Because the energy harvesting process and the wireless data transmission process evolve on different timescales, we propose a general multiple timescale Markov decision process (MMDP) framework to formulate the joint energy scheduling and transmission control problem across these timescales. We then derive the optimal control policies via a joint dynamic programming and value iteration approach. Extensive simulations are carried out to study the performance of the proposed schemes.
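    As a rough illustration of the non-homogeneous Markov chain idea described above, the sketch below simulates a two-state solar harvesting process whose transition matrix depends on the hour of day. The states, matrices, and day/night split are placeholder assumptions, not the values fitted from the paper's experimental data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Two-state solar model: 0 = "low harvesting", 1 = "high harvesting".
        # The transition matrix changes with the hour of day, so the chain is
        # non-homogeneous. These matrices are placeholders, not fitted values.
        def transition_matrix(hour):
            if 8 <= hour < 18:                       # daytime: high harvesting likely
                return np.array([[0.3, 0.7],
                                 [0.1, 0.9]])
            return np.array([[0.95, 0.05],           # night: low harvesting dominates
                             [0.80, 0.20]])

        state, trace = 0, []
        for hour in range(48):                       # simulate two days, one step per hour
            P = transition_matrix(hour % 24)
            state = rng.choice(2, p=P[state])
            trace.append(int(state))

        print("simulated harvesting states:", trace)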

    Resource Allocation in Wireless Networks with RF Energy Harvesting and Transfer

    Radio frequency (RF) energy harvesting and transfer techniques have recently become alternative methods for powering the next generation of wireless networks. Because this emerging technology enables proactive replenishment of wireless devices, it is advantageous in supporting applications with quality-of-service (QoS) requirements. This article focuses on resource allocation issues in wireless networks with RF energy harvesting capability, referred to as RF energy harvesting networks (RF-EHNs). First, we present an overview of RF-EHNs, followed by a review of a variety of resource allocation issues. Then, we present a case study on the design of the receiver operation policy, which is of paramount importance in RF-EHNs. We focus on QoS support and service differentiation, which have not been addressed in the previous literature. Furthermore, we outline some open research directions. Comment: To appear in IEEE Network.
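    To make the notion of a receiver operation policy concrete, the following sketch encodes a simple threshold rule under which a receiver either harvests RF energy or decodes data in each time slot. The rule, threshold, and parameters are illustrative assumptions and do not reproduce the optimization developed in the article.

        # Threshold-type receiver operation policy: harvest RF energy or decode data in
        # each slot, based on the current battery level and data queue. The threshold
        # and costs are illustrative assumptions.
        def receiver_action(battery, queue_len, battery_threshold=2, decode_cost=1):
            """Return 'harvest' or 'decode' for the current time slot."""
            if battery < max(battery_threshold, decode_cost):
                return "harvest"     # not enough stored energy to decode reliably
            if queue_len == 0:
                return "harvest"     # nothing to decode, so top up the battery
            return "decode"          # sufficient energy and pending data

        # Example usage over a few hypothetical (battery, queue) slot states.
        for battery, queue in [(0, 3), (3, 0), (5, 2)]:
            print(battery, queue, "->", receiver_action(battery, queue))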

    Energy Sharing for Multiple Sensor Nodes with Finite Buffers

    We consider the problem of finding optimal energy sharing policies that maximize the network performance of a system comprising multiple sensor nodes and a single energy harvesting (EH) source. Sensor nodes periodically sense the random field and generate data, which is stored in the corresponding data queues. The EH source harnesses energy from ambient energy sources and stores the generated energy in an energy buffer. Sensor nodes receive energy for data transmission from the EH source, which has to share the stored energy efficiently among the nodes in order to minimize the long-run average delay in data transmission. We formulate the problem of energy sharing between the nodes in the framework of average-cost infinite-horizon Markov decision processes (MDPs). We develop efficient energy sharing algorithms, namely Q-learning algorithms with exploration mechanisms based on the ϵ-greedy method as well as the upper confidence bound (UCB). We extend these algorithms by incorporating state and action space aggregation to tackle state-action space explosion in the MDP. We also develop a cross-entropy-based method that incorporates policy parameterization in order to find near-optimal energy sharing policies. Through simulations, we show that our algorithms yield energy sharing policies that outperform the heuristic greedy method. Comment: 38 pages, 10 figures.
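    For readers unfamiliar with the algorithmic ingredients, the sketch below shows tabular Q-learning with ϵ-greedy exploration on a toy energy-sharing problem, where the state is the energy buffer level and the action chooses which node receives energy. The environment dynamics and rewards are invented for illustration; the paper's network model, UCB exploration, state aggregation, and cross-entropy method are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy energy-sharing problem: the state is the EH source's buffer level (0..4)
        # and the action chooses which of two sensor nodes gets one unit of energy.
        # Dynamics and rewards below are invented for illustration only.
        n_levels, n_actions = 5, 2
        Q = np.zeros((n_levels, n_actions))
        alpha, gamma, epsilon = 0.1, 0.95, 0.1

        def step(level, action):
            """One slot: spend a unit of energy if available, then harvest 0-2 new units."""
            reward = (1.0 if action == 0 else 0.5) if level > 0 else 0.0
            spent = 1 if level > 0 else 0
            harvested = int(rng.integers(0, 3))
            return min(level - spent + harvested, n_levels - 1), reward

        level = 0
        for _ in range(20000):
            if rng.random() < epsilon:               # explore with probability epsilon
                action = int(rng.integers(n_actions))
            else:                                    # otherwise act greedily
                action = int(Q[level].argmax())
            next_level, reward = step(level, action)
            # Standard Q-learning update toward the one-step bootstrapped target.
            Q[level, action] += alpha * (reward + gamma * Q[next_level].max() - Q[level, action])
            level = next_level

        print("greedy sharing choice per buffer level:", Q.argmax(axis=1))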