6,755 research outputs found
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of available strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
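As a concrete illustration of the kind of solution method such a survey compares, the sketch below runs value iteration on a toy sensor-node MDP with battery states and a sleep/transmit choice. All states, transition probabilities, and rewards here are invented for illustration, not taken from the survey.

```python
# Toy MDP (illustrative, not from the survey): a sensor node with battery
# levels 0..3 chooses between sleeping (recharge) and transmitting (reward).
import numpy as np

n_states, n_actions, gamma = 4, 2, 0.9  # battery levels; 0 = sleep, 1 = transmit

# P[a, s, s'] transition probabilities; R[s, a] rewards (values are made up).
P = np.zeros((n_actions, n_states, n_states))
R = np.zeros((n_states, n_actions))
for s in range(n_states):
    # Sleep: battery gains one level w.p. 0.8, otherwise stays put.
    P[0, s, min(s + 1, n_states - 1)] += 0.8
    P[0, s, s] += 0.2
    if s > 0:
        P[1, s, s - 1] = 1.0   # transmit drains one battery level
        R[s, 1] = 1.0          # reward for a successful transmission
    else:
        P[1, 0, 0] = 1.0
        R[0, 1] = -1.0         # penalty for transmitting on an empty battery

# Value iteration: V <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum('aij,j->ia', P, V)  # Q[s, a]
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = Q.argmax(axis=1)  # greedy policy: 0 = sleep, 1 = transmit
```

Under these made-up parameters the converged policy sleeps when the battery is empty and transmits when it is full, which is the qualitative behavior an MDP-based duty-cycling protocol aims for.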
Resource Allocation in Wireless Networks with RF Energy Harvesting and Transfer
Radio frequency (RF) energy harvesting and transfer techniques have recently
become alternative methods to power the next generation of wireless networks.
As this emerging technology enables proactive replenishment of wireless
devices, it is advantageous in supporting applications with quality-of-service
(QoS) requirement. This article focuses on the resource allocation issues in
wireless networks with RF energy harvesting capability, referred to as RF
energy harvesting networks (RF-EHNs). First, we present an overview of the
RF-EHNs, followed by a review of a variety of issues regarding resource
allocation. Then, we present a case study on the design of the receiver
operation policy, which is of paramount importance in RF-EHNs. We focus on
QoS support and service differentiation, which have not been addressed in the
previous literature. Furthermore, we outline some open research directions.
Comment: To appear in IEEE Network
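A receiver operation policy of the kind this case study designs can be caricatured as a simple battery-threshold rule: harvest RF energy when stored energy is low, decode data otherwise. The function, threshold value, and mode names below are illustrative assumptions, not the article's actual policy.

```python
# Toy sketch (not the article's policy): threshold-based receiver operation
# for an RF-EHN node that must choose between harvesting and decoding.

def receiver_mode(battery: float, threshold: float = 0.3) -> str:
    """Return the receiver's mode for a normalized battery level in [0, 1]."""
    return "harvest" if battery < threshold else "decode"

print(receiver_mode(0.1))  # → harvest
print(receiver_mode(0.9))  # → decode
```

A real policy would additionally weigh channel state and QoS class when choosing the mode; the threshold form only conveys the basic harvest/decode trade-off.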
Fast-Convergent Learning-aided Control in Energy Harvesting Networks
In this paper, we present a novel learning-aided energy management scheme
for multihop energy harvesting networks. Different from prior
works on this problem, our algorithm explicitly incorporates information
learning into system control via a step called \emph{perturbed dual learning}.
The scheme does not require any statistical information about the system
dynamics for implementation, and efficiently resolves the challenging energy
outage problem. We show that it achieves a near-optimal
utility-delay tradeoff with bounded energy buffers. More interestingly, it
possesses a much faster \emph{convergence time} than
pure queue-based techniques or approaches
that rely purely on learning the system statistics. This fast convergence
property makes the scheme more adaptive and efficient in resource
allocation in dynamic environments. The design and analysis of the scheme
demonstrate how system control algorithms can be augmented by learning and what
the benefits are. The methodology and algorithm can also be applied to similar
problems, e.g., processing networks, where nodes require a nonzero amount of
content to support their actions.
Multiple Timescale Energy Scheduling for Wireless Communication with Energy Harvesting Devices
The primary challenge in wireless communication with energy harvesting devices is to utilize the harvested energy efficiently so that data packet transmission can be supported. This challenge stems not only from the QoS requirements imposed by the wireless communication application, but also from the energy harvesting dynamics and the limited battery capacity. Traditional predictable solar energy harvesting models are perturbed by prediction errors, which can degrade the energy management algorithms built on them. To cope with these issues, we first propose in this paper a non-homogeneous Markov chain model based on experimental data, which describes the solar energy harvesting process more accurately than traditional predictable energy models. Because the energy harvesting process and the wireless data transmission process operate on different timescales, we propose a general framework of multiple timescale Markov decision process (MMDP) models to formulate the joint energy scheduling and transmission control problem under different timescales. We then derive the optimal control policies via a joint dynamic programming and value iteration approach. Extensive simulations are carried out to study the performance of the proposed schemes.
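The core modeling idea here, a Markov chain whose transition matrix changes with time of day, is easy to sketch. The two-state chain and transition probabilities below are illustrative placeholders, not the paper's experimentally fitted model.

```python
# Sketch (illustrative parameters, not the paper's fitted model): a
# non-homogeneous Markov chain for solar harvesting, where the transition
# matrix depends on the hour of day.
import numpy as np

rng = np.random.default_rng(0)

# Two harvesting states: 0 = low irradiance, 1 = high irradiance.
P_day = np.array([[0.30, 0.70],     # daytime: "high" is likely
                  [0.10, 0.90]])
P_night = np.array([[0.95, 0.05],   # nighttime: "low" dominates
                    [0.80, 0.20]])

def step(state: int, hour: int) -> int:
    """One transition of the time-dependent (non-homogeneous) chain."""
    P = P_day if 6 <= hour < 18 else P_night
    return rng.choice(2, p=P[state])

# Simulate 100 days and compare the average harvesting state by time of day.
state, day_states, night_states = 0, [], []
for t in range(24 * 100):
    hour = t % 24
    state = step(state, hour)
    (day_states if 6 <= hour < 18 else night_states).append(state)
```

Averaging the sampled states shows far more "high" harvesting during daytime hours, which is exactly the time-dependence a homogeneous chain cannot capture and an MMDP scheduler would exploit.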
Efficient energy management for the internet of things in smart cities
The drastic increase in urbanization over the past few years requires sustainable, efficient, and smart solutions for transportation, governance, environment, quality of life, and so on. The Internet of Things (IoT) offers many sophisticated and ubiquitous applications for smart cities. The energy demand of IoT applications is increasing as IoT devices continue to grow in both number and requirements. Therefore, smart city solutions must be able to utilize energy efficiently and handle the associated challenges. Energy management is considered a key paradigm for the realization of complex energy systems in smart cities. In this article, we present a brief overview of energy management and its challenges in smart cities. We then provide a unifying framework for energy-efficient optimization and scheduling of IoT-based smart cities. We also discuss energy harvesting in smart cities, a promising solution for extending the lifetime of low-power devices, and its related challenges. We detail two case studies: the first targets energy-efficient scheduling in smart homes, and the second covers wireless power transfer for IoT devices in smart cities. Simulation results for the case studies demonstrate the tremendous impact of energy-efficient scheduling optimization and wireless power transfer on the performance of IoT in smart cities.
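One simple instance of energy-efficient scheduling in a smart home is shifting a flexible appliance load to the cheapest hours of a day-ahead price signal. The function and price values below are a minimal illustrative sketch, not the article's scheduling framework.

```python
# Minimal sketch (illustrative, not the article's scheme): greedily place a
# flexible load of `need` hours into the cheapest hours of a price forecast.

def schedule_load(prices, need):
    """Return the `need` cheapest hour indices, sorted chronologically."""
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])[:need]
    return sorted(cheapest)

# Hypothetical day-ahead prices in $/kWh for six hours.
prices = [0.30, 0.12, 0.10, 0.11, 0.25, 0.40]
print(schedule_load(prices, 3))  # → [1, 2, 3]
```

A full smart-home scheduler would add constraints such as appliance deadlines, contiguous run requirements, and peak-power limits; the greedy rule only illustrates the cost-shifting principle.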