A Cross-layer Perspective on Energy Harvesting Aided Green Communications over Fading Channels
We consider the power allocation of the physical layer and the buffer delay
of the upper application layer in energy harvesting green networks. The total
power required for reliable transmission includes the transmission power and
the circuit power. The harvested power (which is stored in a battery) and the
grid power constitute the power resource. The uncertainty of data generated
from the upper layer, the intermittence of the harvested energy, and the
variation of the fading channel are taken into account and described as
independent Markov processes. In each transmission, the transmitter decides on
the transmission rate and the power drawn from the battery; the rest of the
required power is supplied by the power grid. The objective is to
find an allocation sequence of transmission rate and battery power to minimize
the long-term average buffer delay under the average grid power constraint. A
stochastic optimization problem is formulated accordingly to find such a
sequence of transmission rates and battery powers. Furthermore, the optimization
problem is reformulated as a constrained MDP problem whose policy is a
two-dimensional vector with the transmission rate and the power allocation of
the battery as its elements. We prove that the optimal policy of the
constrained MDP can be obtained by solving the unconstrained MDP. Then we focus
on the analysis of the unconstrained average-cost MDP. The structural
properties of the average optimal policy are derived. Moreover, we discuss the
relations between elements of the two-dimensional policy. Next, based on the
theoretical analysis, the algorithm to find the constrained optimal policy is
presented for the finite state space scenario. In addition, low-complexity
heuristic policies are given for the general state space. Finally, simulations
are performed under these policies to demonstrate their effectiveness.
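The paper's key reduction, solving the constrained MDP through an unconstrained one, can be illustrated with a Lagrangian scalarization: the grid-power constraint is folded into the per-stage cost with a multiplier, and the resulting unconstrained MDP is solved by value iteration. The sketch below uses a made-up toy model (a 3-level buffer, rate actions, quadratic power) that is not from the paper; only the structure of the reduction is taken from the abstract.

```python
import numpy as np

# Toy illustration (hypothetical dynamics, not the paper's model) of solving
# a constrained MDP via its Lagrangian: per-stage cost = delay + lam * grid
# power, then ordinary value iteration on the unconstrained problem.
# States: buffer backlog b in {0,1,2}; actions: transmit rate a in {0,1,2}.
n_states, n_actions = 3, 3

def step_cost(b, a, lam):
    a = min(a, b)                 # cannot send more packets than are queued
    delay = b - a                 # packets left waiting this slot
    grid_power = a ** 2           # assumed convex power-rate relation
    return delay + lam * grid_power

def transition(b, a):
    a = min(a, b)
    probs = np.zeros(n_states)    # one new packet arrives w.p. 0.5 (i.i.d.)
    probs[min(b - a, n_states - 1)] += 0.5
    probs[min(b - a + 1, n_states - 1)] += 0.5
    return probs

def solve_unconstrained(lam, gamma=0.95, iters=500):
    """Discounted value iteration on the Lagrangian cost; returns a policy."""
    V = np.zeros(n_states)
    for _ in range(iters):
        Q = np.array([[step_cost(b, a, lam) + gamma * transition(b, a) @ V
                       for a in range(n_actions)] for b in range(n_states)])
        V = Q.min(axis=1)
    return Q.argmin(axis=1)

# A larger multiplier penalizes grid power more, so the policy transmits less;
# sweeping lam (e.g. by bisection) would enforce the average-power constraint.
eager = solve_unconstrained(lam=0.0)     # delay-only cost: always drain buffer
frugal = solve_unconstrained(lam=100.0)  # power dominates: never transmit
```

In the paper's setting the multiplier sweep plays the role of the proof that the constrained optimal policy is recovered from the unconstrained MDP; here it is only sketched for a discounted toy problem.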
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes are to make
optimized decisions from a set of accessible strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
Discrete-time controlled Markov processes with average cost criterion: a survey
This work is a survey of the average cost control problem for discrete-time Markov processes, putting together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and covers a variety of methodologies for finding and characterizing optimal policies. The authors include a brief historical perspective of the research efforts in this area, compile a substantial though not exhaustive bibliography, and identify several important questions that remain open to investigation.
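One classical method from this literature, for the finite state space end of the range the survey covers, is relative value iteration for average-cost MDPs. The sketch below uses a made-up two-state chain; the transition matrices and costs are illustrative, not from the survey.

```python
import numpy as np

# Relative value iteration for an average-cost MDP on a finite state space.
# P[a, s, s'] are transition matrices and C[a, s] per-stage costs for
# actions a = 0, 1 over states s = 0, 1 (numbers made up for illustration).
P = np.array([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.2, 0.8], [0.7, 0.3]],
])
C = np.array([
    [1.0, 3.0],
    [2.0, 0.5],
])

def relative_value_iteration(P, C, ref_state=0, iters=2000):
    nA, nS, _ = P.shape
    h = np.zeros(nS)                  # relative value (bias) function
    g = 0.0
    for _ in range(iters):
        Q = C + P @ h                 # Q[a, s] = c(s, a) + sum_s' P[a,s,s'] h[s']
        h_new = Q.min(axis=0)
        g = h_new[ref_state]          # gain estimate: long-run average cost
        h = h_new - g                 # pin h[ref_state] = 0 so h stays bounded
    return g, h, Q.argmin(axis=0)

gain, bias, policy = relative_value_iteration(P, C)
```

At convergence the pair (gain, bias) satisfies the average-cost optimality equation g + h(s) = min_a [c(s, a) + sum_s' P(s'|s, a) h(s')], which is the characterization of optimal policies the unichain finite-state theory in this literature rests on.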