Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes must make
optimized decisions from a set of accessible strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool for developing adaptive
algorithms and protocols for WSNs. Furthermore, various solution methods are
discussed and compared to serve as a guide for using MDPs in WSNs.
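The core of the surveyed framework can be illustrated with a minimal value-iteration solver. The two-state energy-management MDP below (battery low/high, actions sleep/sense, and all transition probabilities and rewards) is an illustrative assumption, not a model taken from the survey:

```python
import numpy as np

# Toy MDP: a sensor node chooses to "sleep" or "sense" (states, actions,
# transitions, and rewards are illustrative; the survey covers far richer
# WSN models). States: 0 = low battery, 1 = high battery.
actions = ["sleep", "sense"]
# P[a][s][s'] = transition probability, R[a][s] = expected immediate reward.
P = {
    "sleep": np.array([[0.9, 0.1],   # sleeping tends to recharge
                       [0.1, 0.9]]),
    "sense": np.array([[1.0, 0.0],   # sensing drains the battery
                       [0.7, 0.3]]),
}
R = {"sleep": np.array([0.0, 0.0]),
     "sense": np.array([-1.0, 2.0])}  # sensing only pays off on high battery
gamma = 0.95  # discount factor

# Value iteration: V <- max_a (R_a + gamma * P_a V) until convergence.
V = np.zeros(2)
for _ in range(10000):
    Q = np.stack([R[a] + gamma * P[a] @ V for a in actions])
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = [actions[i] for i in Q.argmax(axis=0)]
print("V* =", V.round(2), "policy =", policy)
```

Under these assumed rewards, the optimal policy is to sleep when the battery is low and sense when it is high, which is the adaptive-decision pattern the survey describes.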
Decentralized Adaptive Helper Selection in Multi-channel P2P Streaming Systems
In Peer-to-Peer (P2P) multichannel live streaming, helper peers with surplus
bandwidth resources act as micro-servers to compensate for server deficiencies
in balancing resources across different channel overlays. With the deployment
of a helper level between the server and peers, optimizing the user/helper
topology becomes challenging, since well-known reciprocity-based choking
algorithms cannot be applied due to the one-directional nature of video
streaming from helpers to users. Because of the selfish behavior of peers and
the lack of a central authority among them, helper selection requires
coordination. In this paper, we design a distributed online helper selection
mechanism that adapts to the supply and demand patterns of various video
channels. Our solution to strategic peers' exploitation of helpers' shared
resources is to guarantee convergence to correlated equilibria (CE) among the
helper selection strategies. Online convergence to the set of CE is achieved
through a regret-tracking algorithm that tracks the equilibrium under the
stochastic dynamics of helpers' bandwidth. The resulting CE can guide the
selection of proper cooperation policies. Simulation results demonstrate that
our algorithm achieves good convergence, balanced load distribution across
helpers, and sustainable streaming rates for peers.
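The convergence idea behind the abstract can be sketched with plain regret matching (Hart and Mas-Colell), whose empirical play converges to the set of correlated equilibria; the paper's regret-tracking algorithm is a variant built for nonstationary bandwidth, and the two-peer, two-helper game below with its payoffs is an illustrative stand-in, not the paper's model:

```python
import random

# Two peers each pick one of two helpers; sharing a helper halves the
# payoff (assumed numbers, standing in for a helper-selection game).
def payoff(mine, other):
    return 1.0 if mine != other else 0.5

def regret_matching(rounds=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0, 0.0], [0.0, 0.0]]  # per-player cumulative regret
    counts = [[0, 0], [0, 0]]           # empirical play frequencies
    for _ in range(rounds):
        acts = []
        for p in range(2):
            # Play each action with probability proportional to its
            # positive regret; mix uniformly if no regret is positive.
            pos = [max(r, 0.0) for r in regrets[p]]
            total = sum(pos)
            if total > 0:
                a = 0 if rng.random() * total < pos[0] else 1
            else:
                a = rng.randrange(2)
            acts.append(a)
        for p in range(2):
            mine, other = acts[p], acts[1 - p]
            got = payoff(mine, other)
            # Regret of not having played each alternative action.
            for alt in range(2):
                regrets[p][alt] += payoff(alt, other) - got
            counts[p][mine] += 1
    return regrets, counts

regrets, counts = regret_matching()
print(counts)  # empirical frequencies; average regret per round vanishes
```

The guarantee is that each player's average positive regret shrinks toward zero as rounds accumulate, which is exactly the sense in which play converges to the CE set.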
Advances in Reinforcement Learning
Reinforcement Learning (RL) is a very dynamic area in terms of both theory and application. This book brings together many different aspects of current research in the rapidly growing fields associated with RL, which have produced a wide variety of learning algorithms for different applications. Across 24 chapters, it covers a broad range of topics in RL and their application in autonomous systems. One set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: game theory, multi-agent theory, robotics, networking technologies, vehicular navigation, medicine, and industrial logistics.