Scaling Configuration of Energy Harvesting Sensors with Reinforcement Learning
With the advent of the Internet of Things (IoT), an increasing number of
energy harvesting methods are being used to supplement or supplant battery-based
sensors. Energy harvesting sensors need to be configured according to the
application, hardware, and environmental conditions to maximize their
usefulness. As of today, the configuration of sensors is either manual or
heuristics based, requiring valuable domain expertise. Reinforcement learning
(RL) is a promising approach to automate configuration and efficiently scale
IoT deployments, but it is not yet adopted in practice. We propose solutions to
bridge this gap: reduce the training phase of RL so that nodes are operational
within a short time after deployment and reduce the computational requirements
to scale to large deployments. We focus on configuration of the sampling rate
of indoor solar-panel-based energy harvesting sensors. We created a simulator
based on 3 months of data collected from 5 sensor nodes subject to different
lighting conditions. Our simulation results show that RL can effectively learn
energy availability patterns and configure the sampling rate of the sensor
nodes to maximize the sensing data while ensuring that energy storage is not
depleted. The nodes can be operational within the first day by using our
methods. We show that it is possible to reduce the number of RL policies by
using a single policy for nodes that share similar lighting conditions.
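As a rough illustration of the approach this abstract describes, a tabular Q-learning loop can learn a sampling-rate policy over discretized energy levels. The state discretization, energy dynamics, and reward shape below are invented for the sketch, not the paper's simulator:

```python
import random

# Toy sketch: a node learns a sampling rate per discretized energy level.
# States, actions, dynamics and rewards are illustrative assumptions.
LEVELS = 5                  # discretized energy-storage levels
RATES = [1, 5, 10]          # candidate sampling rates (samples/hour)

def step(level, rate, harvest):
    """One hour of toy dynamics: harvesting charges, sampling drains."""
    nxt = max(0, min(LEVELS - 1, level + harvest - rate // 5))
    reward = rate if nxt > 0 else -100   # penalize depleting storage
    return nxt, reward

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(LEVELS) for a in RATES}
    for _ in range(episodes):
        s = LEVELS - 1                   # start with a full buffer
        for _ in range(24):              # one simulated day
            a = (rng.choice(RATES) if rng.random() < eps
                 else max(RATES, key=lambda r: q[(s, r)]))
            s2, r = step(s, a, rng.choice([0, 1]))
            best_next = max(q[(s2, b)] for b in RATES)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy should back off the rate when storage is nearly empty.
rate_when_low = max(RATES, key=lambda r: q[(1, r)])
rate_when_full = max(RATES, key=lambda r: q[(4, r)])
```

The -100 depletion penalty is what teaches the policy to trade sensing data against buffer survival, mirroring the objective stated in the abstract.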
Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey
Wireless sensor networks (WSNs) consist of autonomous and resource-limited
devices. The devices cooperate to monitor one or more physical phenomena within
an area of interest. WSNs operate as stochastic systems because of randomness
in the monitored environments. For long service time and low maintenance cost,
WSNs require adaptive and robust methods to address data exchange, topology
formulation, resource and power optimization, sensing coverage and object
detection, and security challenges. In these problems, sensor nodes are to make
optimized decisions from a set of accessible strategies to achieve design
goals. This survey reviews numerous applications of the Markov decision process
(MDP) framework, a powerful decision-making tool to develop adaptive algorithms
and protocols for WSNs. Furthermore, various solution methods are discussed and
compared to serve as a guide for using MDPs in WSNs.
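To make the MDP machinery the survey covers concrete, here is a minimal value-iteration sketch on a toy two-state sensing problem; the states, transition probabilities, and rewards are invented for illustration and do not come from the survey:

```python
# Toy two-state sensing MDP solved by value iteration; the transition
# probabilities and rewards below are illustrative assumptions.
STATES = [0, 1]                      # 0: low energy, 1: high energy
ACTIONS = ["sleep", "sense"]
GAMMA = 0.9

# P[s][a] -> list of (probability, next_state, reward)
P = {
    0: {"sleep": [(1.0, 1, 0.0)],                    # recharge
        "sense": [(0.8, 0, -1.0), (0.2, 1, 1.0)]},   # risky when low
    1: {"sleep": [(1.0, 1, 0.0)],
        "sense": [(0.6, 1, 1.0), (0.4, 0, 1.0)]},
}

def q_value(s, a, V):
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in P[s][a])

def value_iteration(tol=1e-9):
    V = {s: 0.0 for s in STATES}
    while True:
        V2 = {s: max(q_value(s, a, V) for a in ACTIONS) for s in STATES}
        if max(abs(V2[s] - V[s]) for s in STATES) < tol:
            return V2
        V = V2

V = value_iteration()
policy = {s: max(ACTIONS, key=lambda a: q_value(s, a, V)) for s in STATES}
```

Here the optimal policy sleeps to recharge when energy is low and senses when energy is high, which is the qualitative shape of many of the WSN decision problems the survey reviews.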
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Machine Learning in Wireless Sensor Networks for Smart Cities: A Survey
Artificial intelligence (AI) and machine learning (ML) techniques have huge potential to efficiently manage the automated operation of internet of things (IoT) nodes deployed in smart cities. In smart cities, the major IoT applications are smart traffic monitoring, smart waste management, smart buildings and patient healthcare monitoring. Small-size IoT nodes based on the low-power Bluetooth (IEEE 802.15.1) standard and the wireless sensor network (WSN) (IEEE 802.15.4) standard are generally used to transmit data to a remote location via gateways. WSN-based IoT (WSN-IoT) design problems include network coverage and connectivity issues, energy consumption, bandwidth requirements, network lifetime maximization, communication protocols and state-of-the-art infrastructure. In this paper, the authors propose machine learning methods as an optimization tool for regular WSN-IoT nodes deployed in smart city applications. To the best of the authors' knowledge, this is the first in-depth literature survey of all ML techniques in the field of low-power WSN-IoT for smart cities. The results of this unique survey show that supervised learning algorithms have been the most widely used (61%), compared to reinforcement learning (27%) and unsupervised learning (12%), for smart city applications.
Decentralized Delay Optimal Control for Interference Networks with Limited Renewable Energy Storage
In this paper, we consider delay minimization for interference networks with
renewable energy sources, where the transmission power of a node comes from
both conventional utility power (AC power) and a renewable energy source. We
assume the transmission power of each node is a function of the local channel
state, local data queue state and local energy queue state only. We then
consider two delay optimization formulations, namely the decentralized
partially observable Markov decision process (DEC-POMDP) and Non-cooperative
partially observable stochastic game (POSG). In DEC-POMDP formulation, we
derive a decentralized online learning algorithm to determine the control
actions and Lagrangian multipliers (LMs) simultaneously, based on the policy
gradient approach. Under some mild technical conditions, the proposed
decentralized policy gradient algorithm converges almost surely to a local
optimal solution. On the other hand, in the non-cooperative POSG formulation,
the transmitter nodes are non-cooperative. We extend the decentralized policy
gradient solution and establish the technical proof for almost-sure convergence
of the learning algorithms. In both cases, the solutions are very robust to
model variations. Finally, the delay performance of the proposed solutions is
compared with conventional baseline schemes for interference networks, and it
is illustrated that substantial delay performance gains and energy savings can
be achieved.
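A heavily stripped-down, single-node sketch of the policy-gradient idea behind this work is REINFORCE on a softmax policy over two transmit powers. The reward below is a toy stand-in for negative queueing delay, and the decentralization and Lagrange-multiplier machinery of the paper are omitted:

```python
import math
import random

# REINFORCE on a softmax policy over two transmit powers; the reward,
# standing in for negative queueing delay, is an invented toy model.
def softmax(theta):
    m = max(theta)
    exps = [math.exp(t - m) for t in theta]
    z = sum(exps)
    return [x / z for x in exps]

def reward(action, rng):
    # Assumption: higher power (action 1) drains the queue faster,
    # hence yields a higher (less-delayed) reward on average.
    return (1.0 if action == 1 else 0.2) + rng.gauss(0.0, 0.1)

def train(steps=5000, lr=0.05, seed=0):
    rng = random.Random(seed)
    theta = [0.0, 0.0]                       # policy parameters
    for _ in range(steps):
        p = softmax(theta)
        a = 0 if rng.random() < p[0] else 1
        r = reward(a, rng)
        for i in range(2):                   # grad log pi = 1[i=a] - p_i
            theta[i] += lr * r * ((1.0 if i == a else 0.0) - p[i])
    return softmax(theta)

p = train()
```

The stochastic gradient ascent on the policy parameters is the same mechanism the paper's decentralized algorithm uses per node, there driven by local channel, data-queue and energy-queue observations.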
Adaptive Algorithms for Batteryless LoRa-Based Sensors
Ambient energy-powered sensors are becoming increasingly crucial for the sustainability of the Internet-of-Things (IoT). In particular, batteryless sensors are a cost-effective solution that require no battery maintenance, last longer and have greater weatherproofing properties due to the lack of a battery access panel. In this work, we study adaptive transmission algorithms to improve the performance of batteryless IoT sensors based on the LoRa protocol. First, we characterize the device power consumption during sensor measurement and/or transmission events. Then, we consider different scenarios and dynamically tune the most critical network parameters, such as inter-packet transmission time, data redundancy and packet size, to optimize the operation of the device. We design appropriate capacitor-based storage, considering a renewable energy source (e.g., photovoltaic panel), and we analyze the probability of energy failures by exploiting both theoretical models and real energy traces. The results can be used as feedback to re-design the device to have an appropriate amount of energy storage and meet certain reliability constraints. Finally, a cost analysis is also provided for the energy characteristics of our system, taking into account the dimensioning of both the capacitor and solar panel.
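One way to picture the energy-failure analysis is a Monte-Carlo sketch over a synthetic day: simulate a capacitor buffer under harvest pulses and a constant sensing/transmission draw, and count the days on which the buffer empties. Every constant below (harvest pulse size, duty-cycle draw, slot length) is an assumption, not a measured trace from the paper:

```python
import random

# Monte-Carlo sketch of energy-failure probability vs. capacitor size.
# Harvest pulses, duty-cycle draw, and slot length are all assumptions.
def failure_probability(cap_j, trials=200, seed=1):
    """Fraction of simulated days on which the buffer hits empty."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        e, failed = cap_j / 2, False        # start half charged
        for slot in range(1440):            # one day at 1-minute slots
            if slot < 720 and rng.random() < 0.75:
                e = min(cap_j, e + 0.002)   # daytime harvest pulse (J)
            e -= 0.0009                     # sensing + LoRa duty cycle (J)
            if e <= 0:
                failed = True               # record, keep simulating
        failures += failed
    return failures / trials

p_small = failure_probability(0.3)          # undersized capacitor
p_large = failure_probability(1.0)          # comfortably sized capacitor
```

The key effect the sketch reproduces is the one the abstract exploits for dimensioning: the capacitor must carry the node through the harvest-free night, so failure probability drops sharply once its size exceeds the overnight energy deficit.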
Energy Sharing for Multiple Sensor Nodes with Finite Buffers
We consider the problem of finding optimal energy sharing policies that
maximize the network performance of a system comprising multiple sensor
nodes and a single energy harvesting (EH) source. Sensor nodes periodically
sense the random field and generate data, which is stored in the corresponding
data queues. The EH source harnesses energy from ambient energy sources and the
generated energy is stored in an energy buffer. Sensor nodes receive energy for
data transmission from the EH source. The EH source has to efficiently share
the stored energy among the nodes in order to minimize the long-run average
delay in data transmission. We formulate the problem of energy sharing between
the nodes in the framework of average cost infinite-horizon Markov decision
processes (MDPs). We develop efficient energy sharing algorithms, namely a
Q-learning algorithm with exploration mechanisms based on the ε-greedy
method as well as upper confidence bound (UCB). We extend these algorithms by
incorporating state and action space aggregation to tackle state-action space
explosion in the MDP. We also develop a cross entropy based method that
incorporates policy parameterization in order to find near optimal energy
sharing policies. Through simulations, we show that our algorithms yield energy
sharing policies that outperform the heuristic greedy method.
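The two exploration rules this abstract names, ε-greedy and UCB, can be contrasted on a toy multi-armed bandit; the arm means below are invented and stand in for the paper's much larger energy-sharing MDP:

```python
import math
import random

# Toy bandit comparing the two exploration rules named in the abstract;
# the arm means are illustrative assumptions.
MEANS = [0.2, 0.5, 0.8]          # true (hidden) mean reward per arm

def eps_greedy(q, counts, t, rng, eps=0.1):
    if rng.random() < eps:
        return rng.randrange(len(q))         # explore uniformly
    return max(range(len(q)), key=lambda a: q[a])

def ucb(q, counts, t, rng, c=2.0):
    for a, n in enumerate(counts):
        if n == 0:
            return a                         # pull every arm once first
    return max(range(len(q)),
               key=lambda a: q[a] + c * math.sqrt(math.log(t) / counts[a]))

def run(select, steps=3000, seed=0):
    rng = random.Random(seed)
    q, counts, total = [0.0] * 3, [0] * 3, 0.0
    for t in range(1, steps + 1):
        a = select(q, counts, t, rng)
        r = rng.gauss(MEANS[a], 0.1)
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]       # incremental sample mean
        total += r
    return total / steps, counts

avg_eps, counts_eps = run(eps_greedy)
avg_ucb, counts_ucb = run(ucb)
```

ε-greedy explores at a fixed rate forever, while UCB concentrates exploration on arms whose value estimates are still uncertain; both should identify the best arm here, and the same trade-off drives the exploration mechanisms the paper evaluates.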