Energy Harvesting Wireless Communications: A Review of Recent Advances
This article summarizes recent contributions in the broad area of energy
harvesting wireless communications. In particular, we provide the current state
of the art for wireless networks composed of energy harvesting nodes, ranging
from information-theoretic performance limits to transmission scheduling
policies, resource allocation, medium access and networking issues. The
emerging related area of energy transfer for self-sustaining energy harvesting
wireless networks is considered in detail covering both energy cooperation
aspects and simultaneous energy and information transfer. Various potential
models with energy harvesting nodes at different network scales are reviewed as
well as models for energy consumption at the nodes.
Comment: To appear in the IEEE Journal on Selected Areas in Communications
(Special Issue: Wireless Communications Powered by Energy Harvesting and
Wireless Energy Transfer)
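A recurring constraint in the scheduling problems this survey covers is energy causality: a node can never spend more energy than it has harvested so far. A minimal sketch of that feasibility check, where the slot model and the numbers are illustrative assumptions rather than anything from the survey:

```python
from itertools import accumulate

def is_energy_causal(harvest, power, slot_len=1.0):
    """Check energy causality: in every slot, the total energy spent so
    far must not exceed the total energy harvested so far."""
    spent = accumulate(p * slot_len for p in power)
    gained = accumulate(harvest)
    return all(s <= g + 1e-9 for s, g in zip(spent, gained))

# Deferring transmission until energy has arrived keeps a schedule feasible:
print(is_energy_causal([2, 0, 3], [1, 1, 1]))  # True:  1<=2, 2<=2, 3<=5
print(is_energy_causal([2, 0, 3], [3, 0, 0]))  # False: 3 > 2 in slot 1
```

Scheduling policies in this literature typically optimize throughput or completion time subject to exactly this cumulative constraint, plus a battery-capacity cap.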
Optimal Policy Derivation for Transmission Duty-Cycle Constrained LPWAN
Low-power wide-area network (LPWAN) technologies enable Internet of Things (IoT) devices to communicate efficiently and robustly over long distances, making them especially suited to industrial environments. However, the stringent regulations on the usage of certain industrial, scientific, and medical bands in many countries in which LPWANs operate limit the amount of time IoT motes can occupy the shared bands. This is particularly challenging in industrial scenarios, where failing to report some detected events might result in the failure of critical assets. To alleviate this, and by mathematically modeling LPWAN-based IoT motes, we have derived optimal transmission policies that maximize the number of reported events (prioritized by their importance) while still complying with current regulations. The proposed solution has been customized for two widely known LPWAN technologies: 1) LoRa and 2) Sigfox. Analytical results reveal that our solution is feasible and performs remarkably close to the theoretical limit for a wide range of network activity patterns.
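The duty-cycle restriction behind this work can be sketched as a sliding-window airtime budget. The class name, the 1% figure, and the one-hour window below are illustrative assumptions (some EU 868 MHz sub-bands allow roughly 1%), not the paper's exact model:

```python
from collections import deque

class DutyCycledMote:
    """Toy mote that transmits only when a regulatory duty cycle
    (e.g. 1% of a sliding one-hour window) allows it."""
    def __init__(self, duty_cycle=0.01, window=3600.0):
        self.duty_cycle = duty_cycle
        self.window = window
        self.log = deque()  # (start_time, airtime) of past transmissions

    def _airtime_used(self, now):
        # Drop transmissions that have left the sliding window.
        while self.log and self.log[0][0] <= now - self.window:
            self.log.popleft()
        return sum(a for _, a in self.log)

    def try_report(self, now, airtime):
        """Transmit only if the new airtime keeps us within budget."""
        budget = self.duty_cycle * self.window  # e.g. 36 s per hour
        if self._airtime_used(now) + airtime <= budget:
            self.log.append((now, airtime))
            return True
        return False

mote = DutyCycledMote()
print(mote.try_report(0.0, 30.0))     # True: 30 s of the 36 s budget
print(mote.try_report(10.0, 10.0))    # False: would exceed 36 s
print(mote.try_report(3601.0, 10.0))  # True: first packet left the window
```

The paper's contribution is deciding, under this kind of budget, which events to report so that the expected importance-weighted count is maximized; the check above is only the feasibility side of that decision.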
Spectrum Coordination in Energy Efficient Cognitive Radio Networks
Device coordination in open spectrum systems is a challenging problem,
particularly since users experience varying spectrum availability over time and
location. In this paper, we propose a game theoretical approach that allows
cognitive radio pairs, namely the primary user (PU) and the secondary user
(SU), to update their transmission powers and frequencies simultaneously.
Specifically, we address a Stackelberg game model in which individual users
attempt to hierarchically access the wireless spectrum while maximizing
their energy efficiency. A thorough analysis of the existence, uniqueness and
characterization of the Stackelberg equilibrium is conducted. In particular, we
show that a spectrum coordination naturally occurs when both actors in the
system decide sequentially about their powers and their transmitting carriers.
As a result, spectrum sensing in such a situation turns out to be a simple
detection of the presence/absence of a transmission on each sub-band. We also
show that when users experience very different channel gains on their two
carriers, they may choose to transmit on the same carrier at the Stackelberg
equilibrium, as the resulting energy-efficiency gain outweighs the
interference degradation caused by their mutual transmissions. Then, we provide an
algorithmic analysis on how the PU and the SU can reach such a spectrum
coordination using an appropriate learning process. We validate our results
through extensive simulations and compare the proposed algorithm to some
typical scenarios including the non-cooperative case and the
throughput-based-utility systems. Typically, it is shown that the proposed
Stackelberg decision approach optimizes the energy efficiency while still
maximizing the throughput at the equilibrium.
Comment: 12 pages, 10 figures, to appear in IEEE Transactions on Vehicular
Technology
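The leader-follower structure described above can be illustrated with a brute-force toy: the SU best-responds in energy efficiency to each PU choice of power and carrier, and the PU optimizes while anticipating that response. All channel gains, the power grid, and the single cross-gain h are made-up values, and the exhaustive search stands in for the paper's closed-form equilibrium analysis:

```python
import math

def rate(p, g, interference, noise=1.0):
    # Shannon-style rate on one carrier (constants dropped).
    return math.log2(1 + p * g / (noise + interference))

def ee(p, g, interference):
    return rate(p, g, interference) / p  # energy efficiency: bits per joule

def follower_best(leader_p, leader_c, g_su, h, powers, carriers=(0, 1)):
    # SU best-responds in energy efficiency to the PU's (power, carrier).
    best = None
    for c in carriers:
        interf = h * leader_p if c == leader_c else 0.0
        for p in powers:
            cand = (ee(p, g_su[c], interf), p, c)
            if best is None or cand > best:
                best = cand
    return best  # (EE, power, carrier)

def stackelberg(g_pu, g_su, h, powers):
    # PU (leader) picks power and carrier anticipating the SU's response.
    best = None
    for c in (0, 1):
        for p in powers:
            _, sp, sc = follower_best(p, c, g_su, h, powers)
            interf = h * sp if sc == c else 0.0
            cand = (ee(p, g_pu[c], interf), p, c, sp, sc)
            if best is None or cand > best:
                best = cand
    return best  # (leader EE, leader p, leader carrier, SU p, SU carrier)

powers = [0.5, 1.0, 2.0, 4.0]
eq = stackelberg(g_pu=[4.0, 1.0], g_su=[1.0, 4.0], h=0.5, powers=powers)
print(eq)  # with these made-up gains the two users end up on different carriers
```

With mirrored gains, as here, the equilibrium spreads the users across carriers; the regime the abstract highlights, where both users share a carrier, would require strongly asymmetric gains across the two carriers.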
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have a substantial potential in terms of supporting
a broad range of complex compelling applications both in military and civilian
fields, where the users are able to enjoy high-rate, low-latency, low-cost and
reliable information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of the
complex heterogeneous nature of the network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big data
analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), the Internet of Things (IoT), machine-to-machine
(M2M) networks, and so on. This article aims to help readers understand the
motivation and methodology of the various ML algorithms, so that they can be
invoked for hitherto unexplored services and scenarios of future wireless
networks.
Comment: 46 pages, 22 figures
Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions
The ever-increasing number of resource-constrained Machine-Type Communication
(MTC) devices is leading to the critical challenge of fulfilling diverse
communication requirements in dynamic and ultra-dense wireless environments.
Among the application scenarios that 5G and beyond cellular networks are
expected to support, namely enhanced Mobile Broadband (eMBB), massive
Machine-Type Communications (mMTC) and Ultra-Reliable Low-Latency
Communications (URLLC), mMTC brings the
unique technical challenge of supporting a huge number of MTC devices, which is
the main focus of this paper. The related challenges include QoS provisioning,
handling highly dynamic and sporadic MTC traffic, huge signalling overhead and
Radio Access Network (RAN) congestion. In this regard, this paper aims to
identify and analyze the involved technical issues, to review recent advances,
to highlight potential solutions and to propose new research directions. First,
starting with an overview of mMTC features and QoS provisioning issues, we
present the key enablers for mMTC in cellular networks. Along with the
highlights on the inefficiency of the legacy Random Access (RA) procedure in
the mMTC scenario, we then present the key features and channel access
mechanisms in the emerging cellular IoT standards, namely, LTE-M and NB-IoT.
Subsequently, we present a framework for the performance analysis of
transmission scheduling with the QoS support along with the issues involved in
short data packet transmission. Next, we provide a detailed overview of the
existing and emerging solutions towards addressing the RAN congestion problem, and
then identify potential advantages, challenges and use cases for the
applications of emerging Machine Learning (ML) techniques in ultra-dense
cellular networks. Out of several ML techniques, we focus on the application of
a low-complexity Q-learning approach in mMTC scenarios. Finally, we discuss
some open research challenges and promising future research directions.
Comment: 37 pages, 8 figures, 7 tables, submitted for possible future
publication in IEEE Communications Surveys and Tutorials
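The kind of low-complexity, stateless Q-learning often considered for random-access slot selection can be sketched as below: each device keeps one Q-value per RA slot, and collisions push it toward less contended slots. The reward scheme, parameters, and frame model are illustrative assumptions, not the paper's exact algorithm:

```python
import random

def q_learning_ra(n_devices=4, n_slots=4, frames=2000, alpha=0.1,
                  eps=0.1, seed=0):
    """Stateless Q-learning for RA slot selection: Q[d][s] estimates the
    reward device d gets from contending in slot s."""
    rng = random.Random(seed)
    Q = [[0.0] * n_slots for _ in range(n_devices)]
    for _ in range(frames):
        # Epsilon-greedy slot choice per device.
        choices = []
        for q in Q:
            if rng.random() < eps:
                choices.append(rng.randrange(n_slots))
            else:
                choices.append(max(range(n_slots), key=q.__getitem__))
        # Reward +1 on a singleton slot (success), -1 on a collision.
        for d, s in enumerate(choices):
            r = 1.0 if choices.count(s) == 1 else -1.0
            Q[d][s] += alpha * (r - Q[d][s])
    # Greedy policy after learning; usually settles on distinct slots.
    return [max(range(n_slots), key=q.__getitem__) for q in Q]

slots = q_learning_ra()
print(slots)
```

The appeal for mMTC is that each device stores only n_slots numbers and needs no coordination beyond observing its own success or collision, which is why surveys in this space single out Q-learning among the heavier ML techniques.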