Deep Reinforcement Learning for Real-Time Optimization in NB-IoT Networks
NarrowBand-Internet of Things (NB-IoT) is an emerging cellular-based
technology that offers a range of flexible configurations for massive IoT radio
access from groups of devices with heterogeneous requirements. A configuration
specifies the amount of radio resource allocated to each group of devices for
random access and for data transmission. Assuming no knowledge of the traffic
statistics, there exists an important challenge in "how to determine the
configuration that maximizes the long-term average number of served IoT devices
at each Transmission Time Interval (TTI) in an online fashion". Given the
complexity of searching for the optimal configuration, we first develop
real-time configuration selection based on tabular Q-learning (tabular-Q),
Linear Approximation based Q-learning (LA-Q), and Deep Neural Network based
Q-learning (DQN) in the single-parameter single-group scenario. Our results
show that the proposed reinforcement learning based approaches considerably
outperform the conventional heuristic approaches based on load estimation
(LE-URC) in terms of the number of served IoT devices. This result also
indicates that LA-Q and DQN are good alternatives to tabular-Q, achieving
almost the same performance with much less training time. We further advance
LA-Q and DQN via Actions Aggregation (AA-LA-Q and AA-DQN) and via Cooperative
Multi-Agent learning (CMA-DQN) for the multi-parameter multi-group scenario,
thereby solving the problem that Q-learning agents do not converge in
high-dimensional configurations. In this scenario, the superiority of the
proposed Q-learning approaches over the conventional LE-URC approach
significantly improves as the configuration dimensions increase, and the
CMA-DQN approach outperforms the other approaches in both throughput and
training efficiency.
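As an illustration of the tabular-Q baseline described above, the sketch below runs epsilon-greedy Q-learning over a discrete set of configurations. The environment interface, the use of the number of served devices as the per-TTI reward, and all hyperparameters are assumptions for illustration; the paper's exact state and reward definitions are not reproduced here.

```python
import random
from collections import defaultdict

def tabular_q_configuration(env_step, configs, episodes=2000,
                            alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over a discrete set of radio configurations.

    env_step(state, config) -> (reward, next_state); the reward is taken
    here to be the number of IoT devices served in the TTI (an assumption
    for this sketch). Returns the learned Q-table mapping state -> a list
    of action values, one per configuration.
    """
    rng = random.Random(seed)
    Q = defaultdict(lambda: [0.0] * len(configs))
    state = 0
    for _ in range(episodes):
        # epsilon-greedy selection over configuration indices
        if rng.random() < epsilon:
            a = rng.randrange(len(configs))
        else:
            a = max(range(len(configs)), key=lambda i: Q[state][i])
        reward, next_state = env_step(state, configs[a])
        # standard Q-learning temporal-difference update
        td_target = reward + gamma * max(Q[next_state])
        Q[state][a] += alpha * (td_target - Q[state][a])
        state = next_state
    return Q
```

Because the table grows with the number of state-configuration pairs, this approach stops converging in the multi-parameter multi-group setting, which is what motivates the function-approximation (LA-Q, DQN) and multi-agent (CMA-DQN) variants in the abstract.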
Learning and Management for Internet-of-Things: Accounting for Adaptivity and Scalability
Internet-of-Things (IoT) envisions an intelligent infrastructure of networked
smart devices offering task-specific monitoring and control services. The
unique features of IoT include extreme heterogeneity, a massive number of
devices, and unpredictable dynamics partially due to human interaction. These
call for foundational innovations in network design and management. Ideally,
the design should allow efficient adaptation to changing environments and
low-cost implementation scalable to a massive number of devices, subject to
stringent latency constraints. To this end, the overarching goal of this paper is to
outline a unified framework for online learning and management policies in IoT
through joint advances in communication, networking, learning, and
optimization. From the network architecture vantage point, the unified
framework leverages a promising fog architecture that enables smart devices to
have proximity access to cloud functionalities at the network edge, along the
cloud-to-things continuum. From the algorithmic perspective, key innovations
target online approaches adaptive to different degrees of nonstationarity in
IoT dynamics, and their scalable model-free implementation under limited
feedback that motivates blind or bandit approaches. The proposed framework
aspires to offer a stepping stone that leads to systematic designs and analysis
of task-specific learning and management schemes for IoT, along with a host of
new research directions to build on.
Comment: Submitted on June 15 to Proceedings of the IEEE Special Issue on
Adaptive and Scalable Communication Networks
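The "blind or bandit approaches" under limited feedback mentioned above can be illustrated with a minimal epsilon-greedy multi-armed bandit, where each arm might stand for a management decision (e.g. which fog node serves a task) and the only feedback is the observed reward. The arm interface and reward semantics below are assumptions for this sketch, not the paper's framework.

```python
import random

def epsilon_greedy_bandit(arm_rewards, steps=1000, epsilon=0.1, seed=0):
    """Model-free epsilon-greedy bandit: learns from observed rewards only.

    arm_rewards: list of callables, arm_rewards[i]() -> observed reward
    (e.g. throughput or negative latency feedback from an IoT task --
    a hypothetical reward signal for illustration).
    Returns per-arm value estimates and pull counts.
    """
    rng = random.Random(seed)
    k = len(arm_rewards)
    values = [0.0] * k   # incremental sample-average reward estimates
    counts = [0] * k
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(k)                        # explore
        else:
            a = max(range(k), key=lambda i: values[i])  # exploit
        r = arm_rewards[a]()
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]        # running mean
    return values, counts
```

This sample-average rule suits stationary rewards; for the nonstationary IoT dynamics the paper emphasizes, a fixed step size that discounts old observations would be the natural adaptation.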
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks have substantial potential to support a broad range
of complex and compelling applications in both military and civilian fields,
where users can enjoy high-rate, low-latency, low-cost and reliable
information services. Achieving this ambitious goal requires new radio
techniques for adaptive learning and intelligent decision making because of
the complex heterogeneous nature of network structures and wireless services.
Machine learning (ML) algorithms have achieved great success in supporting big
data analytics, efficient parameter estimation and interactive decision making.
Hence, in this article, we review the thirty-year history of ML by elaborating
on supervised learning, unsupervised learning, reinforcement learning and deep
learning. Furthermore, we investigate their employment in the compelling
applications of wireless networks, including heterogeneous networks (HetNets),
cognitive radios (CR), Internet of things (IoT), machine to machine networks
(M2M), and so on. This article aims to assist readers in clarifying the
motivation and methodology of the various ML algorithms, so as to invoke them
for hitherto unexplored services and scenarios of future wireless networks.
Comment: 46 pages, 22 figures
Decentralized Differentially Private Without-Replacement Stochastic Gradient Descent
While machine learning has achieved remarkable results in a wide variety of
domains, the training of models often requires large datasets that may need to
be collected from different individuals. As sensitive information may be
contained in the individual's dataset, sharing training data may lead to severe
privacy concerns. Therefore, there is a compelling need to develop
privacy-aware machine learning methods, for which one effective approach is to
leverage the generic framework of differential privacy. Considering that
stochastic gradient descent (SGD) is one of the most widely adopted methods
for large-scale machine learning problems, two decentralized differentially
private SGD algorithms are proposed in this work. In particular, we focus on
SGD without replacement due to its favorable structure for practical
implementation. In addition, privacy and convergence analyses are provided for
the proposed algorithms. Finally, extensive experiments are performed to
verify the theoretical results and demonstrate the effectiveness of the
proposed algorithms.
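The core mechanism behind such methods can be sketched as without-replacement (shuffled) SGD with per-sample gradient clipping and Gaussian noise. The model (1-D least squares), the hyperparameters, and the noise scale below are illustrative assumptions; the paper's actual decentralized protocol and its calibration of noise to a target (epsilon, delta) guarantee are not reproduced here.

```python
import random

def dp_sgd_without_replacement(data, lr=0.05, clip=1.0, sigma=0.5,
                               epochs=3, seed=0):
    """Differentially private SGD without replacement (illustrative sketch).

    Fits a 1-D linear model y ~ w*x. Each epoch shuffles the dataset and
    visits every sample exactly once (sampling without replacement);
    each per-sample gradient is clipped to magnitude `clip` and Gaussian
    noise with scale `sigma * clip` is added before the update (the
    Gaussian mechanism). `sigma` here is a free parameter -- deriving the
    value needed for a formal privacy guarantee is out of scope.
    """
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        order = list(range(len(data)))
        rng.shuffle(order)                     # without-replacement pass
        for i in order:
            x, y = data[i]
            g = 2.0 * (w * x - y) * x          # squared-loss gradient
            g = max(-clip, min(clip, g))       # clip gradient magnitude
            g += rng.gauss(0.0, sigma * clip)  # Gaussian mechanism
            w -= lr * g
    return w
```

Clipping bounds each sample's influence on the update, so the added noise masks any single individual's contribution; the shuffled single-pass structure is what makes the without-replacement privacy analysis differ from the sampled-with-replacement case.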
A Systematic Review of LPWAN and Short-Range Network using AI to Enhance Internet of Things
Artificial intelligence (AI) has recently seen frequent use, especially in connection with the Internet of Things (IoT). However, IoT devices cannot work alone: they are assisted by Low Power Wide Area Networks (LPWAN) for long-distance communication and Short-Range Networks for short distances. Few reviews have examined how AI can help LPWAN and Short-Range Networks, so the authors took the opportunity to conduct this review. This study aims to systematically review papers on AI for LPWAN and Short-Range Networks that enhance IoT performance, and to discuss results that can be applied within a specific scope. The authors followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, systematically reviewing all study results in support of their objectives and identifying development and related study opportunities. The authors found 79 suitable papers in this systematic review and discussed the papers presented. Several technologies are widely used, notably LPWAN in general, with several papers originating from China; many reports come from recent conferences, and most related papers date from 2020-2021. The study is expected to inspire experimental studies, help readers find relevant scientific papers, and serve as a basis for further reviews.