When Machine Learning Meets Big Data: A Wireless Communication Perspective
We have witnessed an exponential growth in commercial data services, which has led to the 'big data era'. Machine learning, as one of the most promising artificial intelligence tools for analyzing this deluge of data, has been invoked in many research areas in both academia and industry. The aim of this article is twofold. Firstly, we briefly review big data analysis and machine
learning, along with their potential applications in next-generation wireless
networks. The second goal is to invoke big data analysis to predict the
requirements of mobile users and to exploit these predictions for improving the performance of
"social network-aware wireless". More particularly, a unified big data aided
machine learning framework is proposed, which consists of feature extraction,
data modeling and prediction/online refinement. The main benefits of the
proposed framework are that by relying on big data which reflects both the
spectral and other challenging requirements of the users, we can refine the
motivation, problem formulations and methodology of powerful machine learning
algorithms in the context of wireless networks. In order to characterize the
efficiency of the proposed framework, a pair of intelligent practical
applications are provided as case studies: 1) To predict the positioning of
drone-mounted aerial base stations (BSs) according to the specific tele-traffic
requirements by gleaning valuable data from social networks. 2) To predict the
content caching requirements of BSs according to the users' preferences by
mining data from social networks. Finally, open research opportunities are identified for motivating future investigations.
Comment: This article has been accepted by IEEE Vehicular Technology Magazine
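As a rough illustration of the three-stage pipeline described above (feature extraction, data modeling, prediction/online refinement), the following sketch trains a regressor on historical records and then refines it as new observations stream in. It is a minimal sketch under assumed data, not the authors' framework: the record fields (hour, popularity, traffic) and the demand model are hypothetical, and scikit-learn's SGDRegressor stands in for whatever learner the framework would plug in.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor

def extract_features(records):
    """Stage 1: map raw social-network records to scaled numeric features.
    The fields 'hour' and 'popularity' are hypothetical."""
    return np.array([[r["hour"] / 24.0, r["popularity"] / 100.0] for r in records])

# Stage 2: fit an initial model on a historical batch (synthetic demand here).
history = [{"hour": h % 24, "popularity": (h * 7) % 100} for h in range(500)]
for r in history:
    r["traffic"] = 2.0 * r["hour"] / 24.0 + 0.5 * r["popularity"] / 100.0
X = extract_features(history)
y = np.array([r["traffic"] for r in history])
model = SGDRegressor(max_iter=1000, tol=1e-3)
model.fit(X, y)

# Stage 3: predict the demand, then refine online as observations arrive.
new_batch = [{"hour": 18, "popularity": 42, "traffic": 1.7}]
X_new = extract_features(new_batch)
print("predicted demand:", model.predict(X_new))
model.partial_fit(X_new, np.array([r["traffic"] for r in new_batch]))
```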
Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing
With the breakthroughs in deep learning, recent years have witnessed a boom in artificial intelligence (AI) applications and services, spanning from personal assistants to recommendation systems to video/audio surveillance.
More recently, with the proliferation of mobile computing and
Internet-of-Things (IoT), billions of mobile and IoT devices are connected to
the Internet, generating zillions of bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontiers to the network
edge so as to fully unleash the potential of the edge big data. To meet this
demand, edge computing, an emerging paradigm that pushes computing tasks and
services from the network core to the network edge, has been widely recognized
as a promising solution. The resulting new interdiscipline, edge AI or edge intelligence, has begun to receive tremendous interest. However, research on edge intelligence is still in its infancy, and a dedicated
venue for exchanging the recent advances of edge intelligence is highly desired
by both the computer system and artificial intelligence communities. To this
end, we conduct a comprehensive survey of the recent research efforts on edge
intelligence. Specifically, we first review the background and motivation for
artificial intelligence running at the network edge. We then provide an
overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model training and inference at the network
edge. Finally, we discuss future research opportunities on edge intelligence.
We believe that this survey will attract escalating attention, stimulate fruitful discussions, and inspire further research ideas on edge intelligence.
Comment: Zhi Zhou, Xu Chen, En Li, Liekang Zeng, Ke Luo, and Junshan Zhang, "Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing," Proceedings of the IEEE
Giraffe: Using Deep Reinforcement Learning to Play Chess
This report presents Giraffe, a chess engine that uses self-play to discover
all its domain-specific knowledge, with minimal hand-crafted knowledge given by
the programmer. Unlike previous attempts using machine learning only to perform
parameter-tuning on hand-crafted evaluation functions, Giraffe's learning
system also performs automatic feature extraction and pattern recognition. The
trained evaluation function performs comparably to the evaluation functions of
state-of-the-art chess engines, all of which contain thousands of lines of
carefully hand-crafted pattern recognizers, tuned over many years by both
computer chess experts and human chess masters. Giraffe is the most successful
attempt thus far at using end-to-end machine learning to play chess.
Comment: MSc Dissertation
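To make the self-play idea concrete, the following toy uses temporal-difference updates to learn material weights from random self-play games. This is only a sketch in the spirit of Giraffe, not its actual method: Giraffe used a neural network over much richer features with TD-Leaf, whereas here the features are bare material counts, the policy is random, and on this tiny budget the learned weights may barely move.

```python
import random
import numpy as np
import chess  # the python-chess package

PIECE_IDX = {chess.PAWN: 0, chess.KNIGHT: 1, chess.BISHOP: 2,
             chess.ROOK: 3, chess.QUEEN: 4}

def features(board):
    """Signed material counts from White's perspective (illustrative only)."""
    f = np.zeros(5)
    for piece in board.piece_map().values():
        if piece.piece_type in PIECE_IDX:
            f[PIECE_IDX[piece.piece_type]] += 1.0 if piece.color == chess.WHITE else -1.0
    return f

w = np.zeros(5)            # learned weights, replacing hand-tuned piece values
alpha, gamma = 0.01, 0.99  # learning rate and discount

for game in range(100):
    board = chess.Board()
    prev_f = features(board)
    while not board.is_game_over() and board.fullmove_number < 150:
        board.push(random.choice(list(board.legal_moves)))  # random "policy"
        f = features(board)
        # TD(0): pull the old estimate toward the bootstrapped new one.
        w += alpha * (gamma * float(w @ f) - float(w @ prev_f)) * prev_f
        prev_f = f
    outcome = {"1-0": 1.0, "0-1": -1.0}.get(board.result(claim_draw=True), 0.0)
    w += alpha * (outcome - float(w @ prev_f)) * prev_f     # terminal update

print("learned material weights:", w)
```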
Bayesian Regression Tree Ensembles that Adapt to Smoothness and Sparsity
Ensembles of decision trees are a useful tool for obtaining flexible estimates of regression functions. Examples of these methods include
gradient boosted decision trees, random forests, and Bayesian CART. Two
potential shortcomings of tree ensembles are their lack of smoothness and
vulnerability to the curse of dimensionality. We show that these issues can be
overcome by instead considering sparsity inducing soft decision trees in which
the decisions are treated as probabilistic. We implement this in the context of
the Bayesian additive regression trees framework, and illustrate its promising
performance through testing on benchmark datasets. We provide strong
theoretical support for our methodology by showing that the posterior
distribution concentrates at the minimax rate (up to a logarithmic factor) for
sparse functions and functions with additive structures in the high-dimensional
regime where the dimensionality of the covariate space is allowed to grow nearly exponentially in the sample size. Our method also adapts to the unknown
smoothness and sparsity levels, and can be implemented by making minimal
modifications to existing BART algorithms.
Comment: 47 pages, 8 figures
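The key mechanism, treating decisions as probabilistic, is easy to state in code. In the sketch below, a hard split x_j <= c is replaced by sending an observation right with probability sigmoid((x_j - c) / tau), so the tree's prediction becomes a smooth, probability-weighted average of its leaf values. The depth-1 tree and the bandwidth values are illustrative; the paper's full Bayesian sampler is not reproduced here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(x, split_var, split_point, tau, leaf_left, leaf_right):
    """Expected prediction of a one-split soft tree at input vector x."""
    p_right = sigmoid((x[split_var] - split_point) / tau)
    return (1.0 - p_right) * leaf_left + p_right * leaf_right

x = np.array([0.4, 1.2])
for tau in (0.01, 0.5):  # small tau approaches a hard CART split; larger tau smooths
    print(tau, soft_tree_predict(x, split_var=0, split_point=0.5, tau=tau,
                                 leaf_left=-1.0, leaf_right=1.0))
```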
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
This paper presents a comprehensive literature review on applications of deep
reinforcement learning in communications and networking. Modern networks, e.g.,
Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, become
more decentralized and autonomous. In such networks, network entities need to
make decisions locally to maximize the network performance under uncertainty of the network environment. Reinforcement learning has been used efficiently to enable the network entities to obtain the optimal policy, i.e., the decisions or actions to take given their states, when the state and action spaces are small.
However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning with deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial on deep
reinforcement learning from fundamental concepts to advanced models. Then, we
review deep reinforcement learning approaches proposed to address emerging
issues in communications and networking. The issues include dynamic network
access, data rate control, wireless caching, data offloading, network security,
and connectivity preservation, which are all important to next-generation
networks such as 5G and beyond. Furthermore, we present applications of deep
reinforcement learning for traffic routing, resource sharing, and data
collection. Finally, we highlight important challenges, open issues, and future
research directions of applying deep reinforcement learning.
Comment: 37 pages, 13 figures, 6 tables, 174 reference papers
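As a pocket version of the tutorial material such a survey covers, the sketch below shows the core deep Q-learning step: a neural network approximates Q(s, a) and is regressed toward the bootstrap target r + gamma * max_a' Q(s', a'). Dimensions, hyperparameters, and the synthetic batch are illustrative; a practical agent adds experience replay, a target network, and exploration.

```python
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 3, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One synthetic batch of transitions (s, a, r, s').
s = torch.randn(8, state_dim)
a = torch.randint(0, n_actions, (8,))
r = torch.randn(8)
s_next = torch.randn(8, state_dim)

with torch.no_grad():
    target = r + gamma * q_net(s_next).max(dim=1).values  # bootstrap target
q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a) actually taken
loss = nn.functional.mse_loss(q_sa, target)
opt.zero_grad()
loss.backward()
opt.step()
```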
Deep Reinforcement Learning Based Mode Selection and Resource Management for Green Fog Radio Access Networks
Fog radio access networks (F-RANs) are seen as potential architectures to support Internet of Things services by leveraging edge caching and edge computing. However, current works studying resource management in F-RANs mainly
consider a static system with only one communication mode. Given network
dynamics, resource diversity, and the coupling of resource management with mode
selection, resource management in F-RANs becomes very challenging. Motivated by
the recent development of artificial intelligence, a deep reinforcement
learning (DRL) based joint mode selection and resource management approach is
proposed. Each user equipment (UE) can operate either in cloud RAN (C-RAN) mode
or in device-to-device mode, and the resource managed includes both radio
resource and computing resource. The core idea is that the network controller
makes intelligent decisions on UE communication modes and processors' on-off
states, with the precoding for UEs in C-RAN mode optimized subsequently, aiming at
minimizing long-term system power consumption under the dynamics of edge cache
states. Through simulations, the impacts of several parameters, such as the learning rate and the edge caching service capability, on system performance are demonstrated, and the proposal is compared with other schemes to show its effectiveness. Moreover, transfer learning is integrated with DRL to accelerate the learning process.
Comment: 11 pages, 9 figures, accepted to IEEE Internet of Things Journal, Special Issue on AI-Enabled Cognitive Communications
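One way to picture the agent's interface implied by this abstract is sketched below: the state tracks edge cache dynamics, an action fixes each UE's mode and each processor's on-off state, and the reward is negative system power. Every quantity here (power figures, cache model, sizes) is a hypothetical placeholder, not the paper's formulation.

```python
import random
from dataclasses import dataclass

@dataclass
class Action:
    ue_modes: list       # per-UE mode: "C-RAN" or "D2D"
    processors_on: list  # per-processor on/off (1/0) state

def step(cache_hit, action):
    """Return (next_cache_state, reward); reward = -system power (illustrative)."""
    static_power = 10.0 * sum(action.processors_on)           # powered-on processors
    tx_power = sum(2.0 if m == "C-RAN" else 0.5 for m in action.ue_modes)
    power = static_power + tx_power * (0.7 if cache_hit else 1.0)  # edge-cache discount
    next_cache_hit = random.random() < 0.5                    # toy cache dynamics
    return next_cache_hit, -power

action = Action(ue_modes=["C-RAN", "D2D", "D2D", "C-RAN"], processors_on=[1, 0])
print(step(cache_hit=True, action=action))
```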
Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues
As a key technique for enabling artificial intelligence, machine learning
(ML) is capable of solving complex problems without explicit programming.
Motivated by its successful applications to many practical tasks like image
recognition, both industry and the research community have advocated the
applications of ML in wireless communication. This paper comprehensively
surveys the recent advances of the applications of ML in wireless
communication, which are classified as: resource management in the MAC layer,
networking and mobility management in the network layer, and localization in
the application layer. The applications in resource management further include
power control, spectrum management, backhaul management, cache management,
beamformer design and computation resource management, while ML based
networking focuses on the applications in clustering, base station switching
control, user association, and routing. Moreover, the literature in each aspect is organized according to the adopted ML techniques. In addition, several
conditions for applying ML to wireless communication are identified to help
readers decide whether to use ML and which kind of ML techniques to use, and
traditional approaches are also summarized together with their performance
comparison with ML based approaches, based on which the motivations of the surveyed works to adopt ML are clarified. Given the extensiveness of the research
area, challenges and unresolved issues are presented to facilitate future
studies, where ML based network slicing, infrastructure update to support ML
based paradigms, open data sets and platforms for researchers, theoretical
guidance for ML implementation, and so on are discussed.
Comment: 34 pages, 8 figures
Measuring Software Performance on Linux
Measuring and analyzing the performance of software has become highly complex, owing to more advanced processor designs and the intricate interaction between user programs, the operating system, and the processor's microarchitecture. In this report, we summarize our experience of how
performance characteristics of software should be measured when running on a
Linux operating system and a modern processor. In particular, (1) we provide a
general overview about hardware and operating system features that may have a
significant impact on timing and how they interact, (2) we identify sources of
errors that need to be controlled in order to obtain unbiased measurement
results, and (3) we propose a measurement setup for Linux to minimize errors.
Although not the focus of this report, we describe the measurement process
using hardware performance counters, which can faithfully reflect the real
bottlenecks on a given processor. Our experiments confirm that our measurement
setup has a large impact on the results. More surprisingly, however, they also suggest that its impact can be negligible for certain analysis methods. Furthermore, we found that our setup maintains significantly better performance under background load conditions, which means it can be used to improve software in high-performance applications.
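In the spirit of the setup the report argues for, the sketch below pins the process to one core, warms up, repeats the measurement, and reports robust statistics rather than a single run. The workload is a placeholder, and a wall-clock timer stands in for the hardware performance counters (e.g., via perf) that the report uses for microarchitectural analysis.

```python
import os
import statistics
import time

def workload():
    return sum(i * i for i in range(200_000))  # placeholder code under test

os.sched_setaffinity(0, {0})  # Linux-only: pin to CPU 0 to avoid core migrations

for _ in range(3):            # warm-up: populate caches and branch predictors
    workload()

samples = []
for _ in range(30):           # repetitions expose run-to-run variance
    t0 = time.perf_counter_ns()
    workload()
    samples.append(time.perf_counter_ns() - t0)

print("median ns:", statistics.median(samples),
      "stdev ns:", round(statistics.stdev(samples)))
```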
Transfer Learning for Future Wireless Networks: A Comprehensive Survey
Thanks to its outstanding features, Machine Learning (ML) has been the backbone of
numerous applications in wireless networks. However, the conventional ML
approaches have been facing many challenges in practical implementation, such
as the lack of labeled data, the constantly changing wireless environments, the
long training process, and the limited capacity of wireless devices. These
challenges, if not addressed, will impede the effectiveness and applicability
of ML in future wireless networks. To address these problems, Transfer Learning
(TL) has recently emerged to be a very promising solution. The core idea of TL
is to leverage and synthesize distilled knowledge from similar tasks as well as
from valuable experiences accumulated from the past to facilitate the learning
of new problems. By doing so, TL techniques can reduce the dependence on labeled
data, improve the learning speed, and enhance the ML methods' robustness to
different wireless environments. This article aims to provide a comprehensive
survey on applications of TL in wireless networks. Particularly, we first
provide an overview of TL including formal definitions, classification, and
various types of TL techniques. We then discuss diverse TL approaches proposed
to address emerging issues in wireless networks. The issues include spectrum
management, localization, signal recognition, security, human activity
recognition and caching, which are all important to next-generation networks
such as 5G and beyond. Finally, we highlight important challenges, open issues,
and future research directions of TL in future wireless networks.
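A minimal sketch of one TL pattern such a survey covers, reusing a source-task model as a frozen feature extractor and retraining only a small head on scarce target-task data, is given below. The layer sizes, the five-class target task, and the random batch are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Pretend this backbone was trained on a data-rich source task.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32), nn.ReLU())
for p in backbone.parameters():  # freeze the transferred knowledge
    p.requires_grad = False

head = nn.Linear(32, 5)          # new head for the target task (5 classes)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.randn(16, 128)         # a small labeled target-task batch
y = torch.randint(0, 5, (16,))

loss = nn.functional.cross_entropy(head(backbone(x)), y)
opt.zero_grad()
loss.backward()
opt.step()
```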
Plan-Structured Deep Neural Network Models for Query Performance Prediction
Query performance prediction, the task of predicting the latency of a query, is one of the most challenging problems in database management systems. Existing
approaches rely on features and performance models engineered by human experts,
but often fail to capture the complex interactions between query operators and
input relations, and generally do not adapt naturally to workload
characteristics and patterns in query execution plans. In this paper, we argue
that deep learning can be applied to the query performance prediction problem,
and we introduce a novel neural network architecture for the task: a
plan-structured neural network. Our approach eliminates the need for
human-crafted feature selection and automatically discovers complex performance
models both at the operator and query plan level. Our novel neural network
architecture can match the structure of any optimizer-selected query execution
plan and predict its latency with high accuracy. We also propose a number of
optimizations that reduce training overhead without sacrificing effectiveness.
We evaluate our techniques on various workloads and demonstrate that our plan-structured neural network can outperform the state of the art in query performance prediction.
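The plan-structured idea can be caricatured as follows: one small neural unit per operator type, composed recursively to mirror the shape of the plan tree, with the root's hidden state mapped to a latency estimate. Feature sizes, the operator set, and the toy plan are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

FEAT, HID = 8, 16
OPS = ["Scan", "Filter", "Join"]

class PlanNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Each operator unit consumes its own features plus two child states.
        self.units = nn.ModuleDict({
            op: nn.Sequential(nn.Linear(FEAT + 2 * HID, HID), nn.ReLU())
            for op in OPS})
        self.latency = nn.Linear(HID, 1)

    def encode(self, node):
        op, feats, children = node
        kids = [self.encode(c) for c in children]
        while len(kids) < 2:             # pad absent children with zeros
            kids.append(torch.zeros(HID))
        return self.units[op](torch.cat([feats] + kids[:2]))

    def forward(self, plan):
        return self.latency(self.encode(plan))

# A toy plan, Join(Filter(Scan), Scan); node features are random placeholders.
scan = lambda: ("Scan", torch.randn(FEAT), [])
plan = ("Join", torch.randn(FEAT), [("Filter", torch.randn(FEAT), [scan()]), scan()])
print("predicted latency:", PlanNet()(plan).item())
```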