Machine Learning for Vehicular Networks
The emerging vehicular networks are expected to make everyday vehicular
operation safer, greener, and more efficient, and pave the way to autonomous driving with the advent of the fifth generation (5G) cellular system. Machine
learning, as a major branch of artificial intelligence, has been recently
applied to wireless networks to provide a data-driven approach to solve
traditionally challenging problems. In this article, we review recent advances
in applying machine learning in vehicular networks and attempt to bring more
attention to this emerging area. After a brief overview of the major concepts of
machine learning, we present some application examples of machine learning in
solving problems arising in vehicular networks. We finally discuss and
highlight several open issues that warrant further research.
Comment: Accepted by IEEE Vehicular Technology Magazine
New Trends in Parallel and Distributed Simulation: from Many-Cores to Cloud Computing
Recent advances in computing architectures and networking are bringing
parallel computing systems to the masses, thus increasing the number of potential users of such systems. In particular, two important technological
evolutions are happening at the ends of the computing spectrum: at the "small"
scale, processors now include an increasing number of independent execution
units (cores), to the point that a single CPU can be considered a parallel shared-memory computer; at the "large" scale, the Cloud Computing paradigm
allows applications to scale by offering resources from a large pool on a
pay-as-you-go model. Multi-core processors and Clouds both require applications
to be suitably modified to take advantage of the features they provide. In this
paper, we analyze the state of the art of parallel and distributed simulation
techniques, and assess their applicability to multi-core architectures or
Clouds. It turns out that most of the current approaches exhibit limitations in
terms of usability and adaptivity which may hinder their application to these
new computing architectures. We propose an adaptive simulation mechanism, based
on the multi-agent system paradigm, to partially address some of those
limitations. While it is unlikely that a single approach will work well in both settings above, we argue that the proposed adaptive mechanism has useful
features which make it attractive both in a multi-core processor and in a Cloud
system. These features include the ability to reduce communication costs by
migrating simulation components, and the support for adding (or removing) nodes
to the execution architecture at runtime. We will also show that, with the help
of an additional support layer, parallel and distributed simulations can be
executed on top of unreliable resources.
Comment: Simulation Modelling Practice and Theory (SIMPAT), Elsevier, vol. 49 (December 2014)
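As an illustration of the migration idea mentioned in the abstract above (moving simulation components to reduce communication costs), the following Python sketch shows one plausible heuristic: relocate a component to the node it exchanges the most messages with, but only when remote traffic clearly dominates. The node names, message counts, and threshold are hypothetical; this is not the paper's actual adaptive mechanism.

# Illustrative migration heuristic (hypothetical): move a simulation component to
# the node it communicates with most, but only when remote traffic clearly
# dominates local traffic, to avoid migration thrashing.
def choose_host(current_host, msg_counts):
    """msg_counts maps node name -> messages exchanged with components on that node."""
    best = max(msg_counts, key=msg_counts.get)
    local = msg_counts.get(current_host, 0)
    return best if msg_counts[best] > 2 * local else current_host

# A component hosted on node "A" that mostly talks to components on node "C".
print(choose_host("A", {"A": 40, "B": 15, "C": 110}))   # -> "C"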
Deep Learning for Frame Error Prediction using a DARPA Spectrum Collaboration Challenge (SC2) Dataset
We demonstrate a first example of employing deep learning to predict frame errors for a Collaborative Intelligent Radio Network (CIRN) using a
dataset collected during participation in the final scrimmages of the DARPA SC2
challenge. Four scenarios are considered based on randomizing or fixing the
strategy for bandwidth and channel allocation, and either training and testing
with different links or using a pilot phase for each link to train the deep
neural network. We also investigate the effect of latency constraints, and
uncover interesting characteristics of the predictor over different Signal-to-Noise Ratio (SNR) ranges. The obtained insights open the door to implementing
a deep-learning-based strategy that is scalable to large heterogeneous
networks, generalizable to diverse wireless environments, and suitable for
predicting frame error instances and rates within a congested shared spectrum.
Comment: 5 pages, 4 figures
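As a rough illustration of the kind of predictor the abstract above describes, the following sketch trains a small fully connected network to classify frames as received or in error. The input features (SNR, bandwidth, channel index), architecture, and training data are hypothetical placeholders, not the authors' actual model or the SC2 dataset.

# Hypothetical frame-error classifier: a small fully connected network mapping
# per-frame link features to a probability of frame error. Features, sizes, and
# training data are placeholders, not the paper's design or the SC2 dataset.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(3, 32),   # inputs: [snr_db, bandwidth_mhz, channel_idx] (assumed)
    nn.ReLU(),
    nn.Linear(32, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
    nn.Sigmoid(),       # output: probability that the frame is in error
)
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features, frame_error):
    """features: (batch, 3) float tensor; frame_error: (batch, 1) tensor in {0, 1}."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), frame_error)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data standing in for real link measurements.
x = torch.rand(64, 3)
y = (torch.rand(64, 1) > 0.8).float()
print(train_step(x, y))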
Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
This paper presents a comprehensive literature review on applications of deep
reinforcement learning in communications and networking. Modern networks, e.g.,
Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, become
more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize network performance under uncertainty about the network environment. Reinforcement learning has been used effectively to enable network entities to obtain the optimal policy, i.e., the decisions or actions to take given their states, when the state and action spaces are small.
However, in complex and large-scale networks, the state and action spaces are
usually large, and reinforcement learning may not be able to find the optimal policy in a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning with deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial on deep
reinforcement learning from fundamental concepts to advanced models. Then, we
review deep reinforcement learning approaches proposed to address emerging
issues in communications and networking. The issues include dynamic network
access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, all of which are important to next-generation
networks such as 5G and beyond. Furthermore, we present applications of deep
reinforcement learning for traffic routing, resource sharing, and data
collection. Finally, we highlight important challenges, open issues, and future
research directions for applying deep reinforcement learning.
Comment: 37 pages, 13 figures, 6 tables, 174 reference papers
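To make the survey's core distinction concrete, the toy sketch below performs tabular Q-learning, which works when the state and action spaces are small; deep reinforcement learning essentially replaces the table with a neural network approximator when those spaces grow large. The environment here is a made-up placeholder, not any networking scenario from the survey.

# Toy tabular Q-learning (illustrative, not a networking model). When the state
# and action spaces are large, the table Q would be replaced by a neural network
# Q(s, a; theta) -- the step from reinforcement learning to deep RL.
import random

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]

def step(state, action):
    """Placeholder environment: random next state; reward 1 for action 0 in state 0."""
    next_state = random.randrange(n_states)
    reward = 1.0 if (state == 0 and action == 0) else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)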
Towards QoS-Aware and Resource-Efficient GPU Microservices Based on Spatial Multitasking GPUs In Datacenters
While prior research focuses on CPU-based microservices, it is not applicable to GPU-based microservices due to their different contention patterns. It is challenging to optimize resource utilization while guaranteeing QoS for GPU microservices. We find that the overhead is caused by inter-microservice communication, GPU resource contention, and imbalanced throughput within the microservice pipeline. We propose Camelot, a runtime system that manages GPU microservices considering the above factors. In Camelot, a
global memory-based communication mechanism enables onsite data sharing that
significantly reduces the end-to-end latencies of user queries. We also propose
two contention-aware resource allocation policies that either maximize the peak
supported service load or minimize the resource usage at low load while
ensuring the required QoS. The two policies consider the microservice pipeline
effect and the runtime GPU resource contention when allocating resources for
the microservices. Compared with state-of-the-art work, Camelot increases the
supported peak load by up to 64.5% with limited GPUs, and reduces resource usage by 35% at low load while achieving the desired 99th-percentile latency target.
Comment: 13 pages
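The flavour of the second policy (minimize resource usage at a given load while meeting QoS) can be illustrated with a deliberately simplified sketch: give each pipeline stage just enough GPU share to sustain the target load, assuming throughput scales linearly with the share. This ignores the contention effects Camelot actually models and is not its algorithm; the stage throughputs and allocation granularity below are invented.

# Deliberately simplified sketch of a "minimize resource usage at a given load"
# policy: give each pipeline stage just enough GPU share to sustain the target
# load, assuming throughput scales linearly with share (the real policies also
# model contention between co-located microservices, which is ignored here).
def min_shares_for_load(per_unit_throughput, target_load, granularity=0.05):
    """per_unit_throughput: queries/s each stage achieves at 100% GPU share (assumed)."""
    shares = []
    for tput in per_unit_throughput:
        share = target_load / tput                                 # linear-scaling assumption
        share = min(1.0, granularity * -(-share // granularity))   # round up to granularity
        shares.append(share)
    return shares

# Hypothetical three-stage pipeline: preprocessing, inference, postprocessing.
print(min_shares_for_load([400.0, 120.0, 600.0], target_load=100.0))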
Unix Memory Allocations are Not Poisson
In multitasking operating systems, requests for free memory are traditionally
modeled as a stochastic counting process with independent,
exponentially-distributed interarrival times because of the analytic simplicity
such Poisson models afford. We analyze the distribution of several million Unix page commits to show that although this approach could be valid over relatively
long timespans, the behavior of the arrival process over shorter periods is
decidedly not Poisson. We find that this result holds regardless of the
originator of the request: unlike network packets, there is little difference
between system- and user-level page-request distributions. We believe this to
be due to the bursty nature of page allocations, which tend to occur in either
small or extremely large increments. Burstiness and persistent variance have
recently been found in self-similar processes in computer networks, but we show
that although page commits are both bursty and possess high variance over long
timescales, they are probably not self-similar. These results suggest that
altogether different models are needed for fine-grained analysis of memory
systems, an important consideration not only for understanding behavior but
also for the design of online control systems.
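The paper's central claim, that page-commit arrivals are bursty rather than Poisson, can be checked on any timestamp trace with a simple dispersion statistic: for a Poisson process, counts in fixed windows have variance roughly equal to their mean (index of dispersion near 1), while bursty arrivals push it well above 1. The sketch below applies this test to synthetic data and illustrates the test itself, not the paper's analysis pipeline.

# Illustrative Poisson check on a trace of arrival timestamps: bin arrivals into
# fixed windows and compute the index of dispersion (variance of the counts over
# their mean). A value near 1 is consistent with Poisson; much larger values
# indicate bursty arrivals.
import numpy as np

def index_of_dispersion(timestamps, window):
    timestamps = np.sort(np.asarray(timestamps, dtype=float))
    edges = np.arange(timestamps[0], timestamps[-1] + window, window)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts.var() / counts.mean()

# Synthetic comparison: a true Poisson stream versus a bursty one.
rng = np.random.default_rng(0)
poisson_arrivals = np.cumsum(rng.exponential(1.0, 10_000))
bursty_arrivals = np.cumsum(np.where(rng.random(10_000) < 0.05,
                                     rng.exponential(20.0, 10_000),
                                     rng.exponential(0.05, 10_000)))

print(index_of_dispersion(poisson_arrivals, window=50.0))  # close to 1
print(index_of_dispersion(bursty_arrivals, window=50.0))   # well above 1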
Extracting and Exploiting Inherent Sparsity for Efficient IoT Support in 5G: Challenges and Potential Solutions
Besides enabling enhanced mobile broadband, the next generation of mobile networks (5G) is envisioned to support massive connectivity of heterogeneous Internet of Things (IoT) devices. These devices are envisioned for a large number of use cases, including smart cities, environment monitoring, and smart vehicles. Unfortunately, most IoT devices have very limited computing and storage capabilities and need cloud services. Hence, connecting these devices through
5G systems requires huge spectrum resources in addition to handling the massive
connectivity and improved security. This article discusses the challenges facing the support of IoT devices through 5G systems. The focus is on physical-layer limitations in terms of spectrum resources and radio
access channel connectivity. We show how sparsity can be exploited for
addressing these challenges especially in terms of enabling wideband spectrum
management and handling the connectivity by exploiting device-to-device
communications and edge-cloud. Moreover, we identify major open problems and
research directions that need to be explored towards enabling the support of
massive heterogeneous IoT devices through 5G systems.
Comment: Accepted for publication in IEEE Wireless Communications Magazine
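The sparsity the abstract refers to (only a few sub-bands occupied at any time) is what makes sub-Nyquist wideband sensing tractable, and a standard way to exploit it is compressed-sensing recovery such as orthogonal matching pursuit. The sketch below runs a generic OMP on synthetic measurements to illustrate the principle; the sensing matrix, dimensions, and occupancy pattern are arbitrary and not taken from the article.

# Generic orthogonal matching pursuit (OMP): recover a sparse spectrum-occupancy
# vector x from compressed measurements y = A @ x. Dimensions, sensing matrix,
# and occupancy pattern are arbitrary illustrations.
import numpy as np

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the chosen support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
n_bands, n_measurements, k = 128, 40, 3          # 3 occupied bands out of 128
x_true = np.zeros(n_bands)
x_true[rng.choice(n_bands, k, replace=False)] = rng.uniform(1.0, 2.0, k)
A = rng.normal(size=(n_measurements, n_bands)) / np.sqrt(n_measurements)
y = A @ x_true

x_rec = omp(A, y, sparsity=k)
print(sorted(np.nonzero(x_true)[0]), sorted(np.nonzero(np.round(x_rec, 6))[0]))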
Distributed Hierarchical Control versus an Economic Model for Cloud Resource Management
We investigate a hierarchically organized cloud infrastructure and compare
distributed hierarchical control based on resource monitoring with market
mechanisms for resource management. The latter do not require a model of the system, incur a low overhead, are robust, and satisfy several other desiderata of autonomic computing. We introduce several performance measures and report on
simulation studies which show that a straightforward bidding scheme supports an
effective admission control mechanism, while reducing the communication
complexity by several orders of magnitude and also increasing the acceptance
rate compared to hierarchical control and monitoring mechanisms. Market-based resource management can be seen as an intermediate step towards cloud self-organization, an ideal alternative to current mechanisms for cloud resource management.
Comment: 13 pages, 4 figures
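One way to picture the straightforward bidding scheme mentioned above is an admission control loop in which servers bid their spare capacity and a request is admitted only if the best bid covers its demand. The sketch below is a deliberately minimal stand-in for such a market mechanism, not the protocol evaluated in the paper; the server names and capacities are invented.

# Minimal bidding-style admission control: each server bids its spare capacity,
# and a request is admitted on the highest bidder only if that bid covers the
# demand. Server names and capacities are invented.
def admit(request_demand, spare_capacity):
    """spare_capacity: dict server -> free capacity units (the 'bids')."""
    if not spare_capacity:
        return None
    best_server = max(spare_capacity, key=spare_capacity.get)
    if spare_capacity[best_server] >= request_demand:
        spare_capacity[best_server] -= request_demand   # place the request
        return best_server
    return None                                         # reject: no bid is sufficient

servers = {"s1": 8.0, "s2": 3.0, "s3": 5.0}
for demand in [4.0, 6.0, 7.0]:
    print(demand, "->", admit(demand, servers))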
Self-enforcing Game Theory-based Resource Allocation for LoRaWAN Assisted Public Safety Communications
Public safety networks disseminate information during emergency situations through their dedicated servers. They accommodate public safety communication (PSC) applications that track the location of their users and sustain transmissions even in critical scenarios. However, if the traditional infrastructure responsible for PSC is unavailable, it becomes extremely difficult to support any of the safety applications, which may cause havoc in society. Relying on a secondary network may help solve this issue, but the secondary network should be easily deployable and must not incur excessive overheads in terms of cost and operation. For this, LoRaWAN can be considered an ideal solution, as it provides low-power
and long-range communication. However, excessive utilization of the secondary network may rapidly deplete its own resources and can lead to a complete shutdown of services. As a solution, this paper proposes a novel network model combining LoRaWAN and traditional public safety networks, and uses self-enforcing agreement-based game theory to allocate resources efficiently amongst the available servers. The proposed approach adopts memory and energy constraints
as agreements, which are satisfied through the Nash equilibrium. The numerical results show that the proposed approach efficiently allocates resources with sufficiently high gains in resource conservation, network sustainability, resource restoration, and the probability of continuing under present conditions even in the complete absence of traditional Access Points (APs), compared with a baseline scenario with no node failures.
Comment: 16 Pages, 11 Figures, 2 Tables
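To illustrate the general idea of reaching a resource allocation through equilibrium behaviour, the toy sketch below runs best-response dynamics in a simple load-balancing congestion game until no client can improve by switching servers, i.e., a pure Nash equilibrium. It is a generic textbook example, not the self-enforcing agreement game or the memory and energy constraints used in the paper; the server names are made up.

# Toy best-response dynamics in a load-balancing congestion game: clients keep
# making strictly improving server switches until none exists, i.e., a pure Nash
# equilibrium. A generic textbook example, not the paper's self-enforcing game.
def best_response_equilibrium(n_clients, servers):
    assignment = {c: servers[0] for c in range(n_clients)}   # all start on one server
    load = {s: 0 for s in servers}
    load[servers[0]] = n_clients
    changed = True
    while changed:
        changed = False
        for c in range(n_clients):
            current = assignment[c]
            best = min(servers, key=lambda s: load[s])
            if load[best] + 1 < load[current]:                # strict improvement only
                load[current] -= 1
                load[best] += 1
                assignment[c] = best
                changed = True
    return assignment

# Seven clients spreading over a traditional AP and two LoRaWAN gateways (made up).
print(best_response_equilibrium(7, ["AP", "LoRa-1", "LoRa-2"]))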
A Survey on Low Latency Towards 5G: RAN, Core Network and Caching Solutions
The fifth generation (5G) wireless network technology is to be standardized by 2020, and its main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density.
An integral part of 5G is the capability to support touch-perception-type real-time communication, empowered by suitable robotics and haptics equipment at the network edge. In this regard, drastic changes are needed in the network architecture, including the core and the radio access network (RAN), to achieve an end-to-end latency on the order of 1 ms. In this paper, we present a detailed
survey on the emerging technologies to achieve low latency communications
considering three different solution domains: RAN, core network, and caching.
We also present a general overview of 5G cellular networks composed of software-defined networking (SDN), network function virtualization (NFV), caching, and mobile edge computing (MEC), capable of meeting latency and other 5G requirements.
Comment: Accepted in IEEE Communications Surveys and Tutorials