Hybrid Centralized-Distributed Resource Allocation for Device-to-Device Communication Underlaying Cellular Networks
The basic idea of device-to-device (D2D) communication is that pairs of
suitably selected wireless devices reuse the cellular spectrum to establish
direct communication links, provided that the adverse effects of D2D
communication on cellular users are minimized and cellular users are given a
higher priority in using limited wireless resources. Despite its great
potential in terms of coverage and capacity performance, implementing this new
concept poses some challenges, in particular with respect to radio resource
management. The main challenges arise from a strong need for distributed D2D
solutions that operate in the absence of precise channel and network knowledge.
In order to address this challenge, this paper studies a resource allocation
problem in a single-cell wireless network with multiple D2D users sharing the
available radio frequency channels with cellular users. We consider a realistic
scenario where the base station (BS) is provided with strictly limited channel
knowledge while D2D and cellular users have no information. We prove a
lower-bound for the cellular aggregate utility in the downlink with fixed BS
power, which allows for decoupling the channel allocation and D2D power control
problems. An efficient graph-theoretical approach is proposed to perform the
channel allocation, which offers flexibility with respect to allocation
criterion (aggregate utility maximization, fairness, quality of service
guarantee). We model the power control problem as a multi-agent learning game.
We show that the game is an exact potential game with noisy rewards, defined on
a discrete strategy set, and characterize the set of Nash equilibria.
Q-learning better-reply dynamics is then used to achieve an equilibrium.
Comment: 35 pages
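Because the game above is an exact potential game on a discrete strategy set, better-reply dynamics enjoy the finite improvement property: every strictly improving move raises the potential, so the dynamics must terminate at a Nash equilibrium. The sketch below illustrates this on a hypothetical two-pair D2D power-control game with an assumed potential function (log-utility minus an interference penalty); the paper's Q-learning layer for handling noisy rewards is omitted, and `POWERS` and `potential` are illustrative assumptions, not the paper's model.

```python
import math
import random

POWERS = [0, 1, 2]  # assumed discrete power levels for each D2D pair

def potential(p1, p2):
    # toy exact potential: log-utilities minus an interference penalty
    return math.log(1 + p1) + math.log(1 + p2) - 0.5 * p1 * p2

def better_reply_dynamics(start=(0, 0), rounds=100, seed=0):
    """Each round, a random player switches to any strictly improving action."""
    rng = random.Random(seed)
    profile = list(start)
    for _ in range(rounds):
        i = rng.randrange(2)
        current = potential(*profile)
        for a in POWERS:
            trial = profile[:]
            trial[i] = a
            if potential(*trial) > current + 1e-12:
                profile = trial  # accept the first better reply found
                break
    return tuple(profile)

def is_nash(profile):
    """No unilateral deviation improves the (identical-interest) payoff."""
    for i in range(2):
        for a in POWERS:
            trial = list(profile)
            trial[i] = a
            if potential(*trial) > potential(*profile) + 1e-12:
                return False
    return True
```

Since each accepted move strictly increases the potential and the state space is finite, the dynamics settle at a local potential maximizer, which is by construction a Nash equilibrium.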
Follow Me at the Edge: Mobility-Aware Dynamic Service Placement for Mobile Edge Computing
Mobile edge computing is a new computing paradigm, which pushes cloud
computing capabilities away from the centralized cloud to the network edge.
However, as computing capabilities sink toward the edge, a new challenge arises
from user mobility: since end-users typically move erratically, the
services should be dynamically migrated among multiple edges to maintain the
service performance, i.e., user-perceived latency. Tackling this problem is
non-trivial since frequent service migration would greatly increase the
operational cost. To address this challenge in terms of the performance-cost
trade-off, in this paper we study the mobile edge service performance
optimization problem under a long-term cost budget constraint. To address user
mobility which is typically unpredictable, we apply Lyapunov optimization to
decompose the long-term optimization problem into a series of real-time
optimization problems which do not require a priori knowledge such as user
mobility. As the decomposed problem is NP-hard, we first design an
approximation algorithm based on Markov approximation to seek a near-optimal
solution. To make our solution scalable and amenable to future 5G application
scenarios with massive numbers of user devices, we further propose a distributed
approximation scheme with greatly reduced time complexity, based on the
technique of best response update. Rigorous theoretical analysis and extensive
evaluations demonstrate the efficacy of the proposed centralized and
distributed schemes.
Comment: The paper has been accepted by IEEE Journal on Selected Areas in Communications, Aug. 201
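The Lyapunov decomposition described above can be sketched with the standard drift-plus-penalty technique. The two-site topology, latency values, and migration cost below are illustrative assumptions, not the paper's model: a virtual queue tracks accumulated budget violation, and each slot the controller greedily minimizes V·latency + Q·cost without any knowledge of future mobility.

```python
import random

def drift_plus_penalty(latency, migrate_cost, Q, V):
    # per-slot score: V weights the latency penalty, Q weights budget pressure
    return V * latency + Q * migrate_cost

def run(T=1000, budget=0.5, V=10.0, seed=1):
    rng = random.Random(seed)
    Q = 0.0            # virtual queue: accumulated migration-cost budget violation
    placement = 0      # current edge hosting the service
    total_cost = 0.0
    for _ in range(T):
        user = rng.randrange(2)  # unpredictable mobility between two edge sites
        options = []
        for site in (0, 1):
            latency = 0.1 if site == user else 1.0       # assumed latency model
            cost = 0.0 if site == placement else 1.0     # assumed migration cost
            options.append((drift_plus_penalty(latency, cost, Q, V), site, cost))
        _, placement, cost = min(options)  # solve the real-time per-slot problem
        total_cost += cost
        Q = max(Q + cost - budget, 0.0)    # queue update enforces the budget
    return total_cost / T, Q
```

Because the queue update implies sum(cost) <= T*budget + Q(T), a bounded queue guarantees the time-average cost stays within the budget up to an O(Q/T) term, while larger V trades cost slack for lower latency.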
Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues
As a key technique for enabling artificial intelligence, machine learning
(ML) is capable of solving complex problems without explicit programming.
Motivated by its successful applications to many practical tasks like image
recognition, both industry and the research community have advocated the
applications of ML in wireless communication. This paper comprehensively
surveys the recent advances of the applications of ML in wireless
communication, which are classified as: resource management in the MAC layer,
networking and mobility management in the network layer, and localization in
the application layer. The applications in resource management further include
power control, spectrum management, backhaul management, cache management,
beamformer design and computation resource management, while ML based
networking focuses on the applications in clustering, base station switching
control, user association and routing. Moreover, the literature on each aspect is
organized according to the adopted ML techniques. In addition, several
conditions for applying ML to wireless communication are identified to help
readers decide whether to use ML and which kind of ML techniques to use, and
traditional approaches are also summarized together with their performance
comparison with ML-based approaches, based on which the motivations of the
surveyed works to adopt ML are clarified. Given the extensiveness of the research
area, challenges and unresolved issues are presented to facilitate future
studies, where ML based network slicing, infrastructure update to support ML
based paradigms, open data sets and platforms for researchers, theoretical
guidance for ML implementation and so on are discussed.
Comment: 34 pages, 8 figures
Toward Intelligent Network Optimization in Wireless Networking: An Auto-learning Framework
In wireless communication systems (WCSs), the network optimization problems
(NOPs) play an important role in maximizing system performances by setting
appropriate network configurations. When dealing with NOPs by using
conventional optimization methodologies, three problems arise: reliance on
human intervention, model invalidity, and high computational complexity.
As such, in this article we propose an auto-learning framework (ALF) to achieve
intelligent and automatic network optimization by using machine learning (ML)
techniques. We review the basic concepts of ML techniques, and propose their
rudimentary employment models in WCSs, including automatic model construction,
experience replay, efficient trial-and-error, RL-driven gaming, complexity
reduction, and solution recommendation. We hope these proposals can provide new
insights and motivations for future research on dealing with NOPs in WCSs
using ML techniques.
Comment: 8 pages, 5 figures, 1 table, magazine article
Mobile Edge Computation Offloading Using Game Theory and Reinforcement Learning
Due to the ever-increasing popularity of resource-hungry and
delay-constrained mobile applications, the computation and storage capabilities
of the remote cloud have partially migrated towards the mobile edge, giving rise to
the concept known as Mobile Edge Computing (MEC). While MEC servers enjoy the
close proximity to the end-users to provide services at reduced latency and
lower energy costs, they suffer from limitations in computational and radio
resources, which calls for fair and efficient resource management in the MEC
servers. The problem is however challenging due to the ultra-high density,
distributed nature, and intrinsic randomness of next generation wireless
networks. In this article, we focus on the application of game theory and
reinforcement learning for efficient distributed resource management in MEC, in
particular, for computation offloading. We briefly review the cutting-edge
research and discuss future challenges. Furthermore, we develop a
game-theoretical model for energy-efficient distributed edge server activation
and study several learning techniques. Numerical results are provided to
illustrate the performance of these distributed learning techniques. Also, open
research issues in the context of resource management in MEC servers are
discussed.
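One of the learning techniques commonly studied for distributed server activation games is log-linear learning, which for potential games concentrates play on the welfare-maximizing configuration. The sketch below is a toy identical-interest activation game; the coverage and energy models (`welfare`, `ENERGY`) are illustrative assumptions, not the article's model.

```python
import math
import random

ENERGY = 0.2  # assumed per-server activation energy cost

def welfare(state):
    # identical-interest potential: diminishing coverage benefit minus energy
    n_on = sum(state)
    coverage = 1.0 - 0.5 ** n_on
    return coverage - ENERGY * n_on

def log_linear_learning(n_servers=2, beta=10.0, steps=20000, seed=0):
    """Each step, one server resamples its on/off choice from a Gibbs distribution."""
    rng = random.Random(seed)
    state = [0] * n_servers
    visits = {}
    for _ in range(steps):
        i = rng.randrange(n_servers)
        scores = []
        for a in (0, 1):
            trial = state[:]
            trial[i] = a
            scores.append(math.exp(beta * welfare(trial)))
        # higher beta -> choices concentrate on the higher-welfare action
        state[i] = 0 if rng.random() < scores[0] / (scores[0] + scores[1]) else 1
        key = tuple(state)
        visits[key] = visits.get(key, 0) + 1
    return visits
```

The induced Markov chain has stationary distribution proportional to exp(beta * welfare), so with a moderately large beta the most-visited state is the potential maximizer, here the all-on configuration.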
A Game-Theoretic Framework for Resource Sharing in Clouds
Providing resources to different users or applications is fundamental to
cloud computing. This is a challenging problem as a cloud service provider may
have insufficient resources to satisfy all user requests. Furthermore,
allocating available resources optimally to different applications is also
challenging. Resource sharing among different cloud service providers can
improve resource availability and resource utilization as certain cloud service
providers may have free resources available that can be ``rented'' by other
service providers. However, different cloud service providers can have
different objectives or \emph{utilities}. Therefore, there is a need for a
framework that can share and allocate resources in an efficient and effective
way, while taking into account the objectives of various service providers that
results in a \emph{multi-objective optimization} problem. In this paper, we
present a \emph{Cooperative Game Theory} (CGT) based framework for resource
sharing and allocation among different service providers with varying
objectives that form a coalition. We show that the resource sharing problem can
be modeled as an $n$-player \emph{canonical} cooperative game with
\emph{non-transferable utility} (NTU) and prove that the game is convex for
monotonic non-decreasing utilities. We propose an algorithm
that provides an allocation from the \emph{core}, hence guaranteeing
\emph{Pareto optimality}. We evaluate the performance of our proposed resource
sharing framework in a number of simulation settings and show that our proposed
framework improves user satisfaction and utility of service providers.
Comment: The paper has been accepted for publication in IFIP WMNC 2019, Paris, France
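A key property exploited above is that convex (supermodular) games have a non-empty core, and every marginal-contribution vector lies in it. The sketch below illustrates this with a hypothetical transferable-utility analogue of the paper's NTU setting: `WEIGHTS` and the characteristic function `v` are illustrative assumptions, not the paper's model.

```python
from itertools import combinations, permutations

WEIGHTS = {0: 1, 1: 2, 2: 3}  # hypothetical resource weights of three providers

def v(coalition):
    # supermodular (convex) characteristic function: pooling pays off superadditively
    return sum(WEIGHTS[i] for i in coalition) ** 2

def marginal_vector(order):
    """For convex games, every marginal-contribution vector lies in the core."""
    alloc, seen = {}, []
    for i in order:
        alloc[i] = v(seen + [i]) - v(seen)  # i's marginal contribution when joining
        seen.append(i)
    return alloc

def in_core(alloc):
    """Core test: efficient, and no coalition can profitably deviate."""
    players = sorted(alloc)
    if sum(alloc.values()) != v(players):
        return False
    for r in range(1, len(players)):
        for S in combinations(players, r):
            if sum(alloc[i] for i in S) < v(S):
                return False
    return True
```

Core allocations guarantee Pareto optimality and coalition stability: no subset of providers can do better by leaving the grand coalition.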
Edge Intelligence: The Confluence of Edge Computing and Artificial Intelligence
Along with the rapid developments in communication technologies and the surge
in the use of mobile devices, a brand-new computation paradigm, Edge Computing,
is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications
are thriving with the breakthroughs in deep learning and the many improvements
in hardware architectures. Billions of data bytes, generated at the network
edge, put massive demands on data processing and structural optimization. Thus,
there exists a strong demand to integrate Edge Computing and AI, which gives
birth to Edge Intelligence. In this paper, we divide Edge Intelligence into AI
for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial
Intelligence on Edge). The former focuses on providing better solutions
to key problems in Edge Computing with the help of popular and effective AI
technologies while the latter studies how to carry out the entire process of
building AI models, i.e., model training and inference, on the edge. This paper
provides insights into this new inter-disciplinary field from a broader
perspective. It discusses the core concepts and the research road-map, which
should provide the necessary background for potential future research
initiatives in Edge Intelligence.
Comment: 13 pages, 3 figures
Internet Resource Pricing Models, Mechanisms, and Methods
With the fast development of video and voice network applications, CDN
(Content Distribution Networks) and P2P (Peer-to-Peer) content distribution
technologies have gradually matured. How to use Internet resources effectively
has thus attracted increasing attention. For the study of resource pricing,
a complete pricing strategy covers pricing models, mechanisms, and methods.
We first introduce three basic Internet resource
pricing models through an Internet cost analysis. Then, with the evolution of
service types, we introduce several corresponding mechanisms which can ensure
pricing implementation and resource allocation. On network resource pricing
methods, we discuss utility optimization in economics and emphasize two
classes of pricing methods: system optimization and the strategic
optimization of individual entities. Finally, we conclude the paper and
forecast research directions on pricing strategies applicable to novel
service situations in the near future.
Comment: Submitted to Networking Science for peer review
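The system-optimization view of pricing can be sketched with the classic network utility maximization (NUM) decomposition: a link price is updated by (sub)gradient ascent on the dual, while each user independently buys the bandwidth that maximizes its own surplus. The weights and capacity below are illustrative assumptions.

```python
def num_pricing(weights, capacity, steps=5000, lr=0.01):
    """Dual (sub)gradient pricing for one shared link with log utilities."""
    price = 0.2
    for _ in range(steps):
        # each user's surplus-maximizing demand: argmax_x w*log(x) - price*x
        demand = [w / price for w in weights]
        # the link raises its price when over-demanded, lowers it otherwise
        price = max(price + lr * (sum(demand) - capacity), 1e-6)
    return price, demand
```

At the fixed point the price clears the link (total demand equals capacity), and each user's allocation is proportional to its utility weight, illustrating how a single congestion price coordinates selfish entities toward the system optimum.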
Adaptive Event Dispatching in Serverless Computing Infrastructures
Serverless computing is an emerging Cloud service model. It is currently
gaining momentum as the next step in the evolution of hosted computing from
capacitated machine virtualisation and microservices towards utility computing.
The term "serverless" has become a synonym for the entirely
resource-transparent deployment model of cloud-based event-driven distributed
applications. This work investigates how adaptive event dispatching can improve
serverless platform resource efficiency and contributes a novel approach that
allows for better scaling and fitting of the platform's resource consumption to
actual demand.
A Generic Framework for Task Offloading in mmWave MEC Backhaul Networks
With the emergence of millimeter-Wave (mmWave) communication technology, the
capacity of mobile backhaul networks can be significantly increased. On the
other hand, Mobile Edge Computing (MEC) provides an appropriate infrastructure
to offload latency-sensitive tasks. However, the amount of resources in MEC
servers is typically limited. Therefore, it is important to intelligently
manage the MEC task offloading by optimizing the backhaul bandwidth and edge
server resource allocation in order to decrease the overall latency of the
offloaded tasks. This paper investigates the task allocation problem in an MEC
environment where mmWave technology is used in the backhaul network. We
formulate a Mixed Integer NonLinear Programming (MINLP) problem with the goal
to minimize the total task serving time. Its objective is to determine an
optimized network topology, identify which server is used to process a given
offloaded task, find the path of each user task, and determine the allocated
bandwidth to each task on mmWave backhaul links. Because the problem is
difficult to solve, we develop a two-step approach. First, a Mixed Integer
Linear Program (MILP) determining the network topology and the routing paths is
optimally solved. Then, the fractions of bandwidth allocated to each user task
are optimized by solving a quasi-convex problem. Numerical results illustrate
the obtained topology and routing paths for selected scenarios and show that
optimizing the bandwidth allocation significantly improves the total serving
time, particularly for bandwidth-intensive tasks.
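Quasi-convex problems like the second step above are typically solved by bisection on the objective value, reducing the problem to a sequence of convex feasibility checks. As an illustrative stand-in for the paper's formulation (not its actual model), suppose each task of size s_i receives bandwidth b_i from a shared backhaul budget B and finishes in s_i/b_i seconds; then all tasks finish within t iff sum_i s_i/t <= B, and the minimum makespan is found by bisecting on t.

```python
def min_makespan_bandwidth(sizes, budget, iters=60):
    """Bisection on makespan t for a toy quasi-convex bandwidth-allocation problem."""
    def feasible(t):
        # task i needs at least sizes[i]/t bandwidth to finish by time t
        return sum(s / t for s in sizes) <= budget

    lo, hi = 1e-9, sum(sizes) / budget + 1.0  # hi is always feasible
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid   # shrink toward the smallest feasible makespan
        else:
            lo = mid
    bandwidths = [s / hi for s in sizes]
    return hi, bandwidths
```

Each iteration halves the search interval, so a handful of feasibility checks pins down the optimal makespan to machine precision; the same scheme applies whenever the feasibility test for a fixed objective value is convex.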