A Satisfactory Power Control for 5G Self-Organizing Networks
Small cells are deployed to enhance network performance by bringing the
network closer to the user. However, as the number of low-power nodes grows,
the overall energy consumption of small-cell base stations cannot be ignored.
A substantial amount of energy could be saved through several techniques,
especially power control mechanisms. In this paper, we are concerned with
energy-aware self-organizing networks that guarantee satisfactory
performance. We consider satisfaction equilibria, in particular the
efficient satisfaction equilibrium (ESE), to ensure a target quality of service
(QoS) and save energy. First, we identify conditions of existence and
uniqueness of ESE under a stationary channel assumption. We fully characterize
the ESE and prove that, whenever it exists, it is a solution of a linear
system. Moreover, we define satisfactory Pareto optimality and show that, at
the ESE, no player can increase its QoS without degrading the overall
performance. Under a fast fading channel assumption, as the robust satisfaction
equilibrium solution is very restrictive, we propose an alternative solution,
namely the long-term satisfaction equilibrium, and describe how to reach this
solution efficiently. Finally, in order to find a satisfactory solution for all
users, we propose fully distributed strategic learning schemes based on the
Banach-Picard, Mann, and Bush-Mosteller algorithms, and show their qualitative
properties through simulations.
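The Banach-Picard scheme can be illustrated with the classical target-SINR fixed-point power update; the following is a minimal sketch, assuming illustrative channel gains, noise powers, and QoS (SINR) targets that do not come from the paper:

```python
import numpy as np

# Hedged sketch: Banach-Picard fixed-point iteration that drives each
# user's SINR to its QoS target (the classical target-SINR power update;
# the gains, noise, and targets below are illustrative assumptions).
def picard_power_control(G, noise, gamma_target, iters=200):
    p = np.ones(len(gamma_target))  # initial transmit powers
    for _ in range(iters):
        # SINR of user i: G[i,i]*p[i] / (noise[i] + sum_{j != i} G[i,j]*p[j])
        interference = G @ p - np.diag(G) * p + noise
        sinr = np.diag(G) * p / interference
        p = (gamma_target / sinr) * p  # Picard update toward the targets
    return p

G = np.array([[1.0, 0.1],
              [0.2, 1.0]])          # made-up cross-channel gains
p_star = picard_power_control(G, noise=np.array([0.1, 0.1]),
                              gamma_target=np.array([1.0, 1.5]))
```

When the targets are jointly feasible, this map is a contraction and the iterates converge to the unique power vector meeting every target with equality, which matches the abstract's characterization of the ESE as the solution of a linear system.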
Machine Learning for Heterogeneous Ultra-Dense Networks with Graphical Representations
Heterogeneous ultra-dense networks (H-UDNs) are envisioned as a promising
solution to sustain the explosive mobile traffic demand through network
densification. By placing access points, processors, and storage units as close
as possible to mobile users, H-UDNs bring forth a number of advantages,
including high spectral efficiency, high energy efficiency, and low latency.
Nonetheless, the high density and diversity of network entities in H-UDNs
introduce formidable design challenges in collaborative signal processing and
resource management. This article illustrates the great potential of machine
learning techniques in solving these challenges. In particular, we show how to
utilize graphical representations of H-UDNs to design efficient machine
learning algorithms.
Application of Machine Learning in Wireless Networks: Key Techniques and Open Issues
As a key technique for enabling artificial intelligence, machine learning
(ML) is capable of solving complex problems without explicit programming.
Motivated by its successful applications to many practical tasks like image
recognition, both industry and the research community have advocated the
applications of ML in wireless communication. This paper comprehensively
surveys the recent advances of the applications of ML in wireless
communication, which are classified as: resource management in the MAC layer,
networking and mobility management in the network layer, and localization in
the application layer. The applications in resource management further include
power control, spectrum management, backhaul management, cache management,
beamformer design, and computation resource management, while ML-based
networking focuses on the applications in clustering, base station switching
control, user association, and routing. Moreover, the literature in each aspect
is organized according to the adopted ML techniques. In addition, several
conditions for applying ML to wireless communication are identified to help
readers decide whether to use ML and which kind of ML techniques to use, and
traditional approaches are also summarized together with their performance
comparison with ML-based approaches, based on which the motivations of the
surveyed works to adopt ML are clarified. Given the extensiveness of the research
area, challenges and unresolved issues are presented to facilitate future
studies, where ML-based network slicing, infrastructure updates to support
ML-based paradigms, open data sets and platforms for researchers, theoretical
guidance for ML implementation, and so on are discussed.
Comment: 34 pages, 8 figures
Recent Advances in Cloud Radio Access Networks: System Architectures, Key Techniques, and Open Issues
As a promising paradigm to reduce both capital and operating expenditures,
the cloud radio access network (C-RAN) has been shown to provide high spectral
efficiency and energy efficiency. Motivated by its significant theoretical
performance gains and potential advantages, C-RANs have been advocated by both
the industry and research community. This paper comprehensively surveys the
recent advances of C-RANs, including system architectures, key techniques, and
open issues. The system architectures with different functional splits and the
corresponding characteristics are comprehensively summarized and discussed. The
state-of-the-art key techniques in C-RANs are classified as: fronthaul
compression, large-scale collaborative processing, and channel estimation in
the physical layer; and the radio resource allocation and optimization in the
upper layer. Additionally, given the extensiveness of the research area, open
issues and challenges are presented to spur future investigations, in which the
involvement of edge caching, big data mining, social-aware device-to-device
communication, cognitive radio, software-defined networking, and physical-layer
security for C-RANs is discussed, and the progress of testbed development and
trial tests is introduced as well.
Comment: 27 pages, 11 figures
A Bi-layered Parallel Training Architecture for Large-scale Convolutional Neural Networks
Benefiting from large-scale training datasets and complex training
networks, Convolutional Neural Networks (CNNs) are widely applied in various
fields with high accuracy. However, the training process of CNNs is very
time-consuming, where large amounts of training samples and iterative
operations are required to obtain high-quality weight parameters. In this
paper, we focus on the time-consuming training process of large-scale CNNs and
propose a Bi-layered Parallel Training (BPT-CNN) architecture in distributed
computing environments. BPT-CNN consists of two main components: (a) an
outer-layer parallel training for multiple CNN subnetworks on separate data
subsets, and (b) an inner-layer parallel training for each subnetwork. In the
outer-layer parallelism, we address critical issues of distributed and parallel
computing, including data communication, synchronization, and workload balance.
A heterogeneous-aware Incremental Data Partitioning and Allocation (IDPA)
strategy is proposed, where large-scale training datasets are partitioned and
allocated to the computing nodes in batches according to their computing power.
To minimize the synchronization waiting during the global weight update
process, an Asynchronous Global Weight Update (AGWU) strategy is proposed. In
the inner-layer parallelism, we further accelerate the training process for
each CNN subnetwork on each computer, where the computation steps of the
convolutional layer and the local weight training are parallelized based on
task parallelism.
We introduce task decomposition and scheduling strategies with the objectives
of thread-level load balancing and minimum waiting time for critical paths.
Extensive experimental results indicate that the proposed BPT-CNN effectively
improves the training performance of CNNs while maintaining accuracy.
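The heterogeneity-aware partitioning idea behind IDPA can be sketched in a few lines: give each node a share of the samples proportional to its computing power so that per-batch compute times are roughly balanced. A minimal sketch with made-up node speeds (the paper's actual strategy is incremental and more elaborate):

```python
# Hedged sketch of power-proportional data partitioning in the spirit of
# IDPA: node_speeds are assumed relative computing powers (made-up values).
def partition_by_power(n_samples, node_speeds):
    total = sum(node_speeds)
    shares = [int(n_samples * s / total) for s in node_speeds]
    shares[-1] += n_samples - sum(shares)  # hand the rounding remainder to the last node
    return shares

# a 4x-faster node receives a 4x-larger data subset
sizes = partition_by_power(10_000, node_speeds=[1.0, 2.0, 4.0, 1.0])
```

Balancing per-node compute time this way reduces the synchronization waiting that the AGWU strategy then attacks from the other side.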
Intelligent Wireless Communications Enabled by Cognitive Radio and Machine Learning
The ability to intelligently utilize resources to meet the need of growing
diversity in services and user behavior marks the future of wireless
communication systems. Intelligent wireless communications aims at enabling the
system to perceive and assess the available resources, to autonomously learn to
adapt to the perceived wireless environment, and to reconfigure its operating
mode to maximize the utility of the available resources. The perception
capability and reconfigurability are the essential features of cognitive radio
while modern machine learning techniques project great potential in system
adaptation. In this paper, we discuss the development of the cognitive radio
technology and machine learning techniques and emphasize their roles in
improving spectrum and energy utility of wireless communication systems. We
describe the state-of-the-art of relevant techniques, covering spectrum sensing
and access approaches and powerful machine learning algorithms that enable
spectrum- and energy-efficient communications in dynamic wireless environments.
We also present practical applications of these techniques and identify further
research challenges in cognitive radio and machine learning as applied to the
existing and future wireless communication systems.
Deep Learning Based Power Control for Quality-Driven Wireless Video Transmissions
In this paper, wireless video transmission to multiple users under total
transmission power and minimum required video quality constraints is studied.
In order to provide the desired performance levels to the end-users in
real-time video transmissions while using the energy resources efficiently, we
assume that power control is employed. Due to the presence of interference,
determining the optimal power control is a non-convex problem but can be solved
via a monotonic optimization framework. However, monotonic optimization is an
iterative algorithm and can often entail considerable computational complexity,
making it unsuitable for real-time applications. To address this, we propose
a learning-based approach that treats the input and output of a resource
allocation algorithm as an unknown nonlinear mapping and a deep neural network
(DNN) is employed to learn this mapping. This learned mapping via DNN can
provide the optimal power level quickly for given channel conditions.
Comment: arXiv admin note: text overlap with arXiv:1707.0823
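The idea of learning the solver's input-output mapping can be sketched with a tiny NumPy network regressing a stand-in "optimal" power rule; in a real pipeline the labels would come from the monotonic optimization solver run offline, and the one-dimensional channel model, network size, and learning rate here are all assumptions:

```python
import numpy as np

# Hedged sketch: fit a small one-hidden-layer network to (channel -> power)
# pairs. The target rule p = 1/(1 + h) is a stand-in for the solver's output.
rng = np.random.default_rng(0)
H = rng.uniform(0.1, 2.0, size=(1000, 1))   # channel conditions (toy, 1-D)
P = 1.0 / (1.0 + H)                         # stand-in "optimal" power labels

W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):                       # full-batch gradient descent
    Z = np.tanh(H @ W1 + b1)                # hidden layer
    Y = Z @ W2 + b2                         # predicted power
    err = Y - P
    gW2 = Z.T @ err / len(H); gb2 = err.mean(axis=0)
    dZ = (err @ W2.T) * (1.0 - Z ** 2)      # backprop through tanh
    gW1 = H.T @ dZ / len(H); gb1 = dZ.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(((np.tanh(H @ W1 + b1) @ W2 + b2 - P) ** 2).mean())
```

Once trained, a single forward pass replaces the iterative solver, which is the source of the speedup the abstract claims.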
A Game Theoretic Perspective on Self-organizing Optimization for Cognitive Small Cells
In this article, we investigate self-organizing optimization for cognitive
small cells (CSCs), which have the ability to sense the environment, learn from
historical information, make intelligent decisions, and adjust their
operational parameters. By exploring the inherent features, some fundamental
challenges for self-organizing optimization in CSCs are presented and
discussed. Specifically, the dense and random deployment of CSCs brings about
some new challenges in terms of scalability and adaptation; furthermore, the
uncertain, dynamic and incomplete information constraints also impose some new
challenges in terms of convergence and robustness. To provide better service
to the users and improve resource utilization, four requirements for
self-organizing optimization in CSCs are presented and discussed. Motivated by
the attractive fact that decisions in game-theoretic models are, exactly as in
self-organizing optimization, distributed and autonomous, we establish a
framework of game-theoretic solutions for self-organizing optimization in
CSCs, and propose some featured game models.
Specifically, their basic models are presented, some examples are discussed, and
future research directions are given.
Comment: 8 pages, 8 figures, to appear in IEEE Communications Magazine
Adaptive Task Allocation for Mobile Edge Learning
This paper aims to establish a new optimization paradigm for implementing
realistic distributed learning algorithms, with performance guarantees, on
wireless edge nodes with heterogeneous computing and communication capacities.
We will refer to this new paradigm as 'Mobile Edge Learning (MEL)'. The problem
of dynamic task allocation for MEL is considered in this paper with the aim to
maximize the learning accuracy, while guaranteeing that the total times of data
distribution/aggregation over heterogeneous channels, and local computing
iterations at the heterogeneous nodes, are bounded by a preset duration. The
problem is first formulated as a quadratically-constrained integer linear
problem. Since this problem is NP-hard, we relax it into a non-convex
problem over real variables. We then propose two solutions: one derives
analytical upper bounds on the optimal solution of this relaxed problem using
Lagrangian analysis and KKT conditions, and the other applies
suggest-and-improve starting from equal batch allocation. The merits of these
proposed solutions are exhibited by comparing their performance to both
numerical approaches and the equal task allocation approach.
Comment: 8 pages, 2 figures, submitted to IEEE WCNC Workshop 2019, Morocco
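A deliberately simplified view of the MEL time constraint can be sketched as follows: each node receives the largest batch it can fetch, process, and return within the preset duration. The affine per-node timing model and all numbers are assumptions, and this ignores the coupling that makes the paper's formulation quadratically constrained:

```python
# Hedged sketch: per-node batch sizing under a preset duration T, assuming
# time(node) = overhead + per_sample * batch (an illustrative simplification
# of the paper's quadratically-constrained formulation).
def max_batches(T, overhead, per_sample):
    return [max(0, int((T - o) // c)) for o, c in zip(overhead, per_sample)]

# three heterogeneous edge nodes (made-up link/compute costs)
batches = max_batches(T=10.0, overhead=[1.0, 2.0, 0.5],
                      per_sample=[0.01, 0.02, 0.05])
```

Equal task allocation would instead hand every node the same batch, leaving fast nodes idle while the slowest node dominates the wall-clock time; that gap is what the proposed solutions exploit.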
Asynchronous Incremental Stochastic Dual Descent Algorithm for Network Resource Allocation
Stochastic network optimization problems entail finding resource allocation
policies that are optimum on an average but must be designed in an online
fashion. Such problems are ubiquitous in communication networks, where
resources such as energy and bandwidth are divided among nodes to satisfy
certain long-term objectives. This paper proposes an asynchronous incremental
dual descent resource allocation algorithm that utilizes delayed stochastic
gradients for carrying out its updates. The proposed algorithm is well-suited
to heterogeneous networks as it allows the computationally-challenged or
energy-starved nodes to, at times, postpone the updates. The asymptotic
analysis of the proposed algorithm is carried out, establishing dual
convergence under both constant and diminishing step sizes. It is also shown
that with constant step size, the proposed resource allocation policy is
asymptotically near-optimal. An application involving multi-cell coordinated
beamforming is detailed, demonstrating the usefulness of the proposed
algorithm.
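The effect of delayed updates can be illustrated on a toy strongly concave dual: the multiplier is updated with stochastic gradients that arrive a few iterations late, yet a constant step size still lands it near the maximizer. The dual function, delay, noise level, and step size below are all illustrative assumptions, not the paper's setup:

```python
import numpy as np
from collections import deque

# Hedged sketch: constant-step ascent on a toy dual g(lam) = lam - lam^2/2
# (maximizer lam* = 1), using noisy gradients delayed by `delay` iterations
# to mimic nodes that postpone their updates.
rng = np.random.default_rng(1)

def delayed_dual_ascent(step=0.05, delay=3, iters=2000):
    lam = 0.0
    buf = deque()                                    # stale stochastic gradients
    for _ in range(iters):
        buf.append(1.0 - lam + 0.01 * rng.normal())  # grad of g at current lam
        if len(buf) > delay:
            lam += step * buf.popleft()              # apply a delay-old gradient
            lam = max(lam, 0.0)                      # dual feasibility: lam >= 0
    return lam

lam_T = delayed_dual_ascent()
```

With a constant step size the iterate hovers in a noise- and delay-dependent neighborhood of the maximizer, mirroring the paper's near-optimality claim; a diminishing step size would instead converge exactly.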