Smart Grid Communications: Overview of Research Challenges, Solutions, and Standardization Activities
Optimization of energy consumption in future intelligent energy networks (or
Smart Grids) will be based on grid-integrated near-real-time communications
between various grid elements in generation, transmission, distribution and
loads. This paper discusses some of the challenges and opportunities of
communications research in the areas of smart grid and smart metering. In
particular, we focus on some of the key communications challenges for realizing
interoperable and future-proof smart grid/metering networks, smart grid
security and privacy, and how some of the existing networking technologies can
be applied to energy management. Finally, we also discuss the coordinated
standardization efforts in Europe to harmonize communications standards and
protocols.
Comment: To be published in IEEE Communications Surveys and Tutorials.
Joint energy and rate allocation for successive interference cancellation in the finite blocklength regime
© 2018 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
This work addresses the optimization of the network spectral efficiency (SE) under successive interference cancellation (SIC) at a given blocklength n. We adopt a proof-of-concept satellite scenario where network users can vary their transmission power and select their transmission rate from a set of encoders, for which decoding is characterized by a known packet error rate (PER) function. In the large-system limit, we apply variational calculus (VC) to obtain the user-energy distribution, the assigned per-user rate and the SIC decoding order maximizing the network SE under a sum-power constraint at the SIC input. We analyze two encoder sets: (i) an infinite set of encoders achieving information-theoretic finite blocklength PER results over a continuum of code rates, where the large-n second-order expansion of the maximal channel coding rate is used; (ii) a feasible finite set of encoders. Simulations quantify the performance gap between the two schemes.
Peer reviewed. Postprint (author's final draft).
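The second-order (normal) approximation of the maximal channel coding rate used for encoder set (i) can be sketched numerically. The snippet below is an illustrative evaluation assuming a complex AWGN channel with the standard capacity and dispersion expressions; it is not the paper's exact PER function or satellite channel model.

```python
import math

def q_func(x):
    """Gaussian Q-function Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def fbl_per(snr, n, rate):
    """Normal approximation of the packet error rate at blocklength n
    for a complex AWGN channel:
        eps ~= Q( (n*(C - R) + 0.5*log2(n)) / sqrt(n*V) )
    where C = log2(1 + snr) is the capacity and
    V = (1 - 1/(1+snr)^2) * (log2 e)^2 is the channel dispersion
    (in squared bits per channel use)."""
    c = math.log2(1 + snr)
    v = (1 - 1 / (1 + snr) ** 2) * math.log2(math.e) ** 2
    return q_func((n * (c - rate) + 0.5 * math.log2(n)) / math.sqrt(n * v))
```

For a fixed rate below capacity, the approximation reproduces the expected behavior: the PER shrinks as the blocklength n grows, and rates above capacity yield error rates close to one.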
Channel-aware energy allocation for throughput maximization in massive low-rate multiple access
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
A multiple access (MA) optimization technique for massive low-rate direct-sequence spread spectrum communications is analyzed in this work. A dense network of users transmitting at the same rate to a common central node under channel-aware energy allocation is evaluated. At reception, successive interference cancellation (SIC) aided by channel decoding is adopted. Our contribution focuses on wireless scenarios involving a vast number of users for which the provided user-asymptotic model holds. Variational calculus (VC) is employed to derive the energy allocation function that, via user-power imbalance, maximizes the network spectral efficiency (SE) when perfect channel state information at transmission (CSIT) is available and both average and maximum per-user energy constraints are set. Monte Carlo simulations at chip level of a SIC receiver using a real decoder assess the proposed optimization method.
Peer reviewed. Postprint (published version).
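The VC-derived allocation itself is not reproduced here, but the role of user-power imbalance in SIC can be illustrated with the classic closed-form power profile that gives every user the same SINR at its decoding stage. This is an assumed textbook construction for illustration, not the paper's optimized allocation.

```python
def sic_power_profile(num_users, target_sinr, noise=1.0):
    """Received powers (user 0 decoded first) such that every user sees
    the same SINR 'target_sinr' at its SIC decoding stage.
    Solving P_k = target_sinr * (noise + sum of not-yet-decoded powers)
    backwards from the last-decoded user gives the geometric profile
        P_k = target_sinr * noise * (1 + target_sinr)**(num_users - 1 - k)."""
    return [target_sinr * noise * (1 + target_sinr) ** (num_users - 1 - k)
            for k in range(num_users)]

def stage_sinrs(powers, noise=1.0):
    """SINR seen at each SIC stage, assuming perfect cancellation of
    already-decoded users: only later-decoded users interfere."""
    return [p / (noise + sum(powers[k + 1:]))
            for k, p in enumerate(powers)]
```

The geometric spread of the powers is the "imbalance" that makes successive decoding work: the first-decoded user must overcome all the others as interference, while the last-decoded user faces only noise.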
Performance Modeling of Softwarized Network Services Based on Queuing Theory with Experimental Validation
Network Functions Virtualization facilitates the automation of the scaling of softwarized network services (SNSs).
However, the realization of such a scenario requires a way to
determine the amount of resources needed so that the SNS performance requirements are met for a given workload. This problem is
known as resource dimensioning, and it can be efficiently tackled
by performance modeling. In this vein, this paper describes an
analytical model based on an open queuing network of G/G/m
queues to evaluate the response time of SNSs. We validate our
model experimentally for a virtualized Mobility Management
Entity (vMME) with a three-tiered architecture running on
a testbed that resembles a typical data center virtualization
environment. We detail the description of our experimental
setup and procedures. We solve our resulting queueing network
by using the Queueing Networks Analyzer (QNA), Jackson’s
networks, and Mean Value Analysis methodologies, and compare
them in terms of estimation error. Results show that, for medium
and high workloads, the QNA method achieves less than half of
error compared to the standard techniques. For low workloads,
the three methods produce an error lower than 10%. Finally,
we show the usefulness of the model for performing the dynamic
provisioning of the vMME experimentally.This work has been partially funded by the H2020 research
and innovation project 5G-CLARITY (Grant No. 871428)National research
project 5G-City: TEC2016-76795-C6-4-RSpanish Ministry of
Education, Culture and Sport (FPU Grant 13/04833). We would also like to
thank the reviewers for their valuable feedback to enhance the quality
and contribution of this wor
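The flavor of the two-moment approximations behind the QNA method can be sketched for a single station. The snippet below is an illustrative Allen-Cunneen-style approximation for one G/G/m queue, assuming the squared coefficients of variation of interarrival and service times are known; the paper's full model chains such stations into an open queueing network.

```python
import math

def erlang_c(m, a):
    """Erlang C formula: probability that an arrival must queue in an
    M/M/m system with offered load a = lam/mu (requires a < m)."""
    rho = a / m
    s = sum(a ** k / math.factorial(k) for k in range(m))
    last = a ** m / math.factorial(m)
    return last / ((1 - rho) * s + last)

def gg_m_wait(lam, mu, m, ca2, cs2):
    """Approximate mean waiting time in a G/G/m queue via the
    Allen-Cunneen two-moment approximation:
        Wq ~= ((ca2 + cs2) / 2) * Wq_MMm
    where ca2 and cs2 are the squared coefficients of variation of the
    interarrival and service times, and Wq_MMm is the exact M/M/m value."""
    a = lam / mu
    wq_mmm = erlang_c(m, a) / (m * mu - lam)
    return (ca2 + cs2) / 2 * wq_mmm

def response_time(lam, mu, m, ca2, cs2):
    """Mean response time = mean waiting time + mean service time."""
    return gg_m_wait(lam, mu, m, ca2, cs2) + 1 / mu
```

With ca2 = cs2 = 1 the correction factor is one and the formula reduces to the exact M/M/m result, which is why the approximation degrades gracefully toward the Markovian case.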
Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions
The ever-increasing number of resource-constrained Machine-Type Communication
(MTC) devices is leading to the critical challenge of fulfilling diverse
communication requirements in dynamic and ultra-dense wireless environments.
Among different application scenarios that the upcoming 5G and beyond cellular
networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the
unique technical challenge of supporting a huge number of MTC devices, which is
the main focus of this paper. The related challenges include QoS provisioning,
handling highly dynamic and sporadic MTC traffic, huge signalling overhead and
Radio Access Network (RAN) congestion. In this regard, this paper aims to
identify and analyze the involved technical issues, to review recent advances,
to highlight potential solutions and to propose new research directions. First,
starting with an overview of mMTC features and QoS provisioning issues, we
present the key enablers for mMTC in cellular networks. Along with the
highlights on the inefficiency of the legacy Random Access (RA) procedure in
the mMTC scenario, we then present the key features and channel access
mechanisms in the emerging cellular IoT standards, namely, LTE-M and NB-IoT.
Subsequently, we present a framework for the performance analysis of
transmission scheduling with the QoS support along with the issues involved in
short data packet transmission. Next, we provide a detailed overview of the
existing and emerging solutions towards addressing RAN congestion problem, and
then identify potential advantages, challenges and use cases for the
applications of emerging Machine Learning (ML) techniques in ultra-dense
cellular networks. Out of several ML techniques, we focus on the application of
low-complexity Q-learning approach in the mMTC scenarios. Finally, we discuss
some open research challenges and promising future research directions.
Comment: 37 pages, 8 figures, 7 tables, submitted for a possible future publication in IEEE Communications Surveys and Tutorials.
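The low-complexity Q-learning approach can be illustrated with a toy, stateless example in which a single MTC device learns the least congested random-access slot. The slot collision probabilities below are hypothetical, and the environment is a deliberate simplification of the mMTC RA procedure.

```python
import random

def learn_slot(collision_prob, episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Toy Q-learning sketch: an MTC device learns which random-access
    slot to use. Reward is +1 for a successful transmission and -1 on a
    collision; collision_prob[i] is the (assumed) collision probability
    of slot i. The problem is stateless, so the tabular Q-update
        Q[a] += alpha * (r + gamma * max_a' Q[a'] - Q[a])
    reduces to the bandit form Q[a] += alpha * (r - Q[a])."""
    rng = random.Random(seed)
    q = [0.0] * len(collision_prob)
    for _ in range(episodes):
        # epsilon-greedy slot selection: mostly exploit, sometimes explore
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        r = -1.0 if rng.random() < collision_prob[a] else 1.0
        q[a] += alpha * (r - q[a])
    return q
```

After training, the greedy policy (the argmax of the Q-table) points at the slot with the lowest collision probability, mirroring how distributed Q-learning can steer MTC devices away from congested RA resources without central coordination.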
A survey of machine learning techniques applied to self-organizing cellular networks
In this paper, a survey of the literature of the past fifteen years involving Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. In order for future networks to overcome the current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self-Organizing Network (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each paper in terms of its learning solution, together with illustrative examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, a comparison between the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
- …