Massive MIMO for Internet of Things (IoT) Connectivity
Massive MIMO is considered to be one of the key technologies in the emerging
5G systems, but also a concept applicable to other wireless systems. Exploiting
the large number of degrees of freedom (DoFs) of massive MIMO is essential for
achieving high spectral efficiency, high data rates and extreme spatial
multiplexing of densely distributed users. On the one hand, the benefits of
applying massive MIMO for broadband communication are well known and there has
been a large body of research on designing communication schemes to support
high rates. On the other hand, using massive MIMO for Internet-of-Things (IoT)
is still a developing topic, as IoT connectivity has requirements and
constraints that are significantly different from those of broadband connections. In
this paper we investigate the applicability of massive MIMO to IoT
connectivity. Specifically, we treat the two generic types of IoT connections
envisioned in 5G: massive machine-type communication (mMTC) and ultra-reliable
low-latency communication (URLLC). The paper fills this gap by
identifying the opportunities and challenges in exploiting massive MIMO for IoT
connectivity. We provide insights into the trade-offs that emerge when massive
MIMO is applied to mMTC or URLLC and present a number of suitable communication
schemes. The discussion then turns to network slicing of the
wireless resources and the use of massive MIMO to simultaneously support IoT
connections with very heterogeneous requirements. The main conclusion is that
massive MIMO can bring benefits to the scenarios with IoT connectivity, but it
requires tight integration of the physical-layer techniques with the protocol
design.
Comment: Submitted for publication.
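To make the role of the many spatial degrees of freedom concrete, here is a minimal NumPy sketch (our illustration, not taken from the paper) of how the post-combining SINR of one user grows with the number of base-station antennas M under maximum-ratio combining; the user count, SNR value, and i.i.d. Rayleigh channel model are illustrative assumptions.

```python
# Illustrative sketch: with maximum-ratio combining (MRC) over i.i.d.
# Rayleigh channels, inter-user interference averages out as the antenna
# count M grows, enabling dense spatial multiplexing. All values assumed.
import numpy as np

rng = np.random.default_rng(0)
K, snr = 10, 1.0  # users, per-user transmit SNR (assumed)

for M in (16, 64, 256):          # base-station antennas
    H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    h0 = H[:, 0]                  # channel of user 0
    w = h0 / np.linalg.norm(h0)   # MRC combining vector for user 0
    signal = snr * np.abs(w.conj() @ h0) ** 2
    interference = snr * sum(np.abs(w.conj() @ H[:, k]) ** 2 for k in range(1, K))
    sinr = signal / (interference + 1.0)      # unit-variance noise
    print(f"M={M:4d}: post-MRC SINR ~ {10 * np.log10(sinr):.1f} dB")
```

As M grows, the coherent signal gain outpaces the interference terms, which is the favorable-propagation effect behind the extreme spatial multiplexing discussed above.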
JANUS: an FPGA-based System for High Performance Scientific Computing
This paper describes JANUS, a modular massively parallel and reconfigurable
FPGA-based computing system. Each JANUS module has a computational core and a
host. The computational core is a 4x4 array of FPGA-based processing elements
with nearest-neighbor data links. Processors are also directly connected to an
I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for,
but not limited to, the requirements of a class of hard scientific applications
characterized by regular code structure, unconventional data-manipulation
instructions, and a modest database size. We discuss the architecture of
this configurable machine, and focus on its use on Monte Carlo simulations of
statistical mechanics. On this class of applications JANUS achieves impressive
performance: in some cases one JANUS processing element outperforms high-end
PCs by a factor of ~1000. We also discuss the role of JANUS in other classes of
scientific applications.
Comment: 11 pages, 6 figures. Improved version, largely rewritten, submitted
to Computing in Science & Engineering.
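To illustrate the workload class JANUS targets, the following toy Python sketch (ours, not JANUS code) implements a Metropolis update for a 2D Ising model, the kind of nearest-neighbour statistical-mechanics Monte Carlo kernel that maps naturally onto a grid of FPGA processing elements; the lattice size and temperature are arbitrary choices.

```python
# Toy Metropolis Monte Carlo for a 2D Ising model with periodic boundaries.
import numpy as np

rng = np.random.default_rng(1)
L, beta = 32, 0.44          # lattice side, inverse temperature (assumed)
spins = rng.choice((-1, 1), size=(L, L))

def sweep(spins):
    """One Metropolis sweep over the lattice (random site order)."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Sum of the four nearest neighbours with periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn           # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1               # accept the flip

for _ in range(100):
    sweep(spins)
print("magnetisation per spin:", spins.mean())
```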
Massive MIMO is a Reality -- What is Next? Five Promising Research Directions for Antenna Arrays
Massive MIMO (multiple-input multiple-output) is no longer a "wild" or
"promising" concept for future cellular networks - in 2018 it became a reality.
Base stations (BSs) with 64 fully digital transceiver chains were commercially
deployed in several countries, the key ingredients of Massive MIMO have made it
into the 5G standard, the signal processing methods required to achieve
unprecedented spectral efficiency have been developed, and the limitation due
to pilot contamination has been resolved. Even the development of fully digital
Massive MIMO arrays for mmWave frequencies - once viewed as prohibitively
complicated and costly - is well underway. In a few years, Massive MIMO with
fully digital transceivers will be a mainstream feature at both sub-6 GHz and
mmWave frequencies. In this paper, we explain how the first chapter of the
Massive MIMO research saga has come to an end, while the story has just begun.
The coming wide-scale deployment of BSs with massive antenna arrays opens the
door to a brand new world where spatial processing capabilities are
omnipresent. In addition to mobile broadband services, the antennas can be used
for other communication applications, such as low-power machine-type or
ultra-reliable communications, as well as non-communication applications such
as radar, sensing and positioning. We outline five new Massive MIMO related
research directions: Extremely large aperture arrays, Holographic Massive MIMO,
Six-dimensional positioning, Large-scale MIMO radar, and Intelligent Massive
MIMO.
Comment: 20 pages, 9 figures, submitted to Digital Signal Processing.
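As a concrete taste of the non-communication applications mentioned above, here is a small sketch (assumed setup, not from the paper) of estimating a user's angle of arrival with a uniform linear array via a matched beam scan, the kind of spatial processing that underpins positioning and radar-like sensing with large arrays.

```python
# Conventional beam-scan angle-of-arrival estimation with a uniform linear
# array. Array size, spacing, noise level, and true angle are illustrative.
import numpy as np

rng = np.random.default_rng(4)
M, d = 64, 0.5                     # antennas, spacing in wavelengths
theta_true = np.deg2rad(23.0)      # ground-truth angle (assumed)

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta)) / np.sqrt(M)

# One noisy snapshot from a single source at theta_true.
y = steering(theta_true) + 0.1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# Scan candidate angles and pick the beam that captures the most power.
grid = np.deg2rad(np.linspace(-90, 90, 1801))
power = [np.abs(steering(th).conj() @ y) ** 2 for th in grid]
print(f"estimated AoA: {np.rad2deg(grid[np.argmax(power)]):.1f} deg (true: 23.0)")
```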
TrIMS: Transparent and Isolated Model Sharing for Low Latency Deep Learning Inference in Function as a Service Environments
Deep neural networks (DNNs) have become core computation components within
low latency Function as a Service (FaaS) prediction pipelines, including image
recognition, object detection, natural language processing, speech synthesis,
and personalized recommendation pipelines. Cloud computing, as the de-facto
backbone of modern computing infrastructure for both enterprise and consumer
applications, has to be able to handle user-defined pipelines of diverse DNN
inference workloads while maintaining isolation and latency guarantees, and
minimizing resource waste. The current solution for guaranteeing isolation
within FaaS is suboptimal -- suffering from "cold start" latency. A major cause
of such inefficiency is the need to move large amounts of model data within and
across servers. We propose TrIMS as a novel solution to address these issues.
Our proposed solution consists of a persistent model store across the GPU, CPU,
local storage, and cloud storage hierarchy, an efficient resource management
layer that provides isolation, and a succinct set of application APIs and
container technologies for easy and transparent integration with FaaS, Deep
Learning (DL) frameworks, and user code. We demonstrate our solution by
interfacing TrIMS with the Apache MXNet framework, achieving up to 24x
speedup in latency for image classification models and up to 210x speedup for
large models. We achieve up to 8x system throughput improvement.
Comment: In Proceedings CLOUD 201
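The core idea can be sketched in a few lines. The following is a minimal illustration (not the TrIMS API; all names are hypothetical) of a shared, persistent model store that FaaS handlers consult instead of reloading model data on every cold start.

```python
# Hypothetical sketch of a shared model store: handlers pay the expensive
# load once, then reuse the in-memory instance across invocations.
import threading

_store: dict[str, object] = {}
_lock = threading.Lock()

def load_model_from_storage(model_id: str) -> object:
    # Placeholder for the expensive path: cloud storage -> local disk ->
    # CPU memory -> GPU memory, as in the TrIMS hierarchy.
    return f"weights-of-{model_id}"

def get_model(model_id: str) -> object:
    """Return a shared model instance, loading it at most once."""
    with _lock:
        if model_id not in _store:          # cold path, paid once
            _store[model_id] = load_model_from_storage(model_id)
        return _store[model_id]             # warm path for later invocations

def handler(request):
    model = get_model(request["model"])     # transparent to user code
    return f"prediction using {model}"

print(handler({"model": "resnet50"}))
```

TrIMS generalizes this idea across a GPU/CPU/local-storage/cloud hierarchy and adds the isolation guarantees a multi-tenant FaaS platform needs.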
Distributed Implementation of eXtended Reality Technologies over 5G Networks
The revolution of Extended Reality (XR) has already started and is rapidly
expanding as technology advances. Announcements such as Meta's Metaverse have
boosted the general interest in XR technologies, producing novel use cases. With
the advent of the fifth generation of cellular networks (5G), XR technologies are
expected to improve significantly by offloading heavy computational processes from
the XR Head Mounted Display (HMD) to an edge server. XR offloading can rapidly
boost XR technologies by considerably reducing the burden on the XR hardware,
while improving the overall user experience by enabling smoother graphics and more
realistic interactions. Overall, the combination of XR and 5G has the potential to
revolutionize the way we interact with technology and experience the world around
us.
However, XR offloading is a complex task that requires state-of-the-art tools
and solutions, as well as an advanced wireless network that can meet the demanding
throughput, latency, and reliability requirements of XR. The definition of these
requirements strongly depends on the use case and particular XR offloading implementations.
Therefore, it is crucial to perform a thorough Key Performance
Indicator (KPI) analysis to ensure a successful design of any XR offloading solution.
Additionally, distributed XR implementations can be intricate systems with
multiple processes running on different devices or virtual instances. All these agents
must be carefully handled and synchronized to meet XR real-time requirements and
ensure the expected user experience, guaranteeing a low processing overhead. XR
offloading requires a carefully designed architecture which complies with the required
KPIs while efficiently synchronizing and handling multiple heterogeneous devices.
Offloading XR has become an essential use case for 5G and beyond 5G technologies.
However, testing distributed XR implementations requires access to advanced
5G deployments that are often unavailable to most XR application developers. Conversely,
the development of 5G technologies requires constant feedback from potential
applications and use cases. Unfortunately, most 5G providers, engineers, or
researchers lack access to cutting-edge XR hardware or applications, which can hinder
the fast implementation and improvement of 5G's most advanced features. Both
technology fields require ongoing input and continuous development from each other
to fully realize their potential. As a result, XR and 5G researchers and developers
must have access to the necessary tools and knowledge to ensure the rapid and
satisfactory development of both technology fields.
In this thesis, we focus on these challenges, providing knowledge, tools, and solutions towards the implementation of advanced offloading technologies, opening the
door to more immersive, comfortable and accessible XR technologies. Our contributions
to the field of XR offloading include a detailed study and description of the
necessary network throughput and latency KPIs for XR offloading, an architecture
for low latency XR offloading, and our full end-to-end XR offloading implementation
ready for a commercial XR HMD. In addition, we present a set of tools which can
facilitate the joint development of 5G networks and XR offloading technologies: our
5G RAN real-time emulator and a multi-scenario XR IP traffic dataset.
Firstly, in this thesis, we thoroughly examine and explain the KPIs that are
required to achieve the expected Quality of Experience (QoE) and enhanced immersiveness
in XR offloading solutions. Our analysis focuses on individual XR
algorithms, rather than potential use cases. Additionally, we provide an initial
description of feasible 5G deployments that could fulfill some of the proposed KPIs
for different offloading scenarios.
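A back-of-the-envelope example shows what such a KPI analysis looks like in practice. Every number in the sketch below is an assumed, illustrative value, not a figure from the thesis.

```python
# Toy motion-to-photon latency budget for split rendering, plus the
# downlink throughput implied by the video leg. All values illustrative.
budget_ms = {
    "pose capture + encode": 2.0,
    "uplink (5G radio + transport)": 4.0,
    "edge rendering": 6.0,
    "frame encode": 3.0,
    "downlink": 4.0,
    "decode + display": 3.0,
}
total = sum(budget_ms.values())
print(f"motion-to-photon ~ {total:.0f} ms "
      f"({'within' if total <= 20 else 'over'} a 20 ms target)")

# Throughput for the downlink video leg (illustrative values).
fps, bits_per_frame = 72, 8e6      # e.g. ~1 MB per encoded eye-buffer pair
print(f"required downlink ~ {fps * bits_per_frame / 1e6:.0f} Mbps")
```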
We also present our low latency multi-modal XR offloading architecture, which
has already been tested on a commercial XR device and advanced 5G deployments,
such as millimeter-wave (mmW) technologies. In addition, we describe our full
end-to-end complex XR offloading system, which relies on our offloading architecture to
provide low latency communication between a commercial XR device and a server
running a Machine Learning (ML) algorithm. To the best of our knowledge, this is
one of the first successful XR offloading implementations for complex ML algorithms
in a commercial device.
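The round trip at the heart of such a system can be sketched compactly. The following minimal Python example (ours; real deployments use optimised transports, and the byte format here is illustrative) shows a client shipping a frame to an edge server that runs a stand-in ML routine and returns the result.

```python
# Minimal offloading round trip over local TCP: frame out, result back.
import socket, struct, threading

HOST, PORT = "127.0.0.1", 50007   # assumed local endpoint

def run_inference(frame: bytes) -> bytes:
    # Stand-in for the ML algorithm executed at the edge.
    return struct.pack("!I", len(frame))        # e.g. a detection count

def edge_server(srv: socket.socket):
    conn, _ = srv.accept()
    with conn, srv:
        size = struct.unpack("!I", conn.recv(4))[0]
        frame = b""
        while len(frame) < size:                # reassemble the frame
            frame += conn.recv(size - len(frame))
        conn.sendall(run_inference(frame))

srv = socket.create_server((HOST, PORT))        # listen before the client connects
threading.Thread(target=edge_server, args=(srv,), daemon=True).start()

# Client side: ship one dummy frame, block on the result.
frame = bytes(640 * 480)
with socket.create_connection((HOST, PORT)) as c:
    c.sendall(struct.pack("!I", len(frame)) + frame)
    print("edge result:", struct.unpack("!I", c.recv(4))[0])
```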
With the goal of providing XR developers and researchers access to complex
5G deployments and accelerating the development of future XR technologies, we
present FikoRE, our 5G RAN real-time emulator. FikoRE has been specifically
designed not only to model the network with sufficient accuracy but also to support
the emulation of a massive number of users and actual IP throughput. As FikoRE
can handle actual IP traffic above 1 Gbps, it can directly be used to test distributed
XR solutions. As we describe in the thesis, its emulation capabilities make FikoRE
a potential candidate to become a reference testbed for distributed XR developers
and researchers.
Finally, we use our XR offloading tools to generate an XR IP traffic dataset
which can accelerate the development of 5G technologies by providing a straightforward
way to test novel 5G solutions using realistic XR data. This dataset is
generated for two relevant XR offloading scenarios: split rendering, in which the rendering
step is moved to an edge server, and heavy ML algorithm offloading. In addition,
we derive the corresponding IP traffic models from the captured data, which can be
used to generate realistic XR IP traffic. We also present the validation experiments
performed on the derived models and their results.
This work has received funding from the European Union (EU) Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie ETN TeamUp5G, grant agreement No. 813391.
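To illustrate what using such traffic models looks like, here is a toy generator (distribution choices and parameters are our illustrative assumptions, not the fitted models from the dataset) that emits roughly periodic per-frame bursts with variable frame sizes, as split-rendering traffic does.

```python
# Toy XR downlink traffic generator: one burst per rendered frame, with
# timing jitter and gamma-distributed frame sizes. Parameters assumed.
import numpy as np

rng = np.random.default_rng(2)
fps, mean_kb, jitter_ms = 72, 120.0, 0.5    # assumed model parameters

def generate_frames(n):
    """Return (departure_time_s, frame_bytes) for n video frames."""
    period = 1.0 / fps
    times = np.arange(n) * period + rng.normal(0, jitter_ms / 1e3, n)
    sizes = rng.gamma(shape=4.0, scale=mean_kb / 4.0, size=n) * 1e3
    return list(zip(times, sizes.astype(int)))

for t, size in generate_frames(5):
    print(f"t={t*1e3:7.2f} ms  frame={size/1e3:6.1f} kB")
```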
A Very Brief Introduction to Machine Learning With Applications to Communication Systems
Given the unprecedented availability of data and computing resources, there
is widespread renewed interest in applying data-driven machine learning methods
to problems for which the development of conventional engineering solutions is
challenged by modelling or algorithmic deficiencies. This tutorial-style paper
starts by addressing the questions of why and when such techniques can be
useful. It then provides a high-level introduction to the basics of supervised
and unsupervised learning. For both supervised and unsupervised learning,
exemplifying applications to communication networks are discussed by
distinguishing tasks carried out at the edge and at the cloud segments of the
network at different layers of the protocol stack.
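To make the supervised-learning setting concrete, here is a minimal sketch (ours, not from the paper) that learns a BPSK demodulator from labelled noisy samples using logistic regression trained by gradient descent; the channel and training parameters are illustrative.

```python
# Learn a demodulator from data instead of deriving it from a channel
# model: logistic regression on labelled noisy BPSK samples.
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 1000, 0.6
bits = rng.integers(0, 2, n)                          # training labels
x = (2 * bits - 1) + sigma * rng.standard_normal(n)   # received samples

# Gradient descent on the cross-entropy of p(bit=1 | x) = sigmoid(w*x + b).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - bits) * x)                 # dL/dw
    b -= lr * np.mean(p - bits)                       # dL/db

# Evaluate on fresh samples: the learned boundary should sit near 0.
bits_te = rng.integers(0, 2, n)
x_te = (2 * bits_te - 1) + sigma * rng.standard_normal(n)
pred = (w * x_te + b) > 0
print(f"learned decision boundary ~ {-b / w:.3f} (optimal: 0.0)")
print(f"test bit error rate: {np.mean(pred != bits_te):.3f}")
```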