Remote Control of a Robot Rover Combining 5G, AI, and GPU Image Processing at the Edge
This paper has been presented at the 2020 Optical Fiber Communications Conference and Exhibition (OFC). The demo shows the effectiveness of low-latency remote control based on 5G and image processing at the edge, exploiting artificial intelligence and GPUs to make a robot rover slalom between posts. This work has been partially supported by TIM under the Cooperation Agreement with Scuola Superiore Sant’Anna for the 5G MISE Trial in Bari and Matera 2018-2022 and by the EU Commission through the 5GROWTH project (grant agreement no. 856709).
From 5G to 6G: Revolutionizing Satellite Networks through TRANTOR Foundation
5G technology will drastically change the way satellite internet providers
deliver services by offering higher data speeds, massive network capacity,
reduced latency, improved reliability and increased availability. A
standardised 5G ecosystem will enable adapting 5G to satellite needs. The
EU-funded TRANTOR project will seek to develop novel and secure satellite
network management solutions that allow scaling up heterogeneous satellite
traffic demands and capacities in a cost-effective and highly dynamic way.
Researchers also target the development of flexible 6G non-terrestrial access
architectures. The focus will be on the design of a multi-orbit and multi-band
antenna for satellite user equipment (UE), as well as the development of gNodeB
(gNB) and UE 5G non-terrestrial network equipment to support
multi-connectivity
Hierarchical Network Data Analytics Framework for B5G Network Automation: Design and Implementation
5G introduced modularized network functions (NFs) to support emerging
services in a more flexible and elastic manner. To mitigate the complexity in
such modularized NF management, automated network operation and management are
indispensable, and thus the 3rd generation partnership project (3GPP) has
introduced the network data analytics function (NWDAF). However, a conventional
NWDAF must conduct both inference and training tasks, which makes it difficult
to provide analytics results to NFs in a timely manner as the number of
analytics requests grows. In this article, we propose a
hierarchical network data analytics framework (H-NDAF) where inference tasks
are distributed to multiple leaf NWDAFs and training tasks are conducted at the
root NWDAF. Extensive simulation results using open-source software (i.e.,
free5GC) demonstrate that H-NDAF can provide sufficiently accurate analytics
and faster analytics provision time than the conventional NWDAF.
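The division of labor described above (training at the root NWDAF, inference at the leaf NWDAFs) can be sketched in a few lines. The class names and the toy least-squares model are assumptions for illustration, not the paper's H-NDAF implementation:

```python
# Toy sketch of the hierarchical split: the root NWDAF trains a model and
# pushes the learned parameters to leaf NWDAFs, which only run inference.

class RootNWDAF:
    """Trains a simple least-squares load predictor y = a*x + b."""
    def train(self, xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
        return a, my - a * mx  # parameters distributed to the leaves

class LeafNWDAF:
    """Serves analytics requests using parameters pushed by the root."""
    def __init__(self, params):
        self.a, self.b = params
    def infer(self, x):
        return self.a * x + self.b

root = RootNWDAF()
params = root.train(xs=[1, 2, 3, 4], ys=[2.0, 4.0, 6.0, 8.0])
leaves = [LeafNWDAF(params) for _ in range(3)]  # inference fan-out
print(round(leaves[0].infer(5), 1))  # -> 10.0
```

Because the leaves never train, each analytics request costs only a cheap forward evaluation, which is the source of the faster provision time the paper reports.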
Near Real-Time Distributed State Estimation via AI/ML-Empowered 5G Networks
Fifth-Generation (5G) networks have the potential to accelerate the power system
transition to a flexible, softwarized, data-driven, and intelligent grid. With
their evolving support for Machine Learning (ML)/Artificial Intelligence (AI)
functions, 5G networks are expected to enable novel data-centric Smart Grid
(SG) services. In this paper, we explore how data-driven SG services could be
integrated with ML/AI-enabled 5G networks in a symbiotic relationship. We focus
on the State Estimation (SE) function as a key element of the energy management
system, addressing two main questions. Firstly, in a tutorial fashion, we
present an overview on how distributed SE can be integrated with the elements
of the 5G core network and radio access network architecture. Secondly, we
present and compare two powerful distributed SE methods based on: i) graphical
models and belief propagation, and ii) graph neural networks. We discuss their
performance and capability to support near real-time distributed SE via a 5G
network, taking communication delays into account.
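The belief-propagation approach can be illustrated with its simplest building block: fusing Gaussian measurement messages by precision weighting. This is a toy one-variable sketch under assumed meter readings, not either of the paper's full SE solvers:

```python
# Minimal sketch of the Gaussian belief-propagation idea behind
# distributed SE: each measurement contributes a Gaussian message
# (mean, variance) about a state variable, and the node's belief is
# their precision-weighted fusion.

def fuse(messages):
    """Combine Gaussian messages (mean, variance) into a belief."""
    precision = sum(1.0 / var for _, var in messages)
    mean = sum(m / var for m, var in messages) / precision
    return mean, 1.0 / precision

# Two remote meters report the same bus voltage with equal accuracy.
belief = fuse([(1.02, 0.01), (0.98, 0.01)])
print(belief)  # fused estimate near (1.0, 0.005)
```

Because each fusion needs only local messages, the computation maps naturally onto a distributed deployment where measurements arrive over a 5G network.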
5G Radio Access above 6 GHz
Designing and developing a millimetre-wave(mmWave) based mobile Radio Access
Technology (RAT) in the 6-100 GHz frequency range is a fundamental component in
the standardization of the new 5G radio interface, recently kicked off by 3GPP.
This component, herein called the new mmWave RAT, will not only enable extreme
mobile broadband (eMBB) services, but also support UHD/3D streaming, offer
immersive applications and ultra-responsive cloud services to provide an
outstanding Quality of Experience (QoE) to the mobile users. The main objective
of this paper is to develop the network architectural elements and functions
that will enable tight integration of mmWave technology into the overall 5G
radio access network (RAN). A broad range of topics addressing mobile
architecture and network functionalities will be covered, starting with the
architectural facets of network slicing, multi-connectivity and cell
clustering, and moving to more functional elements of initial access, mobility, radio
resource management (RRM) and self-backhauling. The intention of the concepts
presented here is to lay the foundation for future studies towards the first
commercial implementation of the mmWave RAT above 6 GHz.
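One reason topics such as cell clustering and self-backhauling become central above 6 GHz is the frequency term in free-space path loss. A quick check with the standard Friis free-space formula (the distances and carrier frequencies below are illustrative, not figures from the paper):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Extra loss at 100 m when moving from 3.5 GHz to 28 GHz
print(round(fspl_db(100, 28e9) - fspl_db(100, 3.5e9), 1))  # -> 18.1 dB
```

Roughly 18 dB of additional free-space loss at the same distance is what pushes mmWave designs towards dense cell clusters and highly directional links.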
Mining tourists’ movement patterns in a city
Although tourists generate a large amount of data (known as “big data”) when they visit cities, little is known about their spatial behavior. One of the most significant issues that has recently gained attention is mobile phone usage and user behavior tracking. A spatial and temporal data visualization approach was established with the purpose of finding tourists’ footprints. This work provides a platform for combining multiple data sources into one and transforming information into knowledge. Using Python, we created a method to build visualization dashboards that provide insights about tourists’ movements and concentrations in a city using information from mobile operators. Weather and major events, for instance, have an impact on the movements of tourists. The outputs from this work provide useful information for tourism professionals to understand tourists’ preferences and improve the visitors’ experience. Management authorities may also use these outputs to increase security based on tourists’ concentration and movements. A case study in Lisbon with four months of data is presented, but the proposed approach can be replicated in other smart cities based on data availability. Results from this case study demonstrate how tourists tend to gather around a set of parishes at specific times of day during the months under study, as well as how unusual circumstances, namely international events, impact their overall spatial behavior. This work was supported by EEA Grants Blue Growth Programme (Call #5), Project PT-INNOVATION-0069 – Fish2Fork.
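A minimal sketch of the kind of aggregation such a dashboard sits on: counting tourists per parish and hour from operator records. The record schema and parish names are assumptions for illustration; the paper's actual pipeline is not shown:

```python
# Aggregate mobile-operator records into tourist counts per parish and
# hour of day, the kind of table a concentration dashboard visualises.

from collections import Counter

records = [  # (user_id, parish, hour) from a mobile operator feed
    ("u1", "Belém", 10), ("u2", "Belém", 10),
    ("u3", "Alfama", 10), ("u1", "Alfama", 18),
]

concentration = Counter((parish, hour) for _, parish, hour in records)
print(concentration[("Belém", 10)])  # -> 2
```

Tables like this, joined with weather or event calendars, are what let the dashboards relate tourist concentration to external circumstances.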
Leveraging the edge and cloud for V2X-based real-time object detection in autonomous driving
Environmental perception is a key element of autonomous driving because the information received from the perception module influences core driving decisions. An outstanding challenge in real-time perception for autonomous driving lies in finding the best trade-off between detection quality and latency. Major constraints on both computation and power must be taken into account for real-time perception in autonomous vehicles. Larger detection models tend to produce the best results but are also slower at runtime. Since the most accurate detectors may not run in real time locally, we investigate the possibility of offloading computation to edge and cloud platforms, which are less resource-constrained. We create a synthetic dataset to train object detection models and evaluate different offloading strategies. We measure inference and processing times for object detection on real hardware, and we rely on a network simulation framework to estimate data transmission latency. Our study compares different trade-offs between prediction quality and end-to-end delay. Following the existing literature, we aim to perform object detection at a rate of 20 Hz. Since sending raw frames over the network implies additional transmission delays, we also explore the use of JPEG and H.265 compression at varying qualities and measure their impact on prediction. We show that models with adequate compression can be run in real time on the edge/cloud while outperforming local detection performance.
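The 20 Hz target translates to a 50 ms end-to-end budget, which makes the compression trade-off concrete. A back-of-the-envelope sketch with assumed frame sizes, link rate, and inference time (illustrative numbers, not the paper's measurements):

```python
# End-to-end offloading delay = compression + uplink transfer + remote
# inference, checked against a 20 Hz (50 ms) perception budget.

def end_to_end_ms(frame_bytes, compress_ms, uplink_mbps, infer_ms):
    transfer_ms = frame_bytes * 8 / (uplink_mbps * 1e6) * 1e3
    return compress_ms + transfer_ms + infer_ms

# Raw 1080p RGB frame (1920*1080*3 bytes) vs an assumed ~300 kB JPEG,
# over an assumed 100 Mbit/s uplink with 20 ms remote inference.
raw  = end_to_end_ms(frame_bytes=6_220_800, compress_ms=0, uplink_mbps=100, infer_ms=20)
jpeg = end_to_end_ms(frame_bytes=300_000,  compress_ms=5, uplink_mbps=100, infer_ms=20)
print(raw > 50, jpeg < 50)  # raw frames blow the budget; compressed fit
```

Even under these rough assumptions, raw frames alone consume many times the budget in transfer delay, which is why the study weighs compression quality against its effect on prediction accuracy.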