5 research outputs found

    Usage of Network Simulators in Machine-Learning-Assisted 5G/6G Networks

    Without any doubt, Machine Learning (ML) will be an important driver of future communications due to its foreseen performance when applied to complex problems. However, the application of ML to networking systems raises concerns among network operators and other stakeholders, especially regarding trustworthiness and reliability. In this paper, we examine the role of network simulators in bridging the gap between ML and communications systems. In particular, we present an architectural integration of simulators into ML-aware networks for training, testing, and validating ML models before they are applied to the operative network. Moreover, we provide insights on the main challenges resulting from this integration and discuss how they can be overcome. Finally, we illustrate the integration of network simulators into ML-assisted communications through a proof-of-concept testbed implementation of a residential Wi-Fi network.
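
    The train-then-validate loop this abstract describes can be pictured with a short sketch: a toy Wi-Fi simulator generates training data for a throughput model, and the same simulator gates deployment with a validation check. The channel model, names, and acceptance threshold below are illustrative assumptions, not the paper's actual testbed.

        # Minimal sketch of simulator-in-the-loop ML validation. The toy
        # residential Wi-Fi model and the threshold are assumptions made
        # for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_wifi(load_mbps, n_stations):
            """Toy Wi-Fi model: throughput saturates with offered load,
            degrades with contention; Gaussian noise mimics the channel."""
            capacity = 100.0 * (1.0 - 0.05 * (n_stations - 1))
            noise = rng.normal(0.0, 2.0, size=np.shape(load_mbps))
            return np.clip(np.minimum(load_mbps, capacity) + noise, 0.0, None)

        # 1) Training: sample the simulator instead of the live network.
        loads = rng.uniform(5.0, 120.0, size=500)
        stations = rng.integers(1, 8, size=500)
        y = simulate_wifi(loads, stations)
        X = np.column_stack([loads, stations, np.ones_like(loads)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear throughput predictor

        # 2) Validation: test on fresh simulated scenarios before the model
        #    ever touches the operative network.
        loads_v = rng.uniform(5.0, 120.0, size=200)
        stations_v = rng.integers(1, 8, size=200)
        y_v = simulate_wifi(loads_v, stations_v)
        pred = np.column_stack([loads_v, stations_v, np.ones_like(loads_v)]) @ coef
        mae = np.abs(pred - y_v).mean()
        print(f"validation MAE: {mae:.2f} Mb/s")
        if mae < 10.0:  # illustrative acceptance threshold
            print("model passes simulator validation; eligible for deployment")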

    6G Radio Testbeds: Requirements, Trends, and Approaches

    The proof of the pudding is in the eating: that is why 6G testbeds are essential in the progress towards the next generation of wireless networks. Theoretical research towards 6G wireless networks proposes advanced technologies to serve new applications and drastically improve the energy performance of the network. Testbeds are indispensable for validating these new technologies under more realistic conditions. This paper clarifies the requirements for 6G radio testbeds, reveals trends, and introduces approaches towards their development.

    Recent Advances in Machine Learning for Network Automation in the O-RAN

    The evolution of network technologies has witnessed a paradigm shift toward open and intelligent networks, with the Open Radio Access Network (O-RAN) architecture emerging as a promising solution. O-RAN introduces disaggregation and virtualization, enabling network operators to deploy multi-vendor and interoperable solutions. However, managing and automating the complex O-RAN ecosystem presents numerous challenges. To address this, machine learning (ML) techniques have gained considerable attention in recent years, offering promising avenues for network automation in O-RAN. This paper presents a comprehensive survey of current research efforts on network automation using ML in O-RAN. We begin by providing an overview of the O-RAN architecture and its key components, highlighting the need for automation. Subsequently, we delve into O-RAN support for ML techniques. The survey then explores challenges in network automation using ML within the O-RAN environment, followed by existing research studies on the application of ML algorithms and frameworks for network automation in O-RAN. The survey further discusses research opportunities by identifying important aspects where ML techniques can bring benefits.

    Performance Assessment of ITU-T-Compliant Machine Learning Enhancements for 5G RAN Network Slicing

    Network slicing is a technique introduced by 3GPP to enable multi-tenant operation in 5G systems. However, supporting slicing at the air interface requires not only efficient optimization algorithms operating in real time but also their tight integration into the 5G control plane. In this paper, we first present a priority-based mechanism providing defined performance isolation among slices competing for resources. Then, to speed up the resource arbitration process, we propose and compare several supervised machine learning (ML) techniques. We show how to embed the proposed approach into the ITU-T standardized ML architecture. The proposed ML enhancement is evaluated under realistic traffic conditions with respect to the performance criteria defined by GSMA, while explicitly accounting for 5G millimeter-wave channel conditions. Our results show that ML techniques are able to provide suitable approximations of the resource allocation process, ensuring slice performance isolation, efficient resource use, and fairness. Among the considered algorithms, polynomial regression shows the best results, outperforming the exact solution algorithm by 5–6 orders of magnitude in execution time, and outperforming both neural network and random forest algorithms in accuracy (by 20–40%), sensitivity to workload variations, and required training sample size. Finally, ML algorithms are generally prone to service level agreement (SLA) violations under high load and time-varying channel conditions, implying that an SLA enforcement system is needed in ITU-T's 5G ML framework.
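
    The core trick above, replacing a slow exact optimizer with a polynomial regression trained offline on its outputs, can be sketched in a few lines. The toy allocator, the single demand feature, and the polynomial degree below are illustrative assumptions rather than the paper's formulation.

        # Minimal sketch: learn a fast polynomial surrogate for an exact
        # (and, in practice, slow) slice resource allocator. The allocator
        # here is a toy stand-in, not the paper's optimization algorithm.
        import numpy as np

        def exact_allocation(demand_mbps, capacity=100.0):
            """Toy 'exact' allocator: a guaranteed floor for a priority
            slice plus a diminishing-returns share of the spare capacity."""
            floor = 0.2 * capacity
            spare = capacity - floor
            return floor + spare * demand_mbps / (demand_mbps + 50.0)

        rng = np.random.default_rng(1)
        demand = rng.uniform(0.0, 200.0, size=1000)   # offered load, Mb/s
        alloc = exact_allocation(demand)              # offline labels

        # Train a degree-3 polynomial regression as a surrogate.
        coeffs = np.polyfit(demand, alloc, deg=3)

        # Online, evaluating the polynomial replaces re-running the optimizer.
        test = np.array([10.0, 80.0, 150.0])
        print("exact   :", np.round(exact_allocation(test), 2))
        print("poly fit:", np.round(np.polyval(coeffs, test), 2))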

    Trends in Intelligent Communication Systems: Review of Standards, Major Research Projects, and Identification of Research Gaps

    The increasing complexity of communication systems, following the advent of heterogeneous technologies, services and use cases with diverse technical requirements, provides a strong case for the use of artificial intelligence (AI) and data-driven machine learning (ML) techniques in studying, designing and operating emerging communication networks. At the same time, access to, and the ability to process, large volumes of network data can unleash the full potential of a network orchestrated by AI/ML to optimise the usage of available resources while keeping both CapEx and OpEx low. Driven by these new opportunities, ongoing standardisation activities indicate strong interest in reaping the benefits of incorporating AI and ML techniques in communication networks. For instance, 3GPP has introduced the network data analytics function (NWDAF) at the 5G core network for the control and management of network slices, and for providing predictive analytics, or statistics about past events, to other network functions, leveraging AI/ML and big data analytics. Likewise, at the radio access network (RAN), the O-RAN Alliance has already defined an architecture to infuse intelligence into the RAN, where closed-loop control models are classified based on their operational timescale, i.e., real-time, near-real-time, and non-real-time RAN intelligent control (RIC). Unlike existing related surveys, in this review article we group the major research studies on the design of model-aided ML-based transceivers following the breakdown suggested by the O-RAN Alliance. At the core and edge networks, we review the ongoing standardisation activities in intelligent networking and the existing works cognisant of the architecture recommended by 3GPP and ETSI. We also review existing trends in ML algorithms running on low-power micro-controller units, known as TinyML. We conclude with a summary of recent and currently funded projects on intelligent communications and networking. This review reveals that the telecommunication industry and standardisation bodies have mostly focused on non-real-time RIC, data analytics at the core and the edge, AI-based network slicing, and vendor interoperability issues, whereas most recent academic research has focused on real-time RIC. In addition, intelligent radio resource management and aspects of intelligent control of the propagation channel using intelligent reflecting surfaces have captured the attention of ongoing research projects.
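
    The RIC timescale breakdown referenced in this abstract is commonly cited as non-real-time (above 1 s), near-real-time (10 ms to 1 s), and real-time (below 10 ms). A minimal sketch, with illustrative names, that maps a control loop's latency budget onto this classification:

        # Sketch of the commonly cited O-RAN control-loop timescales;
        # the function and its labels are illustrative, not an O-RAN API.
        def ric_loop_class(latency_budget_ms: float) -> str:
            """Map a control loop's latency budget to where it is hosted."""
            if latency_budget_ms < 10.0:
                return "real-time (below the RICs, in the RAN nodes themselves)"
            if latency_budget_ms < 1000.0:
                return "near-real-time RIC (xApps)"
            return "non-real-time RIC (rApps)"

        for budget in (1.0, 100.0, 60000.0):
            print(f"{budget:>8.1f} ms -> {ric_loop_class(budget)}")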