137 research outputs found
DESiRED -- Dynamic, Enhanced, and Smart iRED: A P4-AQM with Deep Reinforcement Learning and In-band Network Telemetry
Active Queue Management (AQM) is a mechanism employed to alleviate transient
congestion in network device buffers, such as routers and switches. Traditional
AQM algorithms use fixed thresholds, like target delay or queue occupancy, to
compute random packet drop probabilities. A very small target delay can
increase packet losses and reduce link utilization, whereas a large target
delay lowers the drop probability at the cost of higher queueing delays. Due to dynamic
network traffic characteristics, where traffic fluctuations can lead to
significant queue variations, maintaining a fixed threshold AQM may not suit
all applications. Consequently, we explore the question: \textit{What is the
ideal threshold (target delay) for AQMs?} In this work, we introduce DESiRED
(Dynamic, Enhanced, and Smart iRED), a P4-based AQM that leverages precise
network feedback from In-band Network Telemetry (INT) to feed a Deep
Reinforcement Learning (DRL) model. This model dynamically adjusts the target
delay based on rewards that maximize application Quality of Service (QoS). We
evaluate DESiRED in a realistic P4-based test environment running an MPEG-DASH
service. Our findings demonstrate up to a 90x reduction in video stalls and a
42x increase in high-resolution video playback quality when the target delay is
adjusted dynamically by DESiRED.
Comment: Preprint (Computer Networks, under review)
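The core idea of the abstract, a learning agent that nudges the AQM target delay based on a QoS-derived reward, can be sketched minimally as below. This is an illustrative tabular Q-learning loop under assumed states, actions, and reward shape; the paper's actual DRL model, INT features, and reward function are not reproduced here.

```python
import random

# Illustrative sketch only: a tabular Q-learning agent that decides whether
# to decrease, hold, or increase an AQM target delay (in ms). The discrete
# delay levels, action set, and reward below are assumptions for exposition,
# not DESiRED's actual DRL design.

ACTIONS = (-1.0, 0.0, +1.0)  # decrease / hold / increase target delay

class TargetDelayAgent:
    def __init__(self, delays=(5, 10, 15, 20), alpha=0.1, gamma=0.9, eps=0.1):
        # Q-table indexed by current target delay, one value per action.
        self.q = {d: [0.0, 0.0, 0.0] for d in delays}
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, delay):
        if random.random() < self.eps:            # epsilon-greedy exploration
            return random.randrange(len(ACTIONS))
        qs = self.q[delay]
        return qs.index(max(qs))                  # greedy action

    def step(self, delay, action_idx, reward, next_delay):
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(self.q[next_delay])
        q = self.q[delay][action_idx]
        self.q[delay][action_idx] = q + self.alpha * (
            reward + self.gamma * best_next - q
        )

def qos_reward(stalls, resolution_level):
    # Hypothetical QoS reward: penalize stalls, favor higher resolution.
    return resolution_level - 10.0 * stalls
```

In a deployment, the INT-derived telemetry (queue depth, hop latency) would form the state and the MPEG-DASH playback metrics would feed the reward; here both are reduced to scalars to keep the loop readable.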
Experimental Demonstration of Partially Disaggregated Optical Network Control Using the Physical Layer Digital Twin
Optical communications and networking are fast becoming the solution to support ever-increasing data traffic across all segments of the network, expanding from core/metro networks to 5G/6G front-hauling. Therefore, optical networks need to evolve towards efficient exploitation of the infrastructure by overcoming the closed and aggregated paradigm, enabling apparatus sharing together with the slicing and separation of the optical data plane from the optical control. In addition to the advantages in terms of efficiency and cost reduction, this evolution will increase network reliability, also allowing for a fine trade-off between robustness and maximum capacity exploitation. In this work, an optical network architecture is presented based on the physical layer digital twin of the optical transport, used within a multi-layer hierarchical control operated by an intent-based network operating system. An experimental proof of concept is performed on a three-node network including up to 1000 km of optical transmission, open re-configurable optical add & drop multiplexers (ROADMs), and whitebox transponders hosting pluggable multirate transceivers. The proposed solution is based on GNPy as the optical physical layer digital twin and ONOS as the intent-based network operating system. The reliability of the optical control decoupled from the data plane is experimentally demonstrated by exploiting GNPy as an open lightpath computation engine, together with software optical amplifier models derived from component characterization. Besides lightpath deployment exploiting modulation format evaluation for a generic traffic request, the architecture's reliability is tested by mimicking the use case of automatic failure recovery from a fiber cut.
Stats 101 in P4: Towards In-Switch Anomaly Detection
Data plane programmability is greatly improving network monitoring. Most new proposals rely on controllers pulling information (e.g., sketches or packets) from the data plane. This architecture is not a good fit for tasks requiring high reactivity, such as failure recovery, attack mitigation, and so on. Focusing on these tasks, we argue for a different architecture, where the data plane autonomously detects anomalies and pushes alerts to the controller. As a first step, we demonstrate that statistical checks can be implemented in P4 by revisiting the definition and online computation of statistical measures. We collect our techniques in a P4 library and showcase how they enable in-switch anomaly detection.
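The kind of online statistical check the abstract alludes to can be illustrated with Welford's algorithm, which maintains a running mean and variance in constant state and flags samples that deviate by more than k standard deviations. A P4 implementation would necessarily rework this with fixed-point arithmetic in registers; this floating-point Python form is only a conceptual sketch, not the paper's library.

```python
import math

# Conceptual sketch of an online anomaly check (Welford's algorithm).
# The P4 version described in the paper must avoid floating point and
# division; this readable form shows the statistics being maintained.

class OnlineStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        # Population variance of the samples seen so far.
        return self.m2 / self.n if self.n > 1 else 0.0

    def is_anomaly(self, x, k=3.0):
        # Flag x if it lies more than k standard deviations from the mean.
        sd = math.sqrt(self.variance)
        return sd > 0 and abs(x - self.mean) > k * sd
```

Per-flow state like this maps naturally onto P4 registers updated on every packet, with the alert (here, the boolean) translated into a digest or cloned packet toward the controller.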
Advancing SDN from OpenFlow to P4: a survey
Software-defined Networking (SDN) marked the beginning of a new era in the field of networking by decoupling the control and forwarding processes through the OpenFlow protocol. The next-generation SDN is defined by open interfaces and full programmability of the data plane. P4 is a domain-specific language that fulfills these requirements and has seen wide adoption in recent years by both academia and industry. This work is an extensive survey of the P4 language, covering domains of application, a detailed overview of the language, and future directions.
AI-native Interconnect Framework for Integration of Large Language Model Technologies in 6G Systems
The evolution towards 6G architecture promises a transformative shift in
communication networks, with artificial intelligence (AI) playing a pivotal
role. This paper delves into the seamless integration of Large Language
Models (LLMs) and Generative Pre-trained Transformers (GPTs) within 6G systems.
Their ability to grasp intent, strategize, and execute intricate commands will
be pivotal in redefining network functionalities and interactions. Central to
this is the AI Interconnect framework, intricately woven to facilitate
AI-centric operations within the network. Building on the continuously evolving
current state-of-the-art, we present a new architectural perspective for the
upcoming generation of mobile networks. Here, LLMs and GPTs will
collaboratively take center stage alongside traditional pre-generative AI and
machine learning (ML) algorithms. This union promises a novel confluence of the
old and new, melding tried-and-tested methods with transformative AI
technologies. Along with providing a conceptual overview of this evolution, we
delve into the nuances of practical applications arising from such an
integration. Through this paper, we envisage a symbiotic integration where AI
becomes the cornerstone of the next-generation communication paradigm, offering
insights into the structural and functional facets of an AI-native 6G network.
SDN-enabled Resource Provisioning Framework for Geo-Distributed Streaming Analytics
Geographically distributed (geo-distributed) datacenters for stream data processing typically comprise multiple edge and core datacenters connected through a Wide-Area Network (WAN), with a master node responsible for allocating tasks to worker nodes. Since WAN links significantly impact the performance of distributed task execution, the existing task assignment approach is unsuitable for distributed stream data processing with low latency and high throughput demands. In this paper, we propose SAFA, a resource provisioning framework using the Software-Defined Networking (SDN) concept, with an SDN controller responsible for monitoring the WAN, selecting an appropriate subset of worker nodes, and assigning tasks to the designated worker nodes. We implemented the data plane of the framework in P4 and the control plane components in Python. We tested the performance of the proposed system on Apache Spark, Apache Storm, and Apache Flink using the Yahoo! streaming benchmark on a set of custom topologies. The results of the experiments validate that the proposed approach is viable for distributed stream processing and confirm that it can improve the processing time of incoming events by at least 1.64× over current stream processing systems.
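The worker-selection step such a controller performs can be sketched as a scoring problem over controller-measured WAN metrics: pick the k workers with the best combined latency/bandwidth score. The metric field names, weights, and scoring formula below are assumptions for illustration, not SAFA's actual policy.

```python
# Hypothetical sketch of SDN-controller worker selection: rank candidate
# worker nodes by a weighted latency/bandwidth score and keep the best k.
# Field names and the linear score are illustrative assumptions.

def select_workers(metrics, k, w_lat=1.0, w_bw=1.0):
    """metrics: {worker: {"latency_ms": float, "bandwidth_mbps": float}}

    Returns the k workers with the lowest score (lower is better):
    low latency and high bandwidth both reduce the score.
    """
    def score(worker):
        m = metrics[worker]
        return w_lat * m["latency_ms"] - w_bw * m["bandwidth_mbps"] / 100.0

    return sorted(metrics, key=score)[:k]
```

In practice the controller would refresh these metrics continuously from WAN probes before each (re)assignment, so a stale or failed link demotes the workers behind it automatically.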
View on 5G Architecture: Version 2.0
The 5G Architecture Working Group, as part of the 5GPPP Initiative, is looking at capturing novel trends and key technological enablers for the realization of the 5G architecture. It also aims to present, in a harmonized way, the architectural concepts developed in various projects and initiatives (not limited to 5GPPP projects only) so as to provide a consolidated view on the technical directions for the architecture design in the 5G era. The first version of the white paper was released in July 2016; it captured novel trends and key technological enablers for the realization of the 5G architecture vision, along with harmonized architectural concepts from 5GPPP Phase 1 projects and initiatives. Capitalizing on the architectural vision and framework set by the first version of the white paper, this Version 2.0 presents the latest findings and analyses, with a particular focus on concept evaluations, and accordingly presents the consolidated overall architecture design.
Resource Management in Softwarized Networks
Communication networks are undergoing a major transformation through softwarization, which is changing the way networks are designed, operated, and managed. Network Softwarization is an emerging paradigm where software controls the treatment of network flows, adds value to these flows through software processing, and orchestrates the on-demand creation of customized networks to meet the needs of customer applications. Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Network Virtualization are three cornerstones of the overall transformation trend toward network softwarization. Together, they are empowering network operators to accelerate time-to-market for new services, diversify the supply chain for networking hardware and software, and bring the benefits of agility, economies of scale, and the flexibility of cloud computing to networks. The enhanced programmability enabled by softwarization creates unique opportunities for adapting network resources in support of applications and users with diverse requirements. To effectively leverage the flexibility provided by softwarization and realize its full potential, it is of paramount importance to devise proper mechanisms for allocating resources to different applications and users and for monitoring their usage over time.
The overarching goal of this dissertation is to advance the state of the art in how resources are allocated and monitored and to build the foundation for effective resource management in softwarized networks. Specifically, we address four resource management challenges in three key enablers of network softwarization, namely SDN, NFV, and network virtualization. First, we challenge the current practice of realizing network services with monolithic software network functions and propose a microservice-based disaggregated architecture enabling finer-grained resource allocation and scaling. Then, we devise optimal solutions and scalable heuristics for establishing virtual networks with guaranteed bandwidth and guaranteed survivability against failures on multi-layer IP-over-Optical and single-layer IP substrate networks, respectively. Finally, we propose adaptive sampling mechanisms for balancing the overhead of softwarized network monitoring against the accuracy of the network view constructed from monitoring data.
Understanding O-RAN: Architecture, Interfaces, Algorithms, Security, and Research Challenges
The Open Radio Access Network (RAN) and its embodiment through the O-RAN
Alliance specifications are poised to revolutionize the telecom ecosystem.
O-RAN promotes virtualized RANs where disaggregated components are connected
via open interfaces and optimized by intelligent controllers. The result is a
new paradigm for the RAN design, deployment, and operations: O-RAN networks can
be built with multi-vendor, interoperable components, and can be
programmatically optimized through a centralized abstraction layer and
data-driven closed-loop control. Therefore, understanding O-RAN, its
architecture, its interfaces, and workflows is key for researchers and
practitioners in the wireless community. In this article, we present the first
detailed tutorial on O-RAN. We also discuss the main research challenges and
review early research results. We provide a deep dive into the O-RAN
specifications, describing its architecture, design principles, and the O-RAN
interfaces. We then describe how the O-RAN RAN Intelligent Controllers (RICs)
can be used to effectively control and manage 3GPP-defined RANs. Based on this,
we discuss innovations and challenges of O-RAN networks, including the
Artificial Intelligence (AI) and Machine Learning (ML) workflows that the
architecture and interfaces enable, as well as security and standardization issues.
Finally, we review experimental research platforms that can be used to design
and test O-RAN networks, along with recent research results, and we outline
future directions for O-RAN development.
Comment: 33 pages, 16 figures, 3 tables. Submitted for publication to the IEEE