21 research outputs found

    DESiRED -- Dynamic, Enhanced, and Smart iRED: A P4-AQM with Deep Reinforcement Learning and In-band Network Telemetry

    Active Queue Management (AQM) is a mechanism employed to alleviate transient congestion in the buffers of network devices such as routers and switches. Traditional AQM algorithms use fixed thresholds, such as target delay or queue occupancy, to compute random packet-drop probabilities. A very small target delay can increase packet losses and reduce link utilization, while a large target delay may increase queueing delays while lowering the drop probability. Because network traffic is dynamic, with fluctuations that can lead to significant queue variations, a fixed-threshold AQM may not suit all applications. Consequently, we explore the question: what is the ideal threshold (target delay) for AQMs? In this work, we introduce DESiRED (Dynamic, Enhanced, and Smart iRED), a P4-based AQM that leverages precise network feedback from In-band Network Telemetry (INT) to feed a Deep Reinforcement Learning (DRL) model. This model dynamically adjusts the target delay based on rewards that maximize application Quality of Service (QoS). We evaluate DESiRED in a realistic P4-based test environment running an MPEG-DASH service. Our findings demonstrate up to a 90x reduction in video stalls and a 42x increase in high-resolution video playback quality when the target delay is adjusted dynamically by DESiRED. (Preprint; under review at Computer Networks.)
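    The abstract's point about fixed thresholds can be illustrated with a minimal RED-style sketch (not the DESiRED implementation; the function name, the linear ramp, and all constants are illustrative assumptions):

```python
def drop_probability(queue_delay_ms, target_delay_ms, max_p=0.1):
    """RED-style drop probability: zero below the target delay, then a
    linear ramp in the relative excess delay, capped at max_p."""
    if queue_delay_ms <= target_delay_ms:
        return 0.0
    excess = (queue_delay_ms - target_delay_ms) / target_delay_ms
    return min(max_p, max_p * excess)

# A small fixed target already drops packets under mild load...
print(drop_probability(8.0, 5.0))   # ~0.06
# ...while a large fixed target tolerates long queues before dropping anything.
print(drop_probability(8.0, 20.0))  # 0.0
```

    A DRL agent in the DESiRED spirit would tune `target_delay_ms` at run time from INT feedback instead of leaving it fixed.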

    Survey of context provisioning middleware

    In the scope of ubiquitous computing, one of the key issues is the awareness of context, which covers diverse aspects of the user's situation: their activities, physical surroundings, location, emotions and social relations, as well as device and network characteristics and their interaction with each other. This contextual knowledge is typically acquired from physical, virtual, or logical sensors. To overcome problems of heterogeneity and hide complexity, a significant number of middleware approaches have been proposed for systematic and coherent access to manifold context parameters. These frameworks deal particularly with context representation, context management, and reasoning, i.e., deriving abstract knowledge from raw sensor data. This article surveys related work in these three categories as well as the required evaluation principles. © 2009-2012 IEEE

    A backhaul adaptation scheme for IAB networks using deep reinforcement learning with recursive discrete choice model

    Challenges such as backhaul availability and backhaul scalability continue to outweigh the progress of integrated access and backhaul (IAB) networks, which enable multi-hop backhauling in 5G networks. These challenges, which are predominant in poor wireless channel conditions such as foliage, may lead to high energy consumption and packet losses. It is essential that the IAB topology enable efficient traffic flow by minimizing congestion and increasing robustness to backhaul failure. This article proposes a backhaul adaptation scheme that is controlled by the load on the access side of the network. The routing problem is formulated as a constrained Markov decision process and solved using a dual decomposition approach, owing to the existence of explicit and implicit constraints. A deep reinforcement learning (DRL) strategy that takes advantage of a recursive discrete choice model (RDCM) is proposed and implemented in a knowledge-defined networking architecture of an IAB network. Incorporating the RDCM is shown to improve robustness to backhaul failure in IAB networks. The performance of the proposed algorithm is compared to that of conventional DRL (i.e., without RDCM) and generative model-based learning (GMBL) algorithms. The simulation results reveal risk perception, introduced through biases on alternative choices, and show that the proposed algorithm provides better throughput and delay performance than the two baselines.
    The Sentech Chair in Broadband Wireless Multimedia Communications (BWMC) and the University of Pretoria.
    https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639
    Electrical, Electronic and Computer Engineering
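    The dual decomposition mentioned in the abstract typically relaxes the constrained MDP via a Lagrange multiplier that is updated by dual ascent. The toy loop below sketches only that mechanism (the rollout is a random placeholder, not the paper's IAB environment; all names and constants are illustrative):

```python
import random

def train_constrained(episodes=200, constraint_budget=1.0, lr_lambda=0.05):
    """Toy dual-ascent loop for a constrained MDP: the agent would maximize
    reward - lam * cost, while lam is raised whenever the episode cost
    exceeds the budget, pushing the policy back toward feasibility."""
    lam = 0.0
    random.seed(0)
    for _ in range(episodes):
        # Placeholder rollout: in practice a DRL policy interacts with the network.
        episode_reward = random.uniform(0.0, 10.0)
        episode_cost = random.uniform(0.0, 2.0)  # e.g., backhaul congestion
        # Primal step would update the policy on the Lagrangian below.
        lagrangian = episode_reward - lam * episode_cost
        # Dual ascent: increase lam when the constraint is violated.
        lam = max(0.0, lam + lr_lambda * (episode_cost - constraint_budget))
    return lam

print(train_constrained())  # converged multiplier for the toy dynamics
```

    In the actual scheme, the explicit and implicit routing constraints would each get their own multiplier.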

    Network computations in artificial intelligence


    Expanding Horizons: A Comprehensive Exploration of Robustness, Performance and Programmable Data Plane Routing in Next Generation Data Centers

    In recent years, data center networks have garnered significant attention, rapidly scaling up to meet the demands of the explosive nature of current applications. One notable facet driving this expansion is the pivotal role these networks play in advancing artificial intelligence (AI) and machine learning (ML). As applications like natural language processing models (e.g., OpenAI's GPT series) and image recognition algorithms for autonomous vehicles continue to evolve, data center networks provide the essential computational infrastructure required for the training and deployment of these sophisticated AI and ML models. Lately, significant efforts have been dedicated to enhancing the performance of data center networks, particularly in comparison to the often performance-lagging standard Clos-based topologies like Fat-Tree. One approach to performance improvement is to use alternative data center network topologies. Consequently, researchers explored topologies based on Expander Graphs (EGs), such as Jellyfish, Xpander, and STRAT, exploiting the sparse and incremental nature of these new topologies. This thesis focuses on investigating the STructured Re-Arranged Topology (STRAT) as a potentially robust and efficient design for next-generation data centers. To benchmark STRAT's performance against the well-known Expander data centers, a robustness framework based on geometric and connectivity-based metrics, along with throughput metrics, is adopted. The findings reveal that STRAT outperforms well-known Expander architectures, positioning it as a promising alternative that surpasses the performance of present Clos-based topologies. These observations are validated through extensive flow- and packet-level simulations, demonstrating STRAT's superior performance compared to other Expanders.
Moreover, the evolution of modern network technology has witnessed a transformative shift with the advent of programmable switches, marking a paradigmatic leap in the realm of data center networks. The programmable data plane of ASIC switches, a cornerstone of this technological advancement, has emerged as a pivotal catalyst for innovation and efficiency in data center networks. Its versatility becomes evident in diverse applications, such as employing ML for network classification, enabling dynamic routing mechanisms that operate at line rate, and implementing In-band Network Telemetry (INT) for enhanced network visibility at a granular level. These applications underscore the power of the programmable data plane, transcending traditional limitations and ushering in a new era of adaptability and performance in data center networks. Building upon this foundation, this thesis introduces a novel routing algorithm that is prototyped on the BMv2 virtual programmable switch, leveraging the expressive capabilities of the P4 programming language. This implementation serves as a tangible demonstration of the intersection between routing strategies, Expander-based topologies, and the programmable data plane. Notably, the novel routing algorithm shows significant performance improvements over the traditional Equal-Cost Multi-Path (ECMP) algorithm, affirming its potential as a promising solution for harnessing the abundant path diversity inherent in Expander-based next-generation data center topologies.
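    For context, the ECMP baseline the thesis compares against selects among equal-cost next hops by hashing the flow's 5-tuple, so all packets of a flow follow one path. A minimal sketch (field names and the CRC32 hash are illustrative choices, not the thesis's P4 code):

```python
import zlib

def ecmp_next_hop(flow, next_hops):
    """Classic ECMP: hash the 5-tuple so every packet of a flow takes the
    same path, while distinct flows spread across equal-cost next hops."""
    key = "|".join(str(flow[f]) for f in
                   ("src_ip", "dst_ip", "src_port", "dst_port", "proto"))
    return next_hops[zlib.crc32(key.encode()) % len(next_hops)]

flow = {"src_ip": "10.0.0.1", "dst_ip": "10.0.1.9",
        "src_port": 42001, "dst_port": 443, "proto": 6}
paths = ["spine1", "spine2", "spine3", "spine4"]
# Per-flow consistency: the same flow always maps to the same next hop.
assert ecmp_next_hop(flow, paths) == ecmp_next_hop(flow, paths)
```

    ECMP only load-balances across shortest paths of equal cost, which is precisely the limitation that routing schemes for Expander topologies, with their many non-minimal paths, aim to overcome.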

    Kommunikation und Bildverarbeitung in der Automation

    This open-access proceedings volume collects the best contributions of the 11th annual colloquium "Kommunikation in der Automation" (KommA 2020) and the 7th annual colloquium "Bildverarbeitung in der Automation" (BVAu 2020). The colloquia took place on 28 and 29 October 2020 and were organized for the first time as a digital web event hosted at the Innovation Campus Lemgo. The latest research results presented there, from the fields of industrial communication technology and image processing, extend the current state of research and technology. Illustrative application examples from the automation domain, included in the contributions, place the results in a direct application context.

    URLLC for 5G and Beyond: Requirements, Enabling Incumbent Technologies and Network Intelligence

    The tactile internet (TI) is believed to be the prospective advancement of the internet of things (IoT), comprising human-to-machine and machine-to-machine communication. TI focuses on enabling real-time interactive techniques with a portfolio of engineering, social, and commercial use cases. For this purpose, the prospective 5th generation (5G) technology focuses on achieving ultra-reliable low latency communication (URLLC) services. TI applications demand extraordinarily high reliability and low latency. The 3rd generation partnership project (3GPP) defines that URLLC is expected to provide 99.99% reliability for a single transmission of a 32-byte packet with a latency of less than one millisecond. 3GPP proposes to include an adjustable orthogonal frequency division multiplexing (OFDM) technique, called 5G new radio (5G NR), as a new radio access technology (RAT). With the emergence of this novel physical-layer RAT, the need arises to design prospective next-generation technologies, especially with a focus on network intelligence. In such situations, machine learning (ML) techniques are expected to be essential in designing intelligent network resource allocation protocols that meet 5G NR URLLC requirements. Therefore, in this survey, we present the possibility of using the federated reinforcement learning (FRL) technique, one of the ML techniques, for 5G NR URLLC requirements and summarize the corresponding achievements for URLLC. We provide a comprehensive discussion of MAC layer channel access mechanisms that enable URLLC in 5G NR for TI. In addition, we identify seven critical future use cases of FRL as potential enablers for URLLC in 5G NR.
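    The 99.99% / 1 ms target quoted above implies a tight retransmission budget. A back-of-the-envelope sketch (the 0.5 ms per-HARQ-attempt figure is an illustrative assumption, not a 3GPP value):

```python
def residual_error(per_attempt_bler, attempts):
    """Probability that all `attempts` independent (re)transmissions fail."""
    return per_attempt_bler ** attempts

# URLLC target: residual failure <= 1e-4 for a 32-byte packet within 1 ms.
# If each HARQ attempt (including feedback) took ~0.5 ms, only 2 attempts fit.
attempts = 2
for bler in (0.1, 0.01):
    print(bler, residual_error(bler, attempts))
# With 2 attempts, a per-attempt BLER of 1% meets the target (0.01**2 = 1e-4),
# while the 10% BLER typical of throughput-optimized links does not (1e-2).
```

    This is why URLLC designs trade spectral efficiency for very low per-attempt error rates rather than relying on many retransmissions.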

    Applications

    Volume 3 describes how resource-aware machine learning methods and techniques are used to successfully solve real-world problems. The book provides numerous specific application examples: in health and medicine, for risk modelling, diagnosis, and treatment selection for diseases; in electronics, steel production, and milling, for quality control during manufacturing processes; and in traffic and logistics, for smart cities and mobile communications.

    An Empirical Analysis of Cyber Deception Systems
