64 research outputs found

    Acknowledge-Based Non-Congestion Estimation: An Indirect Queue Management Approach for Concurrent TCP and UDP-Like Flows

    This paper presents a new indirect Active Queue Management (indirect AQM) technique called Acknowledge-based Non-Congestion Estimation (ANCE), which manages queues end-to-end along a network instead of using the Explicit Congestion Notification (ECN) bit or dropping packets in the queue. ANCE's performance was compared with the Random Early Detection (RED), Controlled Delay (CoDel), Proportional Integral controller Enhanced (PIE), Explicit Non-Congestion Notification (ENCN), TCP-Jersey and E-DCTCP schemes in a daisy-chain and in a dumbbell scenario, with TCP flows and a UDP-like Networked Control System (NCS) flow sharing the same network topology. In addition, the paper presents a method for modeling, simulation and verification of communication systems and NCS using the UPPAAL software tool, in which all network components (channels, routers, transmitters, receivers, plants, and controllers) were modeled as timed automata, facilitating formal verification of the whole modeled system. Simulations and statistical verification show that, despite using fewer resources (since ANCE does not need the ECN bit), ANCE performs very close to ENCN and outperforms Drop Tail, RED, CoDel, PIE and E-DCTCP in terms of Integral Time Absolute Error (ITAE) for the NCS and fairness for the TCP flows. ANCE also attains better throughput for TCP flows than RED, PIE, TCP-Jersey and E-DCTCP.
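
    As background for the ITAE metric used above to score the NCS control loops: ITAE is the time-weighted integral of the absolute control error, ITAE = \int_0^T t\,|e(t)|\,dt, so errors that persist late in the response are penalized more heavily. A minimal sketch of how it can be approximated from sampled error values follows; the function and variable names are illustrative, not taken from the paper.

        # Rectangle-rule approximation of ITAE = integral over [0, T] of t * |e(t)| dt,
        # from error samples taken every dt seconds (names are illustrative).
        def itae(errors, dt):
            return sum(k * dt * abs(e) * dt for k, e in enumerate(errors))

        # Example: a control error that decays over three samples spaced 10 ms apart.
        print(itae([0.5, 0.3, 0.1], dt=0.01))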

    Evaluation of Active Queue Management (AQM) Models in Low Latency Networks

    Abstract: Low latency networks require modifications to current queue management in order to avoid large queuing delays. Today, TCP's congestion control maximizes the throughput of the link, benefiting large flows. However, node buffers may become completely full, producing long delays and packet drops, a situation known as the bufferbloat problem. For today's time-sensitive applications, such as VoIP, online gaming or financial trading, these queueing delays degrade the quality of service in ways directly noticeable to users. This work studies the different alternatives for active queue management (AQM) on node links, optimizing the latency of small flows and therefore providing better quality for low latency networks in congestion scenarios. The AQM models are simulated in a dumbbell topology with ns-3, which shows the latency values (measured in RTT) obtained under different network conditions and installed algorithms. In detail, the RED, CoDel, PIE, and FQ_CoDel algorithms are studied, along with a modification of the TCP sender's congestion control, the Alternative Backoff with ECN (ABE) algorithm. The simulations show the best queueing times for the implementation that combines FQ_CoDel with ABE, which maximizes throughput while reducing packet latency. Thus, modifying queue management with FQ_CoDel and implementing ABE at the sender solves the bufferbloat problem, offering the quality required by low latency networks.
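
    As a point of reference for the ABE modification studied above: the idea (standardized in RFC 8511) is that the sender backs off less aggressively on an ECN mark than on packet loss, because an AQM marks early, while the queue is still shallow. The sketch below shows only that sender-side reaction; the decrease factors are typical values from the ABE literature and are an assumption, not figures reported by this work.

        # Illustrative sender-side reaction under Alternative Backoff with ECN (ABE).
        BETA_LOSS = 0.5   # conventional multiplicative decrease on packet loss
        BETA_ECN = 0.8    # gentler decrease on an ECN mark (typical ABE value; assumption)

        def on_congestion_signal(cwnd, ecn_marked):
            """Return the new congestion window after a loss or an ECN congestion mark."""
            return cwnd * (BETA_ECN if ecn_marked else BETA_LOSS)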

    Dual Queue Coupled AQM: Deployable Very Low Queuing Delay for All

    On the Internet, sub-millisecond queueing delay and capacity-seeking have traditionally been considered mutually exclusive. We introduce a service that offers both: Low Latency Low Loss Scalable throughput (L4S). When tested under a wide range of conditions emulated on a testbed using real residential broadband equipment, queue delay remained both low (median 100-300 µs) and consistent (99th percentile below 2 ms even under highly dynamic workloads), without compromising other metrics (zero congestion loss and close to full utilization). L4S exploits the properties of 'Scalable' congestion controls (e.g., DCTCP, TCP Prague). Flows using such congestion controls are, however, very aggressive, which poses a deployment challenge because L4S has to coexist with so-called 'Classic' flows (e.g., Reno, CUBIC). This paper introduces an architectural solution: 'Dual Queue Coupled Active Queue Management', which enables balance between Scalable and Classic flows. It counterbalances the more aggressive response of Scalable flows with more aggressive marking, without having to inspect flow identifiers. The Dual Queue structure has been implemented as a Linux queuing discipline. It acts like a semi-permeable membrane, isolating the latency of Scalable and Classic traffic, but coupling their capacity into a single bandwidth pool. This paper justifies the design and implementation choices, and visualizes a representative selection of hundreds of thousands of experiment runs to test our claims. Comment: Preprint. 17 pp, 12 figures, 60 references. Submitted to IEEE/ACM Transactions on Networking.
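
    A minimal sketch of the coupling idea described above, following the public DualQ Coupled AQM design (later specified in RFC 9332); the constant and names here are illustrative. A single base probability, driven by the Classic queue, feeds both queues: Classic traffic is dropped or marked with the squared probability (matching the 1/sqrt(p) rate response of Reno/CUBIC), while Scalable (L4S) traffic is ECN-marked linearly through a coupling factor, which is what balances the two flow classes without inspecting flow identifiers.

        # Illustrative coupling between the Classic and L4S queues of a DualQ Coupled AQM.
        # p_base would come from a controller observing the Classic queue; K is the coupling gain.
        K_COUPLING = 2.0  # commonly cited coupling factor (assumption)

        def classic_drop_prob(p_base):
            # Classic (Reno/CUBIC-like) flows respond to sqrt(p), so square the base probability.
            return p_base ** 2

        def l4s_mark_prob(p_base):
            # Scalable flows respond linearly to marking, so mark proportionally (capped at 1).
            return min(1.0, K_COUPLING * p_base)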

    iRED: A disaggregated P4-AQM fully implemented in programmable data plane hardware

    Routers employ queues to temporarily hold packets when the scheduler cannot immediately process them. Congestion occurs when the arrival rate of packets exceeds the processing capacity, leading to increased queueing delay. Over time, Active Queue Management (AQM) strategies have focused on directly draining packets from queues to alleviate congestion and reduce queuing delay. On Programmable Data Plane (PDP) hardware, AQMs traditionally reside in the Egress pipeline because queue delay information is available there. We argue that this approach wastes the router's resources because a dropped packet has already consumed the entire pipeline of the device. In this work, we propose ingress Random Early Detection (iRED), a more efficient approach that addresses the Egress drop problem. iRED is a disaggregated P4-AQM fully implemented in programmable data plane hardware that also supports the Low Latency, Low Loss, and Scalable Throughput (L4S) framework, saving device pipeline resources by dropping packets in the Ingress block. To evaluate iRED, we conducted three experiments using a Tofino2 programmable switch: i) an in-depth analysis of state-of-the-art AQMs on PDP hardware, using 12 different network configurations varying in bandwidth, Round-Trip Time (RTT), and Maximum Transmission Unit (MTU); the results demonstrate that iRED can significantly reduce router resource consumption, with up to a 10x reduction in memory usage, 12x fewer processing cycles, and 8x less power consumption for the same traffic load; ii) a performance evaluation of the L4S framework, showing that iRED achieves fairness in bandwidth usage between different types of traffic (Classic and Scalable); iii) a comprehensive analysis of QoS in a real setup of Dynamic Adaptive Streaming over HTTP (DASH) technology, where iRED demonstrated up to a 2.34x improvement in FPS and a 4.77x increase in video player buffer fill. Comment: Preprint (TNSM under review).
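
    Since iRED builds on Random Early Detection, the textbook RED drop decision (Floyd and Jacobson) is sketched below as a reminder of what is being moved into the Ingress block; the parameters are illustrative and this is a simplified version (the inter-drop count correction is omitted), not the authors' P4 implementation.

        import random

        # Simplified textbook RED: drop probability grows with the averaged queue length.
        W_Q = 0.002              # EWMA weight for the average queue length (illustrative)
        MIN_TH, MAX_TH = 5, 15   # thresholds in packets (illustrative)
        MAX_P = 0.1              # drop probability reached at MAX_TH (illustrative)

        avg_q = 0.0

        def red_should_drop(current_queue_len):
            """Update the averaged queue length and decide whether to drop the arriving packet."""
            global avg_q
            avg_q = (1 - W_Q) * avg_q + W_Q * current_queue_len
            if avg_q < MIN_TH:
                return False
            if avg_q >= MAX_TH:
                return True
            drop_p = MAX_P * (avg_q - MIN_TH) / (MAX_TH - MIN_TH)
            return random.random() < drop_p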

    Managing Network Delay for Browser Multiplayer Games

    Latency is one of the key performance elements affecting the quality of experience (QoE) in computer games. Latency in the context of games can be defined as the time between the user input and the result on the screen. For the QoE to be satisfactory, the game needs to be able to react fast enough to player input. In networked multiplayer games, latency is composed of network delay and local delays. Major sources of network delay are queuing delay and head-of-line (HOL) blocking delay, and network delay in the Internet can even be on the order of seconds. In this thesis we discuss what feasible networking solutions exist for browser multiplayer games. We conduct a literature study to analyze the Differentiated Services architecture, some salient Active Queue Management (AQM) algorithms (RED, PIE, CoDel and FQ-CoDel), the Explicit Congestion Notification (ECN) concept and network protocols for the web browser (WebSocket, QUIC and WebRTC). RED, PIE and CoDel as single-queue implementations would be sub-optimal for providing low latency to game traffic. FQ-CoDel is a multi-queue AQM that provides flow separation, which prevents queue-building bulk transfers from notably hampering latency-sensitive flows. The WebRTC DataChannel seems promising for games since it can be used for sending arbitrary application data and it can avoid HOL blocking. None of the network protocols, however, provide completely satisfactory support for the transport needs of multiplayer games: WebRTC is not designed for client-server connections, QUIC is not designed for traffic patterns typical of multiplayer games, and WebSocket would require parallel connections to mitigate the effects of HOL blocking.
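
    To illustrate the flow-separation property credited to FQ-CoDel above: arriving packets are hashed on their flow 5-tuple into per-flow sub-queues that are served in round-robin fashion, so a queue-building bulk transfer only delays itself and cannot build a standing queue in front of sparse, latency-sensitive game traffic. The sketch below shows just that hashing and scheduling skeleton; real FQ-CoDel uses deficit round robin, gives new flows brief priority, and runs CoDel on each sub-queue, all of which are omitted here, and the queue count is the commonly used default rather than a value from this thesis.

        from collections import deque

        NUM_QUEUES = 1024  # FQ-CoDel's usual default number of sub-queues (assumption)
        queues = [deque() for _ in range(NUM_QUEUES)]
        next_q = 0         # rotating pointer for round-robin service

        def enqueue(packet, five_tuple):
            # Hash the flow's 5-tuple so each flow maps to its own sub-queue.
            queues[hash(five_tuple) % NUM_QUEUES].append(packet)

        def dequeue_round_robin():
            # Serve non-empty sub-queues in round-robin order, skipping empty ones.
            global next_q
            for i in range(NUM_QUEUES):
                q = queues[(next_q + i) % NUM_QUEUES]
                if q:
                    next_q = (next_q + i + 1) % NUM_QUEUES
                    return q.popleft()
            return None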