
    Measuring the State of ECN Readiness in Servers, Clients, and Routers

    Proceedings of the Eleventh ACM SIGCOMM/USENIX Internet Measurement Conference (IMC 2011), Berlin, DE, November 2011.
    Better exposing congestion can improve traffic management in the wide area, at peering points, among residential broadband connections, and in the data center. TCP's network utilization and efficiency depend on congestion information, while recent research proposes economic and policy models based on congestion. Such motivations have driven widespread support of Explicit Congestion Notification (ECN) in modern operating systems. We reappraise the Internet's ECN readiness, updating and extending previous measurements. Across large and diverse server populations, we find a three-fold increase in ECN support over prior studies. Using new methods, we characterize ECN within mobile infrastructure and at the client side, populations previously unmeasured. Via large-scale path measurements, we find the ECN feedback loop failing in the core of the network 40% of the time, typically at AS boundaries. Finally, we discover new examples of infrastructure violating ECN Internet standards, and we discuss remaining impediments to running ECN while suggesting mechanisms to aid adoption.
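
    As a rough illustration of the kind of active probe such a survey relies on, the sketch below sends a TCP SYN with the ECE and CWR flags set and classifies the reply. This is not the authors' measurement code; it assumes the scapy library and raw-socket privileges, and the target address is a placeholder.

```python
# Minimal sketch of an active ECN-negotiation probe (not the authors' code).
# Requires scapy and raw-socket privileges; the target host is a placeholder.
from scapy.all import IP, TCP, sr1

def probe_ecn(host: str, port: int = 80, timeout: float = 2.0) -> str:
    """Send a SYN with ECE+CWR set and classify the response."""
    syn = IP(dst=host) / TCP(dport=port, flags="SEC")  # SYN + ECE + CWR
    resp = sr1(syn, timeout=timeout, verbose=False)
    if resp is None or TCP not in resp:
        return "no-response"
    flags = resp[TCP].flags
    if flags.S and flags.A and flags.E:   # SYN-ACK carrying ECE: ECN negotiated
        return "ecn-capable"
    if flags.S and flags.A:               # plain SYN-ACK: ECN request ignored
        return "not-ecn-capable"
    return "other"

if __name__ == "__main__":
    print(probe_ecn("192.0.2.1"))  # TEST-NET placeholder; replace with a real target
```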

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This use of static resource allocation and traffic-shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on any specific traffic type; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to provide a QoS-optimised experience to every Internet user, not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with topologies replicating the complexity and scale of real ISP network infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best Effort Internet, traditional Diffserv, and Weighted RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
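
    The core idea of adapting scheduling weights to each user's observed traffic mix, rather than to static service classes, can be sketched roughly as below. The service names, the EWMA smoothing, and the weighting rule are illustrative assumptions, not the CAPS algorithm itself.

```python
# Illustrative sketch of user-centric weight adaptation: per-user scheduling
# weights track the user's observed traffic mix instead of static service-type
# precedence. Service names and the smoothing rule are assumptions.
from collections import defaultdict

class UserProfileScheduler:
    """Tracks a user's per-service traffic mix and derives scheduling weights."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                 # EWMA smoothing factor for the profile
        self.profile = defaultdict(float)  # service name -> smoothed byte share

    def observe(self, service: str, nbytes: int) -> None:
        # Age every service's share, then credit the service just observed.
        for svc in list(self.profile):
            self.profile[svc] *= (1 - self.alpha)
        self.profile[service] += self.alpha * nbytes

    def weights(self) -> dict:
        # Normalise the smoothed shares into per-service scheduling weights.
        total = sum(self.profile.values())
        return {svc: share / total for svc, share in self.profile.items()} if total else {}

# Hypothetical usage: the weights follow this user's actual mix of services.
sched = UserProfileScheduler()
for service, nbytes in [("video", 8000), ("voip", 400), ("p2p", 12000)]:
    sched.observe(service, nbytes)
print(sched.weights())
```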

    Reducing Internet Latency: A Survey of Techniques and their Merit

    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.

    Re-architecting datacenter networks and stacks for low latency and high performance

    © 2017 ACM. Modern datacenter networks provide very high capacity via redundant Clos topologies and low switch latency, but transport protocols rarely deliver matching performance. We present NDP, a novel datacenter transport architecture that achieves near-optimal completion times for short transfers and high flow throughput in a wide range of scenarios, including incast. NDP switch buffers are very shallow, and when they fill, the switches trim packets to headers and priority-forward the headers. This gives receivers a full view of instantaneous demand from all senders, and it is the basis for our novel, high-performance, multipath-aware transport protocol that can deal gracefully with massive incast events and prioritize traffic from different senders on RTT timescales. We implemented NDP in Linux hosts with DPDK, in a software switch, in a NetFPGA-based hardware switch, and in P4. We evaluate NDP's performance in our implementations and in large-scale simulations, simultaneously demonstrating support for very low latency and high throughput. This work was partly funded by the SSICLOPS H2020 project (644866).
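
    The trim-and-priority-forward behaviour can be sketched as a toy queue model. The buffer sizes, the Packet type, and the strict header priority below are simplifying assumptions rather than the paper's switch implementation.

```python
# Toy model of NDP-style packet trimming: when the shallow data queue is full,
# the payload is dropped and the header is forwarded on a higher-priority queue,
# so the receiver still learns of the demand. All sizes are illustrative.
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    flow: str
    seq: int
    payload_len: int
    trimmed: bool = False

class TrimmingQueue:
    """Shallow data queue; overflowing packets are trimmed to headers and prioritised."""

    def __init__(self, data_slots: int = 8, header_slots: int = 64):
        self.data_slots = data_slots      # deliberately shallow payload buffer
        self.header_slots = header_slots
        self.data_q: deque = deque()
        self.header_q: deque = deque()

    def enqueue(self, pkt: Packet) -> None:
        if len(self.data_q) < self.data_slots:
            self.data_q.append(pkt)       # room: forward the whole packet
        elif len(self.header_q) < self.header_slots:
            pkt.trimmed = True            # buffer full: keep only the header
            pkt.payload_len = 0
            self.header_q.append(pkt)     # the receiver still sees the demand
        # else: drop entirely (rare, since headers are tiny)

    def dequeue(self) -> Optional[Packet]:
        if self.header_q:                 # trimmed headers get strict priority
            return self.header_q.popleft()
        return self.data_q.popleft() if self.data_q else None
```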

    Defeating Protocol Abuse with P4: Application to Explicit Congestion Notification

    In recent years, programmable data planes enabled by the Protocol Independent Switch Architecture (PISA) have allowed the relocation of network functions closer to traffic flows and thereby the ability to react in real time to network events. However, expressing complex and stateful network-monitoring functions using state-of-the-art data plane programming languages such as P4 still remains challenging. In this context, we propose a method for modeling a stateful security monitoring function as an Extended Finite State Machine (EFSM) and expressing the EFSM using P4 language abstractions. We demonstrate the feasibility and benefit of our proposed approach in detecting and mitigating Explicit Congestion Notification (ECN) protocol abuse without any TCP protocol modification. Our evaluation shows that the proposed security monitoring function can restore the 24.67% throughput loss caused by misbehaving TCP end-hosts while ensuring a fair share of bandwidth among TCP flows.
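
    A minimal sketch of the kind of EFSM involved is shown below in Python rather than P4, purely for readability. The states and the violation rule (a CE-marked packet obliges the receiver to echo ECE until the sender signals CWR) are a simplified reading of RFC 3168 semantics, not the paper's P4 pipeline.

```python
# Simplified per-flow EFSM for ECN echo monitoring (Python stand-in for P4).
# Once a flow has received a CE-marked packet, its ACKs are expected to carry
# ECE until the sender responds with CWR; a missing echo counts as a violation.
class EcnEchoMonitor:
    IDLE, CE_SEEN = "IDLE", "CE_SEEN"

    def __init__(self):
        self.state = self.IDLE
        self.violations = 0

    def on_data_to_receiver(self, ce_marked: bool) -> None:
        if ce_marked:
            self.state = self.CE_SEEN      # receiver now owes an ECE echo

    def on_ack_from_receiver(self, ece: bool) -> None:
        if self.state == self.CE_SEEN and not ece:
            self.violations += 1           # CE seen but not echoed: possible abuse

    def on_cwr_from_sender(self) -> None:
        if self.state == self.CE_SEEN:
            self.state = self.IDLE         # sender reacted; echo obligation ends

# Hypothetical trace: a misbehaving receiver hides congestion from the sender.
mon = EcnEchoMonitor()
mon.on_data_to_receiver(ce_marked=True)
mon.on_ack_from_receiver(ece=False)
print(mon.violations)                      # -> 1
```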

    Mitigating TCP Protocol Misuse With Programmable Data Planes

    This paper proposes a new approach for detecting and mitigating the impact of misbehaving TCP end-hosts, specifically the Optimistic ACK attack and Explicit Congestion Notification (ECN) abuse. In contrast to the state of the art, we show that it is possible to mitigate such misbehavior by leveraging emerging programmable data planes, while not requiring any end-host or protocol modifications. A key challenge in doing so is to implement expressive, complex, and stateful functions in the data plane within its restricted programming model. In this regard, we propose a security monitoring function that uses an Extended Finite State Machine (EFSM) abstraction for monitoring stateful protocols in the data plane. We also design a mechanism for mapping a protocol's EFSM to programmable data plane primitives. Our evaluation results demonstrate that our approach can fully or partially restore the throughput loss caused by misbehaving end-hosts that manipulate TCP congestion control through misinformation.
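
    For the Optimistic ACK case, the per-flow check reduces to remembering the highest sequence number forwarded toward the receiver and flagging ACKs that exceed it. The sketch below illustrates this in Python, with a single per-flow value standing in for what a data-plane register array might hold; it is an illustration under those assumptions, not the paper's implementation.

```python
# Minimal sketch of optimistic-ACK detection: an ACK that acknowledges data the
# monitor has not yet seen leave the sender cannot be legitimate. One per-flow
# value plays the role of a data-plane register; flow keys are illustrative.
class OptimisticAckMonitor:
    def __init__(self):
        self.highest_sent = {}     # flow id -> highest sequence byte forwarded

    def on_data(self, flow: str, end_seq: int) -> None:
        """Record the highest byte of sender data seen so far for this flow."""
        self.highest_sent[flow] = max(self.highest_sent.get(flow, 0), end_seq)

    def on_ack(self, flow: str, ack: int) -> bool:
        """Return True if the ACK is optimistic (acknowledges unsent data)."""
        return ack > self.highest_sent.get(flow, 0)

# Hypothetical usage with a made-up flow key.
mon = OptimisticAckMonitor()
mon.on_data("10.0.0.1:80->10.0.0.2:5000", end_seq=3000)
print(mon.on_ack("10.0.0.1:80->10.0.0.2:5000", ack=4500))   # True: optimistic ACK
```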

    Dual-Mode Congestion Control Mechanism for Video Services

    Recent studies have shown that video services represent over half of Internet traffic, with a growing trend, so video traffic plays a major role in network congestion. Currently on the Internet, congestion control is mainly implemented through overprovisioning and TCP congestion control. Although some video services use TCP to implement their transport services in a manner that works in practice, TCP is not an ideal protocol for all video applications. For example, UDP is often considered more suitable for real-time video applications. Unfortunately, UDP does not implement congestion control, so these UDP-based video services operate without any congestion control support unless it is implemented at the application layer. There are also arguments against massive overprovisioning. For these reasons, there is still a need to equip video services with proper congestion control. Most of the congestion control mechanisms developed for video services can only offer either a low-priority or a TCP-friendly real-time service, and no single mechanism is currently suitable for, and widely usable by, all kinds of video services. This thesis proposes a new dual-mode congestion control mechanism that can offer congestion control for both service types. The mechanism includes two modes: a backward-loading mode and a real-time mode. The backward-loading mode works like a low-priority service, giving bandwidth away to other connections once the load level of the network is high enough. In contrast, the real-time mode always demands its fair share of the bandwidth. The behavior of the new mechanism, and its friendliness toward itself and the TCP protocol, have been investigated by means of simulations and real-network tests. It was found that this kind of congestion control approach could be suitable for video services. The new mechanism worked acceptably; in particular, it behaved toward itself in a very friendly way in most cases. The averaged TCP fairness was at a good level. In the worst cases, the faster connections received about 1.6 times as much bandwidth as the slower connections.
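
    The two modes can be sketched schematically as two rate-adjustment rules: a delay-sensitive back-off for the backward-loading mode and an AIMD rule for the real-time mode. The thresholds and constants below are illustrative assumptions, not the thesis's mechanism.

```python
# Schematic sketch of the dual-mode idea: the backward-loading mode yields as
# soon as rising queueing delay signals load (low-priority behaviour), while the
# real-time mode applies an AIMD rule and keeps claiming its fair share.
# All thresholds and constants are illustrative assumptions.
def adjust_rate(rate_kbps: float, mode: str, queuing_delay_ms: float,
                loss: bool, min_rate_kbps: float = 64.0) -> float:
    if mode == "backward-loading":
        if queuing_delay_ms > 25.0 or loss:   # network is loaded: back off hard
            return max(min_rate_kbps, rate_kbps * 0.5)
        return rate_kbps + 10.0               # otherwise probe gently
    elif mode == "real-time":
        if loss:                              # TCP-friendly multiplicative decrease
            return max(min_rate_kbps, rate_kbps * 0.5)
        return rate_kbps + 40.0               # additive increase toward a fair share
    raise ValueError(f"unknown mode: {mode}")

# Same conditions, different behaviour depending on the mode.
print(adjust_rate(1000.0, "backward-loading", queuing_delay_ms=40.0, loss=False))  # 500.0
print(adjust_rate(1000.0, "real-time", queuing_delay_ms=40.0, loss=False))         # 1040.0
```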

    Improving the Quality of Real Time Media Applications through Sending the Best Packet Next

    Real-time media applications such as video conferencing are increasing in usage. These bandwidth-intensive applications put high demands on a network, and the quality experienced by the user is often sub-optimal. In a traditional network stack, data from an application is transmitted in the order that it is received. This thesis proposes a scheme called "Send the Best Packet Next" (SBPN), in which the most important data is transmitted first and data that will not reach the receiver before its expiry time is not transmitted at all. In SBPN, a priority and an expiry time are added to each packet and used in conjunction with the Round Trip Time (RTT) to determine whether packets are sent, and in which order. For example, it has been shown that audio is more important than video to users of video conferencing. SBPN can be considered Quality of Service (QoS) applied within an application data stream, in contrast to network routers that provide QoS to whole streams such as Voice over IP (VoIP) but do not differentiate between data items within a stream or influence which data the end nodes transmit. SBPN can be implemented on the server only, so that much of the benefit for one-way transmission (e.g. live television) can be gained without requiring existing clients to be changed. SBPN was implemented in the Linux kernel on top of the Datagram Congestion Control Protocol (DCCP) and compared to existing solutions. This showed a real improvement in the measured quality of audio, with a maximum improvement of 15% in selected test scenarios.
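
    The selection rule can be sketched as follows: discard queued packets that cannot arrive before their expiry time (using an estimated one-way delay of roughly RTT/2) and send the most important remaining packet first. The data structures and constants are illustrative, not the thesis's DCCP kernel implementation.

```python
# Minimal sketch of the "send the best packet next" selection rule: packets
# carry a priority and an expiry time; anything that cannot reach the receiver
# in time is discarded, and the most important remaining packet is sent first.
import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class MediaPacket:
    priority: int                          # lower value = more important
    expiry: float = field(compare=False)   # absolute deadline (seconds since epoch)
    data: bytes = field(compare=False, default=b"")

def next_packet(queue: List[MediaPacket], rtt_s: float) -> Optional[MediaPacket]:
    """Drop packets that cannot arrive in time, then return the most important one."""
    deadline_margin = rtt_s / 2.0          # rough one-way delay estimate
    now = time.time()
    queue[:] = [p for p in queue if p.expiry > now + deadline_margin]
    if not queue:
        return None
    best = min(queue)                      # smallest priority value wins
    queue.remove(best)
    return best

# Hypothetical usage: the audio frame outranks a video frame that would arrive late.
q = [MediaPacket(priority=1, expiry=time.time() + 0.03),  # video, deadline too tight
     MediaPacket(priority=0, expiry=time.time() + 0.20)]  # audio, more important
print(next_packet(q, rtt_s=0.08))          # audio is sent; the video frame is dropped
```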