17 research outputs found

    Observing the Evolution of QUIC Implementations

    Full text link
    The QUIC protocol combines features that were initially found inside the TCP, TLS and HTTP/2 protocols. The IETF is currently finalising a complete specification of this protocol. More than a dozen independent implementations have been developed in parallel with these standardisation activities. We propose and implement a QUIC test suite that interacts with public QUIC servers to verify their conformance with key features of the IETF specification. Our measurements, gathered over a semester, provide a unique viewpoint on the evolution of a protocol and of its implementations. They highlight the arrival of new features and some regressions among the different implementations.
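
    A minimal sketch of the kind of conformance probe such a test suite could run, assuming the third-party aioquic library; the server name and the "h3" ALPN token are illustrative placeholders, and a real suite would check many more features than handshake completion:

        import asyncio
        from aioquic.asyncio import connect
        from aioquic.quic.configuration import QuicConfiguration

        async def probe(host: str, port: int = 443) -> bool:
            # Attempt a QUIC handshake with the public server; completing it
            # (plus a PING round-trip) is a coarse conformance signal.
            config = QuicConfiguration(is_client=True, alpn_protocols=["h3"])
            try:
                async with connect(host, port, configuration=config) as client:
                    await client.ping()
                    return True
            except Exception:
                return False

        async def main() -> None:
            for server in ["quic.example.net"]:  # hypothetical server list
                ok = await probe(server)
                print(f"{server}: {'handshake OK' if ok else 'failed'}")

        asyncio.run(main())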

    Adaptive Address Family Selection for Latency-Sensitive Applications on Dual-stack Hosts

    Full text link
    Latency is becoming a key factor in the performance of Internet applications and has triggered a number of changes in its protocols. Our work revisits the impact of address family selection on latency in dual-stack hosts. Through RIPE Atlas measurements, we analyse the latency difference between address families and, based on our findings, establish two requirements for a latency-focused selection mechanism. First, the address family should be chosen per destination. Second, the choice should be able to evolve dynamically over time. We propose and implement a solution formulated as an online learning problem balancing exploration and exploitation. We validate our solution in simulations based on RIPE Atlas measurements, then implement and evaluate our prototype in four access networks using Chrome and popular web services. We demonstrate the ability of our solution to converge towards the lowest-latency address family and improve the latency of transport connections used by applications.
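
    As a rough illustration of the online-learning formulation, the sketch below runs an epsilon-greedy bandit over the two address families, exploiting the lowest-latency family while occasionally exploring the other; the simulated latency distributions are assumptions, and the paper's actual algorithm may differ:

        import random

        FAMILIES = ["IPv4", "IPv6"]

        def measure_latency(family: str) -> float:
            # Stand-in for timing a real handshake; simulated so the sketch
            # runs end-to-end (the means below are purely illustrative).
            mean = {"IPv4": 30.0, "IPv6": 25.0}[family]
            return max(1.0, random.gauss(mean, 5.0))

        def choose(avg: dict, epsilon: float = 0.1) -> str:
            # Explore with probability epsilon, otherwise exploit the family
            # with the lowest running-average latency for this destination.
            if random.random() < epsilon or len(avg) < len(FAMILIES):
                return random.choice(FAMILIES)
            return min(avg, key=avg.get)

        avg, counts = {}, {}
        for _ in range(1000):
            fam = choose(avg)
            rtt = measure_latency(fam)
            counts[fam] = counts.get(fam, 0) + 1
            prev = avg.get(fam, 0.0)
            avg[fam] = prev + (rtt - prev) / counts[fam]  # incremental mean
        print({f: round(v, 1) for f, v in avg.items()})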

    Revealing the Evolution of a Cloud Provider Through its Network Weather Map

    Full text link
    Researchers often face a lack of data on large operational networks to understand how they are used, how they behave, and sometimes how they fail. This data is crucial to drive the evolution of Internet protocols and to develop techniques such as traffic engineering and DDoS detection and mitigation. Companies that have access to measurements from operational networks and services leverage this data to improve the availability, speed, and resilience of their Internet services. Unfortunately, large datasets, especially ones collected regularly over a long period of time, remain scarce in the literature. We tackle this problem by releasing a dataset covering roughly two years of observations of a major cloud company (OVH). Our dataset, called the OVH Weather dataset, captures the evolution of more than 180 routers, 1,100 internal links, 500 external links, and their load percentages in the backbone network over time. The dataset has a high density, with snapshots taken every five minutes, totaling more than 500,000 files. In this paper, we also illustrate how our dataset could be used to study the evolution of backbone networks. Finally, our dataset opens several exciting research questions that we make available to the research community.
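
    As a small usage illustration, the sketch below walks the snapshot files and tracks how the topology size evolves; the directory layout and the JSON schema ("routers"/"links" lists) are assumptions, since the actual format of the released dataset may differ:

        import json
        import pathlib

        # Assumed layout: one JSON file per five-minute snapshot.
        snapshots = sorted(pathlib.Path("ovh-weather").glob("*.json"))

        for path in snapshots:
            snap = json.loads(path.read_text())
            # Print the snapshot name with router and link counts to observe
            # how the backbone grows (or shrinks) over time.
            print(path.stem, len(snap.get("routers", [])), len(snap.get("links", [])))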

    A high-speed QUIC implementation

    No full text
    Several implementations of the QUIC protocol exist. Unfortunately, they generally lag behind TCP implementations in terms of performance, as TCP stacks have undergone years of optimisation. In this work, we propose picoquic-dpdk, a modified version of picoquic that bypasses the Linux kernel networking stack using the DPDK library, improving throughput by a factor of three. We compare our implementation against several QUIC stacks and TCP+TLS and demonstrate that it outperforms all tested QUIC stacks and matches TCP+TLS even with common TCP optimisations.

    Verifying QUIC implementations using Ivy

    No full text
    QUIC is a new transport protocol combining the reliability and congestion control features of TCP with the security features of TLS. One of the main challenges with QUIC is to guarantee that any of its implementations follows the IETF specification. This challenge is particularly difficult as the specification is written in natural language, and hence may contain ambiguities. In a recent work, McMillan and Zuck proposed a formal representation of part of draft-18 of the IETF specification. They also showed that this representation made it possible to efficiently generate tests to stress four implementations of QUIC. Our first contribution is to complete and extend the formal representation from draft-18 to draft-29. Our second contribution is to test seven implementations of both QUIC clients and servers. Our last contribution is to show that our tool can highlight ambiguities in the QUIC specification, for which we suggest paths to corrections.

    TCPLS: Modern Transport Services with TCP and TLS

    Full text link
    TCP and TLS are among the essential protocols of today's Internet. TCP ensures reliable data delivery while TLS secures the data transfer. Although they are very often used together, they were designed independently, following the Internet's layered model. This paper demonstrates the various benefits that a closer integration between TCP and TLS would bring. By leveraging the extensible TLS 1.3 records, we combine TCP and TLS into TCPLS to build modern transport services such as multiplexing, connection migration, stream steering, and bandwidth aggregation. These services do not modify the TCP wire format and are resistant to middleboxes. TCPLS offers a powerful API enabling applications to precisely express the required transport services, ranging from a single-path, single-stream connection to a multi-stream connection over several network paths, with choices between aggregated bandwidth and head-of-line blocking avoidance. Compared to MPTCP, our TCPLS prototype offers more control to the application and can be easily deployed as an extension to user-space TLS libraries, while being implemented at a low cost. Measurements demonstrate that it offers higher performance than existing QUIC libraries with a superset of transport services.
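
    The abstract's claim about the API can be made concrete with a purely hypothetical sketch of the kind of request an application might express; every name below is invented for illustration and does not reflect the real TCPLS API:

        from dataclasses import dataclass, field

        # Hypothetical illustration of expressing transport services;
        # none of these names come from the actual TCPLS implementation.
        @dataclass
        class TransportRequest:
            streams: int = 1                   # multiplexed streams wanted
            paths: list = field(default_factory=list)  # local addresses to use
            aggregate_bandwidth: bool = False  # bond paths for throughput...
            avoid_hol_blocking: bool = True    # ...or steer streams instead
            allow_migration: bool = True       # survive address changes

        # A multi-stream, multi-path request trading head-of-line blocking
        # avoidance for aggregated bandwidth (example addresses only).
        req = TransportRequest(streams=4,
                               paths=["192.0.2.1", "2001:db8::1"],
                               aggregate_bandwidth=True,
                               avoid_hol_blocking=False)
        print(req)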

    Debugging QUIC and HTTP/3 with qlog and qvis

    No full text
    The QUIC and HTTP/3 protocols are powerful but complex, and they are difficult to debug and analyse. Our previous work proposed the qlog format for structured endpoint logging to help tame this complexity. This follow-up study evaluates the real-world implementations, uses, and deployments of qlog and our associated qvis tooling in academia and industry. Our survey of 28 QUIC experts shows high community involvement, while Facebook confirms that qlog can handle Internet scale. Lessons learned from researching 16 QUIC+HTTP/3 and five TCP+TLS+HTTP/2 implementations demonstrate that qlog and qvis are essential tools for performing root-cause analysis when debugging modern Web protocols.
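
    Since qlog is a JSON-based structured logging format, a first analysis step can be as simple as counting event types in a trace; the sketch below assumes the object-based event schema of recent qlog drafts (field names vary across draft versions) and a hypothetical file name:

        import json
        from collections import Counter

        with open("connection.qlog") as f:  # hypothetical trace file
            doc = json.load(f)

        counts = Counter()
        for trace in doc.get("traces", []):
            for event in trace.get("events", []):
                # Recent qlog drafts log events as objects with a "name"
                # field; older drafts used positional arrays instead.
                if isinstance(event, dict):
                    counts[event.get("name", "?")] += 1

        print(counts.most_common(10))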

    Towards a Collection of Packet Trace Interactive Exercises for Computer Networking Education

    No full text
    Students do not learn only by reading textbooks and listening to their professors. They also learn by answering questions that challenge their knowledge and enable them to learn from their mistakes. In this paper, we present a set of automated exercises that empower students to learn in an online setting. These exercises take the form of cloze-like tests requiring students to fill in missing packet field values for several key Internet protocols. These educational resources are open and free to use. We invite the computer networking teaching community to collaborate in adding new exercises covering further key Internet protocols.
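
    To illustrate how such a cloze test can be auto-graded, the sketch below parses a captured IPv4 header with the standard library and checks a student's answer for one field; the sample bytes and the chosen field (TTL) are illustrative assumptions:

        import struct

        # A fixed 20-byte IPv4 header (fabricated sample packet).
        header = bytes.fromhex("4500003c1c4640004006b1e6c0a80001c0a800c7")

        # Unpack the fixed IPv4 header fields following the RFC 791 layout.
        (ver_ihl, tos, total_len, ident, flags_frag,
         ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", header)

        answer = int(input("What is the TTL of this packet? "))
        print("correct" if answer == ttl else f"wrong, the TTL is {ttl}")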

    The multiple roles that IPv6 addresses can play in today's internet

    No full text
    The Internet uses IP addresses to identify and locate the network interfaces of connected devices. IPv4 was introduced more than 40 years ago and specifies 32-bit addresses. As the Internet grew, the available IPv4 addresses became exhausted more than ten years ago. The IETF designed IPv6 with a much larger addressing space consisting of 128-bit addresses, pushing the exhaustion problem much further into the future. In this paper, we argue that this large addressing space allows reconsidering how IP addresses are used and enables improving, simplifying and scaling the Internet. By revisiting the IPv6 addressing paradigm, we demonstrate that it opens up several research opportunities that can be investigated today. Hosts can benefit from several IPv6 addresses to improve their privacy, defeat network scanning, make better use of several mobile access networks, improve their mobility, and increase the performance of multicore servers. Network operators can solve the multihoming problem more efficiently without burdening the BGP RIB, implement Service Function Chaining with Segment Routing, differentiate routing inside and outside a domain according to particular network metrics, and offer more fine-grained multicast services.
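
    To make one of these ideas concrete: a host holding several IPv6 addresses can expose a different source address per connection simply by binding before connecting, as in the sketch below; the source addresses (documentation prefix) and the destination are illustrative assumptions:

        import socket

        # Two addresses assumed to be assigned to this host (documentation
        # prefix used for illustration; real deployments use their own).
        sources = ["2001:db8::10", "2001:db8::11"]

        for src in sources:
            s = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            s.bind((src, 0))                   # pick the source address
            s.connect(("example.com", 443))    # destination resolved via AAAA
            print("connected from", s.getsockname()[0])
            s.close()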