QUIC on the Highway: Evaluating Performance on High-rate Links
QUIC is a new protocol, standardized in 2021, designed to improve on the widely
used TCP/TLS stack. Its main goal is to speed up web traffic via HTTP, but it
is also used in other areas such as tunneling. Built on UDP, it offers features
like reliable in-order delivery, flow and congestion control, stream-based
multiplexing, and always-on encryption using TLS 1.3. Unlike TCP, QUIC
implements all of these features in user space, requiring kernel interaction
only for UDP. While running in user space provides more flexibility, it
benefits less from efficiency optimizations within the kernel. Multiple implementations
exist, differing in programming language, architecture, and design choices.
This paper presents an extension to the QUIC Interop Runner, a framework for
testing interoperability of QUIC implementations. Our contribution enables
reproducible QUIC benchmarks on dedicated hardware. We provide baseline results
on 10G links, including multiple implementations, evaluate how OS features like
buffer sizes and NIC offloading impact QUIC performance, and show which data
rates can be achieved with QUIC compared to TCP. Our results show that QUIC
performance varies widely between client and server implementations, from 90
Mbit/s to 4900 Mbit/s. We show that the OS default buffer size is generally
too small and, based on our findings, should be increased by at least an order
of magnitude. Furthermore, QUIC benefits less from NIC offloading and AES-NI
hardware acceleration, while both features improve the goodput of TCP to
around 8000 Mbit/s. Our framework can be applied to evaluate the effects of
future improvements to the protocol or the OS.
Comment: Presented at the 2023 IFIP Networking Conference (IFIP Networking)
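The buffer-size finding can be acted on directly when opening the UDP socket a QUIC stack runs over. Below is a minimal Python sketch; the default value and the tenfold factor are illustrative assumptions (actual defaults are OS-specific, and the paper only prescribes "at least an order of magnitude"):

```python
import socket

# Assumed Linux-like default (net.core.rmem_default); check your OS.
DEFAULT_BUF = 212_992
TARGET_BUF = DEFAULT_BUF * 10   # "at least an order of magnitude" larger

def make_udp_socket(buf_bytes: int = TARGET_BUF) -> socket.socket:
    """Create a UDP socket with enlarged send/receive buffers."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf_bytes)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf_bytes)
    return sock

sock = make_udp_socket()
# The kernel may clamp (or, on Linux, double) the requested value,
# so read back the effective size rather than trusting the request.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()
```

Note that the request is capped by system limits such as `net.core.rmem_max` on Linux, which may also need raising for the larger buffer to take effect.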
OSMOSIS: Enabling Multi-Tenancy in Datacenter SmartNICs
Multi-tenancy is essential for unleashing SmartNICs' potential in
datacenters. Our systematic analysis in this work shows that existing on-path
SmartNICs have resource multiplexing limitations. For example, existing
solutions lack multi-tenancy capabilities such as performance isolation and QoS
provisioning for compute and IO resources. Compared to standard NIC data paths
with a well-defined set of offloaded functions, unpredictable execution times
of SmartNIC kernels make conventional approaches for multi-tenancy and QoS
insufficient. We fill this gap with OSMOSIS, a SmartNIC resource-manager
co-design. OSMOSIS extends existing OS mechanisms to enable dynamic hardware
resource multiplexing on top of the on-path packet processing data plane. We
implement OSMOSIS within an open-source RISC-V-based 400Gbit/s SmartNIC. Our
performance results demonstrate that OSMOSIS fully supports multi-tenancy and
enables broader adoption of SmartNICs in datacenters with low overhead.
Comment: 12 pages, 14 figures, 103 references
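The performance-isolation problem can be illustrated with a classic scheduling discipline. The sketch below is not OSMOSIS's actual mechanism; it is a hypothetical deficit round-robin (DRR) arbiter over per-tenant work queues, showing how SmartNIC kernel invocations with unpredictable costs can still receive fair service shares:

```python
from collections import deque

class Tenant:
    """Illustrative per-tenant queue of (packet_id, cost) work items."""
    def __init__(self, name, quantum):
        self.name = name
        self.quantum = quantum    # service credit granted per round
        self.deficit = 0
        self.queue = deque()

class DRRScheduler:
    def __init__(self, tenants):
        self.tenants = tenants

    def schedule_round(self):
        """Run one DRR round; return the (tenant, packet_id) pairs served."""
        served = []
        for t in self.tenants:
            if not t.queue:
                t.deficit = 0     # idle tenants do not bank credit
                continue
            t.deficit += t.quantum
            # Serve items while the tenant's credit covers their cost.
            while t.queue and t.queue[0][1] <= t.deficit:
                pkt, cost = t.queue.popleft()
                t.deficit -= cost
                served.append((t.name, pkt))
        return served

a = Tenant("A", quantum=100)
b = Tenant("B", quantum=100)
a.queue.extend([("a1", 60), ("a2", 60)])
b.queue.extend([("b1", 90)])
sched = DRRScheduler([a, b])
first_round = sched.schedule_round()  # A serves a1, B serves b1; a2 waits
```

The deficit counter is what tolerates unpredictable execution times: an expensive kernel invocation simply waits until its tenant has accumulated enough credit, without starving the other tenants.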
Reducing the acknowledgement frequency in IETF QUIC
Research Funding: European Space Agency, University of Aberdeen. Peer reviewed. Publisher PD
Congestion control tuning of the QUIC transport layer protocol
The QUIC protocol is a new type of reliable transmission protocol based on UDP. It was designed mainly to reduce network latency; it is efficient, fast, and consumes fewer resources, combining the advantages of both TCP and UDP. The first part of this thesis studies the d
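As background for such tuning, QUIC's default congestion controller is a NewReno-style AIMD scheme (RFC 9002). The following is a simplified, hypothetical sketch of its window logic; the constants and class name are illustrative, not taken from any particular implementation:

```python
MAX_DATAGRAM_SIZE = 1200  # typical QUIC payload budget in bytes

class NewRenoCwnd:
    """Simplified NewReno-style congestion window (RFC 9002 flavour)."""
    def __init__(self):
        self.cwnd = 10 * MAX_DATAGRAM_SIZE   # initial window
        self.ssthresh = float("inf")         # slow-start threshold

    def on_ack(self, acked_bytes):
        if self.cwnd < self.ssthresh:
            self.cwnd += acked_bytes         # slow start: exponential growth
        else:
            # Congestion avoidance: roughly one MTU per window acked.
            self.cwnd += MAX_DATAGRAM_SIZE * acked_bytes / self.cwnd

    def on_congestion_event(self):
        # Multiplicative decrease on loss or ECN signal.
        self.ssthresh = max(self.cwnd / 2, 2 * MAX_DATAGRAM_SIZE)
        self.cwnd = self.ssthresh

cc = NewRenoCwnd()
for _ in range(4):            # slow start roughly doubles cwnd per RTT
    cc.on_ack(cc.cwnd)
cc.on_congestion_event()      # a loss event halves the window
```

Tuning work typically adjusts exactly these knobs (initial window, growth rate, decrease factor) or swaps the controller entirely, e.g. for a CUBIC- or BBR-style algorithm.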
Online learning on the programmable dataplane
This thesis makes the case for managing computer networks with data-driven methods: automated statistical inference and control based on measurement data and runtime observations. It argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, currently dominated by hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, and their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control that suit many of these tasks. New programmable dataplane hardware enables more agility in the network: runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made more quickly by offloading inference to the network.
To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key in making reactive online learning feasible. To port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
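The fixed-point approach mentioned above can be illustrated with a classical Q-learning update done purely in integer arithmetic, as a dataplane without floating-point support would require. This is a hypothetical sketch, not the thesis's code; the Q16.16 format and problem sizes are illustrative assumptions:

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS                 # 1.0 in Q16.16 fixed point

def to_fixed(x: float) -> int:
    """Convert a float to Q16.16 (only for setting up constants)."""
    return int(round(x * ONE))

def fmul(a: int, b: int) -> int:
    """Fixed-point multiply: widen, multiply, rescale back."""
    return (a * b) >> FRAC_BITS

def q_update(q, state, action, reward, next_state, alpha, gamma):
    """Integer-only Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state])
    td_target = reward + fmul(gamma, best_next)
    td_error = td_target - q[state][action]
    q[state][action] += fmul(alpha, td_error)
    return q[state][action]

# 2 states x 2 actions, all zeros; one reward of 1.0 with alpha = 0.5.
q = [[0, 0], [0, 0]]
new_q = q_update(q, 0, 1, to_fixed(1.0), 1, to_fixed(0.5), to_fixed(0.9))
# new_q is 0.5 in Q16.16, i.e. 1 << 15
```

Because every operation is an integer add, multiply, or shift, the same update maps onto match-action pipelines and SmartNIC cores that lack an FPU.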
Making QUIC Quicker with NIC Offload
This paper aims to define the right set of primitives a NIC should expose to efficiently offload the QUIC protocol. Although previous work has already partially tackled this problem, it has considered only one specific aspect: the crypto module. We instead dissect different QUIC implementations and perform an in-depth analysis of the costs associated with many of their components. We find that kernel-to-userspace communication, the crypto module, and the packet reordering algorithm are CPU-hungry and often the cause of application performance degradation. We use those findings to define an architecture for offloading QUIC and discuss the associated challenges.
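The packet-reordering cost the paper highlights comes from logic of roughly the following shape: buffering out-of-order packets and releasing the longest in-order prefix on each arrival. This is a hypothetical sketch with illustrative names, not code from any QUIC implementation:

```python
import heapq

class ReorderBuffer:
    """Deliver packets in packet-number order, holding gaps in a min-heap."""
    def __init__(self):
        self.expected = 0        # next packet number to deliver
        self.heap = []           # out-of-order (pkt_num, payload) pairs

    def push(self, pkt_num, payload):
        """Accept a packet; return payloads now deliverable in order."""
        heapq.heappush(self.heap, (pkt_num, payload))
        delivered = []
        # Drain the heap while the smallest buffered number is the one
        # we are waiting for; each pop/compare is per-packet CPU cost.
        while self.heap and self.heap[0][0] == self.expected:
            _, data = heapq.heappop(self.heap)
            delivered.append(data)
            self.expected += 1
        return delivered

buf = ReorderBuffer()
held = buf.push(1, "b")          # packet 0 missing: nothing delivered yet
out = buf.push(0, "a")           # gap filled: releases "a" then "b"
```

The per-packet heap operations and copies here hint at why this stage is attractive to move into NIC hardware alongside the crypto module.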
Energy-efficient Transitional Near-* Computing
Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of the worldwide electricity consumption.
Although data centers, core network equipment, and mobile devices are getting more energy-efficient, the amount of data that is being processed, transferred, and stored is vastly increasing.
Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself.
In this thesis, these trends are summarized under the new term near-* or near-everything computing.
Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing.
It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems.
Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, edge devices, and programmable components on a mobile System-on-a-Chip (SoC).
Finally, the novel idea of transitional near-* computing for emergency response applications is presented
to assist rescuers and affected persons during an emergency event or a disaster, even when connections to cloud services and social networks are disrupted by network outages and the network bandwidth and battery power of mobile devices are limited.