
    A one-pass clustering based sketch method for network monitoring

    Network monitoring solutions need to cope with increasing network traffic volumes; as a result, sketch-based monitoring methods have been extensively studied to trade accuracy for memory scalability and storage reduction. However, sketches are sensitive to skewness in network flow distributions due to hash collisions, and they need complicated performance optimization to adapt to line-rate packet streams. We provide Jellyfish, an efficient sketch method that performs one-pass clustering over the network stream. One-pass clustering is realized by adapting the monitoring granularity from the whole network flow to fragments called subflows, which not only reduces the ingestion rate but also provides an efficient intermediate representation for the input to the sketch. Jellyfish provides a network-flow-level query interface by reconstructing network-flow-level counters, merging the subflow records that belong to the same network flow. We provide a probabilistic analysis of the expected accuracy of both existing sketch methods and Jellyfish. Real-world trace-driven experiments show that Jellyfish reduces average estimation errors by up to six orders of magnitude for per-flow queries, by six orders of magnitude for entropy queries, and by up to ten times for heavy-hitter queries.

    This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61972409; in part by the Hong Kong Research Grants Council (RGC) under Grant TRS T41-603/20-R, Grant GRF-16213621, and Grant ITF ACCESS; in part by the Spanish I+D+i project TRAINER-A, funded by MCIN/AEI/10.13039/501100011033, under Grant PID2020-118011GB-C21; and in part by the Catalan Institution for Research and Advanced Studies (ICREA Academia).
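    A minimal Python sketch of the subflow idea described above, under assumed details (SUBFLOW_LEN, the count-min parameters, and all names are illustrative, not Jellyfish's actual implementation): packets are attributed to fixed-size fragments ("subflows") of a flow, each subflow record is fed to a count-min sketch, and the flow-level counter is reconstructed at query time by merging subflow records.

        import hashlib

        class CountMin:
            """Plain count-min sketch: depth hash rows, width counters each."""
            def __init__(self, depth=4, width=2048):
                self.depth, self.width = depth, width
                self.table = [[0] * width for _ in range(depth)]

            def _index(self, key, row):
                h = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
                return int.from_bytes(h.digest(), "big") % self.width

            def add(self, key, count=1):
                for row in range(self.depth):
                    self.table[row][self._index(key, row)] += count

            def query(self, key):
                return min(self.table[row][self._index(key, row)]
                           for row in range(self.depth))

        SUBFLOW_LEN = 32  # packets per subflow fragment (illustrative)

        def ingest(sketch, flow_id, packet_seq):
            # Record the packet under its subflow: (flow, fragment index).
            sketch.add((flow_id, packet_seq // SUBFLOW_LEN))

        def flow_count(sketch, flow_id, max_subflows=64):
            # Reconstruct the network-flow counter by merging subflow records.
            return sum(sketch.query((flow_id, i)) for i in range(max_subflows))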

    Revisiting the Classics: Online RL in the Programmable Dataplane

    Data-driven networking is becoming more capable and widely researched, partly driven by the efficacy of Deep Reinforcement Learning (DRL) algorithms. Yet the complexity of both DRL inference and learning forces these tasks to be pushed away from the dataplane to hosts, harming latency-sensitive applications. Online learning of such policies cannot occur in the dataplane, despite being a useful technique when problems evolve or are hard to model. We present OPaL—On Path Learning—the first work to bring online reinforcement learning to the dataplane. OPaL makes online learning possible in constrained SmartNIC hardware by returning to classical RL techniques, avoiding neural networks. Our design allows weak yet highly parallel SmartNIC NPUs to be competitive against commodity x86 hosts, despite having fewer features and slower cores. Compared to hosts, we achieve a 21× reduction in 99.99th-percentile tail inference times, down to 34 µs, and a 9.9× improvement in online throughput for real-world policy designs. In-NIC execution eliminates PCIe transfers, and our asynchronous compute model ensures minimal impact on traffic carried by a co-hosted P4 dataplane. OPaL's design scales with additional resources at compile time to improve both decision latency and throughput, and is quickly reconfigurable at runtime compared to reinstalling device firmware.
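    As a hedged illustration of the "classical RL, no neural networks" ingredient OPaL returns to, the Python sketch below shows one tabular Q-learning update done entirely in fixed-point integer arithmetic, the kind of operation that maps onto constrained NPU cores. The Q8 format, constants, and names are assumptions for illustration, not OPaL's actual code.

        FRAC_BITS = 8                 # Q8 fixed point: value = raw / 256
        ALPHA = 51                    # learning rate, ~0.2 in Q8
        GAMMA = 230                   # discount factor, ~0.9 in Q8

        def q_update(q, state, action, reward_fp, next_state):
            """One Q-learning step using only integer multiply and shift."""
            best_next = max(q[next_state])                  # max over a' of Q(s', a')
            target = reward_fp + ((GAMMA * best_next) >> FRAC_BITS)
            td_error = target - q[state][action]
            q[state][action] += (ALPHA * td_error) >> FRAC_BITS

        # Tiny usage example: 4 states x 2 actions, Q-values start at zero.
        q_table = [[0, 0] for _ in range(4)]
        q_update(q_table, state=0, action=1, reward_fp=1 << FRAC_BITS, next_state=2)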

    Large-Scale Measurements and Prediction of DC-WAN Traffic

    Large cloud service providers have built an increasing number of geo-distributed data centers (DCs) connected by Wide Area Networks (WANs). These DC-WANs carry both high-priority traffic from interactive services and low-priority traffic from bulk transfers. Given that a DC-WAN is an expensive resource, providers often manage it via traffic engineering algorithms that rely on accurate predictions of inter-DC high-priority (delay-sensitive) traffic. In this article, we perform a large-scale measurement study of high-priority inter-DC traffic from Baidu. We measure how inter-DC traffic varies across their global DC-WAN and show that most existing traffic prediction methods either cannot capture the complex traffic dynamics or overlook traffic interrelations among DCs. Building on our measurements, we propose the Interrelated-Temporal Graph Convolutional Network (IntegNet) model for inter-DC traffic prediction. In contrast to prior efforts, our model exploits both temporal traffic patterns and inferred co-dependencies between DC pairs. IntegNet forecasts the capacity needed for high-priority traffic demands by accounting for the balance between resource overprovisioning (i.e., allocating resources exceeding actual demand) and QoS losses (i.e., allocating fewer resources than actual demand). Our experiments show that IntegNet keeps QoS loss very limited while also reducing overprovisioning by up to 42.1% compared to the state of the art, and by up to 66.2% compared to the traditional method used in DC-WAN traffic engineering.
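    The overprovisioning/QoS balance mentioned above can be made concrete with an asymmetric cost: under-provisioning (QoS loss) is charged more heavily than spare capacity. The Python sketch below is an illustrative assumption about the shape of that trade-off, not the objective IntegNet actually optimises.

        def provisioning_cost(predicted, actual, qos_weight=10.0):
            """Cost of provisioning `predicted` capacity for `actual` demand."""
            if predicted >= actual:
                return predicted - actual             # idle, over-provisioned capacity
            return qos_weight * (actual - predicted)  # QoS loss, penalised harder

        # A 20% under-forecast costs far more than a 20% over-forecast:
        print(provisioning_cost(120, 100))   # 20.0
        print(provisioning_cost(80, 100))    # 200.0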

    SoC-Cluster as an Edge Server: an Application-driven Measurement Study

    Huge electricity consumption is a severe issue for edge data centers. To this end, we propose a new form of edge server, namely SoC-Cluster, that orchestrates many low-power mobile system-on-chips (SoCs) through an on-chip network. For the first time, we have developed a concrete SoC-Cluster server that consists of 60 Qualcomm Snapdragon 865 SoCs in a 2U rack. Such servers have been commercialized successfully and deployed at large scale in edge clouds. The dominant workload on those deployed SoC-Clusters is currently cloud gaming, as mobile SoCs can seamlessly run native mobile games. The primary goal of this work is to demystify whether SoC-Cluster can efficiently serve more general-purpose, edge-typical workloads. To that end, we built a benchmark suite that leverages state-of-the-art libraries for two killer edge workloads, i.e., video transcoding and deep-learning inference. The benchmark comprehensively reports performance, power consumption, and other application-specific metrics. We then performed a thorough measurement study and directly compared SoC-Cluster with traditional edge servers (with Intel CPUs and NVIDIA GPUs) with respect to physical size, electricity, and billing. The results reveal the advantages of SoC-Cluster, especially its high energy efficiency and its ability to scale energy consumption proportionally with incoming load, as well as its limitations. The results also provide insightful implications and valuable guidance to further improve SoC-Cluster and land it in broader edge scenarios.
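    A minimal sketch of the kind of energy-efficiency comparison such a measurement study implies: throughput per watt for each server type under the same workload. The figures below are placeholders, not measurements from the paper.

        def efficiency(requests_per_sec, watts):
            """Requests served per joule of energy."""
            return requests_per_sec / watts

        soc_cluster = efficiency(requests_per_sec=600, watts=400)    # hypothetical
        x86_gpu = efficiency(requests_per_sec=900, watts=1200)       # hypothetical
        print(f"SoC-Cluster: {soc_cluster:.2f} req/J, x86+GPU: {x86_gpu:.2f} req/J")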

    A Comprehensive Study on Off-path SmartNIC

    SmartNICs have recently emerged as attractive devices to accelerate distributed systems. However, there has been no comprehensive characterization of SmartNICs, especially of the network part. This paper presents the first comprehensive study of off-path SmartNICs. Our experimental study uncovers the key performance characteristics of the communication among the client, the SmartNIC SoC, and the host. We find that, without considering the SmartNIC hardware architecture, communication with it can suffer up to 48% bandwidth degradation due to performance anomalies. We also draw implications for addressing these anomalies.

    PISketch: Finding Persistent and Infrequent Flows


    Even lower latency in IIoT: evaluation of QUIC in industrial IoT scenarios

    In this paper we analyze the performance of QUIC as a transport alternative for Internet of Things (IoT) services based on the Message Queuing Telemetry Transport (MQTT) protocol. QUIC is a novel protocol promoted by Google, originally conceived to tackle the limitations of the traditional Transmission Control Protocol (TCP), specifically aiming to reduce the latency caused by connection establishment. QUIC is not yet widespread in IoT environments, so it is interesting to characterize its performance over such scenarios. We used an emulation-based platform, where we integrated QUIC and MQTT (using Go-based implementations) and compared their combined performance with that exhibited by the traditional TCP/TLS approach. We used Linux containers as end devices, and the ns-3 simulator to emulate different network technologies, such as WiFi, cellular, and satellite, under varying conditions. The results show that QUIC is indeed an appropriate protocol to guarantee robust, secure, and low-latency communications over IoT scenarios.

    The authors are grateful for the funding of the Industrial Doctorates Program from the University of Cantabria (Call 2020). This work has been partially supported by the Basque Government through the Elkartek program under the DIGITAL project (grant agreement number KK-2019/00095), and by the Spanish Government (Ministerio de Economía y Competitividad, Fondo Europeo de Desarrollo Regional, FEDER) by means of the project FIERCE: Future Internet Enabled Resilient smart CitiEs (RTI2018-093475-AI00).
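    The latency edge QUIC is expected to bring comes largely from handshake round trips: TCP plus TLS 1.3 needs roughly two round trips before application data flows (one for the TCP handshake, one for TLS), while QUIC combines transport and crypto setup into a single round trip, and 0-RTT resumption can remove even that. The back-of-the-envelope Python model below uses illustrative RTT values for the link types studied.

        def setup_latency_ms(rtt_ms, handshake_rtts):
            return rtt_ms * handshake_rtts

        for link, rtt in [("WiFi", 5), ("cellular", 50), ("satellite", 600)]:
            tcp_tls = setup_latency_ms(rtt, 2)   # TCP SYN/ACK, then TLS 1.3
            quic = setup_latency_ms(rtt, 1)      # combined QUIC handshake
            print(f"{link}: TCP/TLS ~{tcp_tls} ms, QUIC ~{quic} ms")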

    Online learning on the programmable dataplane

    This thesis makes the case for managing computer networks with data-driven methods—automated statistical inference and control based on measurement data and runtime observations—and argues for their tight integration with programmable dataplane hardware to make management decisions faster and from more precise data. Optimisation, defence, and measurement of networked infrastructure are each challenging tasks in their own right, and are currently dominated by the use of hand-crafted heuristic methods. These become harder to reason about and deploy as networks scale in rates and number of forwarding elements, while their design requires expert knowledge and care around unexpected protocol interactions. This makes tailored, per-deployment or per-workload solutions infeasible to develop. Recent advances in machine learning offer capable function approximation and closed-loop control which suit many of these tasks. New, programmable dataplane hardware enables more agility in the network—runtime reprogrammability, precise traffic measurement, and low-latency on-path processing. The synthesis of these two developments allows complex decisions to be made on previously unusable state, and made quicker by offloading inference to the network.

    To justify this argument, I advance the state of the art in data-driven defence of networks, novel dataplane-friendly online reinforcement learning algorithms, and in-network data reduction to allow classification of switch-scale data. Each requires co-design aware of the network, and of the failure modes of systems and carried traffic. To make online learning possible in the dataplane, I use fixed-point arithmetic and modify classical (non-neural) approaches to take advantage of the SmartNIC compute model and make use of rich device-local state. I show that data-driven solutions still require great care to design correctly, but with the right domain expertise they can improve on pathological cases in DDoS defence, such as protecting legitimate UDP traffic. In-network aggregation to histograms is shown to enable accurate classification from fine temporal effects, and allows hosts to scale such classification to far larger flow counts and traffic volumes. Moving reinforcement learning to the dataplane is shown to offer substantial benefits to state-action latency and online learning throughput versus host machines, allowing policies to react faster to fine-grained network events. The dataplane environment is key to making reactive online learning feasible—to port further algorithms and learnt functions, I collate and analyse the strengths of current and future hardware designs, as well as individual algorithms.
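    As a hedged sketch of the in-network data reduction mentioned above, the Python below folds packet inter-arrival times into a small fixed-bin histogram of the kind a dataplane could maintain and export for host-side classification; the bin edges and feature choice are illustrative assumptions, not the thesis's exact design.

        BIN_EDGES_US = [10, 100, 1_000, 10_000]    # microsecond bucket boundaries

        def bucket(iat_us):
            """Map one inter-arrival time to a histogram bin index."""
            for i, edge in enumerate(BIN_EDGES_US):
                if iat_us < edge:
                    return i
            return len(BIN_EDGES_US)               # overflow bin

        def aggregate(iats_us):
            """Reduce a stream of inter-arrival times to a compact feature vector."""
            hist = [0] * (len(BIN_EDGES_US) + 1)
            for iat in iats_us:
                hist[bucket(iat)] += 1
            return hist

        print(aggregate([3, 42, 42, 5_000, 250_000]))   # [1, 2, 0, 1, 1]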

    Per-host DDoS mitigation by direct-control reinforcement learning

    DDoS attacks plague the availability of online services today, yet, like many cybersecurity problems, they are evolving and non-stationary. Normal and attack patterns shift as new protocols and applications are introduced, further compounded by burstiness and seasonal variation. Accordingly, it is difficult to apply machine-learning-based techniques and defences in practice. Reinforcement learning (RL) may overcome this detection problem for DDoS attacks by managing and monitoring consequences: an agent's role is to learn to optimise performance criteria (which are always available) in an online manner. We advance the state of the art in RL-based DDoS mitigation by introducing two agent classes designed to act on a per-flow basis, in a protocol-agnostic manner, for any network topology. This is supported by an in-depth investigation of feature suitability and an empirical evaluation. Our results show the existence of flow features with high predictive power for different traffic classes when used as the basis for feedback-loop-like control. We show that the new RL agent models can offer a significant increase in the goodput of legitimate TCP traffic for many choices of host density.
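    A minimal sketch of that feedback loop, under assumed details (the action discretisation, reward, and update rule are illustrative, not the paper's exact agent designs): each flow gets an agent that picks how much of the flow's traffic to admit and is rewarded by the resulting change in legitimate goodput, a criterion that is always measurable online, unlike attack labels.

        import random
        from collections import defaultdict

        ACTIONS = [1.0, 0.5, 0.1, 0.0]   # admitted fraction of a flow's traffic
        ALPHA, EPSILON = 0.2, 0.1

        q = defaultdict(lambda: [0.0] * len(ACTIONS))   # per-flow action values

        def act(flow_id):
            """Epsilon-greedy choice of a throttling action for one flow."""
            row = q[flow_id]
            if random.random() < EPSILON:
                return random.randrange(len(ACTIONS))             # explore
            return max(range(len(ACTIONS)), key=row.__getitem__)  # exploit

        def learn(flow_id, action, goodput_delta):
            # Reward: observed change in legitimate goodput after acting.
            row = q[flow_id]
            row[action] += ALPHA * (goodput_delta - row[action])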