
    Relaxing state-access constraints in stateful programmable data planes

    Supporting the programming of stateful packet-forwarding functions in hardware has recently attracted the interest of the research community. When designing such switching chips, the challenge is to guarantee the ability to program functions that can read and modify the data plane's state while preserving line-rate performance and state consistency. Current state-of-the-art designs are based on a very conservative all-or-nothing model: programmability is limited to functions that are guaranteed to sustain line rate under any traffic workload, which in effect bounds the maximum time available to execute state-update operations. In this paper, we explore options for relaxing these constraints using simulations on real traffic traces. We then propose a model in which functions may execute within a larger but still bounded time, while data hazards are prevented through memory locking. We present results showing that such flexibility can be supported with little or no throughput degradation.
    Comment: 6 pages
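    The locking model can be illustrated with a toy trace-replay loop. The sketch below is not the paper's hardware design; all names (StateStore, MAX_UPDATE_CYCLES, the trace format) are hypothetical. It shows how per-key memory locking prevents a read-after-write hazard when a state update takes more than one cycle: a packet that hits a locked key stalls until the in-flight update commits.

```python
# Hypothetical sketch of per-key memory locking for bounded-time state
# updates; not the paper's implementation. A packet whose flow state is
# still being updated stalls instead of reading stale state.

MAX_UPDATE_CYCLES = 4  # bounded (but multi-cycle) state-update latency

class StateStore:
    def __init__(self):
        self.state = {}         # flow key -> counter (example state)
        self.locked_until = {}  # flow key -> cycle when the lock releases

    def try_update(self, key, now):
        """Start an update unless the key is locked; True on success."""
        if self.locked_until.get(key, 0) > now:
            return False  # data hazard: stall this packet
        self.state[key] = self.state.get(key, 0) + 1
        self.locked_until[key] = now + MAX_UPDATE_CYCLES
        return True

def run(trace):
    """Replay (cycle, flow_key) arrivals; count packets that stall."""
    store, stalls, pending = StateStore(), 0, []
    for now, key in trace:
        for p in list(pending):       # retry stalled packets first
            if store.try_update(p, now):
                pending.remove(p)
        if not store.try_update(key, now):
            pending.append(key)
            stalls += 1
    return stalls

# The back-to-back packet on flow "A" at cycle 1 stalls -> prints 1.
print(run([(0, "A"), (1, "A"), (2, "B"), (6, "C")]))
```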

    Reproducible Host Networking Evaluation with End-to-End Simulation

    Networking researchers face growing challenges in evaluating and reproducing results for modern network systems. As systems rely on closer integration of system components and cross-layer optimizations in the pursuit of performance and efficiency, they also become increasingly tied to specific hardware and testbed properties. Combined with a trend towards heterogeneous hardware, such as protocol offloads, SmartNICs, and in-network accelerators, researchers face the choice of either investing ever more time and resources into comparisons with prior work or lowering the standards for evaluation. We aim to address this challenge by introducing SimBricks, a simulation framework that decouples networked systems from the physical testbed and enables reproducible end-to-end evaluation in simulation. Instead of reinventing the wheel, SimBricks is a modular framework that combines existing tried-and-true simulators for individual components (processor and memory, NIC, and network) into complete testbeds capable of running unmodified systems. In our evaluation, we reproduce key findings from prior work, including DCTCP congestion control, NOPaxos in-network consensus acceleration, and the Corundum FPGA NIC.
    Comment: 15 pages, 10 figures, under submission
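    The composition idea can be sketched in a few lines. The code below is not the SimBricks API; Channel, NicSim, and the timing constants are invented for illustration. It shows the core pattern: component simulators advance independently and exchange timestamped messages over fixed-latency channels, so detailed host, NIC, and network models can be snapped together without knowing each other's internals.

```python
# Illustrative sketch of composing component simulators (names invented;
# not the SimBricks interface): simulators communicate only via
# timestamped messages on fixed-latency channels.
import heapq

class Channel:
    """Unidirectional link with a fixed propagation latency (ns)."""
    def __init__(self, latency_ns):
        self.latency_ns = latency_ns
        self.queue = []  # (delivery_time, payload)

    def send(self, now_ns, payload):
        heapq.heappush(self.queue, (now_ns + self.latency_ns, payload))

    def poll(self, now_ns):
        while self.queue and self.queue[0][0] <= now_ns:
            yield heapq.heappop(self.queue)[1]

class NicSim:
    """Stand-in for a detailed NIC simulator (e.g., a Corundum model)."""
    def __init__(self, to_net):
        self.to_net = to_net

    def step(self, now_ns, host_events):
        for pkt in host_events:            # descriptors DMA'd by the host
            self.to_net.send(now_ns, pkt)  # serialize onto the wire

# Wiring: host -> NIC -> network; only the message format and channel
# latencies are shared between components.
host_to_nic, nic_to_net = Channel(500), Channel(1000)
nic = NicSim(nic_to_net)
host_to_nic.send(0, "pkt0")
for t in range(0, 3000, 500):              # conservative time-stepped sync
    nic.step(t, host_to_nic.poll(t))
print(sorted(nic_to_net.queue))            # pkt0 reaches the wire at t=1500
```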

    RPCValet: NI-Driven Tail-Aware Balancing of µs-Scale RPCs

    Modern online services come with stringent quality requirements in terms of response-time tail latency. Because these services are decomposed into fine-grained communicating software layers, a single user request fans out into a plethora of short, μs-scale RPCs, aggravating the need for faster inter-server communication. In reaction to that need, we are witnessing a technological transition characterized by the emergence of hardware-terminated user-level protocols (e.g., InfiniBand/RDMA) and new architectures with fully integrated Network Interfaces (NIs). Such architectures offer a unique opportunity for a new NI-driven approach to balancing RPCs among the cores of manycore server CPUs, yielding major tail-latency improvements for μs-scale RPCs. We introduce RPCValet, an NI-driven RPC load-balancing design for architectures with hardware-terminated protocols and integrated NIs that delivers near-optimal tail latency. RPCValet's dispatch decisions emulate the theoretically optimal single-queue system without incurring the synchronization overheads currently associated with single-queue implementations. Compared to current systems with hardware support for RPC load distribution, our design improves throughput under tight tail-latency goals by up to 1.4x and reduces tail latency before saturation by up to 4x for RPCs with μs-scale service times. RPCValet performs within 15% of the theoretically optimal single-queue system.
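    The gap RPCValet closes can be reproduced with a toy queueing simulation. The sketch below illustrates the single-queue argument only; it is not RPCValet's design, and the core count, load, and service times are arbitrary. Dispatching each RPC to the earliest-idle core behaves like one shared FCFS queue, while random dispatch to per-core queues inflates the tail.

```python
# Toy comparison (not RPCValet): 99th-percentile latency of single-queue
# dispatch vs. random dispatch to per-core queues, with exponential
# arrivals and service times.
import random

def simulate(single_queue, n_cores=16, n_rpcs=200_000, load=0.7, svc_us=1.0):
    random.seed(0)
    rate = load * n_cores / svc_us       # aggregate arrival rate (1/us)
    t, core_free = 0.0, [0.0] * n_cores  # next idle time per core
    latencies = []
    for _ in range(n_rpcs):
        t += random.expovariate(rate)    # Poisson arrivals
        if single_queue:                 # serve on the earliest-idle core,
            c = min(range(n_cores), key=core_free.__getitem__)
        else:                            # or pick a per-core queue at random
            c = random.randrange(n_cores)
        start = max(t, core_free[c])
        core_free[c] = start + random.expovariate(1 / svc_us)
        latencies.append(core_free[c] - t)  # queueing + service time
    latencies.sort()
    return latencies[int(0.99 * n_rpcs)]

print("99th pct, single queue (us):    %.1f" % simulate(True))
print("99th pct, random dispatch (us): %.1f" % simulate(False))
```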

    From photons to big-data applications: terminating terabits

    Computer architectures have entered a watershed as the quantity of network data generated by user applications exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while the gap between the quantity of networked data and the per-system processing capacity continues to widen. Despite this, demand continues to grow unabated in both task variety and task complexity. Networked computer systems provide a fertile environment in which new applications develop, and as they become akin to infrastructure, any limitation on the growth of their capacity and capabilities becomes an important constraint for all computer users. Taking a networked computer system capable of processing terabits per second as a benchmark for scalability, we critique the state of the art in commodity computing and propose a wholesale reconsideration of the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power, and increase performance in a multi-scale approach with potential application from nanoscale to data-centre-scale computers.
    This work was supported by the UK Engineering and Physical Sciences Research Council Internet Project EP/H040536/1, and by the Defense Advanced Research Projects Agency and the Air Force Research Laboratory under contract FA8750-11-C-0249.

    SABRes: Atomic Object Reads for In-Memory Rack-Scale Computing

    Modern in-memory services rely on large distributed object stores to achieve the high scalability essential to serving thousands of requests concurrently. The independent and unpredictable nature of incoming requests results in random accesses to the object store, triggering frequent remote memory accesses. State-of-the-art distributed memory frameworks leverage the one-sided operations offered by RDMA technology to mitigate the traditionally high cost of remote memory access. Unfortunately, the limited semantics of RDMA one-sided operations bound remote-memory-access atomicity to a single cache block; atomic remote object access therefore relies on software mechanisms. Emerging highly integrated rack-scale systems that reduce the latency of one-sided operations to a small multiple of DRAM latency expose the overhead of these software mechanisms as a major latency contributor. This technology-triggered paradigm shift calls for new one-sided operations with stronger semantics. We take a step in that direction by proposing SABRes, a new one-sided operation that provides atomic remote object reads in hardware. We then present LightSABRes, a lightweight hardware accelerator for SABRes that removes all atomicity-associated software overheads. Compared to a state-of-the-art software atomicity mechanism, LightSABRes improves the throughput of a microbenchmark that atomically accesses 128B-8KB objects from remote memory by 15-97%, and the throughput of a modern in-memory distributed object store by 30-60%.
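    One common software mechanism of the kind LightSABRes is compared against is optimistic version validation, sketched below under assumed conventions: an 8-byte version word ahead of the payload, with odd values marking an in-progress write. The layout and names are illustrative, not SABRes' interface; the retry-and-validate loop is the kind of per-read software overhead that SABRes moves into hardware.

```python
# Illustrative software atomicity scheme (layout and names assumed, not
# SABRes): a read is valid only if the version sampled before the payload
# matches the version sampled after, proving no writer overlapped it.
import struct

VER = struct.Struct("<Q")  # 8-byte version header before the payload

def write_object(buf, version, payload):
    buf[0:8] = VER.pack(version + 1)   # odd version = write in progress
    buf[8:8 + len(payload)] = payload
    buf[0:8] = VER.pack(version + 2)   # even version = stable

def atomic_read(buf, size):
    """Optimistic read: retry until the object is stable and unchanged."""
    while True:
        v0 = VER.unpack(bytes(buf[0:8]))[0]
        if v0 % 2:                     # writer in flight, retry
            continue
        payload = bytes(buf[8:8 + size])
        v1 = VER.unpack(bytes(buf[0:8]))[0]
        if v0 == v1:                   # no intervening write: consistent
            return payload

buf = bytearray(8 + 16)
write_object(buf, 0, b"hello" + b"\x00" * 11)
print(atomic_read(buf, 16))
```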