4 research outputs found

    Achieving high central processing unit efficiency and low tail latency in datacenters

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages 95-104).
    As datacenters have proliferated over the last couple of decades and datacenter applications have grown increasingly complex, two competing goals have emerged for networks and servers in datacenters. On the one hand, applications demand low latency (on the order of microseconds) in order to respond quickly to user requests. On the other hand, datacenter operators require high CPU efficiency in order to reduce operating costs. Unfortunately, today's systems do a poor job of providing low latency and high CPU efficiency simultaneously. This dissertation presents Shenango, a system that improves CPU efficiency while preserving or improving tail latency relative to the state of the art. Shenango establishes that systems today are unable to provide CPU efficiency and low latency simultaneously because they reallocate cores across applications too infrequently. It contributes an efficient algorithm for deciding when applications would benefit from additional cores, as well as mechanisms to reallocate cores at microsecond granularity. Shenango's fast core reallocations enable it to match the tail latency of state-of-the-art kernel-bypass network stacks while linearly trading throughput for latency-sensitive applications against throughput for batch applications as load varies over time. While Shenango enables high efficiency and low tail latency at end hosts, end-to-end application performance also depends on the behavior of the network. Thus, this dissertation also describes Chimera, a proposal for how to build on Shenango to co-design congestion control with CPU scheduling, so that congestion control can optimize for end-to-end latency and efficiency.
    by Amy Ousterhout. Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science.
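    The core-reallocation decision described above can be illustrated with a small sketch: an application is deemed to need another core when some packet or runnable thread has waited in its queues for a full check interval. This is a loose approximation under assumed data structures; the RuntimeQueues type, its fields, and the use of std::deque are illustrative, not Shenango's actual implementation.

    // Illustrative sketch only: detecting when an application would benefit
    // from an additional core, based on work that has waited a full interval.
    #include <cstdint>
    #include <deque>
    #include <optional>

    struct RuntimeQueues {
        std::deque<uint64_t> packet_queue;                 // pending ingress packet IDs
        std::deque<uint64_t> runnable_threads;             // runnable-but-unscheduled thread IDs
        std::optional<uint64_t> head_packet_last_interval; // queue heads seen at the previous check
        std::optional<uint64_t> head_thread_last_interval;
    };

    // Called once per check interval (on the order of microseconds).
    // Returns true if the same item is still at the head of a queue one
    // interval later, i.e. the application is congested and would benefit
    // from another core.
    bool needs_more_cores(RuntimeQueues& q) {
        bool congested =
            (!q.packet_queue.empty() &&
             q.head_packet_last_interval == q.packet_queue.front()) ||
            (!q.runnable_threads.empty() &&
             q.head_thread_last_interval == q.runnable_threads.front());

        // Remember the current queue heads so the next interval can tell
        // whether the same item is still waiting.
        q.head_packet_last_interval.reset();
        if (!q.packet_queue.empty())
            q.head_packet_last_interval = q.packet_queue.front();
        q.head_thread_last_interval.reset();
        if (!q.runnable_threads.empty())
            q.head_thread_last_interval = q.runnable_threads.front();
        return congested;
    }

    An orchestrating component would run this check for every application each interval and grant an idle core to any application for which it returns true.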

    Programmable data plane for resource management in datacenters

    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 47-51).
    Network resource management schemes can significantly improve the performance of datacenter applications. However, it is difficult to experiment with and evaluate these schemes today because they require modifications to hardware routers. To address this, we introduce Flexplane, a programmable network data plane for datacenters. Flexplane enables users to express their schemes in a high-level language (C++) and then run real datacenter applications over them at hardware rates. We demonstrate that Flexplane can accurately reproduce the behavior of schemes already supported in hardware (e.g., RED, DCTCP) and can be used to experiment with new schemes not yet supported in hardware, such as HULL. We also show that Flexplane is scalable and has the potential to support large networks.
    by Amy Ousterhout. S.M.
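    As a concrete example of the kind of scheme Flexplane is meant to host, the sketch below implements a RED-like drop decision in C++. The class name and interface here are hypothetical stand-ins chosen for illustration; Flexplane's real API for expressing schemes is not reproduced.

    // Hypothetical RED-like queue-management scheme: drop (or mark) packets
    // with a probability that grows as the average queue length rises.
    #include <cstddef>
    #include <random>

    class RedLikeScheme {
    public:
        RedLikeScheme(double min_th, double max_th, double max_p, double weight)
            : min_th_(min_th), max_th_(max_th), max_p_(max_p), weight_(weight) {}

        // Called on each enqueue with the current instantaneous queue length;
        // returns true if the packet should be dropped (or ECN-marked).
        bool should_drop(std::size_t queue_len) {
            // Exponentially weighted moving average of the queue length.
            avg_ = (1.0 - weight_) * avg_ + weight_ * static_cast<double>(queue_len);
            if (avg_ < min_th_) return false;
            if (avg_ >= max_th_) return true;
            // Drop probability rises linearly between the two thresholds.
            double p = max_p_ * (avg_ - min_th_) / (max_th_ - min_th_);
            return dist_(rng_) < p;
        }

    private:
        double min_th_, max_th_, max_p_, weight_;
        double avg_ = 0.0;
        std::mt19937 rng_{42};
        std::uniform_real_distribution<double> dist_{0.0, 1.0};
    };

    A DCTCP-style variant would instead ECN-mark packets whenever the instantaneous queue length exceeds a single threshold, leaving the rate reduction to the endpoints.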

    Just in time delivery: Leveraging operating systems knowledge for better datacenter congestion control

    Network links and server CPUs are heavily contended resources in modern datacenters. To keep tail latencies low, datacenter operators drastically overprovision both types of resources today, and there has been significant research into effectively managing network traffic [4, 19, 21, 29] and CPU load [22, 27, 32]. However, this work typically looks at the two resources in isolation. In this paper, we make the observation that, in the datacenter, the allocation of network and CPU resources should be co-designed for the most efficiency and the best response times. For example, while congestion control protocols can prioritize traffic from certain flows, this provides no benefit if the traffic arrives at an overloaded server that will only queue the request. This paper explores the potential benefits of such a co-designed resource allocator and considers the recent work in both CPU scheduling and congestion control that is best suited to such a system. We propose Chimera, a new datacenter OS that integrates a receiver-based congestion control protocol with OS insight into application queues, using the recent Shenango operating system [32].
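    To make the co-design idea concrete, the sketch below shows one way a receiver-based protocol could cap the credit it advertises using OS-reported application queue occupancy, so the network does not deliver requests faster than the CPU scheduler can serve them. Chimera is only a proposal here, so every name, field, and formula in this sketch is an illustrative assumption rather than a description of its design.

    // Illustrative sketch: receiver-side credit allocation informed by the
    // OS's view of the destination application's queue.
    #include <cstdint>

    struct AppQueueState {
        uint64_t pending_requests;   // requests queued at the application (reported by the OS)
        uint64_t service_rate_rps;   // recent estimate of requests served per second
    };

    // Compute how many new requests' worth of credit the receiver should
    // advertise for the next interval of length interval_us microseconds.
    uint64_t credits_for_next_interval(const AppQueueState& app,
                                       uint64_t target_queue_len,
                                       uint64_t interval_us) {
        // Requests the application is expected to drain during the interval.
        uint64_t drained = app.service_rate_rps * interval_us / 1'000'000;
        // Admit only enough new work to keep the queue near its target;
        // if the backlog already exceeds the target, admit nothing.
        uint64_t room = target_queue_len + drained;
        return room > app.pending_requests ? room - app.pending_requests : 0;
    }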

    Shenango: Achieving high CPU efficiency for latency-sensitive datacenter workloads

    Datacenter applications demand microsecond-scale tail latencies and high request rates from operating systems, and most applications handle loads that have high variance over multiple timescales. Achieving these goals in a CPU-efficient way is an open problem. Because of the high overheads of today's kernels, the best available solution to achieve microsecond-scale latencies is kernel-bypass networking, which dedicates CPU cores to applications for spin-polling the network card. But this approach wastes CPU: even at modest average loads, one must dedicate enough cores for the peak expected load. Shenango achieves comparable latencies but at far greater CPU efficiency. It reallocates cores across applications at very fine granularity, every 5 µs, enabling cycles unused by latency-sensitive applications to be used productively by batch processing applications. It achieves such fast reallocation rates with (1) an efficient algorithm that detects when applications would benefit from more cores, and (2) a privileged component called the IOKernel that runs on a dedicated core, steering packets from the NIC and orchestrating core reallocations. When handling latency-sensitive applications, such as memcached, we found that Shenango achieves tail latency and throughput comparable to ZygOS, a state-of-the-art, kernel-bypass network stack, but can linearly trade latency-sensitive application throughput for batch processing application throughput, vastly increasing CPU efficiency.
    NSF (Grants CNS-1407470, CNS-1526791, CNS-1563826)
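    The division of labor described above, packets steered on every spin-poll iteration and core allocations reconsidered roughly every 5 µs, is sketched below. The App struct, the stub functions, and the loop structure are placeholders chosen for illustration; they are not the IOKernel's actual interfaces.

    // Illustrative sketch: a dedicated busy-polling loop that steers packets
    // continuously and revisits core allocations once per short interval.
    #include <chrono>
    #include <vector>

    struct App {
        bool congested = false;   // set when a detection check finds waiting work
        int cores_granted = 0;
    };

    // Placeholder stubs standing in for NIC polling / packet steering and for
    // handing a physical core to an application's kernel thread.
    void poll_nic_and_steer(std::vector<App>& /*apps*/) { /* move packets to per-app queues */ }
    void grant_core(App& app) { ++app.cores_granted; /* wake a kthread pinned to an idle core */ }

    void iokernel_loop(std::vector<App>& apps, int idle_cores) {
        using clock = std::chrono::steady_clock;
        const auto interval = std::chrono::microseconds(5);
        auto last_check = clock::now();

        for (;;) {
            poll_nic_and_steer(apps);  // packets flow on every iteration

            if (clock::now() - last_check >= interval) {
                for (App& app : apps) {
                    if (app.congested && idle_cores > 0) {
                        grant_core(app);   // give the congested app one more core
                        --idle_cores;
                    }
                    app.congested = false; // re-evaluated by the next detection pass
                }
                last_check = clock::now();
            }
        }
    }

    Running such a loop on its own dedicated core is what makes microsecond-scale reallocation plausible: no interrupt or system-call path sits between detecting congestion and granting a core.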