6 research outputs found

    PIKA: A Network Service for Multikernel Operating Systems

    PIKA is a network stack designed for multikernel operating systems that target potential future architectures lacking cache-coherent shared memory but supporting message passing. PIKA splits the network stack into several servers that communicate using a low-overhead message passing layer. A key challenge faced by PIKA is the maintenance of shared state, such as a single accept queue and load balance information. PIKA addresses this challenge using a speculative 3-way handshake for connection acceptance, and a new distributed load balancing scheme for spreading connections. A PIKA prototype achieves competitive performance, excellent scalability, and low service times under load imbalance on commodity hardware. Finally, we demonstrate that splitting network stack processing by function across separate cores is a net loss on commodity hardware, and we describe conditions under which it may be advantageous.
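    The abstract does not detail PIKA's distributed load balancing scheme. As a minimal user-space sketch of one plausible policy for spreading connections across per-core stack servers — power-of-two-choices selection over locally tracked load counters, which tolerates stale load information gracefully — the idea might look like this (the `StackServer` class and `assign_connection` function are illustrative assumptions, not PIKA's actual interfaces):

    ```python
    import random

    class StackServer:
        """One network-stack server instance (hypothetical model)."""
        def __init__(self, server_id):
            self.server_id = server_id
            self.active_connections = 0

    def assign_connection(servers, rng=random):
        # Power-of-two-choices: sample two servers at random and assign the
        # new connection to the less loaded of the pair. Even with slightly
        # stale counters this keeps the load spread close to uniform.
        a, b = rng.sample(servers, 2)
        target = a if a.active_connections <= b.active_connections else b
        target.active_connections += 1
        return target.server_id

    servers = [StackServer(i) for i in range(4)]
    for _ in range(1000):
        assign_connection(servers)
    loads = [s.active_connections for s in servers]
    ```

    With 1000 connections spread over 4 servers, the per-server loads stay close to the 250-connection average, which is the property a connection-spreading scheme is after.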

    Building A Scalable And High-Performance Key-Value Store System

    Contemporary web sites store and process very large amounts of data. To provide timely service to their users, they have adopted key-value (KV) stores, simple but effective caching infrastructures layered atop the conventional databases that hold these data, to boost performance. Examples include Facebook, Twitter, and Amazon. Because little is known about these workloads outside the companies that operate them, this dissertation provides a detailed workload study of Facebook's Memcached, one of the world's largest KV deployments. We analyze the Memcached workload from the perspectives of server-side performance, request composition, caching efficacy, and key locality. The observations presented in this dissertation lead to several design insights and a new research direction for KV stores: Hippos, a high-throughput, low-latency, and energy-efficient KV-store implementation. Long considered memory-bound and network-bound, recent KV-store implementations on multicore servers are increasingly CPU-bound instead. This limitation often leads to under-utilization of available bandwidth and poor energy efficiency, as well as long response times under heavy load. To address these issues, Hippos moves the KV store into the operating system's kernel and thus removes most of the overhead associated with the network stack and system calls. It uses the Netfilter framework to handle UDP packets quickly, removing the overhead of UDP-based GET requests almost entirely. Combined with lock-free multithreaded data access, Hippos removes several performance bottlenecks both internal and external to the KV-store application. Hippos is prototyped as a Linux loadable kernel module and evaluated against the ubiquitous Memcached using various micro-benchmarks and workloads from Facebook's production systems. The experiments show that Hippos provides 20-200% throughput improvements on a 1 Gbps network (up to 590% improvement on a 10 Gbps network) and 5-20% power savings compared with Memcached.
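    The abstract does not show Hippos's packet-handling code, which runs inside a kernel Netfilter hook. As a rough user-space illustration of the request path it short-circuits, Memcached's UDP framing (an 8-byte header carrying a request ID, a sequence number, a datagram count, and a reserved field, followed by the ASCII command) can be parsed and answered like this (the `store` table and `handle_udp_get` function are illustrative, not Hippos code):

    ```python
    import struct

    store = {b"user:42": b"alice"}  # toy in-memory KV table (illustrative)

    def handle_udp_get(datagram):
        """Parse a memcached UDP datagram carrying an ASCII 'get' and
        build the reply datagram. Hippos performs the equivalent work in
        a kernel Netfilter hook; this is plain user-space Python."""
        header, body = datagram[:8], datagram[8:]
        req_id, seq, total, _reserved = struct.unpack("!HHHH", header)
        cmd, _, key = body.rstrip(b"\r\n").partition(b" ")
        if cmd != b"get" or key not in store:
            payload = b"END\r\n"
        else:
            value = store[key]
            payload = b"VALUE %s 0 %d\r\n%s\r\nEND\r\n" % (key, len(value), value)
        # The reply echoes the request ID so the client can match it up;
        # this toy version always fits the answer in a single datagram.
        return struct.pack("!HHHH", req_id, 0, 1, 0) + payload

    request = struct.pack("!HHHH", 7, 0, 1, 0) + b"get user:42\r\n"
    reply = handle_udp_get(request)
    ```

    Handling this path entirely in the kernel, as Hippos does, avoids the per-request socket and system-call overhead that a user-space server like Memcached pays for every GET.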

    TRIIIAD: An Architecture for Autonomic Orchestration of Server-Centric Data Center Networks

    This thesis presents two contributions to server-centric data center networks. The first, titled Twin Datacenter Interconnection Topology, focuses on topological aspects and demonstrates how the use of Twin Graphs can potentially reduce cost while ensuring high scalability, fault tolerance, resilience, and performance. The second, titled TRIIIAD (TRIple-Layered Intelligent and Integrated Architecture for Datacenters), focuses on the coupling between cloud orchestration and network control. TRIIIAD consists of three horizontal layers and a vertical plane for control, management, and orchestration. The top layer represents the data center's cloud. The middle layer provides a lightweight and efficient mechanism for routing and forwarding data. The bottom layer operates as a distributed optical switch. Finally, the vertical plane aligns the operation of the three layers while keeping them agnostic of one another. This plane is realized by an augmented SDN controller, integrated into the orchestration dynamics so as to maintain consistency between network information and the decisions made at the virtualization layer.
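    The vertical plane's role — an augmented SDN controller translating orchestration events into routing-layer updates so the layers stay consistent without knowing about each other — can be sketched as a small event broker (the class, event names, and table layout below are hypothetical, not TRIIIAD's actual interfaces):

    ```python
    class AugmentedController:
        """Toy model of a control plane that keeps the cloud layer and the
        routing layer consistent while each stays agnostic of the other."""
        def __init__(self):
            self.routes = {}      # vm name -> host currently serving it
            self.listeners = []   # routing-layer callbacks to notify

        def subscribe(self, callback):
            self.listeners.append(callback)

        def on_vm_event(self, event):
            # Orchestration events ('spawn', 'migrate', 'destroy') are
            # translated into route-table updates; the cloud layer never
            # touches routes, the routing layer never sees VM lifecycle.
            if event["type"] in ("spawn", "migrate"):
                self.routes[event["vm"]] = event["host"]
            elif event["type"] == "destroy":
                self.routes.pop(event["vm"], None)
            for notify in self.listeners:
                notify(event["vm"], self.routes.get(event["vm"]))

    ctrl = AugmentedController()
    seen = []
    ctrl.subscribe(lambda vm, host: seen.append((vm, host)))
    ctrl.on_vm_event({"type": "spawn", "vm": "web-1", "host": "rack1/srv3"})
    ctrl.on_vm_event({"type": "migrate", "vm": "web-1", "host": "rack2/srv1"})
    ```

    After the migration event, the routing layer's view of `web-1` already points at the new host, which is the consistency property the vertical plane is meant to guarantee.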