
    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources that today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions, which is virtually impossible given the massive body of existing research. We hope to give readers a broad view of the options and factors to weigh when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have recently been receiving increasing attention, and pose interesting and novel research problems. (Accepted for publication in IEEE Communications Surveys and Tutorials.)
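
    Traffic shaping, one of the mechanisms listed above, is commonly implemented with a token bucket. Below is a minimal, self-contained sketch of that idea in Python; the rate and burst values are illustrative assumptions, not parameters from the paper.

        import time

        class TokenBucket:
            """Shape a flow to a target rate while permitting short bursts."""

            def __init__(self, rate_bytes_per_sec, burst_bytes):
                self.rate = rate_bytes_per_sec   # refill rate (bytes/second)
                self.capacity = burst_bytes      # maximum burst size (bytes)
                self.tokens = burst_bytes
                self.last = time.monotonic()

            def allow(self, packet_bytes):
                """Return True if the packet may be sent now."""
                now = time.monotonic()
                # Refill in proportion to elapsed time, capped at capacity.
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= packet_bytes:
                    self.tokens -= packet_bytes
                    return True
                return False

        # Example: shape a flow to roughly 10 MB/s with a 64 KB burst allowance.
        shaper = TokenBucket(rate_bytes_per_sec=10_000_000, burst_bytes=64 * 1024)
        if shaper.allow(1500):
            pass  # transmit the packet; otherwise queue or drop it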

    Implications and Limitations of Securing an InfiniBand Network

    The InfiniBand Architecture is one of the leading network interconnects used in high performance computing, delivering very high bandwidth and low latency. As the popularity of InfiniBand increases, the possibility of new InfiniBand applications arises outside the domain of high performance computing, creating the opportunity for new security risks. In this work, new security questions are considered and addressed. The study demonstrates that many common traffic-analyzing tools cannot monitor or capture InfiniBand traffic transmitted between two hosts. Due to the kernel-bypass nature of InfiniBand, many host-based network security systems cannot be executed on InfiniBand applications, and those that can impose a significant performance loss on the network. The research concludes that not all network security practices used for Ethernet translate to InfiniBand as previously suggested, and that an answer to meeting specific security requirements for an InfiniBand network might reside in hardware offload.
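
    The capture problem follows from where conventional tools sit. The sketch below shows the kind of Linux AF_PACKET sniffer that many common traffic analyzers are built on; the interface name is an assumption for illustration.

        import socket

        ETH_P_ALL = 0x0003  # ask the kernel for frames of all protocols

        # A raw packet socket only ever sees frames that traverse the
        # kernel network stack (Linux only; requires root privileges).
        sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                             socket.htons(ETH_P_ALL))
        sock.bind(("eth0", 0))  # interface name is an illustrative assumption

        while True:
            frame, addr = sock.recvfrom(65535)
            print(f"captured {len(frame)} bytes on {addr[0]}")
            # RDMA transfers between two hosts never show up here: the
            # InfiniBand HCA moves payloads directly between application
            # memory and the wire, bypassing this kernel path entirely.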

    Network Neutrality or Internet Innovation?

    Over the past two decades, the Internet has undergone an extensive re-ordering of its topology that has resulted in increased variation in the price and quality of its services. Innovations such as private peering, multihoming, secondary peering, server farms, and content delivery networks have caused the Internet’s traditionally hierarchical architecture to be replaced by one that is more heterogeneous. Relatedly, network providers have begun to employ an increasingly varied array of business arrangements and pricing. Some have interpreted this variation as network providers promoting their self-interest at the expense of the public. In fact, these changes reflect network providers’ attempts to reduce cost, manage congestion, and maintain quality of service. Current policy proposals to constrain this variation risk harming these beneficial developments.

    Master of Science

    Efficient movement of massive amounts of data over high-speed networks at high throughput is essential for a modern in-memory storage system. In response to growing throughput and latency demands at scale, a new class of database systems was developed in recent years. The development of these systems was guided by increased access to high-throughput, low-latency network fabrics and the declining cost of Dynamic Random Access Memory (DRAM). These systems were designed with On-Line Transaction Processing (OLTP) workloads in mind and, as a result, are optimized for fast dispatch and perform well under small request-response scenarios. However, massive server responses, such as those for range queries and data migration for load balancing, pose challenges for this design. This thesis analyzes the effects of large transfers on scale-out systems through the lens of a modern Network Interface Card (NIC). The present-day NIC offers new and exciting opportunities and challenges for large transfers, but using it efficiently requires smart data layout and concurrency control. We evaluated the impact of modern NICs on data layout design by measuring transmit performance and full-system impact, observing the effects of Direct Memory Access (DMA), Remote Direct Memory Access (RDMA), and caching improvements such as Intel® Data Direct I/O (DDIO). We discovered that techniques such as zero copy yield around 25% savings in CPU cycles and a 50% reduction in memory bandwidth utilization on a server, using a client-assisted design with records that are not updated in place. We also set up experiments that underlined the bottlenecks in the current approach to data migration in RAMCloud, and we propose guidelines for a fast and efficient migration protocol for RAMCloud.
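
    The zero-copy savings come from handing the kernel a scatter/gather list over the in-memory records instead of assembling responses in a staging buffer. A minimal sketch of that idea follows; the function name and record representation are illustrative assumptions, not RAMCloud's API.

        import socket

        def send_records_zero_copy(sock, header, records):
            """Transmit a response header plus record buffers without
            staging copies. `records` holds bytes-like objects, e.g.
            memoryview slices over an in-memory log; sendmsg() accepts
            a scatter/gather list, so the payload is never copied into
            one contiguous user-space buffer. With records that are not
            updated in place, the buffers stay valid during the send."""
            buffers = [header] + [memoryview(r) for r in records]
            return sock.sendmsg(buffers)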

    FAIR: Forwarding Accountability for Internet Reputability

    This paper presents FAIR, a forwarding accountability mechanism that incentivizes ISPs to apply stricter security policies to their customers. The Autonomous System (AS) of the receiver specifies a traffic profile that the sender AS must adhere to, and transit ASes on the path mark packets. In case of traffic profile violations, the marked packets serve as proof of misbehavior. FAIR introduces low bandwidth overhead and requires neither per-packet nor per-flow state for forwarding. We describe integration with IP and demonstrate a software switch running on commodity hardware that switches packets at a line rate of 120 Gbps and forwards 140M minimum-sized packets per second, limited by the hardware I/O subsystem. Moreover, this paper proposes a "suspicious bit" for packet headers, an application that builds on top of FAIR's proofs of misbehavior and flags packets to warn other entities in the network. (16 pages, 12 figures.)
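
    A minimal sketch of the marking step: a transit AS stamps packets with a keyed MAC over invariant fields, and the receiver AS can later present validly marked, out-of-profile packets as evidence. The field layout and key handling below are assumptions for illustration, not FAIR's wire format.

        import hashlib
        import hmac

        def mark_packet(key, src_as, dst_as, payload_digest):
            """Marking a transit AS stamps into the packet header
            (field layout is an illustrative assumption)."""
            msg = (src_as.to_bytes(4, "big") + dst_as.to_bytes(4, "big")
                   + payload_digest)
            return hmac.new(key, msg, hashlib.sha256).digest()[:8]

        def is_proof_of_misbehavior(key, src_as, dst_as, payload_digest, mark):
            """Receiver-side check: a valid mark on a packet that violates
            the agreed traffic profile attributes the violation to the
            sender AS and can be presented to other parties."""
            expected = mark_packet(key, src_as, dst_as, payload_digest)
            return hmac.compare_digest(expected, mark)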

    CyberGuarder: a virtualization security assurance architecture for green cloud computing

    Keywords: Cloud Computing, Green Computing, Virtualization, Virtual Security Appliance, Security Isolation