131 research outputs found

    EbbRT: Elastic Building Block Runtime - overview

    EbbRT provides a lightweight runtime that enables the construction of reusable, low-level system software that can integrate with existing, general purpose systems. It achieves this by providing both a library that can be linked into a process on an existing OS and a small library OS that can be booted directly on an IaaS node.

    EbbRT: a customizable operating system for cloud applications

    Efficient use of hardware requires that operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present Genesis, a new operating system that enables per-application customizations for cloud applications. Genesis achieves this through a novel heterogeneous distributed structure, a partitioned object model, and an event-driven execution environment. This paper describes the design and prototype implementation of Genesis, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the Genesis prototype demonstrates that memcached, run within a VM, can outperform memcached run on unvirtualized Linux. The prototype evaluation also demonstrates a 14% performance improvement on a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th-percentile latency compared to running on Linux.

    Programmable Smart NIC


    Customization and reuse in datacenter operating systems

    Increasingly, computing has moved to large-scale datacenters where application performance is critical. Stagnating CPU clock speeds coupled with increasingly higher bandwidth and lower latency networking and storage put an increased focus on the operating system to enable high performance. The challenge of providing high performance is made more difficult due to the diversity of datacenter workloads such as search, video processing, distributed storage, and machine learning tasks. Our existing general purpose operating systems must sacrifice the performance of any one application in order to support a broad set of applications. We observe that a common model for application deployment is to dedicate a physical or virtual machine to a single application. In this context, our operating systems can be specialized to the purposes of the application. In this dissertation, we explore the design of the Elastic Building Block Runtime (EbbRT), a framework for constructing high-performance, customizable operating systems while keeping developer effort low. EbbRT adopts a lightweight execution environment which enables applications to directly manage hardware resources and specialize their system behavior. An EbbRT operating system is composed of objects called Elastic Building Blocks (Ebbs) which encapsulate functionality so it can be incrementally extended or optimized. Finally, EbbRT adopts a unique heterogeneous and distributed architecture where an application can be split between a server running an existing general purpose operating system and a server running a customized library operating system. The library operating system provides the mechanisms for application execution, including primitives for event-driven programming, componentization, memory management, and I/O. We demonstrate that EbbRT enables memcached, an in-memory caching server, to achieve more than double the performance it achieves on Linux. We also demonstrate that EbbRT can support more full-featured applications such as a port of Google’s V8 JavaScript engine and node.js, a JavaScript server runtime.
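
    The per-application specialization described above hinges on the Ebb abstraction: a logical object whose state and implementation are partitioned into per-core (or per-node) representatives. The sketch below is only an illustration of that partitioned-object idea in generic C++, with hypothetical names; it is not EbbRT's actual interface.

```cpp
// Hypothetical sketch of a partitioned object: one representative per core,
// fast-path calls touch only the local representative, and aggregate reads
// gather state from all of them. Illustrative only; not EbbRT's real API.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

class ShardedCounter {  // plays the role of an "Ebb"
 public:
  explicit ShardedCounter(unsigned cores) : reps_(cores) {}

  // Fast path: no sharing, no locks, just the calling core's representative.
  void Increment(unsigned core) {
    reps_[core].count.fetch_add(1, std::memory_order_relaxed);
  }

  // Slow path: aggregate across every representative.
  long Total() const {
    long sum = 0;
    for (const auto& rep : reps_) sum += rep.count.load(std::memory_order_relaxed);
    return sum;
  }

 private:
  struct alignas(64) Rep {  // one cache line per representative
    std::atomic<long> count{0};
  };
  std::vector<Rep> reps_;
};

int main() {
  const unsigned cores = std::max(2u, std::thread::hardware_concurrency());
  ShardedCounter counter(cores);
  std::vector<std::thread> workers;
  for (unsigned c = 0; c < cores; ++c)
    workers.emplace_back([&counter, c] {
      for (int i = 0; i < 100000; ++i) counter.Increment(c);
    });
  for (auto& w : workers) w.join();
  std::printf("total = %ld\n", counter.Total());
}
```

    Because each core's fast path stays within its own representative, the object's internals can be extended or re-implemented without touching callers, which is the incremental extension and optimization property the abstract describes.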

    Techniques for Processing TCP/IP Flow Content in Network Switches at Gigabit Line Rates

    The growth of the Internet has enabled it to become a critical component used by businesses, governments and individuals. While most of the traffic on the Internet is legitimate, a proportion of the traffic includes worms, computer viruses, network intrusions, computer espionage, security breaches and illegal behavior. This rogue traffic causes computer and network outages, reduces network throughput, and costs governments and companies billions of dollars each year. This dissertation investigates the problems associated with TCP stream processing in high-speed networks. It describes an architecture that simplifies the processing of TCP data streams in these environments and presents a hardware circuit capable of TCP stream processing on multi-gigabit networks for millions of simultaneous network connections. Live Internet traffic is analyzed using this new TCP processing circuit.
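
    The stream-processing task sketched in this abstract, tracking per-connection state and restoring the original byte order before content inspection, can be pictured in software as a flow table keyed by the TCP 4-tuple. The snippet below is a simplified software analogy with hypothetical names (it assumes sequence numbers start at zero and ignores overlapping segments); the dissertation realizes the equivalent logic as a hardware circuit.

```cpp
// Software analogy of per-flow TCP stream reassembly: segments are filed by
// connection 4-tuple, buffered by sequence number, and delivered in order.
// Illustrative only; the dissertation's design is a hardware circuit.
#include <cstdint>
#include <iostream>
#include <map>
#include <string>
#include <tuple>

using FlowKey = std::tuple<uint32_t, uint32_t, uint16_t, uint16_t>;  // src, dst, sport, dport

struct FlowState {
  uint32_t next_seq = 0;                     // next byte expected (simplified: ISN = 0)
  std::map<uint32_t, std::string> pending;   // out-of-order segments keyed by sequence number
  std::string stream;                        // reassembled content handed to inspection
};

class Reassembler {
 public:
  void Segment(const FlowKey& key, uint32_t seq, const std::string& payload) {
    FlowState& f = flows_[key];
    f.pending.emplace(seq, payload);
    // Drain every buffered segment that is now contiguous with the stream.
    for (auto it = f.pending.find(f.next_seq); it != f.pending.end();
         it = f.pending.find(f.next_seq)) {
      f.stream += it->second;
      f.next_seq += it->second.size();
      f.pending.erase(it);
    }
  }

  const std::string& Stream(const FlowKey& key) { return flows_[key].stream; }

 private:
  std::map<FlowKey, FlowState> flows_;
};

int main() {
  Reassembler r;
  FlowKey k{0x0A000001, 0x0A000002, 12345, 80};
  r.Segment(k, 4, "/index.html");   // arrives out of order, buffered
  r.Segment(k, 0, "GET ");          // fills the gap; both segments become deliverable
  std::cout << r.Stream(k) << "\n"; // prints "GET /index.html"
}
```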

    HyperSCSI: Design and development of a new protocol for storage networking

    Ph.D. (Doctor of Philosophy)

    Data Movement Challenges and Solutions with Software Defined Networking

    With the recent rise in cloud computing, applications are routinely accessing and interacting with data on remote resources. Interaction with such remote resources for the operation of media-rich applications in mobile environments is also on the rise. As a result, the performance of the underlying network infrastructure can have a significant impact on the quality of service experienced by the user. Despite receiving significant attention from both academia and industry, computer networks still face a number of challenges. Users oftentimes report and complain about poor experiences with their devices and applications, which can oftentimes be attributed to network performance when downloading or uploading application data. This dissertation investigates problems that arise with data movement across computer networks and proposes novel solutions to address these issues through software defined networking (SDN). SDN is lauded as the paradigm of choice for next generation networks. While academia explores use cases in various contexts, industry has focused on data center and wide area networks. There is a significant range of complex and application-specific network services that can potentially benefit from SDN, but introduction and adoption of such solutions remains slow in production networks. One impeding factor is the lack of a simple yet expressive enough framework applicable to all SDN services across production network domains. Without a uniform framework, SDN developers create disjoint solutions, resulting in untenable management and maintenance overhead. The SDN-based solutions developed in this dissertation make use of a common agent-based approach. The architecture facilitates application-oriented SDN design with an abstraction composed of software agents on top of the underlying network. There are three key components modern and future networks require to deliver exceptional data transfer performance to the end user: (1) user and application mobility, (2) high throughput data transfer, and (3) efficient and scalable content distribution. Meeting these key components will not only ensure the network can provide robust and reliable end-to-end connectivity, but also that network resources will be used efficiently. First, mobility support is critical for user applications to maintain connectivity to remote, cloud-based resources. Today's network users are frequently accessing such resources while on the go, transitioning from network to network with the expectation that their applications will continue to operate seamlessly. As users perform handovers between heterogeneous networks or between networks across administrative domains, the application becomes responsible for maintaining or establishing new connections to remote resources. Although application developers often account for such handovers, the result is oftentimes visible to the user through diminished quality of service (e.g. rebuffering in video streaming applications). Many intra-domain solutions exist for handovers in WiFi and cellular networks, such as mobile IP, but they are architecturally complex and have not been integrated to form a scalable, inter-domain solution. A scalable framework is proposed that leverages SDN features to implement both horizontal and vertical handovers for heterogeneous wireless networks within and across administrative domains. User devices can select an appropriate network using an on-board virtual SDN implementation that manages available network interfaces.
An SDN-based counterpart operates in the network core and edge to handle user migrations as they transition from one edge attachment point to another. The framework was developed and deployed as an extension to the Global Environment for Network Innovations (GENI) testbed; however, the framework can be deployed on any OpenFlow-enabled network. Evaluation revealed that users can maintain existing application connections without breaking the sockets and requiring the application to recover. Second, high throughput data transfer is essential for user applications to acquire large remote data sets. As data sizes become increasingly large, often combined with their locations being far from the applications, the well-known impact of lower Transmission Control Protocol (TCP) throughput over large delay-bandwidth product paths becomes more significant to these applications. While myriad solutions exist to alleviate the problem, they require specialized software and/or network stacks at both the application host and the remote data server, making it hard to scale up to a large range of applications and execution environments. This results in high throughput data transfer that is available to only a select subset of network users who have access to such specialized software. An SDN-based solution called Steroid OpenFlow Service (SOS) has been proposed as a network service that transparently increases the throughput of TCP-based data transfers across large networks. SOS shifts the complexity of high performance data transfer from the end user to the network; users do not need to configure anything on the client and server machines participating in the data transfer. The SOS architecture supports seamless high performance data transfer at scale for multiple users and for high bandwidth connections. Emphasis is placed on the use of SOS as a part of a larger, richer data transfer ecosystem, complementing and compounding the efforts of existing data transfer solutions. Non-TCP-based solutions, such as Aspera, can operate seamlessly alongside an SOS deployment, while those based on TCP, such as wget, curl, and GridFTP, can leverage SOS for throughput improvement beyond what a single TCP connection can provide. Through extensive evaluation in real-world environments, the SOS architecture is proven to be flexibly deployable on a variety of network architectures, from cloud-based networks, to production networks, to scaled-up, high-performance data center environments. Evaluation showed that the SOS architecture scales linearly through the addition of SOS “agents” to the SOS deployment, providing data transfer performance improvement to multiple users simultaneously. An individual data transfer enhanced by SOS was shown to achieve nearly forty times the throughput of the same data transfer without SOS assistance. Third, efficient and scalable video content distribution is imperative as the demand for multimedia content over the Internet increases. Current state-of-the-art solutions consist of vast content distribution networks (CDNs) where content is oftentimes hosted in duplicate at various geographically distributed locations. Although CDNs are useful for the dissemination of static content, they do not provide a clear and scalable model for the on-demand production and distribution of live, streaming content. IP multicast is a popular solution to scalable video content distribution; however, it is seldom used due to deployment and operational complexity.
Inspired by the distributed design of today's CDNs and the distribution trees used by IP multicast, an SDN-based framework called GENI Cinema (GC) is proposed to allow for the distribution of live video content at scale. GC allows for the efficient management and distribution of live video content at scale without the added architectural complexity and inefficiencies inherent to contemporary solutions such as IP multicast. GC has been deployed as an experimental, nation-wide live video distribution service using the GENI network, broadcasting live and prerecorded video streams from conferences for remote attendees, from the classroom for distance education, and for live sporting events. GC clients can easily and efficiently switch back and forth between video streams with improved switching latency compared to cable, satellite, and other live video providers. The real-world deployments and evaluation of the proposed solutions show how SDN can be used as a novel way to solve current data transfer problems across computer networks. In addition, this dissertation is expected to provide guidance for designing, deploying, and debugging SDN-based applications across a variety of network topologies.
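
    The throughput problem SOS targets follows from the familiar bound that a single TCP connection can move at most one window of data per round trip, so its throughput is roughly window / RTT regardless of link capacity. The arithmetic below uses illustrative numbers (not measurements from the dissertation) to show why parallel connections terminated by agents near each endpoint can recover the remaining capacity.

```cpp
// Back-of-the-envelope for the single-TCP-connection bound that motivates SOS:
// throughput <= window / RTT. All numbers below are illustrative, not measured.
#include <cstdio>
#include <initializer_list>

int main() {
  const double link_gbps = 10.0;   // assumed path capacity
  const double rtt_s     = 0.100;  // assumed 100 ms round-trip time
  const double window_mb = 4.0;    // assumed per-connection TCP window (4 MB)

  // Bytes that must be "in flight" to fill the pipe, versus what one window allows.
  const double bdp_mb = link_gbps * 1e9 / 8.0 * rtt_s / 1e6;
  const double one_conn_gbps = window_mb * 1e6 * 8.0 / rtt_s / 1e9;

  std::printf("delay-bandwidth product: %.0f MB\n", bdp_mb);
  std::printf("one connection:          %.2f Gb/s\n", one_conn_gbps);

  // Parallel connections between SOS-style agents each carry their own window,
  // so aggregate throughput grows until it reaches the link capacity.
  for (int n : {1, 4, 16, 64}) {
    double agg = one_conn_gbps * n;
    if (agg > link_gbps) agg = link_gbps;
    std::printf("%2d connections:          %.2f Gb/s\n", n, agg);
  }
}
```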

    InSight2: An Interactive Web Based Platform for Modeling and Analysis of Large Scale Argus Network Flow Data

    Monitoring systems are paramount to the proactive detection and mitigation of problems in computer networks related to performance and security. Degraded performance and compromised end-nodes can cost computer networks downtime, data loss, and reputational damage. InSight2 is a platform that models, analyzes, and visualizes large-scale Argus network flow data using up-to-date geographical data, organizational information, and emerging threats. It is engineered to meet the needs of network administrators with flexibility and modularity in mind. Scalability is ensured by employing multi-core processing within a robust software architecture. Extensibility is achieved by enabling the end user to enrich flow records using additional, user-provided databases. Deployment is streamlined by providing an automated installation script. State-of-the-art visualizations are devised and presented in a secure, user-friendly web interface, giving the end user greater insight into the network.
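
    The enrichment step described above can be thought of as a join between each flow record's endpoints and user-provided context tables (geography, organization, threat lists). The sketch below uses hypothetical field names and an exact-match table purely for illustration; Argus records and InSight2's databases are considerably richer.

```cpp
// Minimal sketch of flow-record enrichment: join each flow's source address
// against a user-provided table (e.g. organization per address or prefix).
// Field names and the exact-match lookup are simplifications for illustration.
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

struct FlowRecord {            // tiny stand-in for an Argus flow record
  std::string src_ip, dst_ip;
  long bytes;
  std::string org;             // filled in by enrichment
};

int main() {
  // User-provided enrichment database (exact IPs here; real tables use prefixes).
  std::unordered_map<std::string, std::string> org_db = {
      {"10.0.0.5", "Campus-LAN"}, {"192.0.2.10", "Example-ISP"}};

  std::vector<FlowRecord> flows = {
      {"10.0.0.5", "198.51.100.7", 120000, ""},
      {"203.0.113.9", "10.0.0.5", 4096, ""}};

  for (auto& f : flows) {
    auto hit = org_db.find(f.src_ip);
    f.org = (hit != org_db.end()) ? hit->second : "unknown";
    std::cout << f.src_ip << " -> " << f.dst_ip << "  " << f.bytes
              << " bytes  org=" << f.org << "\n";
  }
}
```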

    New Architectures and Mechanisms for the Network Subsystem in Virtualized Servers

    Machine virtualization has become a cornerstone of modern datacenters. It enables server consolidation as a means to reduce costs and increase efficiencies. The communication endpoints within the datacenter are now virtual machines (VMs), not physical servers. Consequently, the datacenter network now extends into the server and last-hop switching occurs inside the server. Today, thanks to increasing core counts on processors, server VM densities are on the rise. This trend is placing enormous pressure on the network I/O subsystem and the last-hop virtual switch to support efficient communication, both internal and external to the server. But the current state-of-the-art solutions fall short of these requirements. This thesis presents new architectures and mechanisms for the network subsystem in virtualized servers to build efficient virtualization platforms. Specifically, there are three primary contributions in this thesis. First, it presents a new mechanism to reduce memory sharing overheads in driver domain-based I/O architectures. The key idea is to enable a guest operating system to reuse its I/O buffers that are shared with a driver domain. Second, it describes Hyper-Switch, a highly streamlined, efficient, and scalable software-based virtual switching architecture, specifically for hypervisors that support driver domains. The Hyper-Switch combines the best of the existing architectures by hosting the device drivers in a driver domain to isolate any faults and placing the virtual switch in the hypervisor to perform efficient packet switching. Further, the Hyper-Switch implements several optimizations, such as virtual machine state-aware batching, preemptive copying, and dynamic offloading of packet processing to idle CPU cores, to enable efficient packet processing, better utilization of the available CPU resources, and higher concurrency. This architecture eliminates the memory sharing overheads associated with driver domains. Third, this thesis proposes an alternate virtual switching architecture, called sNICh, which explores the idea of server/switch integration. The sNICh is a combined network interface card (NIC) and datacenter switching accelerator. This takes the Hyper-Switch architecture one step further. It offloads the data plane of the switch to the network device, eliminating driver domains entirely.
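
    One of the Hyper-Switch optimizations named above, virtual machine state-aware batching, amounts to coalescing packets bound for a VM so that a single notification covers the whole batch, while delivering immediately when batching would leave an idle VM waiting. The sketch below is a simplified, hypothetical rendering of that kind of policy, not the actual hypervisor code.

```cpp
// Simplified illustration of state-aware batching at a virtual switch: packets
// for a running VM are coalesced and one notification covers the whole batch,
// while a blocked VM is notified right away so delivery is not delayed.
// The types and policy here are hypothetical, not the actual Hyper-Switch code.
#include <cstdio>
#include <string>
#include <vector>

enum class VmState { Running, Blocked };

class VmRxQueue {
 public:
  VmRxQueue(std::string name, size_t batch) : name_(std::move(name)), batch_(batch) {}

  void Deliver(int pkt, VmState state) {
    queue_.push_back(pkt);
    // Flush when the batch fills, or immediately if the VM is blocked and would
    // otherwise sit on undelivered packets until its next wakeup.
    if (queue_.size() >= batch_ || state == VmState::Blocked) Flush();
  }

  void Flush() {
    if (queue_.empty()) return;
    std::printf("%s: notify guest once for %zu packet(s)\n", name_.c_str(), queue_.size());
    queue_.clear();  // a single virtual interrupt covered the whole batch
  }

 private:
  std::string name_;
  size_t batch_;
  std::vector<int> queue_;
};

int main() {
  VmRxQueue busy("vm-running", 4), idle("vm-blocked", 4);
  for (int p = 0; p < 6; ++p) busy.Deliver(p, VmState::Running);  // one notification at 4 packets
  busy.Flush();                                                   // drain the remaining 2
  for (int p = 0; p < 2; ++p) idle.Deliver(p, VmState::Blocked);  // notified per packet
}
```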

    EbbRT: a framework for building per-application library operating systems

    General purpose operating systems sacrifice per-application performance in order to preserve generality. On the other hand, substantial effort is required to customize or construct an operating system to meet the needs of an application. This paper describes the design and implementation of the Elastic Building Block Runtime (EbbRT), a framework for building per-application library operating systems. EbbRT reduces the effort required to construct and maintain library operating systems without hindering the degree of specialization required for high performance. We combine several techniques in order to achieve this, including a distributed OS architecture, a low-overhead component model, a lightweight event-driven runtime, and many language-level primitives. EbbRT is able to simultaneously enable performance specialization, support a broad range of applications, and ease the burden of systems development. An EbbRT prototype demonstrates the degree of customization made possible by our framework approach. In an evaluation of memcached, EbbRT is able to attain 2.08× higher throughput than Linux. The node.js runtime, ported to EbbRT, demonstrates the broad applicability and ease of development enabled by our approach.
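
    The lightweight event-driven runtime mentioned in the abstract schedules short, run-to-completion handlers instead of preemptively scheduled threads, which is what lets per-core state avoid locking. A minimal sketch of that execution model, with hypothetical names rather than EbbRT's real API, is:

```cpp
// Minimal run-to-completion event loop in the spirit of the runtime described
// above: handlers are queued as closures and executed one at a time, never
// preempted, so per-loop state needs no locks. Hypothetical API for illustration.
#include <cstdio>
#include <functional>
#include <queue>

class EventLoop {
 public:
  void Spawn(std::function<void()> handler) { pending_.push(std::move(handler)); }

  void Run() {                 // process events until the queue drains
    while (!pending_.empty()) {
      auto handler = std::move(pending_.front());
      pending_.pop();
      handler();               // runs to completion before the next event starts
    }
  }

 private:
  std::queue<std::function<void()>> pending_;
};

int main() {
  EventLoop loop;
  int served = 0;
  loop.Spawn([&] {
    std::printf("request arrived\n");
    loop.Spawn([&] {           // continuation, e.g. scheduled after an async read
      ++served;
      std::printf("response sent, served=%d\n", served);
    });
  });
  loop.Run();
}
```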