2,848 research outputs found

    The Case for a Single System Image for Personal Devices

    Computing technology has become cheaper and more powerful, allowing users to have a growing number of personal computing devices at their disposal. While this trend benefits users, it also creates a growing management burden: each device must be managed independently, and users must repeat the same management tasks on each device, such as updating software, changing configurations, performing backups, and replicating data for availability. To prevent the management burden from growing with the number of devices, we propose that all devices run a single system image called a personal computing image. Personal computing images export a device-specific user interface on each device, but provide a consistent view of application and operating system state across all devices. As a result, management tasks can be performed once on any device and are automatically propagated to all other devices belonging to the user. We discuss evolutionary steps that can be taken toward personal computing images for devices and elaborate on the challenges we believe building such systems will face.
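
    As a rough sketch of the propagation idea (hypothetical types and names, not from the paper), a management task performed once can be modeled as an operation that every replica of the image applies, while device-specific state such as the UI profile stays local:

    use std::collections::HashMap;

    enum MgmtOp {
        UpdatePackage { name: String, version: u32 },
        SetConfig { key: String, value: String },
    }

    #[derive(Default)]
    struct Device {
        packages: HashMap<String, u32>,   // replicated management state
        config: HashMap<String, String>,  // replicated management state
        ui_profile: String,               // device-specific, never replicated
    }

    impl Device {
        fn apply(&mut self, op: &MgmtOp) {
            match op {
                MgmtOp::UpdatePackage { name, version } => {
                    self.packages.insert(name.clone(), *version);
                }
                MgmtOp::SetConfig { key, value } => {
                    self.config.insert(key.clone(), value.clone());
                }
            }
        }
    }

    fn main() {
        let mut devices = vec![
            Device { ui_profile: "phone".into(), ..Default::default() },
            Device { ui_profile: "laptop".into(), ..Default::default() },
        ];
        // The user performs the task once, on any device...
        let op = MgmtOp::UpdatePackage { name: "browser".into(), version: 42 };
        // ...and it is applied to every replica of the image.
        for d in &mut devices {
            d.apply(&op);
        }
        assert!(devices.iter().all(|d| d.packages["browser"] == 42));
    }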

    A Double-Edged Sword: Security Threats and Opportunities in One-Sided Network Communication

    One-sided network communication technologies such as RDMA and NVMe-over-Fabrics are quickly gaining adoption in production software and in datacenters. Although appealing for their low CPU utilization and good performance, they raise new security concerns that could seriously undermine the datacenter software systems built on top of them. At the same time, they offer unique opportunities to help enhance security. Indeed, one-sided network communication is a double-edged sword for security. This paper presents our insights into the security implications and opportunities of one-sided communication.
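
    A conceptual sketch of why one-sided operations cut both ways (a simulation, not the real RDMA verbs API): a two-sided request passes through server code that can enforce policy, whereas a one-sided read is serviced by the NIC without running any server-side check:

    struct Server {
        region: Vec<u8>, // memory registered for remote access
    }

    impl Server {
        // Two-sided path: the server CPU mediates and can enforce policy.
        fn handle_read(&self, off: usize, len: usize, authorized: bool) -> Option<&[u8]> {
            if !authorized {
                return None; // the access check runs here
            }
            self.region.get(off..off + len)
        }
    }

    // One-sided path: the NIC services the read; no server code can intervene.
    fn rdma_read(region: &[u8], off: usize, len: usize) -> &[u8] {
        &region[off..off + len]
    }

    fn main() {
        let server = Server { region: b"public|secret".to_vec() };
        assert!(server.handle_read(7, 6, false).is_none());      // request denied
        assert_eq!(rdma_read(&server.region, 7, 6), b"secret");  // no check possible
    }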

    High Velocity Kernel File Systems with Bento

    High development velocity is critical for modern systems. This is especially true for Linux file systems, which face increasing pressure from new storage devices and new demands on storage systems. However, high-velocity Linux kernel development is challenging due to the ease of introducing bugs, the difficulty of testing and debugging, and the lack of support for redeployment without service disruption. Existing approaches to high-velocity development of file systems for Linux have major downsides, such as the high performance penalty of FUSE file systems, slowing the deployment cycle for new file system functionality. We propose Bento, a framework for high-velocity development of Linux kernel file systems. It enables file systems written in safe Rust to be installed in the Linux kernel, with errors largely sandboxed to the file system. Bento file systems can be replaced with no disruption to running applications, allowing daily or weekly upgrades in a cloud server setting. Bento also supports userspace debugging. We implement a simple file system using Bento and show that it performs similarly to VFS-native ext4 on a variety of benchmarks and outperforms a FUSE version by 7x on 'git clone'. We also show that we can dynamically add file provenance tracking to a running kernel file system with only 15 ms of service interruption. Comment: 14 pages, 6 figures, to be published in FAST 2021.
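
    A hypothetical sketch of the style of interface such a framework can expose (not Bento's actual API): the file system is ordinary safe Rust implementing a trait, and failures surface as error values the framework can contain rather than kernel crashes:

    use std::collections::HashMap;

    type Ino = u64;

    // A FUSE-like interface: each operation returns a Result, so a buggy
    // file system reports errors instead of corrupting kernel state.
    trait FileSystem {
        fn lookup(&self, parent: Ino, name: &str) -> Result<Ino, i32>;
        fn read(&self, ino: Ino, off: usize, buf: &mut [u8]) -> Result<usize, i32>;
    }

    struct ToyFs {
        dir: HashMap<(Ino, String), Ino>,
        data: HashMap<Ino, Vec<u8>>,
    }

    impl FileSystem for ToyFs {
        fn lookup(&self, parent: Ino, name: &str) -> Result<Ino, i32> {
            self.dir.get(&(parent, name.to_string())).copied().ok_or(-2) // -ENOENT
        }
        fn read(&self, ino: Ino, off: usize, buf: &mut [u8]) -> Result<usize, i32> {
            let data = self.data.get(&ino).ok_or(-2)?;
            let start = off.min(data.len());
            let end = (off + buf.len()).min(data.len());
            buf[..end - start].copy_from_slice(&data[start..end]);
            Ok(end - start)
        }
    }

    fn main() {
        let mut fs = ToyFs { dir: HashMap::new(), data: HashMap::new() };
        fs.dir.insert((1, "hello.txt".into()), 2);
        fs.data.insert(2, b"hello, bento".to_vec());
        let ino = fs.lookup(1, "hello.txt").unwrap();
        let mut buf = [0u8; 32];
        let n = fs.read(ino, 0, &mut buf).unwrap();
        assert_eq!(&buf[..n], b"hello, bento");
    }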

    Toward a Principled Framework for Benchmarking Consistency

    Large-scale key-value storage systems sacrifice consistency in the interest of dependability (i.e., partition tolerance and availability), as well as performance (i.e., latency). Such systems provide eventual consistency, which, to this point, has been difficult to quantify in real systems. Given the many implementations and deployments of eventually-consistent systems (e.g., NoSQL systems), attempts have been made to measure this consistency empirically, but they suffer from important drawbacks. For example, state-of-the-art consistency benchmarks exercise the system only in restricted ways and disrupt the workload, which limits their accuracy. In this paper, we take the position that a consistency benchmark should paint a comprehensive picture of the relationship between the storage system under consideration, the workload, the pattern of failures, and the consistency observed by clients. To illustrate our point, we first survey prior efforts to quantify eventual consistency. We then present a benchmarking technique that overcomes the shortcomings of existing techniques to measure the consistency observed by clients as they execute the workload under consideration. This method is versatile and minimally disruptive to the system under test. As a proof of concept, we demonstrate this tool on Cassandra.
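
    One simple client-observed metric in this spirit (an illustration, not the paper's benchmark): log every write and read with a timestamp, then score each read by how long it lagged the newest write that had already completed:

    #[derive(Clone, Copy)]
    struct WriteRec { version: u64, at_ms: u64 }

    #[derive(Clone, Copy)]
    struct ReadRec { version: u64, at_ms: u64 }

    // For each read, report how long the returned value lagged the newest
    // completed write; 0 means the read was up to date.
    fn staleness(writes: &[WriteRec], reads: &[ReadRec]) -> Vec<u64> {
        reads
            .iter()
            .map(|r| {
                let latest = writes
                    .iter()
                    .filter(|w| w.at_ms <= r.at_ms)
                    .max_by_key(|w| w.version);
                match latest {
                    Some(w) if r.version < w.version => r.at_ms - w.at_ms,
                    _ => 0,
                }
            })
            .collect()
    }

    fn main() {
        let writes = [
            WriteRec { version: 1, at_ms: 100 },
            WriteRec { version: 2, at_ms: 200 },
        ];
        // A client read at t=250 still saw version 1: 50 ms of staleness.
        let reads = [
            ReadRec { version: 1, at_ms: 250 },
            ReadRec { version: 2, at_ms: 260 },
        ];
        assert_eq!(staleness(&writes, &reads), vec![50, 0]);
    }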

    DoubleTake: Fast and Precise Error Detection via Evidence-Based Dynamic Analysis

    This paper presents evidence-based dynamic analysis, an approach that enables lightweight analyses, imposing under 5% overhead for the error classes it targets, making it practical for the first time to perform these analyses in deployed settings. The key insight of evidence-based dynamic analysis is that, for a class of errors, it is possible to ensure that evidence that they happened at some point in the past remains for later detection. Evidence-based dynamic analysis allows execution to proceed at nearly full speed until the end of an epoch (e.g., a heavyweight system call). It then examines program state to check for evidence that an error occurred at some time during that epoch. If so, it rolls back execution and re-executes the code with instrumentation activated to pinpoint the error. We present DoubleTake, a prototype evidence-based dynamic analysis framework. DoubleTake is practical and easy to deploy, requiring no custom hardware, compiler, or operating system support. We demonstrate DoubleTake's generality and efficiency by building dynamic analyses that find buffer overflows, memory use-after-free errors, and memory leaks. Our evaluation shows that DoubleTake is efficient, imposing just 4% overhead on average, making it the fastest such system to date. It is also precise: DoubleTake pinpoints the location of these errors to the exact line and memory addresses where they occur, providing valuable debugging information to programmers. Comment: Pre-print, accepted to appear at ICSE 2016.
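
    A minimal sketch of the evidence idea for buffer overflows (mimicking, not reproducing, DoubleTake): canary bytes are placed after each allocation, execution runs untouched, and only at an epoch boundary are the canaries scanned for evidence of a past overflow:

    const CANARY: u8 = 0xAB;
    const PAD: usize = 8;

    struct Heap {
        blocks: Vec<Vec<u8>>,
    }

    impl Heap {
        // Every allocation gets a canary zone after the user-visible bytes.
        fn alloc(&mut self, size: usize) -> usize {
            let mut block = vec![0u8; size + PAD];
            block[size..].fill(CANARY);
            self.blocks.push(block);
            self.blocks.len() - 1
        }

        // Epoch-boundary scan: cheap, instead of checking every access.
        fn epoch_check(&self) -> Vec<usize> {
            self.blocks
                .iter()
                .enumerate()
                .filter(|(_, blk)| blk[blk.len() - PAD..].iter().any(|&x| x != CANARY))
                .map(|(i, _)| i)
                .collect()
        }
    }

    fn main() {
        let mut heap = Heap { blocks: Vec::new() };
        let a = heap.alloc(16);
        let b = heap.alloc(16);
        heap.blocks[a][0..16].fill(1); // in-bounds write: fine
        heap.blocks[b][0..20].fill(2); // overflows 4 bytes into the canary
        // At the epoch boundary, evidence of the overflow is still present.
        assert_eq!(heap.epoch_check(), vec![b]);
        // A real system would now roll back and re-execute the epoch with
        // instrumentation enabled to pinpoint the faulting write.
    }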

    EbbRT: Elastic Building Block Runtime - overview

    EbbRT provides a lightweight runtime that enables the construction of reusable, low-level system software that can integrate with existing, general-purpose systems. It achieves this by providing both a library that can be linked into a process on an existing OS and a small library OS that can be booted directly on an IaaS node.

    Making data center computations fast, but not so furious

    We propose an aggressive computational sprinting variant for data center environments. While most previous work on computational sprinting focuses on maximizing the sprinting process while ensuring non-faulty conditions, we take advantage of the existing replication in data centers to push the system beyond its safety limits. In this paper we outline this vision, survey existing techniques for achieving it, and present some design ideas for future work in this area. Comment: The 7th Workshop on Multi-core and Rack Scale Systems - MARS'17.
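
    One way to picture the idea (our illustration, not a design from the paper): replicas sprinting past their safe operating point may fault, but because the computation is replicated, a surviving replica's answer masks the failure:

    use std::sync::mpsc;
    use std::thread;

    // Model a replica sprinting past its safe limits: it may fault (None)
    // instead of producing a result.
    fn sprinting_replica(id: u32, input: u64) -> Option<u64> {
        if id == 0 {
            return None; // this replica faulted under the aggressive sprint
        }
        Some(input * 2)
    }

    fn main() {
        let (tx, rx) = mpsc::channel();
        for id in 0..3 {
            let tx = tx.clone();
            thread::spawn(move || {
                if let Some(v) = sprinting_replica(id, 21) {
                    let _ = tx.send(v);
                }
            });
        }
        drop(tx);
        // Replication masks the fault: the first surviving replica answers.
        let answer = rx.recv().expect("all replicas faulted");
        assert_eq!(answer, 42);
    }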

    Configuration Testing: Testing Configuration Values as Code and with Code

    This paper proposes configuration testing--evaluating configuration values (to be deployed) by exercising the code that uses the values and assessing the corresponding program behavior. We advocate that configuration values should be systematically tested like software code and that configuration testing should be a key reliability engineering practice for preventing misconfigurations from reaching production deployment. The essential advantage of configuration testing is that it puts the configuration values (to be deployed) in the context of the target software program under test. In this way, the dynamic effects of configuration values and the impact of configuration changes can be observed during testing. Configuration testing overcomes the fundamental limitations of the de facto approaches to combating misconfigurations, namely configuration validation and software testing: the former is disconnected from code logic and semantics, while the latter can hardly cover all possible configuration values and their combinations. Our preliminary results show the effectiveness of configuration testing in capturing real-world misconfigurations. We present the principles of writing new configuration tests and the promise of retrofitting existing software tests to be configuration tests. We discuss new adequacy and quality metrics for configuration testing. We also explore regression testing techniques to enable incremental configuration testing during continuous integration and deployment in modern software systems.
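
    A minimal illustration (hypothetical names, not the paper's framework): rather than validating max_connections against a range, a configuration test runs the real pool-construction code with the values about to be deployed, so a bad value shows up as a test failure:

    struct PoolConfig {
        max_connections: usize,
        idle_timeout_secs: u64,
    }

    struct Pool {
        slots: Vec<Option<u32>>,
    }

    // The code that actually consumes the configuration.
    fn build_pool(cfg: &PoolConfig) -> Result<Pool, String> {
        if cfg.max_connections == 0 {
            return Err("pool would reject every request".into());
        }
        if cfg.idle_timeout_secs == 0 {
            return Err("connections would be reaped immediately".into());
        }
        Ok(Pool { slots: vec![None; cfg.max_connections] })
    }

    #[cfg(test)]
    mod config_tests {
        use super::*;

        // Stand-in for the configuration values about to be deployed.
        fn deployed_config() -> PoolConfig {
            PoolConfig { max_connections: 128, idle_timeout_secs: 30 }
        }

        // A configuration test: run the real code with the real values.
        #[test]
        fn deployed_values_exercise_real_code() {
            let pool = build_pool(&deployed_config()).expect("misconfiguration");
            assert_eq!(pool.slots.len(), 128);
        }
    }

    fn main() {
        // In production the same code path runs with the same values.
        let cfg = PoolConfig { max_connections: 128, idle_timeout_secs: 30 };
        let pool = build_pool(&cfg).expect("misconfiguration");
        println!("pool ready with {} slots", pool.slots.len());
    }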

    T-Visor: A Hypervisor for Mixed Criticality Embedded Real-time System with Hardware Virtualization Support

    Embedded systems today must satisfy not only requirements for hard real-time behavior and reliability, but also diversified functional demands, such as network functions. To satisfy these requirements, virtualization using hypervisors is promising for embedded systems. However, most existing hypervisors are designed for general-purpose information processing systems and rely on large system stacks, so they are not suitable for mixed-criticality embedded real-time systems. Even in hypervisors designed for embedded systems, the schedulers do not account for the diversity of real-time requirements or for rapid change in scheduling theory. We present the design and implementation of T-Visor, a hypervisor specialized for mixed-criticality embedded real-time systems. T-Visor supports the ARM architecture and realizes full virtualization using ARM Virtualization Extensions. To guarantee real-time behavior, T-Visor provides a flexible scheduling framework so that developers can select the most suitable scheduling algorithm for their systems. Our evaluation showed that T-Visor performs better than Xen/ARM. From these results, we conclude that our design and implementation are more suitable for embedded real-time systems than existing hypervisors.
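
    A sketch of what such a pluggable scheduling framework can look like (hypothetical interface, not T-Visor's code): the hypervisor core dispatches through a scheduler trait, so the algorithm can be swapped to match the workload's real-time requirements:

    #[derive(Clone, Copy)]
    struct Vcpu {
        id: u32,
        priority: u8,
        period_ms: u32,
    }

    // The hypervisor core is written against this trait, not an algorithm.
    trait Scheduler {
        fn pick_next(&self, ready: &[Vcpu]) -> Option<Vcpu>;
    }

    struct FixedPriority;
    impl Scheduler for FixedPriority {
        fn pick_next(&self, ready: &[Vcpu]) -> Option<Vcpu> {
            ready.iter().copied().max_by_key(|v| v.priority)
        }
    }

    struct RateMonotonic; // shorter period => higher priority
    impl Scheduler for RateMonotonic {
        fn pick_next(&self, ready: &[Vcpu]) -> Option<Vcpu> {
            ready.iter().copied().min_by_key(|v| v.period_ms)
        }
    }

    fn dispatch(sched: &dyn Scheduler, ready: &[Vcpu]) -> Option<Vcpu> {
        sched.pick_next(ready) // scheduler-agnostic dispatch point
    }

    fn main() {
        let ready = [
            Vcpu { id: 1, priority: 3, period_ms: 100 },
            Vcpu { id: 2, priority: 1, period_ms: 10 },
        ];
        assert_eq!(dispatch(&FixedPriority, &ready).unwrap().id, 1);
        assert_eq!(dispatch(&RateMonotonic, &ready).unwrap().id, 2);
    }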

    Efficient System-Enforced Deterministic Parallelism

    Deterministic execution offers many benefits for debugging, fault tolerance, and security. Running parallel programs deterministically is usually difficult and costly, however, especially if we desire system-enforced determinism, which ensures precise repeatability of arbitrarily buggy or malicious software. Determinator is a novel operating system that enforces determinism on both multithreaded and multi-process computations. Determinator's kernel provides only single-threaded, "shared-nothing" address spaces interacting via deterministic synchronization. An untrusted user-level runtime uses distributed computing techniques to emulate familiar abstractions such as Unix processes, file systems, and shared-memory multithreading. The system runs parallel applications deterministically both on multicore PCs and across nodes in a cluster. Coarse-grained parallel benchmarks perform and scale comparably to, and sometimes better than, conventional systems, though determinism is costly for fine-grained parallel applications. Comment: 14 pages, 12 figures, 3 tables.
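
    A simplified sketch of the shared-nothing model (not Determinator's kernel API): children compute on private copies of the input, and results merge only at a join point in program order, so thread timing cannot change the outcome:

    use std::thread;

    fn main() {
        let input: Vec<u64> = (1..=8).collect();

        // Fork: each child receives a private copy; there is no shared
        // mutable state for a data race to corrupt.
        let handles: Vec<_> = input
            .chunks(4)
            .map(|c| c.to_vec())
            .map(|chunk| thread::spawn(move || chunk.iter().sum::<u64>()))
            .collect();

        // Join: results merge in program order, a deterministic
        // synchronization point.
        let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();

        assert_eq!(total, 36); // identical on every run
    }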