
    Approximating Quality-of-Service Tradeoffs for Personalized Web Services

    This study applies recent work on the approximability of multiobjective optimization problems to Internet services that provide dynamic, personalized web content. Changing environmental conditions can prompt such services to make tradeoffs along quality-of-service axes such as response time, throughput, and the "completeness" of the data provided. We propose an approach that uses approximation techniques to generate an ε-approximate Pareto curve that captures the tradeoffs between QoS objectives. Site designers can then use this approximate Pareto curve to identify the configurations that "best" fit certain environmental situations. We explore…
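
    The abstract stops short of the construction itself; as a rough, hedged sketch of the ε-approximation idea (not the paper's algorithm), the C program below filters a set of invented service configurations, each scored on response time (lower is better) and completeness (higher is better), so that every configuration is ε-dominated by one of the retained points. The QoSConfig structure, configuration names, and values are all hypothetical.

    #include <stdio.h>

    /* Hypothetical service configuration with two QoS objectives:
     * response time in ms (minimize) and completeness in [0,1] (maximize). */
    typedef struct {
        const char *name;
        double response_ms;
        double completeness;
    } QoSConfig;

    /* a "eps-dominates" b if a is within a (1 + eps) factor of b, or better,
     * on both objectives. */
    static int eps_dominates(const QoSConfig *a, const QoSConfig *b, double eps)
    {
        return a->response_ms <= b->response_ms * (1.0 + eps) &&
               a->completeness * (1.0 + eps) >= b->completeness;
    }

    int main(void)
    {
        const double eps = 0.10;              /* 10% approximation slack */
        QoSConfig cfgs[] = {                  /* invented example points  */
            {"full-detail",   420.0, 1.00},
            {"no-images",     300.0, 0.90},
            {"cached-5min",   120.0, 0.75},
            {"cached-30min",  110.0, 0.70},
            {"headline-only",  60.0, 0.40},
        };
        const int n = (int)(sizeof cfgs / sizeof cfgs[0]);
        int keep[sizeof cfgs / sizeof cfgs[0]] = {0};

        /* Greedy filter: keep a configuration only if no already-kept one
         * eps-dominates it; the kept set is an eps-approximate Pareto set. */
        for (int i = 0; i < n; i++) {
            int dominated = 0;
            for (int j = 0; j < n; j++) {
                if (keep[j] && eps_dominates(&cfgs[j], &cfgs[i], eps)) {
                    dominated = 1;
                    break;
                }
            }
            if (!dominated)
                keep[i] = 1;
        }

        for (int i = 0; i < n; i++)
            if (keep[i])
                printf("%-14s %6.1f ms  completeness %.2f\n",
                       cfgs[i].name, cfgs[i].response_ms, cfgs[i].completeness);
        return 0;
    }

    In this toy run the "cached-30min" configuration is dropped because "cached-5min" is within 10% of it on both axes; a site designer would then only need to map the retained configurations to the environmental conditions under which each applies.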

    Fault Injection in glibc (FIG)

    Most contemporary software applications presuppose an error-free system environment. While this assumption is often correct, the system can break down under exceptional situations, such as resource exhaustion or network failures. Unfortunately, too few developers deal with these issues, since they rarely arise in normal testing scenarios and are usually treated as "exceptional" cases to be handled outside the course of regular programming. We attempt to ameliorate this situation by producing a library (libfig) that induces errors at the application/operating system boundary in a controlled manner. Our intent is that this library be used as a tool for software development and verification, as well as for on-site testing of production systems. We use a working version of this tool to test the behavior of several common UNIX applications under simulated failures, and offer suggestions on how to develop software that is more resilient to failures in the system environment.
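
    libfig itself is not shown here; as a hedged illustration of the general interposition technique at the application/operating system boundary, the sketch below builds a shared object that, when loaded with LD_PRELOAD, makes read() fail with EIO on a configurable fraction of calls. The file name, the FIG_FAIL_RATE environment variable, and the failure policy are invented for this example and are not FIG's actual interface.

    /* faulty_read.c -- minimal LD_PRELOAD-style fault injector (illustrative
     * sketch only, not the FIG/libfig implementation):
     *
     *   cc -shared -fPIC -o libfaulty.so faulty_read.c -ldl
     *   LD_PRELOAD=./libfaulty.so FIG_FAIL_RATE=0.05 ./some_program
     */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <errno.h>
    #include <stdlib.h>
    #include <unistd.h>

    static ssize_t (*real_read)(int, void *, size_t);
    static double fail_rate;    /* fraction of read() calls to fail */

    /* Resolve the real read() and the injection rate when the library loads. */
    __attribute__((constructor))
    static void fig_init(void)
    {
        real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");
        const char *rate = getenv("FIG_FAIL_RATE");
        if (rate)
            fail_rate = atof(rate);
    }

    /* Interposed read(): with probability fail_rate, simulate an I/O error
     * instead of calling through to the real libc implementation. */
    ssize_t read(int fd, void *buf, size_t count)
    {
        if (!real_read)   /* in case the constructor has not run yet */
            real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");
        if (fail_rate > 0.0 && (double)rand() / RAND_MAX < fail_rate) {
            errno = EIO;
            return -1;
        }
        return real_read(fd, buf, count);
    }

    Running an application under such an interposer exercises exactly the error-handling paths that, as the abstract notes, are rarely reached in normal testing.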

    Practical Issues in Dependability Benchmarking

    Much of the work to date on dependability benchmarks has focused on costly, comprehensive measurements of whole-system dependability. But benchmarks should also be useful for developers and researchers to quickly evaluate incremental improvements to their systems. To address both audiences, we propose dividing the space of dependability benchmarks into two categories: competitive benchmarks that take the holistic approach, and less expensive developer benchmarks aimed at day-to-day development tasks. In this paper we differentiate the goals of these two types of benchmarks, discuss how each type might be appropriately realized, and propose simplifying…