
    Robust Reputations for Peer-to-Peer Marketplaces (working draft submitted to CHI 2006)

    We have developed a suite of algorithms to address two major problems confronting reputation systems for large peer-to-peer markets: data sparseness and inaccurate feedback. To handle sparse data, we propose a Bayesian version of the well-known Percent Positive Feedback system. To mitigate the effect of inaccurate feedback – particularly retaliatory negative feedback – we propose EM-trust, which uses a latent-variable statistical model of the feedback process. Using a marketplace simulator, we demonstrate that both of these algorithms provide more accurate reputations than standard Percent Positive Feedback. Finally, we show that even better performance can be obtained by combining the two approaches into a Bayesian EM-trust reputation system.
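    The abstract does not spell out the Bayesian construction. A common way to realize it, assumed here purely for illustration, is to place a Beta(a, b) prior on a seller's probability of a positive transaction and report the posterior mean, which shrinks sparse-data sellers toward the prior instead of letting one or two ratings dominate. A minimal C sketch under that assumption follows (the prior parameters and function name are hypothetical, and EM-trust itself is not reproduced here):

        #include <stdio.h>

        /* Hedged sketch of a Bayesian Percent Positive Feedback score.
         * Assumes a Beta(a, b) prior on the probability of a positive
         * transaction; with p positive and n negative ratings, the
         * posterior mean is (a + p) / (a + b + p + n). */
        static double bayesian_ppf(unsigned p, unsigned n, double a, double b)
        {
            return (a + p) / (a + b + p + n);
        }

        int main(void)
        {
            /* New seller with a single positive rating: plain PPF says
             * 100%, but the smoothed score stays near the prior mean. */
            printf("plain PPF: %.3f\n", 1.0 / 1.0);
            printf("bayesian : %.3f\n", bayesian_ppf(1, 0, 2.0, 2.0));

            /* Established seller: 95 positives, 5 negatives. */
            printf("bayesian : %.3f\n", bayesian_ppf(95, 5, 2.0, 2.0));
            return 0;
        }

    With a Beta(2, 2) prior the one-rating seller scores 0.600 rather than 1.000, while the established seller's score (0.933) is barely moved; that shrinkage under sparse data is the behavior the abstract describes.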

    Fault Injection in glibc (FIG)

    Most contemporary software applications presuppose an error-free system environment. While this assumption is often correct, the system can break down under exceptional situations, such as resource exhaustion or network failures. Unfortunately, too few developers deal with these issues, since they rarely arise in normal testing scenarios and are usually treated as "exceptional" cases to be handled outside the course of regular programming. We attempt to ameliorate this situation by producing a library (libfig) that induces errors at the application/operating-system boundary in a controlled manner. Our intent is that this library be used as a tool for software development and verification, as well as for on-site testing of production systems. We use a working version of this tool to test the behavior of several common UNIX applications under simulated failures, and offer suggestions on how to develop software that is more resilient to failures in the system environment.
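    The abstract does not detail libfig's mechanism. One standard way to induce errors at the application/OS boundary on a glibc system, sketched here as an assumption rather than the paper's actual code, is library interposition via LD_PRELOAD. The sketch interposes on read() only, and the FIG_FAIL_EVERY environment variable is a hypothetical knob, not part of the published tool:

        /* figdemo.c - minimal fault-injection interposer in the spirit
         * of libfig (not the actual library). Build and run, e.g.:
         *   gcc -shared -fPIC -o libfigdemo.so figdemo.c -ldl
         *   FIG_FAIL_EVERY=3 LD_PRELOAD=./libfigdemo.so cat /etc/hosts
         */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <errno.h>
        #include <stdlib.h>
        #include <unistd.h>

        ssize_t read(int fd, void *buf, size_t count)
        {
            static ssize_t (*real_read)(int, void *, size_t);
            static long calls, fail_every;

            if (!real_read) {
                /* Look up the real glibc read() hidden behind this wrapper. */
                real_read = (ssize_t (*)(int, void *, size_t))
                            dlsym(RTLD_NEXT, "read");
                const char *env = getenv("FIG_FAIL_EVERY");
                fail_every = env ? atol(env) : 0;
            }
            /* Deterministically fail every Nth call so the caller's
             * error-handling path is actually exercised. */
            if (fail_every > 0 && ++calls % fail_every == 0) {
                errno = EIO;
                return -1;
            }
            return real_read(fd, buf, count);
        }

    Running an ordinary UNIX tool under this preload makes its read() calls fail with EIO on a fixed schedule: exactly the kind of controlled, repeatable failure that, as the abstract notes, normal testing rarely produces.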

    Practical Issues in Dependability Benchmarking

    Much of the work to date on dependability benchmarks has focused on costly, comprehensive measurements of whole-system dependability. But benchmarks should also be useful for developers and researchers who want to quickly evaluate incremental improvements to their systems. To address both audiences, we propose dividing the space of dependability benchmarks into two categories: competitive benchmarks that take the holistic approach, and less expensive developer benchmarks aimed at day-to-day development tasks. In this paper we differentiate the goals of these two types of benchmarks, discuss how each type might be appropriately realized, and propose simplifying …