
    Evaluating the Assessment of Software Fault-Freeness

    We propose to validate experimentally a theory of software certification that proceeds from assessment of confidence in fault-freeness (due to conformance to standards) to conservative prediction of failure-free operation.
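    The abstract above appeals to a quantitative link between confidence in fault-freeness and a conservative prediction of failure-free operation. A minimal sketch of how such an argument can be set up is given below; the symbols C (assessed confidence that the software is fault-free), n (number of demands), and ε (an assumed upper bound on the probability of failure per demand if the software is in fact faulty, with demands treated as independent) are illustrative assumptions, not the paper's own notation or result.

    \[
    P(\text{no failure in } n \text{ demands})
      \;=\; C \cdot 1 \;+\; (1 - C)\,P(\text{no failure in } n \text{ demands} \mid \text{faulty})
      \;\ge\; C + (1 - C)(1 - \varepsilon)^{n}.
    \]

    In the worst case, where nothing at all is assumed about the faulty branch, the second term is simply dropped and the bound degrades to P(no failure in n demands) ≥ C, which is what makes the prediction conservative.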

    Planning the Unplanned Experiment: Towards Assessing the Efficacy of Standards for Safety-Critical Software

    While software in industries such as aviation has a good safety record, little is known about whether standards for software in other safety-critical applications “work” — or even what that means. Safe use of software in safety-critical applications requires well-founded means of determining whether the software is fit for such use. It is often implicitly argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard “works,” such reliance is an experiment, and without carefully collecting assessment data, that experiment is unplanned. To help “plan” the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on our workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to “work,” and key assessment strategies and study techniques. Finally, we conclude with a discussion of the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome these noted barriers.
