25,065 research outputs found

    Planning the Unplanned Experiment: Towards Assessing the Efficacy of Standards for Safety-Critical Software

    Safe use of software in safety-critical applications requires well-founded means of determining whether software is fit for such use. While software in industries such as aviation has a good safety record, little is known about whether standards for software in safety-critical applications 'work' (or even what that means). It is often (implicitly) argued that software is fit for safety-critical use because it conforms to an appropriate standard. Without knowing whether a standard works, such reliance is an experiment; without carefully collecting assessment data, that experiment is unplanned. To help plan the experiment, we organized a workshop to develop practical ideas for assessing software safety standards. In this paper, we relate and elaborate on the workshop discussion, which revealed subtle but important study design considerations and practical barriers to collecting appropriate historical data and recruiting appropriate experimental subjects. We discuss assessing standards as written and as applied, several candidate definitions for what it means for a standard to 'work,' and key assessment strategies and study techniques, along with the pros and cons of each. Finally, we conclude with thoughts about the kinds of research that will be required and how academia, industry, and regulators might collaborate to overcome the noted barriers

    Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software

    We need well-founded means of determining whether software is fit for use in safety-critical applications. While software in industries such as aviation has an excellent safety record, the fact that software flaws have contributed to deaths illustrates the need for justifiably high confidence in software. It is often argued that software is fit for safety-critical use because it conforms to a standard for software in safety-critical systems. But little is known about whether such standards 'work.' Reliance upon a standard without knowing whether it works is an experiment; without collecting data to assess the standard, this experiment is unplanned. This paper reports on a workshop intended to explore how standards could practicably be assessed. Planning the Unplanned Experiment: Assessing the Efficacy of Standards for Safety Critical Software (AESSCS) was held on 13 May 2014 in conjunction with the European Dependable Computing Conference (EDCC). We summarize and elaborate on the workshop's discussion of the topic, including both the presented positions and the dialogue that ensued

    Validation of Ultrahigh Dependability for Software-Based Systems

    Modern society depends on computers for a number of critical tasks in which failure can have very high costs. As a consequence, high levels of dependability (reliability, safety, etc.) are required from such computers, including their software. Whenever a quantitative approach to risk is adopted, these requirements must be stated in quantitative terms, and a rigorous demonstration of their being attained is necessary. For software used in the most critical roles, such demonstrations are not usually supplied. The fact is that the dependability requirements often lie near the limit of the current state of the art, or beyond, in terms not only of the ability to satisfy them, but also, and more often, of the ability to demonstrate that they are satisfied in the individual operational products (validation). We discuss reasons why such demonstrations cannot usually be provided with the means available: reliability growth models, testing with stable reliability, structural dependability modelling, as well as more informal arguments based on good engineering practice. We state some rigorous arguments about the limits of what can be validated with each of these means. Combining evidence from these different sources would seem to raise the levels that can be validated; yet the improvement is not enough to solve the problem. It appears that engineering practice must take into account the fact that no solution exists, at present, for the validation of ultra-high dependability in systems relying on complex software
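
    A rough sense of why testing alone cannot supply such demonstrations comes from the standard failure-free-testing argument: if failures arrive as a Poisson process with a constant rate, then observing T failure-free hours supports a failure-rate bound of only about ln(1/alpha)/T at confidence 1 - alpha. The sketch below works through that arithmetic; the 1e-9 failures/hour target and the 95% confidence level are illustrative assumptions chosen here, not figures taken from the paper.

```python
import math

def required_failure_free_hours(target_rate_per_hour: float, confidence: float) -> float:
    """Failure-free operating hours needed to claim, at the given confidence,
    that the true failure rate is below the target.

    Assumes failures follow a Poisson process with a constant (stable) rate:
    the probability of seeing zero failures in T hours at rate lam is
    exp(-lam * T), so we need exp(-lam * T) <= 1 - confidence.
    """
    alpha = 1.0 - confidence
    return math.log(1.0 / alpha) / target_rate_per_hour

# Illustrative ultra-high dependability target: 1e-9 failures/hour at 95% confidence.
hours = required_failure_free_hours(1e-9, 0.95)
print(f"{hours:.2e} failure-free hours (~{hours / 8766:.0f} years of continuous testing)")
# Roughly 3e9 hours, i.e. hundreds of thousands of years, which is why testing
# with stable reliability cannot by itself validate ultra-high dependability claims.
```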

    Improving animal health on organic dairy farms: stakeholder views on policy options

    Although ensuring good animal health is a stated aim of organic livestock farming and an important reason why consumers purchase organic products, the health states actually achieved are comparable to those in conventional farming. Unfortunately, there have been no studies to date that have assessed stakeholder views on different policy options for improving animal health on organic dairy farms. To address this deficit, stakeholder consultations were conducted in four European countries, involving 39 supply-chain stakeholders (farmers, advisors, veterinarians, inspectors, processors, and retailers). Stakeholders were encouraged to discuss different ways, including policy change, of improving organic health states. Acknowledging the need for further health improvements in organic dairy herds, stakeholders generally favoured establishing outcome-oriented animal health requirements as a way of achieving this. However, as a result of differing priorities for animal health improvement, there was disagreement on questions such as: who should be responsible for assessing animal health status on organic farms; and how to define and implement minimum health requirements. The results of the study suggest that future research must fully explore the opportunities and risks of different policy options and also suggest ways to overcome the divergence of stakeholders’ interests in public debates

    Level up learning: a national survey on teaching with digital games

    Digital games have the potential to transform K-12 education as we know it. But what has been the real experience among teachers who use games in the classroom? In 2013, the Games and Learning Publishing Council conducted a national survey of nearly 700 K-8 teachers. The report reveals key findings from the survey and looks at how often and why teachers use games in the classroom, as well as the issues they encounter in their efforts to integrate digital games into their practice

    Advances in Teaching & Learning Day Abstracts 2005

    Proceedings of the Advances in Teaching & Learning Day Regional Conference held at The University of Texas Health Science Center at Houston in 2005