
    Software timing analysis for complex hardware with survivability and risk analysis

    The increasing automation of safety-critical real-time systems, such as those in cars and planes, leads to more complex and performance-demanding on-board software and the subsequent adoption of multicores and accelerators. This increases the dispersion of software execution times due to variable-latency resources such as caches, NoCs, and advanced memory controllers. Statistical analysis has been proposed to model the Worst-Case Execution Time (WCET) of software running on such complex systems by providing reliable probabilistic WCET (pWCET) estimates. However, the statistical models used so far, which are based on risk analysis, are overly pessimistic by construction. In this paper we prove that statistical survivability and risk analyses are equivalent in terms of tail analysis and, building upon survivability analysis theory, we show that Weibull tail models can be used to estimate pWCET distributions reliably and tightly. In particular, our methodology proves the correctness-by-construction of the approach, and our evaluation provides evidence of the tightness of the pWCET estimates obtained, which can be reliably reduced by 40% for a railway case study with respect to state-of-the-art exponential tails. This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven; by the Spanish Government (SEV2015-0493); by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); and by Generalitat de Catalunya (contract 2014-SGR-1051).
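    As a rough illustration of the tail-modeling step this abstract describes (not the paper's actual procedure), the Python sketch below fits a Weibull tail to execution-time exceedances over a high empirical threshold and inverts its survival function at a target exceedance probability. The synthetic data, threshold choice, and target probability are illustrative assumptions.

    ```python
    # Minimal sketch of Weibull-tail pWCET estimation, as hypothesized from the
    # abstract; the data, threshold, and target exceedance probability are
    # illustrative assumptions, not the paper's actual method.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Stand-in for measured execution times (cycles); real data would come
    # from timing runs on the target platform.
    times = rng.gamma(shape=9.0, scale=120.0, size=10_000)

    # Peaks-over-threshold: model only the tail above a high empirical quantile.
    threshold = np.quantile(times, 0.95)
    excesses = times[times > threshold] - threshold

    # Fit a two-parameter Weibull to the excesses (location fixed at 0).
    shape, _, scale = stats.weibull_min.fit(excesses, floc=0.0)

    # pWCET at target exceedance probability p: threshold plus the Weibull
    # quantile of the conditional tail, rescaled by the empirical tail mass,
    # since P(X > x) = P(X > threshold) * P(X > x | X > threshold).
    p_target = 1e-9
    p_cond = p_target / (len(excesses) / len(times))
    pwcet = threshold + stats.weibull_min.isf(p_cond, shape, loc=0.0, scale=scale)
    print(f"pWCET estimate at {p_target:.0e}/run: {pwcet:.0f} cycles")
    ```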

    Acceptance Criteria for Critical Software Based on Testability Estimates and Test Results

    Testability is defined as the probability that a program will fail a test, conditional on the program containing some fault. In this paper, we show that statements about the testability of a program can be described more simply in terms of assumptions on the probability distribution of the program's failure intensity. We can thus state general acceptance conditions in clear mathematical terms using Bayesian inference. We develop two scenarios: one for software whose reliability requirement is that it be completely fault-free, and another for requirements stated as an upper bound on the acceptable failure probability.
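    The fault-free scenario admits a compact illustration. Assuming a single fault of known testability and independent tests, Bayes' rule gives the posterior probability that the program is fault-free after n failure-free tests; the prior and parameter values in the sketch below are illustrative, not taken from the paper.

    ```python
    # Minimal sketch of the fault-free acceptance scenario, assuming a single
    # fault of known testability; prior and parameters are illustrative.
    def posterior_fault_free(prior_fault_free: float,
                             testability: float,
                             n_passed_tests: int) -> float:
        """P(program is fault-free | n tests passed).

        testability: P(a test fails | the program contains the fault).
        A faulty program survives n independent tests with probability
        (1 - testability)**n, so passing tests raises confidence by Bayes' rule.
        """
        p_pass_if_faulty = (1.0 - testability) ** n_passed_tests
        p0 = prior_fault_free
        return p0 / (p0 + (1.0 - p0) * p_pass_if_faulty)

    # Example: 50/50 prior, testability 0.01, 1000 failure-free tests.
    print(posterior_fault_free(0.5, 0.01, 1000))  # ~0.99996
    ```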

    Introducing the STAMP method in road tunnel safety assessment

    After the catastrophic accidents in European road tunnels over the past decade, many risk assessment methods have been proposed worldwide, most of them based on Quantitative Risk Assessment (QRA). Although QRAs are helpful for addressing the physical aspects and facilities of tunnels, current approaches in the road tunnel field have limited ability to model organizational aspects, software behavior, and the adaptation of the tunnel system over time. This paper reviews these limitations and highlights the need to enhance the safety assessment process of these critical infrastructures with a complementary approach that links organizational factors to operational and technical issues, analyzes software behavior, and models the dynamics of the tunnel system. To achieve this objective, this paper examines the scope for introducing a safety assessment method that is based on the systems-thinking paradigm and draws upon the STAMP model. The proposed method is demonstrated through a case study of a tunnel ventilation system, and the results show that it has the potential to identify scenarios that encompass both the technical system and the organizational structure. However, since the method does not provide quantitative estimations of risk, it is recommended as a complement to traditional risk assessments rather than as an alternative.
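    For readers unfamiliar with STAMP, the sketch below encodes a generic control loop for a tunnel ventilation system and enumerates the four standard unsafe-control-action categories from STPA, the hazard-analysis technique built on STAMP. The component names are hypothetical and do not reproduce the paper's case study.

    ```python
    # Illustrative only: a toy encoding of a STAMP-style control loop. The
    # four unsafe-control-action categories follow generic STAMP/STPA
    # practice, not the specific case study in the paper.
    from dataclasses import dataclass

    @dataclass
    class ControlLoop:
        controller: str        # who issues control actions
        control_action: str    # the safety-relevant command
        process: str           # the controlled physical process
        feedback: str          # what the controller observes

        def unsafe_control_actions(self):
            # STPA's four generic ways a control action can be hazardous.
            a, c = self.control_action, self.controller
            return [
                f"{c} does not provide '{a}' when required",
                f"{c} provides '{a}' when it is hazardous to do so",
                f"{c} provides '{a}' too early, too late, or out of sequence",
                f"{c} stops '{a}' too soon or applies it too long",
            ]

    loop = ControlLoop(
        controller="tunnel operator / ventilation SCADA",
        control_action="activate longitudinal fans",
        process="airflow and smoke propagation in the tube",
        feedback="CO and opacity sensor readings",
    )
    for uca in loop.unsafe_control_actions():
        print(uca)
    ```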

    Air Traffic Management Safety Challenges

    The primary goal of the Air Traffic Management (ATM) system is to control accident risk. ATM safety has improved over the decades for many reasons, from better equipment to additional safety defences. But ATM safety targets, improving on current performance, are now extremely demanding. Safety analysts and aviation decision-makers have to make safety assessments based on statistically incomplete evidence. If future risks cannot be estimated with precision, then how is safety to be assured with traffic growth and operational/technical changes? What are the design implications for the USA’s ‘Next Generation Air Transportation System’ (NextGen) and Europe’s Single European Sky ATM Research Programme (SESAR)? ATM accident precursors arise from (e.g.) pilot/controller workload, miscommunication, and lack of up-to-date information. Can these accident precursors confidently be ‘designed out’ by (e.g.) better system knowledge across ATM participants, automatic safety checks, and machine rather than voice communication? Future potentially hazardous situations could be as ‘messy’ in system terms as the Überlingen mid-air collision. Are ATM safety regulation policies fit for purpose: is it becoming more and more difficult to innovate, to introduce new technologies and novel operational concepts? Must regulators be more active, e.g. with more inspections and monitoring of real operational and organisational practices?

    Model-based dependability analysis: state-of-the-art, challenges and future outlook

    Over the past two decades, the study of model-based dependability analysis has gathered significant research interest. Different approaches have been developed to automate and address various limitations of classical dependability techniques in order to contend with the increasing complexity and challenges of modern safety-critical systems. Two leading paradigms have emerged: one constructs predictive system failure models compositionally from component failure models using the topology of the system; the other utilizes design models, typically state automata, to explore system behaviour through fault injection. This paper reviews a number of prominent techniques under these two paradigms and provides insight into their working mechanisms, applicability, strengths, and challenges, as well as recent developments within these fields. We also discuss emerging trends toward integrated approaches and advanced analysis capabilities. Lastly, we outline the future outlook for model-based dependability analysis.
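    As a toy illustration of the first (compositional) paradigm, the sketch below wires per-component failure logic together along a hypothetical two-sensor/voter topology to synthesize a system-level failure expression, in the spirit of automated fault-tree construction. It does not reproduce any specific tool from the survey.

    ```python
    # Minimal sketch, not any specific tool: the compositional paradigm builds
    # a system-level failure expression by composing per-component failure
    # logic along the system topology. Components and topology are hypothetical.

    def AND(*xs): return ("AND",) + xs
    def OR(*xs): return ("OR",) + xs

    def basic(name):  # a basic event: an internal component fault
        return ("EVENT", name)

    # Per-component failure logic: output failure as a function of input
    # failures plus the component's own internal faults.
    def sensor(internal):            # fails if its internal fault occurs
        return basic(internal)

    def voter(in_a, in_b, internal): # fails if both inputs fail, or if the
        return OR(basic(internal), AND(in_a, in_b))  # voter itself faults

    # Topology: two redundant sensors feeding a voter.
    system_failure = voter(sensor("S1_stuck"), sensor("S2_stuck"), "V_fault")

    def render(expr):
        tag = expr[0]
        if tag == "EVENT":
            return expr[1]
        return "(" + f" {tag} ".join(render(e) for e in expr[1:]) + ")"

    print(render(system_failure))
    # -> (V_fault OR (S1_stuck AND S2_stuck))
    ```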