Probabilistic Model-Based Safety Analysis
Model-based safety analysis approaches aim at finding critical failure
combinations by analyzing models of the whole system (i.e. software,
hardware, failure modes and environment). The advantage of these methods
over traditional approaches is that analyzing the whole system gives more
precise results. Only a few model-based approaches have been applied to
answer quantitative questions in safety analysis; they are often limited to
the analysis of specific failure propagation models or particular types of
failure modes, or omit system dynamics and behavior, because direct
quantitative analysis uses large amounts of computing resources. New
achievements in the domain of (probabilistic) model checking now allow this
problem to be overcome.
This paper shows how functional models based on synchronous parallel
semantics, which can be used for system design, implementation and qualitative
safety analysis, can be directly re-used for (model-based) quantitative safety
analysis. Accurate modeling of different types of probabilistic failure
occurrence is shown, as well as accurate interpretation of the analysis
results. This allows for a reliable and expressive assessment of the safety
of a system in early design stages.
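As a hypothetical illustration of the kind of quantitative question probabilistic model checking answers, the sketch below computes the probability that a small redundant system reaches a hazardous state within a bounded number of steps by iterating a discrete-time Markov chain. All states, names and probabilities are invented for the example and are not taken from the paper:

```python
# Illustrative discrete-time Markov chain for a system with one primary
# and one backup component. States: 0 = both OK, 1 = primary failed
# (backup active), 2 = hazardous (both failed, absorbing).
# Per-step failure probabilities are made up for the example.
P = [
    [0.99, 0.01, 0.00],   # both OK -> primary may fail
    [0.00, 0.95, 0.05],   # backup active -> backup may fail
    [0.00, 0.00, 1.00],   # hazardous state is absorbing
]

def hazard_probability(steps):
    """Probability of reaching the hazardous state within `steps` steps."""
    dist = [1.0, 0.0, 0.0]  # start with both components OK
    for _ in range(steps):
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist[2]

print(f"{hazard_probability(100):.4f}")
```

Dedicated probabilistic model checkers answer such bounded-reachability queries symbolically and for far larger state spaces; this sketch only shows the underlying transient analysis.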
A safety analysis approach to clinical workflows: application and evaluation
Clinical workflows are safety-critical workflows, as they have the potential to cause harm or death to patients. Their safety needs to be considered as early as possible in the development process. Effective safety analysis methods are required to ensure the safety of these high-risk workflows, because errors that may happen during routine workflow could propagate within the workflow and result in harmful failures of the system's output. This paper shows how to apply an approach for safety analysis of clinical workflows to analyse the safety of the workflow within a radiology department, and evaluates the approach in terms of usability and benefits. The outcomes of using this approach include identification of the root causes of hazardous workflow failures that may put patients' lives at risk. We show that the approach is applicable to this area of healthcare and adds value through detailed information on possible failures, including both their causes and effects; therefore, it has the potential to improve the safety of radiology and other clinical workflows.
Automating allocation of development assurance levels: An extension to HiP-HOPS
Controlling the allocation of safety requirements across a system's architecture from the early stages of development is an aspiration embodied in numerous major safety standards. Manual approaches to applying this process in practice are ineffective due to the scale and complexity of modern electronic systems. In the work presented here, we address this issue by presenting an extension to the dependability analysis and optimisation tool HiP-HOPS, which allows automatic allocation of such requirements. We focus on aerospace requirements expressed as Development Assurance Levels (DALs); however, the proposed process and algorithms can be applied to other common forms of expression of safety requirements, such as Safety Integrity Levels. We illustrate the application with a model of an aircraft wheel braking system.
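To make the idea of automatic DAL allocation concrete, the sketch below checks a candidate allocation against one simplified decomposition rule, assumed here purely for illustration: at least one independent member retains the function's assigned DAL, and no member sits more than two levels below it. Real standards such as ARP4754A enumerate the permissible decomposition options in more detail, and the function name and rule here are not taken from HiP-HOPS:

```python
# Simplified DAL decomposition check (illustrative assumption, not the
# HiP-HOPS algorithm): at least one independent member keeps the assigned
# DAL, and no member is more than two levels below it. DAL A is the most
# stringent level, E the least.
DAL_ORDER = {"A": 0, "B": 1, "C": 2, "D": 3, "E": 4}

def allocation_ok(function_dal, member_dals):
    """Return True if the members' DALs satisfy the simplified rule."""
    top = DAL_ORDER[function_dal]
    levels = [DAL_ORDER[d] for d in member_dals]
    return min(levels) == top and max(levels) <= top + 2

print(allocation_ok("A", ["A", "C"]))   # → True
print(allocation_ok("A", ["B", "B"]))   # → False (no member at level A)
```

An automated allocator like the one the paper describes would search the space of such assignments, using checks of this kind as constraints while optimising for cost.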
Modeling the external software interface for requirements specification
Requirements specification is an important part of the software, and indeed the system, development process, and it is critical that this effort be started early. This work suggests an early model for software developers to incorporate a systems viewpoint in their process. The model is an attempt to formalize an approach that systematically represents the essentials of the external interface for software embedded within a larger system. It is useful for early analysis of the software system and its environment for properties such as consistency, completeness, and safety.
Safe Neighborhood Computation for Hybrid System Verification
For the design and implementation of engineering systems, performing
model-based analysis can disclose potential safety issues at an early stage.
The analysis of hybrid system models is in general difficult due to the
intrinsic complexity of hybrid dynamics. In this paper, a simulation-based
approach to formal verification of hybrid systems is presented. (In
Proceedings HAS 2014, arXiv:1501.0540)
QuantUM: Quantitative Safety Analysis of UML Models
When developing a safety-critical system it is essential to obtain an
assessment of different design alternatives. In particular, an early safety
assessment of the architectural design of a system is desirable. In spite of
the plethora of available formal quantitative analysis methods it is still
difficult for software and system architects to integrate these techniques into
their every day work. This is mainly due to the lack of methods that can be
directly applied to architecture level models, for instance given as UML
diagrams. Also, it is necessary that the description methods used do not
require a profound knowledge of formal methods. Our approach bridges this gap
and improves the integration of quantitative safety analysis methods into the
development process. All inputs of the analysis are specified at the level of a
UML model. This model is then automatically translated into the analysis model,
and the results of the analysis are consequently represented on the level of
the UML model. Thus the analysis model and the formal methods used during the
analysis are hidden from the user. We illustrate the usefulness of our approach
using an industrial-strength case study. (In Proceedings QAPL 2011,
arXiv:1107.074)
Efficacy of tofacitinib monotherapy in methotrexate-naive patients with early or established rheumatoid arthritis.
Introduction: Tofacitinib is an oral Janus kinase inhibitor for the treatment of rheumatoid arthritis (RA). Tofacitinib monotherapy was previously shown to inhibit structural damage, reduce clinical signs and symptoms of RA, and improve physical functioning over 24 months in methotrexate (MTX)-naive adult patients with RA. In this post hoc analysis, we compared the efficacy and safety of tofacitinib in patients with early (disease duration <1 year) versus established (≥1 year) RA.
Methods: MTX-naive patients aged ≥18 years with active RA received tofacitinib monotherapy (5 or 10 mg two times a day) or MTX monotherapy in a 24-month Phase 3 trial.
Results: Of 956 patients (tofacitinib 5 mg two times a day, n=373; tofacitinib 10 mg two times a day, n=397; MTX, n=186), 54% had early RA. Baseline disease activity and functional disability were similar in both groups; radiographic damage was greater in patients with established RA. At month 24, clinical response rates were significantly greater in patients with early versus established RA in the tofacitinib 5 mg two times a day group. Both tofacitinib doses had greater effects on clinical, functional and radiographic improvements at 1 and 2 years compared with MTX, independent of disease duration. No new safety signals were observed.
Conclusions: Treatment response was generally similar in early and established RA; significantly greater improvements were observed at month 24 with tofacitinib 5 mg two times a day in early versus established RA. Tofacitinib 5 and 10 mg two times a day demonstrated greater efficacy versus MTX irrespective of disease duration. No difference in safety profiles was observed between patients with early or established RA.
Trial registration number: NCT01039688
An Approach to Using Non Safety-Assured Programmable Components in Modest Integrity Systems
Programmable components (like personal computers or smart devices) can offer considerable benefits in terms of usability and functionality in a safety-related system. However, there is a problem in justifying the use of programmable components if they have not been safety-justified to an appropriate integrity level (e.g. SIL 1 of IEC 61508). This paper outlines an approach (called LowSIL), developed in the UK CINIF nuclear industry research programme, to justify the use of non safety-assured programmable components in modest integrity systems. It is a seven-step approach that can be applied to new systems from an early design stage, or retrospectively to existing systems. The steps comprise: system characterisation, component suitability assessment, failure analysis, failure mitigation, identification of additional defences, identification of safety evidence requirements, and collation and evaluation of evidence. In the case of personal computers, there is supporting guidance on usage constraints, claim limits on reliability, and advice on “locking down” the component to maximise reliability. The approach is demonstrated for an example system and has been applied successfully to a range of safety-related systems used in the nuclear industry.
Big Data Risk Assessment the 21st Century approach to safety science
Safety Science has developed over time, with notable models in the early 20th Century such as Heinrich’s iceberg model and the Swiss cheese model. Common techniques such as fault tree and event tree analyses, HAZOP analysis and bow-tie construction are widely used within industry. These techniques are based on the concept that failures of a system can be caused by deviations or individual faults within a system, combinations of latent failures, or even situations where each part of a complex system is operating within normal bounds but a combined effect creates a hazardous situation.
In this era of Big Data, systems are becoming increasingly complex, producing a quantity of safety-related data so large that it cannot be meaningfully analysed by humans to make decisions or uncover complex trends that may indicate the presence of hazards. More subtle and automated techniques for mining these data are required to provide a better understanding of our systems and the environment within which they operate, and insights into hazards that may not otherwise be identified. Big Data Risk Analysis (BDRA) is a suite of techniques being researched to identify the use of non-traditional techniques from big data sources to predict safety risk.
This paper describes early trials of BDRA that have been conducted on railway signal information and text-based reports of railway safety near misses and the ongoing research that is looking at combining various data sources to uncover obscured trends that cannot be identified by considering each source individually. The paper also discusses how visual analytics may be a key tool in analysing Big Data to support knowledge elicitation and decision-making, as well as providing information in a form that can be readily interpreted by a variety of audiences
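The classical fault-tree evaluation mentioned above can be sketched in a few lines, assuming independent basic events; the event names and probabilities below are invented for the example and do not come from any of the papers listed:

```python
# Minimal fault-tree evaluation with AND/OR gates over independent basic
# events. Probabilities are invented for the example.
def p_and(probs):
    """All inputs must fail: product of the probabilities."""
    out = 1.0
    for p in probs:
        out *= p
    return out

def p_or(probs):
    """Any input failing suffices: 1 minus probability that none fail."""
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical top event:
# brake failure = (sensor fault OR controller fault) AND actuator fault
sensor, controller, actuator = 0.01, 0.02, 0.001
top = p_and([p_or([sensor, controller]), actuator])
print(f"{top:.6e}")
```

BDRA's point is precisely that hand-built trees like this one do not scale to modern data volumes, which is where the mining and visual-analytics techniques described above come in.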