
    Development, test and comparison of two Multiple Criteria Decision Analysis(MCDA) models: A case of healthcare infrastructure location

    When planning a new development, location decisions have always been a major issue. This paper examines and compares two modelling methods used to inform a healthcare infrastructure location decision. Two Multiple Criteria Decision Analysis (MCDA) models were developed to support the optimisation of this decision-making process within a National Health Service (NHS) organisation in the UK. The proposed model structure is based on seven criteria (environment and safety, size, total cost, accessibility, design, risks and population profile) and 28 sub-criteria. First, Evidential Reasoning (ER) was used to solve the model; then the processes and results were compared with the Analytical Hierarchy Process (AHP). It was established that using ER or AHP led to the same solutions. However, the scores between the alternatives were significantly different, which impacted the stakeholders' decision-making. As the processes differ according to the model selected, ER or AHP, it is relevant to establish the practical and managerial implications of selecting one model or the other and to provide evidence of which model best fits this specific environment. To achieve an optimum operational decision, this study argues that the most transparent and robust framework is achieved by merging the ER process with pair-wise comparison, an element of AHP. This paper makes a defined contribution by developing and examining the use of MCDA models to rationalise new healthcare infrastructure location, with the proposed model to be used for future decisions. Moreover, very few studies comparing different MCDA techniques were found; the results of this study enable practitioners to consider modelling characteristics even further to ensure the development of a reliable framework, even if this means applying a hybrid approach.
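    As a rough illustration of the pair-wise comparison element discussed above, the sketch below derives criterion weights and a consistency ratio from a Saaty-scale judgement matrix; the 3x3 matrix and its values are hypothetical, not taken from the paper.

        # Minimal sketch of the AHP pair-wise comparison step; the matrix
        # and its Saaty-scale judgements are illustrative, not the paper's.
        import numpy as np

        def ahp_weights(pairwise):
            # Derive criterion weights from a reciprocal pairwise matrix
            # via the principal eigenvector (Saaty's method).
            eigvals, eigvecs = np.linalg.eig(pairwise)
            principal = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, principal].real)
            return w / w.sum()

        def consistency_ratio(pairwise):
            # CR = ((lambda_max - n) / (n - 1)) / RI; judgements with
            # CR > 0.1 are usually revisited.
            n = pairwise.shape[0]
            lambda_max = np.max(np.linalg.eigvals(pairwise).real)
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]
            return (lambda_max - n) / ((n - 1) * ri)

        # Hypothetical judgements over three of the paper's criteria:
        # total cost vs. accessibility vs. size.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        print(ahp_weights(A))         # ~[0.65, 0.23, 0.12]
        print(consistency_ratio(A))   # ~0.003, well under the 0.1 threshold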

    Understanding and Evaluating Assurance Cases

    Assurance cases are a method for providing assurance for a system by giving an argument to justify a claim about the system, based on evidence about its design, development, and tested behavior. In comparison with assurance based on guidelines or standards (which essentially specify only the evidence to be produced), the chief novelty in assurance cases is provision of an explicit argument. In principle, this can allow assurance cases to be more finely tuned to the specific circumstances of the system, and more agile than guidelines in adapting to new techniques and applications. The first part of this report (Sections 1-4) provides an introduction to assurance cases. Although this material should be accessible to all those with an interest in these topics, the examples focus on software for airborne systems, traditionally assured using the DO-178C guidelines and their predecessors. A brief survey of some existing assurance cases is provided in Section 5. The second part (Section 6) considers the criteria, methods, and tools that may be used to evaluate whether an assurance case provides sufficient confidence that a particular system or service is fit for its intended use. An assurance case cannot provide unequivocal "proof" for its claim, so much of the discussion focuses on the interpretation of such less-than-definitive arguments, and on methods to counteract confirmation bias and other fallibilities in human reasoning.
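    As a structural illustration of the claim-argument-evidence shape described above, here is a minimal sketch; the dataclass names and the example hazards are illustrative assumptions, not drawn from the report.

        # A minimal sketch of the claim-argument-evidence structure that
        # an assurance case makes explicit; names and the example case
        # are illustrative, not from the report.
        from dataclasses import dataclass, field

        @dataclass
        class Evidence:
            description: str   # e.g. a test report or analysis result

        @dataclass
        class Claim:
            statement: str
            # A claim is supported either by sub-claims (linked by an
            # explicit argument step) or directly by evidence at a leaf.
            argument: str = ""
            subclaims: list["Claim"] = field(default_factory=list)
            evidence: list[Evidence] = field(default_factory=list)

        def undeveloped(c):
            # Leaf claims with no supporting evidence -- gaps in the case.
            if not c.subclaims:
                return [] if c.evidence else [c.statement]
            return [s for sub in c.subclaims for s in undeveloped(sub)]

        top = Claim(
            statement="The software is acceptably safe for its intended use",
            argument="Argue over each identified hazard",
            subclaims=[
                Claim("Hazard H1 is mitigated",
                      evidence=[Evidence("Unit and integration test results")]),
                Claim("Hazard H2 is mitigated"),   # no evidence yet
            ],
        )
        print(undeveloped(top))   # ['Hazard H2 is mitigated']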

    An Investigation of Proposed Techniques for Quantifying Confidence in Assurance Arguments

    The use of safety cases in certification raises the question of assurance argument sufficiency and the issue of confidence (or uncertainty) in the argument's claims. Some researchers propose to model confidence quantitatively and to calculate confidence in argument conclusions. We know of little evidence to suggest that any proposed technique would deliver trustworthy results when implemented by system safety practitioners. Proponents do not usually assess the efficacy of their techniques through controlled experiment or historical study. Instead, they present an illustrative example where the calculation delivers a plausible result. In this paper, we review current proposals, claims made about them, and evidence advanced in favor of them. We then show that proposed techniques can deliver implausible results in some cases. We conclude that quantitative confidence techniques require further validation before they should be recommended as part of the basis for deciding whether an assurance argument justifies fielding a critical system.
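    To illustrate the kind of implausible result at issue, the sketch below applies one naive style of confidence calculation, treating per-claim confidences as independent probabilities and multiplying them over a conjunctive argument; the scheme and numbers are illustrative assumptions, not a specific proposal reviewed in the paper.

        # Naive multiplicative confidence propagation drives overall
        # confidence toward zero as a conjunctive argument grows, even
        # when every individual claim is very well supported.
        # Twenty sub-claims, each independently assessed at 95% confidence.
        confidences = [0.95] * 20

        overall = 1.0
        for c in confidences:
            overall *= c    # independence assumption baked into the scheme

        print(f"{overall:.3f}")   # ~0.358: a 'weak' case built from strong claims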

    Can we verify and intrinsically validate risk assessment results? What progress is being made to increase QRA trustworthiness?

    The purpose of a risk assessment is to decide whether the risk of a given situation is acceptable and, if not, how we can reduce it to a tolerable level. For many cases, this can be done in a semi-quantitative fashion. For more complex or problematic cases a quantitative approach is required. Anybody who has been involved in such a study is aware of the difficulties and pitfalls. Despite proven software, many choices of parameters must be made and many uncertainties remain. The thoroughness of the study can make quite a difference in the result. Working independently, analysts can arrive at results that differ by orders of magnitude, especially if uncertainties are not included. Because important decisions on capital projects always have proponents and opponents, there is often a tense situation in which conflict is looming. The paper first briefly reviews a standard procedure introduced for safety cases on products that must more or less guarantee that the risk of use is below a certain value. It then discusses the various approaches to dealing with uncertainties in a quantitative risk assessment and the follow-on decision process. Over the last few years several new developments have been made to achieve, to a certain extent, a hold on so-called deep uncertainty. Expert elicitation and its limitations are another aspect. The paper concludes with some practical recommendations.
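    As a hedged illustration of how defensible parameter choices can move QRA results by orders of magnitude, the sketch below propagates uncertainty through a toy event-tree model by Monte Carlo; the model form and all distributions are assumptions for illustration, not taken from the paper.

        # Toy QRA: annual risk = initiating-event frequency x P(ignition)
        # x P(fatality | ignition). All parameter choices are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 100_000

        def annual_fatality_risk(freq_median, freq_gsd):
            # Lognormal initiating-event frequency (per year); uniform
            # conditional probabilities for ignition and for fatality.
            freq = rng.lognormal(np.log(freq_median), np.log(freq_gsd), N)
            p_ignition = rng.uniform(0.01, 0.1, N)
            p_fatality = rng.uniform(0.1, 0.5, N)
            return freq * p_ignition * p_fatality

        # Two analysts with different but defensible frequency estimates.
        risk_a = annual_fatality_risk(1e-3, 3.0)    # optimistic
        risk_b = annual_fatality_risk(1e-2, 10.0)   # conservative
        for name, r in [("A", risk_a), ("B", risk_b)]:
            print(name, f"median={np.median(r):.1e}",
                  f"p95={np.percentile(r, 95):.1e}")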

    E-Synthesis: A Bayesian Framework for Causal Assessment in Pharmacosurveillance

    Background: Evidence suggesting adverse drug reactions often emerges unsystematically and unpredictably in the form of anecdotal reports, case series and survey data. Safety trials and observational studies also provide crucial information regarding the (un-)safety of drugs. Hence, integrating multiple types of pharmacovigilance evidence is key to minimising the risks of harm. Methods: In previous work, we began the development of a Bayesian framework for aggregating multiple types of evidence to assess the probability of a putative causal link between drugs and side effects. This framework arose out of a philosophical analysis of the Bradford Hill Guidelines. In this article, we expand the Bayesian framework and add “evidential modulators,” which bear on the assessment of the reliability of incoming study results. The overall framework for evidence synthesis, “E-Synthesis”, is then applied to a case study. Results: Theoretically and computationally, E-Synthesis exploits the coherence of partly or fully independent evidence converging towards the hypothesis of interest (or of conflicting evidence with respect to it) in order to update its posterior probability. Compared with other frameworks for evidence synthesis, our Bayesian model has the unique features of grounding its inferential machinery in a consolidated theory of hypothesis confirmation (Bayesian epistemology) and of allowing data from heterogeneous sources (cell data, clinical trials, epidemiological studies) and methods (e.g., frequentist hypothesis testing, Bayesian adaptive trials) to be quantitatively integrated into the same inferential framework. Conclusions: E-Synthesis is highly flexible concerning the allowed input while relying on a consistent computational system that is philosophically and statistically grounded. Furthermore, by introducing evidential modulators, and thereby breaking up the different dimensions of evidence (strength, relevance, reliability), E-Synthesis allows them to be explicitly tracked in updating causal hypotheses.
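    As a schematic illustration of the kind of Bayesian aggregation E-Synthesis performs, the sketch below updates the posterior odds of a causal hypothesis from several studies, discounting each by a reliability weight in the spirit of the evidential modulators; the discounting rule and all numbers are assumptions, not the paper's actual model.

        # Schematic Bayesian evidence aggregation: each study contributes
        # a likelihood ratio, discounted by a reliability exponent in
        # [0, 1] (an exponent of 0 makes the study uninformative).
        import math

        def posterior(prior, evidence):
            # evidence: (likelihood_ratio, reliability) pairs.
            log_odds = math.log(prior / (1 - prior))
            for lr, reliability in evidence:
                log_odds += reliability * math.log(lr)
            return 1 / (1 + math.exp(-log_odds))

        # Hypothetical inputs: a case series (weak, less reliable), an
        # observational study, and a safety trial, all favouring the link.
        studies = [(2.0, 0.3), (4.0, 0.7), (8.0, 0.9)]
        print(f"{posterior(0.10, studies):.2f}")   # prior 0.10 -> posterior ~0.70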