78 research outputs found

    Estimating rate of occurrence of rare events with empirical Bayes: a railway application

    Classical approaches to estimating the rate of occurrence of events perform poorly when data are few. Maximum likelihood estimators result in overly optimistic point estimates of zero for situations where there have been no events. Alternative empirically based approaches have been proposed based on median estimators or non-informative prior distributions. While these alternatives offer an improvement over point estimates of zero, they can be overly conservative. Empirical Bayes procedures offer an unbiased approach by pooling data across different hazards to support stronger statistical inference. This paper considers the application of empirical Bayes to high-consequence, low-frequency events, where estimates are required for risk mitigation decision support, such as demonstrating that risk is as low as reasonably practicable (ALARP). A summary of empirical Bayes methods is given and the choices of estimation procedures used to obtain interval estimates are discussed. The approaches illustrated within the case study are based on the estimation of the rate of occurrence of train derailments within the UK. The usefulness of empirical Bayes within this context is discussed.
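
    The abstract describes the method only at a high level, so the following is a minimal sketch of one standard empirical Bayes construction for rare event rates: a gamma-Poisson model in which the gamma prior is fitted across the pool of hazards by moment matching and then updated conjugately for each hazard. The function name, the moment-matching step and the example data are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy import stats

def eb_gamma_poisson(counts, exposures, interval=0.90):
    """Empirical Bayes rate estimates for a pool of hazards.

    counts[i]    -- events observed for hazard i
    exposures[i] -- exposure over which they were observed (e.g. train-miles)
    Assumes lambda_i ~ Gamma(alpha, beta) and counts[i] ~ Poisson(lambda_i * exposures[i]).
    """
    counts = np.asarray(counts, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    rates = counts / exposures

    # Crude moment-matching fit of the gamma prior across the pool:
    # E[rate] = alpha/beta; Var(lambda) is the rate variance less the average Poisson noise.
    mean_rate = rates.mean()
    var_lambda = max(rates.var(ddof=1) - np.mean(mean_rate / exposures), 1e-12)
    beta = mean_rate / var_lambda
    alpha = mean_rate * beta

    # Conjugate update: the posterior for hazard i is Gamma(alpha + x_i, beta + t_i),
    # so even a hazard with zero observed events receives a non-zero rate estimate.
    post_shape = alpha + counts
    post_rate = beta + exposures
    lo, hi = (1.0 - interval) / 2.0, 1.0 - (1.0 - interval) / 2.0
    lower = stats.gamma.ppf(lo, post_shape, scale=1.0 / post_rate)
    upper = stats.gamma.ppf(hi, post_shape, scale=1.0 / post_rate)
    return post_shape / post_rate, lower, upper

# Illustrative use with made-up counts and exposures.
means, lower, upper = eb_gamma_poisson([0, 1, 3, 0, 2], [12.0, 8.0, 30.0, 5.0, 20.0])
```

    The interval estimates here are simply credible intervals from the fitted gamma posterior; the paper discusses the choice of interval estimation procedure, which this sketch does not attempt to reproduce.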

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process. Comment: this paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288], with a rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Specific and General Nonsense?

    In a previous article, I dealt with the argument that the present law on the intoxication defence was well-founded on legal authority and concluded that it was not. I then suggested that those wishing to uphold the present law as represented by Leary v. The Queen and D.P.P. v. Majewski would have to find support in other arguments. The purpose of this article is therefore to examine those arguments to see whether they provide sufficient ground for the current state of the law in Canada and England. In particular, the specific-general intent dichotomy will be examined in this light.

    A Shorn Beard

    One of the prominent features of the common law is the concept of stare decisis. As a mechanism to provide certainty and predictability in the law, it is invaluable. Nonetheless, the doctrine of binding precedent, essential though it is to the orderly development of the law, can be misused. At times, the disingenuous application of stare decisis can lead to severe distortion of the law from what was actually meant in the case being cited as authority. Such, I submit, is the case with the intoxication rules.

    The Impact of the Charter on the Law of Search and Seizure

    This paper provides an overview of the impact of the Canadian Charter of Rights and Freedoms on the law of search and seizure. Prior to 1982, there were few remedies available if the state authorities failed to comply with the law regarding searches and seizures. This situation changed with the passage of the Charter. Courts became empowered to strike down laws governing search and seizure if they did not comply with constitutional standards, in particular the protection against unreasonable search or seizure in section 8. Perhaps more important, there was now the possibility of excluding evidence obtained in the course of a Charter violation under section 24(2). The Supreme Court began the Charter era by making important pronouncements on the purpose behind section 8 — to protect privacy interests, rather than property — and by declaring a preference for a warrant issued by a judge as authorization for a search or seizure if it was feasible to obtain such prior authorization. Parliament and legislatures seemed to adapt reasonably easily to this regime. Unfortunately, after these initial principles were established and applied in some subsequent cases, the law of search and seizure began to revert to a more property-based approach and, in some other respects, to depart from the framework that was established in the beginning of Charter jurisprudence. This was accomplished through a narrowing of the class of persons who might claim a reasonable expectation of privacy and by holding that the Charter has no application to certain types of intrusions by state officials. Finally, the paper assesses the extent to which the admission or exclusion of evidence after a Charter violation has been proved has strengthened or weakened the constitutional protection afforded by section 8.

    Historical Exploration - Learning Lessons from the Past to Inform the Future

    This report examines a number of exploration campaigns that have taken place during the last 700 years, and considers them from a risk perspective. The explorations are those led by Christopher Columbus, Sir Walter Raleigh, John Franklin, Sir Ernest Shackleton, the Company of Scotland to Darien and the Apollo project undertaken by NASA. To provide a wider context for investigating the selected exploration campaigns, we seek ways of finding analogies at mission, programmatic and strategic levels and thereby develop common themes. Ultimately, the purpose of the study is to understand how risk has shaped past explorations, in order to learn lessons for the future. From this, we begin to identify and develop tools for assessing strategic risk in future explorations. Figure 0.1 (see Page 6) summarizes the key inputs used to shape the study, the process and the results, and provides a graphical overview of the methodology used in the project. The first step was to identify the potential cases that could be assessed and to create criteria for selection. These criteria were collaboratively developed through discussion with a Business Historian. From this, six cases were identified as meeting our key criteria. Preliminary analysis of two of the cases allowed us to develop an evaluation framework that was used across all six cases to ensure consistency. This framework was revised and developed further as all six cases were analyzed. A narrative and summary statistics were created for each exploration case studied, in addition to a method for visualizing the important dimensions that capture major events. These Risk Experience Diagrams illustrate how the realizations of events, linked to different types of risks, have influenced the historical development of each exploration campaign. From these diagrams, we can begin to compare risks across each of the cases using a common framework. In addition, exploration risks were classified in terms of mission, program and strategic risks. From this, a Venn diagram and Belief Network were developed to identify how different exploration risks interacted. These diagrams allow us to quickly view the key risk drivers and their interactions in each of the historical cases. By looking at the context in which individual missions take place, we have been able to observe the dynamics within an exploration campaign, and gain an understanding of how these interact with influences from stakeholders and competitors. A qualitative model has been created to capture how these factors interact, and are further challenged by unwanted events such as mission failures and competitor successes. This Dynamic Systemic Risk Model is generic and applies broadly to all the exploration ventures studied. The model is an amalgamation of a System Dynamics model, which incorporates the natural feedback loops within each exploration mission, and a risk model, which ensures that unforeseen events can be incorporated into the modeling. Finally, an overview is given of the motivational drivers and summaries are presented of the overall costs borne in each exploration venture. An important observation is that all the cases - with the exception of Apollo - were failures in terms of meeting their original objectives. However, despite this, several were strategic successes and indeed changed goals as needed in an entrepreneurial way.
    The Risk Experience Diagrams developed for each case were used to quantitatively assess which risks were realized most often during our case studies and to draw comparisons at mission, program and strategic levels. In addition, using the Risk Experience Diagrams and the narrative of each case, specific lessons for future exploration were identified. There are three key conclusions to this study: Analyses of historical cases have shown that there exists a set of generic risk classes. This set of risk classes covers mission, program and strategic levels, and includes all the risks encountered in the cases studied. At mission level these are Leadership Decisions, Internal Events and External Events; at program level these are Lack of Learning, Resourcing and Mission Failure; at strategic level they are Programmatic Failure, Stakeholder Perception and Goal Change. In addition, there are two further risks that impact at all levels: Self-Interest of Actors, and False Model. There is no reason to believe that these risk classes will not be applicable to future exploration and colonization campaigns. We have deliberately selected a range of different exploration and colonization campaigns, taking place between the 15th Century and the 20th Century. The generic risk framework is able to describe the significant types of risk for these missions. Furthermore, many of these risks relate to how human beings interact and learn lessons to guide their future behavior. Although we are better schooled than our forebears and are technically further advanced, there is no reason to think we are fundamentally better at identifying, prioritizing and controlling these classes of risk. Modern risk modeling techniques are capable of addressing mission and program risk but are not as well suited to strategic risk. We have observed that strategic risks are prevalent throughout historic exploration and colonization campaigns. However, systematic approaches to analyzing such risks do not currently exist. A risk-informed approach to understanding what happened in the past helps us guard against the danger of assuming that those events were inevitable, and highlights those chance events that produced the history that the world experienced. In turn, it allows us to learn more clearly from the past about the way our modern risk modeling techniques might help us to manage the future - and also bring to light those areas where they may not. This study has been retrospective. Based on this analysis, the potential for developing the work in a prospective way by applying the risk models to future campaigns is discussed. Follow-on work from this study will focus on creating a portfolio of tools for assessing strategic and programmatic risk.

    Empirical Bayes Methods for Discrete Event Simulation Performance Measure Estimation

    Discrete event simulation (DES) is a widely used operational research methodology facilitating the analysis of complex real-world systems. Although, generally speaking, simplicity is greatly desirable in DES modelling applications, in many cases the nature of the underlying system results in simulation models which are large in scale, complex, and expensive to run. As such, the careful design and analysis of simulation experiments is essential to ensure valid and efficient inference concerning DES model performance measures. It is envisaged that empirical Bayes (EB) methods, which enable data to be pooled across a set of populations to support inference of the parameters of a single population, may be of use within this context. Despite this potential, EB has so far been neglected within the DES literature. This paper presents a preliminary computational investigation into the efficacy of EB procedures in the estimation of DES performance measures. The results of this investigation, and their significance, are explored. Finally, likely directions for future research are addressed.
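
    The abstract does not detail the EB procedure investigated, so here is a minimal sketch of one way EB pooling is commonly applied to this kind of problem: normal-normal shrinkage of replication means across a set of related scenarios, with the prior fitted by moment matching. The function name, the moment-matching step and the synthetic replication data are assumptions made for illustration, not the paper's method.

```python
import numpy as np

def eb_shrunk_means(replications):
    """Empirical Bayes (normal-normal) shrinkage of DES performance-measure estimates.

    replications[k] -- array of output values (e.g. mean waiting times) from
                       independent replications of scenario k.
    Each scenario's sample mean is shrunk toward the pooled mean, with more
    shrinkage when its own replications are few or noisy.
    """
    sample_means = np.array([np.mean(r) for r in replications])
    sampling_vars = np.array([np.var(r, ddof=1) / len(r) for r in replications])

    # Hyperparameters of the assumed normal prior over the true scenario means,
    # fitted by moment matching across the pool of scenarios.
    mu = sample_means.mean()
    tau2 = max(sample_means.var(ddof=1) - sampling_vars.mean(), 1e-12)

    # Posterior (shrunken) estimate for each scenario.
    weight = tau2 / (tau2 + sampling_vars)
    return weight * sample_means + (1.0 - weight) * mu

# Illustrative use with three scenarios and a handful of expensive replications each.
rng = np.random.default_rng(1)
est = eb_shrunk_means([rng.normal(10.0, 2.0, size=5),
                       rng.normal(12.0, 2.0, size=4),
                       rng.normal(11.0, 2.0, size=6)])
```

    Scenarios with few or noisy replications borrow more strength from the pool; whether this improves estimation for a given DES model is the kind of question the paper's computational investigation addresses.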

    Merging expert and empirical data for rare event frequency estimation: pool homogenisation for empirical Bayes models

    Empirical Bayes provides one approach to estimating the frequency of rare events as a weighted average of the frequencies of an event and a pool of events. The pool will draw upon, for example, events with similar precursors. The higher the degree of homogeneity of the pool, the more accurate the empirical Bayes estimator will be. We propose and evaluate a new method using homogenisation factors under the assumption that events are generated from a homogeneous Poisson process. The homogenisation factors are scaling constants, which can be elicited through structured expert judgement and used to align the frequencies of different events, hence homogenising the pool. The estimation error relative to the homogeneity of the pool is examined theoretically, indicating that reduced error is associated with greater pool homogeneity. The effects of misspecified expert assessments of the homogenisation factors are examined theoretically and through simulation experiments. Our results show that the proposed empirical Bayes method using homogenisation factors is robust under different degrees of misspecification.
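
    As a rough illustration of the homogenisation idea, the sketch below rescales each pooled event's exposure by an expert-elicited factor so that, under the homogeneous Poisson process assumption, the pooled records are aligned to the target event's rate scale before an empirical Bayes estimator is applied. This reading of the homogenisation factors, and the example numbers, are assumptions for illustration; the paper's exact construction and its error analysis are not reproduced here.

```python
import numpy as np

def homogenise_pool(counts, exposures, factors):
    """Align a pool of event records to a target event's rate scale.

    factors[j] -- expert-elicited homogenisation factor: roughly how many times
                  larger event j's frequency is believed to be than the target's.
    Under a homogeneous Poisson process, x_j events in exposure t_j at rate
    c_j * lambda are equivalent in expectation to x_j events in exposure
    c_j * t_j at the target rate lambda, so exposures are rescaled.
    """
    counts = np.asarray(counts, dtype=float)
    exposures = np.asarray(exposures, dtype=float) * np.asarray(factors, dtype=float)
    return counts, exposures

# Illustrative use: homogenise, then feed the aligned pool into an empirical Bayes
# rate estimator such as the gamma-Poisson sketch given earlier in this listing.
counts, exposures = homogenise_pool(counts=[0, 4, 2],
                                    exposures=[10.0, 6.0, 15.0],
                                    factors=[1.0, 5.0, 0.5])
```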

    Measuring hidden phenotype: Quantifying the shape of barley seeds using the Euler characteristic transform

    Shape plays a fundamental role in biology. Traditional phenotypic analysis methods measure some features but fail to capture comprehensively the information embedded in shape. To extract, compare and analyse this embedded information in a robust and concise way, we turn to topological data analysis (TDA), specifically the Euler characteristic transform. TDA measures shape comprehensively using mathematical representations based on algebraic topology features. To study its use, we compute both traditional and topological shape descriptors to quantify the morphology of 3121 barley seeds scanned with X-ray computed tomography (CT) technology at 127 μm resolution. The Euler characteristic transform measures shape by analysing topological features of an object at thresholds across a number of directional axes. A Kruskal-Wallis analysis of the information encoded by the topological signature reveals that the Euler characteristic transform successfully picks up the shape of the crease and bottom of the seeds. Moreover, while traditional shape descriptors can cluster the seeds based on their accession, topological shape descriptors can cluster them further based on their panicle. We then successfully train a support vector machine to classify 28 different accessions of barley based exclusively on the shape of their grains. We observe that combining both traditional and topological descriptors classifies barley seeds better than using traditional descriptors alone. This improvement suggests that TDA is a powerful complement to traditional morphometrics, comprehensively describing a multitude of 'hidden' shape nuances which are otherwise not detected.
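
    For context on how an Euler characteristic transform is computed, the sketch below works through a 2D binary image: for each direction and threshold it takes the Euler characteristic of the set of filled pixels whose centres lie below that threshold along the direction. The study itself works with 3D X-ray CT volumes (and dedicated TDA libraries exist for this), so the 2D setting, the connectivity convention, the direction/threshold counts and the synthetic 'seed' are all illustrative assumptions.

```python
import numpy as np

def euler_characteristic(mask):
    """Euler characteristic of a 2D binary pixel set (4-connectivity convention:
    pixels touching only at a corner are not glued together)."""
    m = mask.astype(bool)
    pixels = m.sum()
    h_edges = (m[:, :-1] & m[:, 1:]).sum()          # horizontally adjacent pairs
    v_edges = (m[:-1, :] & m[1:, :]).sum()          # vertically adjacent pairs
    squares = (m[:-1, :-1] & m[:-1, 1:] & m[1:, :-1] & m[1:, 1:]).sum()  # filled 2x2 blocks
    return int(pixels - h_edges - v_edges + squares)

def euler_characteristic_transform(mask, n_directions=16, n_thresholds=32):
    """ECT of a 2D binary image: for each direction v and threshold t, the Euler
    characteristic of the sublevel set of filled pixels whose centre x satisfies x.v <= t."""
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(float)
    angles = np.linspace(0.0, 2.0 * np.pi, n_directions, endpoint=False)
    ect = np.zeros((n_directions, n_thresholds), dtype=int)
    for i, a in enumerate(angles):
        heights = coords @ np.array([np.cos(a), np.sin(a)])
        thresholds = np.linspace(heights.min(), heights.max(), n_thresholds)
        for j, t in enumerate(thresholds):
            sub = np.zeros_like(mask, dtype=bool)
            keep = heights <= t
            sub[ys[keep], xs[keep]] = True
            ect[i, j] = euler_characteristic(sub)
    return ect

# Illustrative use on a small synthetic "seed": a filled ellipse on a 64x64 grid.
yy, xx = np.mgrid[0:64, 0:64]
seed = ((xx - 32) / 24.0) ** 2 + ((yy - 32) / 14.0) ** 2 <= 1.0
signature = euler_characteristic_transform(seed)
```

    Concatenating the Euler characteristic curves over all directions gives the kind of topological signature that, in the paper, feeds the Kruskal-Wallis analysis and the support vector machine classifier.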