    Accident Analysis and Prevention: Course Notes 1987/88

    This report consists of the notes from a series of lectures given by the authors for a course entitled Accident Analysis and Prevention. The course took place during the second term of a one-year Masters degree course in Transport Planning and Engineering run by the Institute for Transport Studies and the Department of Civil Engineering at the University of Leeds. The course consisted of 18 lectures, 16 of which are reported in this document (the remaining two, on Human Factors, are omitted as no notes were provided). Each lecture represents one chapter, except in two instances where two lectures are covered in a single chapter (Chapters 10 and 14). The course first took place in 1988 and, at the date of publication, has been run for a second time. This report contains the notes for the initial version of the course. A number of changes were made to the content and emphasis of the course during its second run, mainly due to a change of personnel with different ideas and experience in the field of accident analysis and prevention. It is likely that each time the course is run there will be significant changes, but the notes provided in this document can be considered to contain a number of the core elements of any future version of the course.

    Statistical approaches to the surveillance of infectious diseases for veterinary public health

    This technical report covers the use of statistical methodology for the monitoring of routinely collected surveillance data in veterinary public health. An account of the Farrington algorithm and Poisson cumulative sum schemes for the detection of aberrations is given, with special attention devoted to the occurrence of seasonality and spatial aggregation of the time series. Modelling approaches for retrospective analysis of surveillance counts are described. To illustrate the applicability of the methodology in veterinary public health, data from the surveillance of rabies among foxes in Hesse, Germany, are analysed.
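    As a flavour of the aberration-detection schemes discussed, the sketch below implements a one-sided Poisson CUSUM; it is not the Farrington algorithm (reference implementations of both families of methods exist in the R surveillance package), and the baseline mean, tuning constants, and counts are illustrative assumptions.

```python
import numpy as np

def poisson_cusum(counts, mu0, mu1, h):
    """One-sided tabular CUSUM for Poisson counts.

    counts : observed case counts per time unit
    mu0    : in-control (baseline) mean count
    mu1    : out-of-control mean the scheme is tuned to detect
    h      : decision threshold
    """
    # Optimal reference value for the Poisson likelihood-ratio CUSUM
    k = (mu1 - mu0) / np.log(mu1 / mu0)
    s, alarms = 0.0, []
    for t, y in enumerate(counts):
        s = max(0.0, s + y - k)   # accumulate evidence of an elevated rate
        if s > h:
            alarms.append(t)
            s = 0.0               # restart the scheme after an alarm
    return alarms

# Hypothetical monthly counts: baseline of ~2 cases, then a rise at the end
counts = [2, 1, 3, 2, 2, 1, 2, 5, 6, 7]
print(poisson_cusum(counts, mu0=2.0, mu1=5.0, h=4.0))   # -> [8]
```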

    An investigation of estimation performance for a multivariate Poisson-gamma model with parameter dependency

    Statistical analysis can be overly reliant on naive assumptions of independence between different data generating processes. This results in greater uncertainty when estimating the underlying characteristics of processes, as dependency creates an opportunity to boost the sample size by incorporating more data into the analysis. However, this assumes that the dependency has been appropriately specified, as a mis-specified dependency can provide misleading information from the data. The main aim of this research is to investigate the impact of incorporating dependency into the data analysis. Our motivation for this work is concerned with estimating the reliability of items, and as such we have restricted our investigation to homogeneous Poisson processes (HPP), which can be used to model the rate of occurrence of events such as failures. In an HPP, dependency between rates can occur for numerous reasons: similarity in mechanical design, failure occurrence due to a common management culture, or comparable failure counts across machines for the same failure modes. Multiple types of dependency are considered. Dependencies can take different forms, such as simple linear dependency measured through the Pearson correlation, rank dependencies which capture non-linear dependencies, and tail dependencies where the strength of the dependency may be stronger in extreme events than in more moderate ones. Estimating the measure of dependency between correlated processes can be challenging. We develop the research within a Bayes or empirical Bayes inferential framework, where uncertainty in the actual rate of occurrence of a process is modelled with a prior probability distribution. We take the prior distributions to be Gamma distributions, given their flexibility and mathematical association with the Poisson process. For dependency modelling between processes we consider copulas, which are a convenient and flexible way of capturing a variety of different dependency characteristics between distributions. We use a multivariate Poisson-Gamma probability model: the Poisson process captures aleatory uncertainty (the inherent variability in the data), whereas the Gamma prior describes the epistemic uncertainty. By pooling processes with correlated underlying mean rates we are able to incorporate data from these processes into the inference and reduce the estimation error. There are three key research themes investigated in this thesis. First, to investigate the value of reducing estimation error by incorporating dependency within the analysis, via theoretical analysis and simulation experiments. We show that correctly accounting for dependency can significantly reduce the estimation error. The findings should inform analysts a priori as to whether it is worth pursuing a more complex analysis for which the dependency parameter needs to be elicited. Second, to examine the consequences of mis-specifying the degree and form of dependency, through controlled simulation experiments. We show the relative robustness of different ways of modelling the dependency using copula and Bayesian methods. The findings should inform analysts about the sensitivity of modelling choices. Third, to show how we can operationalise different methods for representing dependency through an industry case study. We show the consequences for a simple decision problem associated with the provision of spare parts to maintain operation of the industrial process when dependency between the event rates of the machines is appropriately modelled rather than being treated as independent processes.
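    The structure of the multivariate Poisson-Gamma model can be illustrated with a short simulation. Below is a minimal sketch, not the thesis's actual implementation: two failure rates are drawn from Gamma priors linked by a Gaussian copula (the correlation rho, prior parameters, and exposure time are illustrative assumptions), counts are generated from Poisson distributions, and the standard conjugate posterior mean, which ignores the dependency, is evaluated as a baseline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Gaussian copula linking two Gamma-distributed failure rates
rho = 0.8                                   # assumed copula correlation
a, b = 2.0, 1.0                             # assumed Gamma(shape, rate) prior
cov = [[1.0, rho], [rho, 1.0]]

z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)                       # correlated uniforms
lam = stats.gamma.ppf(u, a, scale=1.0 / b)  # correlated rates via inverse CDF

# Each machine's failure count over exposure time T is Poisson(lam * T)
T = 5.0
counts = rng.poisson(lam * T)

# Conjugate posterior mean for machine 1's rate, Gamma(a + n1, b + T);
# this treats the machines as independent and uses no pooled information
post_mean_indep = (a + counts[:, 0]) / (b + T)
print("RMSE ignoring dependency:",
      np.sqrt(np.mean((post_mean_indep - lam[:, 0]) ** 2)))
```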

    What is the impact of duplicate coverage on the demand for health care in Germany?

    Duplicate coverage refers to individuals who hold public health insurance and purchase additional private coverage. Using data from the German Institute for Economic Research, we investigate the impact of duplicate coverage on the demand for health care (measured as the number of visits to doctors). Given the simultaneity of the choice to take out additional private health insurance coverage, we estimate a negative binomial model to measure this impact. We also estimate a Full Information Maximum Likelihood (FIML) model, known as the Endogenous Switching Poisson Count Model, and compare these results with the standard maximum likelihood (ML) estimates of the negative binomial model. The results show that there is a positive difference in the level of health services demanded when there is duplicate coverage. We also find evidence of a feedback between duplicate coverage and the demand for health services.
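    For a concrete sense of the baseline count model, here is a minimal sketch of a negative binomial regression of doctor visits on a duplicate-coverage dummy using statsmodels; the covariates, coefficients, and data are simulated placeholders, and the sketch does not implement the FIML endogenous switching estimator.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical covariates: duplicate-coverage dummy, age, self-rated health
duplicate = rng.binomial(1, 0.2, n)
age = rng.uniform(20, 80, n)
health = rng.normal(0, 1, n)

# Simulate overdispersed visit counts with an assumed positive coverage effect
mu = np.exp(0.3 + 0.25 * duplicate + 0.01 * age - 0.4 * health)
visits = rng.negative_binomial(n=2, p=2 / (2 + mu))   # mean mu, overdispersed

X = sm.add_constant(np.column_stack([duplicate, age, health]))
model = sm.NegativeBinomial(visits, X).fit(disp=0)
print(model.summary())   # the coefficient on x1 is the coverage effect
```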

    Guidelines for Analysis of Data Related to Ageing of Nuclear Power Plant Components and Systems

    This guideline is intended to provide practical methods for practitioners to use in analyzing component and system reliability data, with a focus on the detection and modeling of ageing. The emphasis is on frequentist and Bayesian approaches, implemented with MS Excel and the open-source software package WinBUGS. The methods described in this document can also be implemented with other software packages.
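    As one example of the kind of frequentist check such a guideline covers, the sketch below implements the classical Laplace (centroid) trend test for ageing in failure-time data; the failure record is hypothetical, and this is a generic illustration rather than the document's exact procedure.

```python
import math

def laplace_trend_test(times, T):
    """Laplace test for trend in a time-truncated point process.

    times : failure times in (0, T]
    T     : end of the observation window
    Returns U; under a homogeneous Poisson process U ~ N(0, 1),
    and a large positive U indicates ageing (an increasing rate).
    """
    n = len(times)
    return (sum(times) / n - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))

# Hypothetical failure record: events cluster late in the window
failures = [120, 340, 510, 620, 700, 745, 780, 800]
print(laplace_trend_test(failures, T=820))   # ~1.99, suggestive of ageing
```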

    Core-collapse astrophysics with a five-megaton neutrino detector

    The legacy of solar neutrinos suggests that large neutrino detectors should be sited underground. However, to instead go underwater bypasses the need to move mountains, allowing much larger water Čerenkov detectors. We show that reaching a detector mass scale of ~5 megatons, the size of the proposed Deep-TITAND, would permit observations of neutrino “mini-bursts” from supernovae in nearby galaxies on a roughly yearly basis, and we develop the immediate qualitative and quantitative consequences. Importantly, these mini-bursts would be detected over backgrounds without the need for optical evidence of the supernova, guaranteeing the beginning of time-domain MeV neutrino astronomy. The ability to identify, to the second, every core collapse in the local Universe would allow a continuous “death watch” of all stars within ~5 Mpc, making practical many previously impossible tasks in probing rare outcomes and refining coordination of multiwavelength/multiparticle observations and analysis. These include the abilities to promptly detect otherwise-invisible prompt black hole formation, provide advance warning for supernova shock-breakout searches, define tight time windows for gravitational-wave searches, and identify “supernova impostors” by the nondetection of neutrinos. Observations of many supernovae, even with low numbers of detected neutrinos, will help answer questions about supernovae that cannot be resolved with a single high-statistics event in the Milky Way.
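    The quoted mini-burst rates follow from simple scaling: the expected event count grows linearly with detector mass and falls as the inverse square of distance. The sketch below uses the commonly cited benchmark of roughly 8,000 events in Super-Kamiokande's ~32 kton from a supernova at 10 kpc; that benchmark and the example numbers are rough assumptions for illustration only.

```python
# Expected events scale linearly with detector mass and as 1/d^2.
# Benchmark (rough literature value, assumed here): a Galactic supernova
# at 10 kpc yields ~8,000 events in Super-Kamiokande's ~32 kton.
N_REF, M_REF_KT, D_REF_KPC = 8_000.0, 32.0, 10.0

def expected_events(mass_kt, distance_kpc):
    return N_REF * (mass_kt / M_REF_KT) * (D_REF_KPC / distance_kpc) ** 2

# A 5 Mt (5,000 kton) detector and a supernova at 4 Mpc (4,000 kpc)
print(expected_events(5_000.0, 4_000.0))   # ~8 events: a "mini-burst"
```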