
    Evaluating Density Forecasts

    We propose methods for evaluating density forecasts. We focus primarily on methods that are applicable regardless of the particular user's loss function. We illustrate the methods with a detailed simulation example, and then we present an application to density forecasting of daily stock market returns. We discuss extensions for improving suboptimal density forecasts, multi-step-ahead density forecast evaluation, multivariate density forecast evaluation, monitoring for structural change and its relationship to density forecasting, and density forecast evaluation with known loss function.
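The standard loss-function-free check in this literature is the probability integral transform (PIT): if the one-step-ahead forecast densities are correct, the PITs of the realizations are i.i.d. uniform on (0, 1). A minimal illustration of that idea (the function names and the chi-square-style statistic below are our own sketch, not code from the paper):

```python
import math
import numpy as np

def norm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of a normal distribution, computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def pit_values(realizations, forecast_cdfs):
    """Probability integral transform: evaluate each one-step-ahead
    forecast CDF at the corresponding realized value."""
    return np.array([cdf(y) for y, cdf in zip(realizations, forecast_cdfs)])

def uniformity_stat(pit, bins=10):
    """Chi-square-style statistic comparing the PIT histogram with the
    flat histogram a correctly specified density forecast would produce."""
    counts, _ = np.histogram(pit, bins=bins, range=(0.0, 1.0))
    expected = len(pit) / bins
    return float(np.sum((counts - expected) ** 2 / expected))

# If the data really are standard normal and we forecast N(0, 1),
# the PITs should look uniform and the statistic should stay small.
rng = np.random.default_rng(0)
y = rng.standard_normal(2000)
pit = pit_values(y, [norm_cdf] * len(y))
stat = uniformity_stat(pit)
```

A miscalibrated forecast (say, forecasting N(0, 4) for N(0, 1) data) piles PIT mass near 0.5 and inflates the statistic, which is what the evaluation is designed to detect.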

    Using Topological Data Analysis for diagnosing pulmonary embolism

    Pulmonary Embolism (PE) is a common and potentially lethal condition. Most patients die within the first few hours after the event. Despite diagnostic advances, delays and underdiagnosis of PE remain common. To increase diagnostic performance in PE, the current diagnostic work-up of patients with suspected acute pulmonary embolism usually starts with an assessment of clinical pretest probability using plasma d-dimer measurement and clinical prediction rules. The most validated and widely used clinical decision rules are the Wells and revised Geneva scores. We aimed to develop a new clinical prediction rule (CPR) for PE based on topological data analysis and an artificial neural network. Filter and wrapper methods for feature reduction could not be applied to our dataset, because these algorithms require datasets without missing data. Instead, we applied topological data analysis (TDA) to overcome the hurdle of processing a dataset with missing values. A topological network was developed using the Iris software (Ayasdi, Inc., Palo Alto). The PE patient topology identified two areas in the pathological group, and hence two distinct clusters of PE patient populations. Additionally, the topological network detected several sub-groups among healthy patients who are likely affected by non-PE diseases. TDA was further used to identify the features most strongly associated with a PE diagnosis, and this information defined the input space for a back-propagation artificial neural network (BP-ANN). We show that the area under the curve (AUC) of the BP-ANN is greater than the AUCs of the scores (Wells and revised Geneva) used by physicians. The results demonstrate that topological data analysis and the BP-ANN, used in combination, can produce better predictive models than the Wells or revised Geneva score systems for the analyzed cohort.
    Comment: 18 pages, 5 figures, 6 tables. arXiv admin note: text overlap with arXiv:cs/0308031 by other authors without attribution.
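The headline comparison in this abstract rests on the area under the ROC curve. As a reminder of what that number measures, here is a minimal AUC computation via the Mann-Whitney rank identity (an illustrative sketch, not the authors' code):

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U identity: the probability that a
    randomly chosen positive case scores higher than a randomly
    chosen negative case, with half credit for ties."""
    labels = np.asarray(labels, dtype=bool)
    pos = np.asarray(scores, dtype=float)[labels]
    neg = np.asarray(scores, dtype=float)[~labels]
    # count wins and ties over all positive/negative pairs
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

A clinical rule like Wells or revised Geneva produces a score per patient; a higher AUC for the BP-ANN means its scores rank PE patients above non-PE patients more reliably than those rules do.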

    Can the Heinrich ratio be used to predict harm from medication errors?

    The purpose of this study was to establish whether, for medication errors, there exists a fixed Heinrich ratio between the number of incidents that caused no harm, the number that caused minor harm, and the number that caused serious harm. If such a ratio existed, it would be very useful for estimating changes in harm following an intervention. Serious harm resulting from medication errors is relatively rare, so detecting a significant change can take a great deal of time and resources. If the Heinrich ratio holds for medication errors, it would be possible, and far easier, to measure the much more frequent incidents that caused no harm and the extent to which they change following an intervention; any reduction in harm could then be extrapolated from this.
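The extrapolation described here is simple arithmetic once a fixed ratio is assumed. A hypothetical sketch, using Heinrich's classic 300:29:1 industrial-accident triangle purely as a placeholder (the study's whole question is whether any such fixed ratio actually exists for medication errors):

```python
def extrapolate_harm(no_harm_count, ratio=(300, 29, 1)):
    """Given an assumed fixed no-harm : minor-harm : serious-harm ratio,
    estimate minor- and serious-harm counts from the observed (and much
    more frequent) no-harm incident count. The default 300:29:1 is
    Heinrich's original triangle, used here only for illustration."""
    no_harm, minor, serious = ratio
    scale = no_harm_count / no_harm
    return minor * scale, serious * scale

# 600 observed no-harm incidents under a 300:29:1 ratio would imply
# about 58 minor-harm and 2 serious-harm incidents.
minor_est, serious_est = extrapolate_harm(600)
```

The appeal is that a post-intervention drop in the large no-harm count could stand in for a drop in the rare serious-harm count, which would otherwise take far longer to measure directly.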

    Evaluating density forecasts

    The authors propose methods for evaluating and improving density forecasts. They focus primarily on methods that are applicable regardless of the particular user's loss function, though they take explicit account of the relationships between density forecasts, action choices, and the corresponding expected loss throughout. They illustrate the methods with a detailed series of examples, and they discuss extensions to improving and combining suboptimal density forecasts, multi-step-ahead density forecast evaluation, multivariate density forecast evaluation, monitoring for structural change and its relationship to density forecasting, and density forecast evaluation with known loss function.

    The substantive and practical significance of citation impact differences between institutions: Guidelines for the analysis of percentiles using effect sizes and confidence intervals

    In our chapter we address the statistical analysis of percentiles: how should the citation impact of institutions be compared? In educational and psychological testing, percentiles are already widely used as a standard for evaluating an individual's test scores (intelligence tests, for example) by comparing them with the percentiles of a calibrated sample. Percentiles, or percentile rank classes, are also a very suitable method in bibliometrics for normalizing the citations of publications by subject category and publication year and, unlike mean-based indicators (relative citation rates), percentiles are scarcely affected by skewed citation distributions. The percentile of a publication indicates the citation impact it has achieved in comparison with similar publications in the same subject category and publication year. Analyses of percentiles, however, have not always been presented in the most effective and meaningful way. New APA guidelines (American Psychological Association, 2010) suggest a lesser emphasis on significance tests and a greater emphasis on the substantive and practical significance of findings. Drawing on work by Cumming (2012), we show how examinations of effect sizes (e.g. Cohen's d statistic) and confidence intervals can lead to a clear understanding of citation impact differences.
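The quantities the chapter recommends are easy to compute directly. A minimal sketch of Cohen's d (pooled-SD form) and a simple confidence interval for a mean; the normal-approximation interval with z = 1.96 is our simplification, not the exact constructions discussed in Cumming (2012):

```python
import math

def cohens_d(sample_a, sample_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

def mean_ci95(sample):
    """Approximate 95% CI for the mean (normal approximation, z = 1.96)."""
    n = len(sample)
    m = sum(sample) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in sample) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return m - half, m + half

# Hypothetical percentile samples for two institutions' publications:
inst_a = [1, 2, 3, 4, 5]
inst_b = [3, 4, 5, 6, 7]
d = cohens_d(inst_a, inst_b)
lo, hi = mean_ci95(inst_a)
```

Reporting d with an interval, rather than only a p-value, conveys both the size and the uncertainty of the citation impact difference, which is the chapter's central recommendation.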