654 research outputs found

    Short-term climate response to a freshwater pulse in the Southern Ocean

    The short-term response of the climate system to a freshwater anomaly in the Southern Ocean is investigated using a coupled global climate model. As a result of the anomaly, ventilation of deep waters around Antarctica is inhibited, causing a warming of the deep ocean and a cooling of the surface. The surface cooling causes Antarctic sea ice to thicken and increase in extent, and this leads to a cooling of Southern Hemisphere surface air temperature. The surface cooling increases over the first 5 years, then remains constant over the next 5 years. There is a more rapid response in the Pacific Ocean, which transmits a signal to the Northern Hemisphere, ultimately causing a shift to the negative phase of the North Atlantic Oscillation in years 5–10.

    On binary reflected Gray codes and functions

    The binary reflected Gray code function b is defined as follows: if m is a nonnegative integer, then b(m) is the integer obtained when initial zeros are omitted from the binary reflected Gray code of m. This paper examines this Gray code function and its inverse and gives simple algorithms to generate both. It also simplifies Conder's result that the jth letter of the kth word of the binary reflected Gray code of length n is C(2^n − 2^{n−j} − 1, ⌊2^n − 2^{n−j−1} − k/2⌋) mod 2 by replacing the binomial coefficient by ⌊(k−1)/2^{n−j+1} + 1/2⌋.
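    The Gray code function, its inverse, and the abstract's simplified letter formula all admit very short algorithms. A minimal Python sketch (function names are mine; the last function encodes the floor expression ⌊(k−1)/2^(n−j+1) + 1/2⌋ mod 2, with words indexed k = 1..2^n and letters j = 1..n from the left):

```python
def gray(m: int) -> int:
    """Binary reflected Gray code of m, as an integer (leading zeros dropped)."""
    return m ^ (m >> 1)

def gray_inverse(g: int) -> int:
    """Invert the Gray code: XOR together all right-shifts of g."""
    m = 0
    while g:
        m ^= g
        g >>= 1
    return m

def jth_letter(n: int, k: int, j: int) -> int:
    """j-th letter (1-indexed from the left) of the k-th word (k = 1..2**n)
    of the length-n code, via floor((k-1)/2**(n-j+1) + 1/2) mod 2,
    written with integer arithmetic only."""
    return ((k - 1 + 2 ** (n - j)) // 2 ** (n - j + 1)) % 2
```

For example, `gray(5)` is 7 (101 → 111), and `jth_letter` agrees with reading the corresponding bit of `gray(k - 1)` directly.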

    Targeted validation: validating clinical prediction models in their intended population and setting

    Clinical prediction models must be appropriately validated before they can be used. While validation studies are sometimes carefully designed to match the intended population/setting of the model, it is common for validation studies to use arbitrary datasets, chosen for convenience rather than relevance. We call estimating how well a model performs within its intended population/setting "targeted validation". Use of this term sharpens the focus on the intended use of a model, which may increase the applicability of developed models, avoid misleading conclusions, and reduce research waste. It also exposes that external validation may not be required when the intended population for the model matches the population used to develop it; here, a robust internal validation may be sufficient, especially if the development dataset was large.
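    The "robust internal validation" the abstract mentions is often operationalised as bootstrap optimism correction: refit the model on bootstrap resamples, measure how much apparent performance exceeds performance on the original data, and subtract that average optimism. A minimal numpy sketch under invented data (the logistic fit, AUC computation, and bootstrap count are all illustrative choices, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(X, y, iters=200, lr=0.1):
    # plain gradient-ascent logistic regression (illustrative, not production)
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ w))
        w += lr * X1.T @ (y - p) / len(y)
    return w

def predict(w, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return 1 / (1 + np.exp(-X1 @ w))

def auc(scores, y):
    # probability a random case outranks a random non-case (Mann-Whitney form)
    pos, neg = scores[y == 1], scores[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

# development sample drawn from the intended population
n = 400
X = rng.normal(size=(n, 2))
y = (rng.random(n) < 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))).astype(float)

w = fit_logistic(X, y)
apparent = auc(predict(w, X), y)

# bootstrap optimism: performance on the resample minus performance on the
# original data, averaged over resamples, then subtracted from the apparent AUC
optimism = []
for _ in range(50):
    idx = rng.integers(0, n, n)
    wb = fit_logistic(X[idx], y[idx])
    optimism.append(auc(predict(wb, X[idx]), y[idx]) - auc(predict(wb, X), y))
corrected = apparent - np.mean(optimism)
```

The corrected AUC estimates how the model would perform on new patients from the same population, which is exactly the quantity targeted validation asks for when development and intended populations coincide.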

    Multiple imputation with missing indicators as proxies for unmeasured variables: simulation study

    From Springer Nature via Jisc Publications Router. History: received 2019-11-18; accepted 2020-06-28; registration 2020-06-29; published electronically and online 2020-07-08; collection 2020-12. Publication status: Published. Funder: Medical Research Council (doi: http://dx.doi.org/10.13039/501100000265; Grant: MR/T025085/1).
    Background: Within routinely collected health data, missing data for an individual might provide useful information in itself. This occurs, for example, in electronic health records, where the presence or absence of data is informative. While the naive use of missing indicators to exploit such information can introduce bias, their use in conjunction with multiple imputation may unlock the potential value of missingness to reduce bias in causal effect estimation, particularly in missing not at random scenarios and where missingness might be associated with unmeasured confounders. Methods: We conducted a simulation study to determine when the use of a missing indicator, combined with multiple imputation, would reduce bias in causal effect estimation, under a range of scenarios including unmeasured variables, missing not at random, and missing at random mechanisms. We used directed acyclic graphs and structural models to elucidate a variety of causal structures of interest. We handled missing data using complete case analysis, and multiple imputation with and without missing indicator terms. Results: We found that multiple imputation combined with a missing indicator gives minimal bias for causal effect estimation in most scenarios. In particular, the approach: 1) does not introduce bias in missing (completely) at random scenarios; 2) reduces bias in missing not at random scenarios where the missingness mechanism depends on the missing variable itself; and 3) may reduce or increase bias when unmeasured confounding is present. Conclusion: In the presence of missing data, careful use of missing indicators, combined with multiple imputation, can improve causal effect estimation when missingness is informative, and is not detrimental when missingness is at random.
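    The intuition behind the indicator term can be seen in a toy simulation. This is not the paper's simulation study: the data-generating model is invented, and a single mean imputation stands in for one imputed dataset in a full multiple-imputation procedure. A confounder is missing not at random (missing when large), and adding the indicator to the outcome model absorbs the systematic difference between imputed and observed rows, shrinking the bias in the exposure coefficient:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

z = rng.normal(size=n)                      # confounder, partially missing
x = z + rng.normal(size=n)                  # exposure, depends on z
y = 1.0 * x + 2.0 * z + rng.normal(size=n)  # true causal effect of x is 1

miss = z > 0                                # MNAR: z missing when z is large
z_imp = np.where(miss, z[~miss].mean(), z)  # single mean imputation (a crude
                                            # stand-in for one imputed dataset)

def slope_of_x(*extra_cols):
    # OLS of y on intercept, x, and any extra adjustment columns
    design = np.column_stack([np.ones(n), x, *extra_cols])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[1]

b_no_ind = slope_of_x(z_imp)                    # imputation only
b_ind = slope_of_x(z_imp, miss.astype(float))   # imputation + missing indicator
```

Here `b_no_ind` is pulled far from the true effect of 1 because the imputed rows sit at the wrong confounder value with systematically higher outcomes, while `b_ind` lets those rows carry their own offset and lands much closer.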

    Geographic Variation in the Structure of Kentucky’s Population Health Systems: An Urban, Rural, and Appalachian Comparison

    Introduction: Research examining geographic variation in the structure of population health systems is continuing to emerge, and most of the existing evidence divides systems by urban and rural designation. Very little is understood about how being both rural and Appalachian affects population health system structure and strength. Purpose: This study examines geographic differences in key characteristics of population health systems in urban, rural non-Appalachian, and rural Appalachian regions of Kentucky. Methods: Data from a 2018 statewide survey of community networks were used to examine population health system characteristics. Descriptive statistics were generated to examine variation across geographic regions in the availability of 20 population health activities, the range of organizations that contribute to those activities, and system strength. Data were collected in 2018 and analyzed in 2020. Results: The provision of population health protections and the structure of public health systems vary across Kentucky. Urban communities are more likely than rural communities to have a comprehensive set of population health protections delivered in collaboration with a diverse set of multisector partners. Rural Appalachian communities face further capacity limits in the delivery of population health activities, compared to other rural communities in the state. Implications: Understanding the delivery of population health provides further insight into system-level factors that may drive persistent health inequities in rural and Appalachian communities. The capacity to improve health extends beyond the clinic, and strengthening population health systems will be a critical step in efforts to improve population health.

    Summary of Results from the 2016 National Health Security Preparedness Index

    The National Health Security Preparedness Index tracks the nation’s progress in preparing for, responding to, and recovering from disasters and other large-scale emergencies that pose risks to health and well-being in the United States. Because health security is a responsibility shared by many different stakeholders in government and society, the Index combines measures from multiple sources and perspectives to offer a broad view of the health protections in place for the nation as a whole and for each U.S. state. The Index identifies strengths as well as gaps in the protections needed to keep people safe and healthy in the face of disasters, and it tracks how these protections vary across the U.S. and change over time. Results from the 2016 release of the Index, containing data from 2013 through 2015, reveal that preparedness is improving overall, but protections remain uneven across the U.S. and are losing strength in some critical areas.

    Clinical prediction models and the multiverse of madness

    Background Each year, thousands of clinical prediction models are developed to make predictions (e.g. estimated risk) to inform individual diagnosis and prognosis in healthcare. However, most are not reliable for use in clinical practice. Main body We discuss how the creation of a prediction model (e.g. using regression or machine learning methods) is dependent on the sample and size of data used to develop it—were a different sample of the same size used from the same overarching population, the developed model could be very different even when the same model development methods are used. In other words, for each model created, there exists a multiverse of other potential models for that sample size and, crucially, an individual’s predicted value (e.g. estimated risk) may vary greatly across this multiverse. The more an individual’s prediction varies across the multiverse, the greater the instability. We show how small development datasets lead to greater disparity among the models in the multiverse, often with vastly unstable individual predictions, and explain how this can be exposed by using bootstrapping and presenting instability plots. We recommend healthcare researchers seek to use large model development datasets to reduce instability concerns. This is especially important to ensure reliability across subgroups and improve model fairness in practice. Conclusions Instability is concerning as an individual’s predicted value is used to guide their counselling, resource prioritisation, and clinical decision making. If different samples lead to different models with very different predictions for the same individual, then this should cast doubt on using a particular model for that individual. Therefore, visualising, quantifying and reporting the instability in individual-level predictions is essential when proposing a new model.
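    The bootstrap exposure of instability described in the main body can be sketched in a few lines. The data, model, and sample sizes below are invented for illustration: we develop a model, refit it on bootstrap resamples, and record the spread of one individual's predicted risk across the resamples. A small development dataset yields a visibly larger spread:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, iters=300, lr=0.5):
    # simple gradient-ascent logistic regression (illustrative only)
    X1 = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ w))
        w += lr * X1.T @ (y - p) / len(y)
    return w

def risk(w, x_new):
    return 1 / (1 + np.exp(-(w[0] + w[1] * x_new)))

def instability(n_dev, n_boot=100):
    # develop on n_dev patients, refit on bootstrap resamples, and return the
    # standard deviation of one individual's predicted risk across refits
    X = rng.normal(size=(n_dev, 1))
    y = (rng.random(n_dev) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_dev, n_dev)
        w = fit_logistic(X[idx], y[idx])
        preds.append(risk(w, 1.0))  # predicted risk for a patient with x = 1
    return np.std(preds)

small, large = instability(50), instability(2000)
```

Plotting the bootstrap predictions against the original-model predictions for each individual gives the instability plot the authors describe; the standard deviations computed here summarise the same spread numerically.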

    Summary of Proposed Updates to the National Health Security Preparedness Index for 2015-2016

    This report describes proposed updates to the methodology and measures for the 2015-16 release of the National Health Security Preparedness Index.

    How to develop, externally validate, and update multinomial prediction models

    Multinomial prediction models (MPMs) have a range of potential applications across healthcare where the primary outcome of interest has multiple nominal or ordinal categories. However, MPMs are rarely applied, which may be due to the added methodological complexities they bring. This article provides a guide to developing, externally validating, and updating MPMs. Using a previously developed and validated MPM for treatment outcomes in rheumatoid arthritis as an example, we outline guidance and recommendations for producing a clinical prediction model using multinomial logistic regression. This article is intended to supplement existing general guidance on prediction model research. The guide is split into three parts: 1) outcome definition and variable selection; 2) model development; and 3) model evaluation (including performance assessment, internal and external validation, and model recalibration). We outline how to evaluate and interpret the predictive performance of MPMs. R code is provided. We recommend the application of MPMs in clinical settings where prediction of a nominal polytomous outcome is of interest. Future methodological research could focus on MPM-specific considerations for variable selection and sample size criteria for external validation.
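    The article's worked example uses R; as an illustrative stand-in (not the authors' code), the core of an MPM — multinomial (softmax) logistic regression with a simple calibration-in-the-large check per outcome category — can be sketched in numpy. The data and outcome labels below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial(X, y, n_classes, iters=500, lr=0.5):
    """Softmax (multinomial logistic) regression by gradient descent."""
    X1 = np.column_stack([np.ones(len(X)), X])
    W = np.zeros((X1.shape[1], n_classes))
    Y = np.eye(n_classes)[y]              # one-hot outcome matrix
    for _ in range(iters):
        P = softmax(X1 @ W)
        W -= lr * X1.T @ (P - Y) / len(X)
    return W

def predict_proba(W, X):
    X1 = np.column_stack([np.ones(len(X)), X])
    return softmax(X1 @ W)

# synthetic three-category outcome (e.g. remission / partial response / none)
n = 1500
X = rng.normal(size=(n, 2))
true_logits = np.column_stack([X[:, 0], X[:, 1], -X[:, 0] - X[:, 1]])
y = np.array([rng.choice(3, p=p) for p in softmax(true_logits)])

W = fit_multinomial(X, y, 3)
P = predict_proba(W, X)
acc = (P.argmax(axis=1) == y).mean()
# calibration-in-the-large per category: mean predicted minus observed frequency
cal = P.mean(axis=0) - np.bincount(y, minlength=3) / n
```

In a real MPM evaluation, the same predicted probabilities would additionally feed category-specific discrimination and calibration-slope assessments, internally validated by bootstrap and externally validated in the intended population.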