
    Quantifying the Risk and Cost of Active Monitoring for Infectious Diseases

    During outbreaks of deadly emerging pathogens (e.g., Ebola, MERS-CoV) and bioterror threats (e.g., smallpox), actively monitoring potentially infected individuals aims to limit disease transmission and morbidity. Guidance issued by the CDC on active monitoring was a cornerstone of its response to the West Africa Ebola outbreak. There are limited data on how to balance the costs and performance of this important public health activity. We present a framework that estimates the risks and costs of specific durations of active monitoring for pathogens of significant public health concern. We analyze data from New York City's Ebola active monitoring program over a 16-month period in 2014-2016. For monitored individuals, we identified unique durations of active monitoring that minimize expected costs for those at "low (but not zero) risk" and "some or high risk": 21 and 31 days, respectively. Extending our analysis to smallpox and MERS-CoV, we found that the optimal length of active monitoring relative to the median incubation period was shorter than for Ebola because these pathogens have less variable incubation periods. Active monitoring can save lives but is expensive. Resources can be allocated most effectively by using exposure-risk categories to modify the duration or intensity of active monitoring.
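
    The core trade-off described above can be sketched briefly: the expected cost of monitoring one contact for a given number of days is the monitoring effort plus the expected cost of an infection whose symptoms begin only after monitoring ends. The sketch below assumes a gamma incubation-period distribution and hypothetical cost and risk values; none of these numbers come from the study.

    import numpy as np
    from scipy import stats

    def expected_cost(duration_days, infection_prob, daily_cost, missed_case_cost,
                      incubation=stats.gamma(a=2.4, scale=4.0)):  # illustrative distribution
        """Expected cost of monitoring one individual for `duration_days`."""
        monitoring = daily_cost * duration_days
        # Probability the individual is infected AND symptoms start after monitoring ends.
        escape_risk = infection_prob * incubation.sf(duration_days)
        return monitoring + missed_case_cost * escape_risk

    # Scan candidate durations and report the cost-minimizing one.
    durations = np.arange(1, 43)
    costs = [expected_cost(d, infection_prob=1e-3, daily_cost=20.0, missed_case_cost=1e6)
             for d in durations]
    print("cost-minimizing duration (days):", durations[int(np.argmin(costs))])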

    Case Study in Evaluating Time Series Prediction Models Using the Relative Mean Absolute Error

    Statistical prediction models inform decision-making processes in many real-world settings. Prior to using predictions in practice, one must rigorously test and validate candidate models to ensure that the proposed predictions are sufficiently accurate. In this article, we present a framework for evaluating time series predictions that emphasizes computational simplicity and an intuitive interpretation using the relative mean absolute error metric. For a single time series, this metric enables comparisons of candidate model predictions against naïve reference models, a method that can provide useful and standardized performance benchmarks. Additionally, in applications with multiple time series, this framework facilitates comparisons of one or more models’ predictive performance across different sets of data. We illustrate the use of this metric with a case study comparing predictions of dengue hemorrhagic fever incidence in two provinces of Thailand. This example demonstrates the utility and interpretability of the relative mean absolute error metric in practice, and underscores the practical advantages of using relative performance metrics when evaluating predictions.
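
    As a minimal illustration of the metric at the center of this framework: the relative mean absolute error (rMAE) divides a candidate model's MAE by the MAE of a naïve reference model, so values below 1 indicate the candidate outperforms the reference. The toy numbers and the last-observation-carried-forward reference below are invented for illustration.

    import numpy as np

    def mae(observed, predicted):
        return np.mean(np.abs(np.asarray(observed, float) - np.asarray(predicted, float)))

    def relative_mae(observed, predicted, reference):
        """rMAE < 1: the candidate beats the reference; rMAE > 1: it does worse."""
        return mae(observed, predicted) / mae(observed, reference)

    observed  = [12, 15, 20, 18, 22, 30]
    model     = [11, 16, 19, 20, 21, 27]
    reference = [10, 12, 15, 20, 18, 22]   # previous observation carried forward (naïve model)
    print(round(relative_mae(observed, model, reference), 2))   # 0.38 for these toy data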

    Times to key events in Zika virus infection and implications for blood donation: A systematic review

    Objective To estimate the timing of key events in the natural history of Zika virus infection. Methods In February 2016, we searched PubMed, Scopus and the Web of Science for publications containing the term Zika. By pooling data, we estimated the incubation period, the time to seroconversion and the duration of viral shedding. We also estimated the risk of Zika virus-contaminated blood donations. Findings We identified 20 articles on 25 patients with Zika virus infection. The median incubation period for the infection was estimated to be 5.9 days (95% credible interval, CrI: 4.4-7.6), with 95% of people who developed symptoms doing so within 11.2 days (95% CrI: 7.6-18.0) after infection. On average, seroconversion occurred 9.1 days (95% CrI: 7.0-11.6) after infection. The virus was detectable in blood for 9.9 days (95% CrI: 6.9-21.4) on average. Without screening, the estimated risk that a blood donation would come from an infected individual increased by approximately 1 in 10 000 for every 1 per 100 000 person-days increase in the incidence of Zika virus infection. Symptom-based screening may reduce this rate by 7% (relative risk, RR: 0.93; 95% CrI: 0.89-0.99) and antibody screening, by 29% (RR: 0.71; 95% CrI: 0.28-0.88). Conclusion Neither symptom- nor antibody-based screening for Zika virus infection substantially reduced the risk that blood donations would be contaminated by the virus. Polymerase chain reaction testing should be considered for identifying blood safe for use in pregnant women in high-incidence areas.
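
    The roughly 1-in-10 000 figure can be reproduced with a back-of-the-envelope prevalence calculation, under the simplifying assumption (not the paper's full model) that the chance a random donor is viremic is approximately the infection incidence multiplied by the mean duration of detectable viremia.

    # Incidence of 1 infection per 100 000 person-days, as in the abstract's example.
    incidence_per_person_day = 1 / 100_000
    # Mean number of days the virus is detectable in blood (point estimate from the abstract).
    mean_viremia_days = 9.9

    risk_per_donation = incidence_per_person_day * mean_viremia_days
    print(f"~1 in {round(1 / risk_per_donation):,} donations")   # approximately 1 in 10 000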

    Stochastic-dynamical thermostats for constraints and stiff restraints

    A broad array of canonical sampling methods are available for molecular simulation based on stochastic-dynamical perturbation of Newtonian dynamics, including Langevin dynamics, Stochastic Velocity Rescaling, and methods that combine Nosé-Hoover dynamics with stochastic perturbation. In this article, we discuss several stochastic-dynamical thermostats in the setting of simulating systems with holonomic constraints. The approaches described are easily implemented and facilitate the recovery of correct canonical averages with minimal disturbance of the underlying dynamics. For the purpose of illustrating our results, we examine the numerical application of these methods to a simple atomic chain, where a Fixman term is required to correct the thermodynamic ensemble.
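
    A heavily simplified sketch of one thermostat in this family is given below: a Langevin integrator in the BAOAB splitting, applied to an unconstrained one-dimensional harmonic chain. The constraint handling and Fixman correction that are the article's focus are omitted, and every parameter value is illustrative.

    import numpy as np

    def chain_forces(x, k=1.0, rest_length=1.0):
        """Forces from nearest-neighbour harmonic springs along a free 1-D chain."""
        stretch = x[1:] - x[:-1] - rest_length   # deviation of each bond from its rest length
        f = np.zeros_like(x)
        f[:-1] += k * stretch                    # bond pulls/pushes the atom on its left
        f[1:]  -= k * stretch                    # and the atom on its right, in the opposite sense
        return f

    def baoab_step(x, v, dt, gamma, kT, mass=1.0):
        """One BAOAB step: half kick, half drift, Ornstein-Uhlenbeck velocity update,
        half drift, half kick."""
        v = v + 0.5 * dt * chain_forces(x) / mass                                            # B
        x = x + 0.5 * dt * v                                                                 # A
        c = np.exp(-gamma * dt)
        v = c * v + np.sqrt((1.0 - c**2) * kT / mass) * np.random.standard_normal(x.shape)   # O
        x = x + 0.5 * dt * v                                                                 # A
        v = v + 0.5 * dt * chain_forces(x) / mass                                            # B
        return x, v

    x = np.linspace(0.0, 9.0, 10)   # ten atoms with unit spacing
    v = np.zeros_like(x)
    for _ in range(20_000):
        x, v = baoab_step(x, v, dt=0.01, gamma=1.0, kT=1.0)
    print("mean kinetic energy per atom:", 0.5 * np.mean(v**2), "(expect ~0.5 kT)")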

    Collaborative Hubs: Making the Most of Predictive Epidemic Modeling

    The COVID-19 pandemic has made it clear that epidemic models play an important role in how governments and the public respond to infectious disease crises. Early in the pandemic, models were used to estimate the true number of infections. Later, they estimated key parameters, generated short-term forecasts of outbreak trends, and quantified possible effects of interventions on the unfolding epidemic. In contrast to the coordinating role played by major national or international agencies in weather-related emergencies, pandemic modeling efforts were initially scattered across many research institutions. Differences in modeling approaches led to contrasting results, contributing to confusion in public perception of the pandemic. Coordinating these models in so-called “hubs” has provided governments, healthcare agencies, and the public with assessments and forecasts that reflect the consensus in the modeling community. This has been achieved by openly synthesizing uncertainties across different modeling approaches and facilitating comparisons between them.

    TRY plant trait database – enhanced coverage and open access

    Plant traits—the morphological, anatomical, physiological, biochemical and phenological characteristics of plants—determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait‐based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits—almost complete coverage for ‘plant growth form’. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait–environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.

    Impact of SARS-CoV-2 vaccination of children ages 5–11 years on COVID-19 disease burden and resilience to new variants in the United States, November 2021–March 2022: A multi-model study

    Background: The COVID-19 Scenario Modeling Hub convened nine modeling teams to project the impact of expanding SARS-CoV-2 vaccination to children aged 5–11 years on COVID-19 burden and resilience against variant strains. Methods: Teams contributed state- and national-level weekly projections of cases, hospitalizations, and deaths in the United States from September 12, 2021 to March 12, 2022. Four scenarios covered all combinations of 1) vaccination (or not) of children aged 5–11 years (starting November 1, 2021), and 2) emergence (or not) of a variant more transmissible than the Delta variant (emerging November 15, 2021). Individual team projections were linearly pooled. The effect of childhood vaccination on overall and age-specific outcomes was estimated using meta-analyses. Findings: Assuming that a new variant would not emerge, all-age COVID-19 outcomes were projected to decrease nationally through mid-March 2022. In this setting, vaccination of children 5–11 years old was associated with reductions in projections for all-age cumulative cases (7.2%, mean incidence ratio [IR] 0.928, 95% confidence interval [CI] 0.880–0.977), hospitalizations (8.7%, mean IR 0.913, 95% CI 0.834–0.992), and deaths (9.2%, mean IR 0.908, 95% CI 0.797–1.020) compared with scenarios without childhood vaccination. Vaccine benefits increased for scenarios including a hypothesized more transmissible variant, assuming similar vaccine effectiveness. Projected relative reductions in cumulative outcomes were larger for children than for the entire population. State-level variation was observed. Interpretation: Given the scenario assumptions (defined before the emergence of Omicron), expanding vaccination to children 5–11 years old would provide measurable direct benefits, as well as indirect benefits to the all-age U.S. population, including resilience to more transmissible variants. Funding: Various (see acknowledgments).
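
    The pooling step can be illustrated with a small sketch: an equally weighted linear pool of team-level point projections, followed by an incidence ratio comparing cumulative outcomes between two pooled scenarios. The team names and numbers are hypothetical, and the study itself pooled full probabilistic projections and estimated effects with meta-analyses.

    import numpy as np

    # Hypothetical weekly projected hospitalizations under one scenario, one entry per team.
    team_projections = {
        "team_A": [1200, 1100, 950, 800],
        "team_B": [1500, 1300, 1150, 900],
        "team_C": [1000, 950, 900, 850],
    }
    ensemble = np.mean(list(team_projections.values()), axis=0)   # linear pool, equal weights

    # Incidence ratio: cumulative outcome with childhood vaccination vs. without it.
    # The 8% difference is invented purely to show the calculation.
    cum_with_vax = ensemble.sum()
    cum_without_vax = 1.08 * cum_with_vax
    print("pooled weekly projections:", ensemble)
    print("incidence ratio:", round(cum_with_vax / cum_without_vax, 3))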