
    Virus Sharing, Genetic Sequencing, and Global Health Security

    The WHO’s Pandemic Influenza Preparedness (PIP) Framework was a milestone global agreement designed to promote the international sharing of biological samples to develop vaccines, while ensuring that poorer countries would have access to those vaccines. Since the PIP Framework was negotiated, scientists have developed the capacity to use genetic sequencing data (GSD) to rapidly develop synthetic viruses for life-saving products in a time-sensitive global emergency—threatening to unravel the Framework. Access to GSD may also have major implications for biosecurity, biosafety, and intellectual property (IP). By rendering the physical transfer of viruses antiquated, GSD may also undermine the effectiveness of the PIP Framework itself, with disproportionate impacts on poorer countries. We examine the changes that need to be made to the PIP Framework to address the growing likelihood that GSD will be shared instead of physical virus samples. We also propose that the international community harness this opportunity to expand the scope of the PIP Framework beyond influenza viruses with pandemic potential. In light of non-influenza pandemic threats such as Middle East Respiratory Syndrome (MERS) and Ebola, we call for an international agreement on the sharing of the benefits of research – such as vaccines and treatments – for other infectious diseases, to ensure not only a more secure and healthy world, but also a more just world, for humanity.

    The precautionary principle in environmental science.

    Environmental scientists play a key role in society's responses to environmental problems, and many of the studies they perform are intended ultimately to affect policy. The precautionary principle, proposed as a new guideline in environmental decision making, has four central components: taking preventive action in the face of uncertainty; shifting the burden of proof to the proponents of an activity; exploring a wide range of alternatives to possibly harmful actions; and increasing public participation in decision making. In this paper we examine the implications of the precautionary principle for environmental scientists, whose work often involves studying highly complex, poorly understood systems, while at the same time facing conflicting pressures from those who seek to balance economic growth and environmental protection. In this complicated and contested terrain, it is useful to examine the methodologies of science and to consider ways that, without compromising integrity and objectivity, research can be more or less helpful to those who would act with precaution. We argue that a shift to more precautionary policies creates opportunities and challenges for scientists to think differently about the ways they conduct studies and communicate results. There is a complicated feedback relation between the discoveries of science and the setting of policy. While maintaining their objectivity and focus on understanding the world, environmental scientists should be aware of the policy uses of their work and of their social responsibility to do science that protects human health and the environment. The precautionary principle highlights this tight, challenging linkage between science and policy.

    Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline

    From medical charts to national censuses, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records to digitized imaging and laboratory reports to public health datasets, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion of the data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.
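
    To make the preprocessing and interpretation phases concrete, here is a minimal sketch in Python with scikit-learn. The dataset, column names, and outcome label are hypothetical illustrations, not material from the chapter. It handles two of the data attributes discussed, missing laboratory values and mixed numeric/categorical features, and ends with a simple inspection of the fitted model.

    ```python
    # Hypothetical patient-record data; names and values are illustrative only.
    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "age": rng.integers(18, 90, n).astype(float),
        # ~20% of lab values missing, a common healthcare-data attribute
        "lab_glucose": np.where(rng.random(n) < 0.2, np.nan, rng.normal(100, 20, n)),
        "sex": rng.choice(["F", "M"], n),
        "readmitted": rng.integers(0, 2, n),  # binary outcome label
    })

    numeric, categorical = ["age", "lab_glucose"], ["sex"]
    preprocess = ColumnTransformer([
        # impute then scale numeric labs/vitals
        ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                          ("scale", StandardScaler())]), numeric),
        # one-hot encode categorical demographics
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])
    model = Pipeline([("prep", preprocess),
                      ("clf", LogisticRegression(max_iter=1000))])

    X_train, X_test, y_train, y_test = train_test_split(
        df[numeric + categorical], df["readmitted"], random_state=0)
    model.fit(X_train, y_train)

    # Interpretation step: map coefficients back to feature names.
    names = model.named_steps["prep"].get_feature_names_out()
    for name, coef in zip(names, model.named_steps["clf"].coef_[0]):
        print(f"{name}: {coef:+.3f}")
    ```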

    A simulation study comparing aberration detection algorithms for syndromic surveillance

    BACKGROUND: The usefulness of syndromic surveillance for early outbreak detection depends in part on effective statistical aberration detection. However, few published studies have compared different detection algorithms on identical data. In the largest simulation study conducted to date, we compared the performance of six aberration detection algorithms on simulated outbreaks superimposed on authentic syndromic surveillance data. METHODS: We compared three control-chart-based statistics, two exponentially weighted moving averages, and a generalized linear model. We simulated 310 unique outbreak signals and added these to actual daily counts of four syndromes monitored by Public Health – Seattle and King County's syndromic surveillance system. We compared the sensitivity of the six algorithms at detecting these simulated outbreaks at a fixed alert rate of 0.01. RESULTS: Stratified by baseline or by outbreak distribution, duration, or size, the generalized linear model was more sensitive than the other algorithms and detected 54% (95% CI = 52%–56%) of the simulated epidemics when run at an alert rate of 0.01. However, all of the algorithms had poor sensitivity, particularly for outbreaks that did not begin with a surge of cases. CONCLUSION: When tested on county-level data aggregated across age groups, these algorithms often did not perform well in detecting signals other than large, rapid increases in case counts relative to baseline levels.
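
    For illustration, here is a minimal sketch of one of the algorithm families compared: an exponentially weighted moving average (EWMA) detector. This is a toy Python version under my own assumptions; the smoothing weight, threshold multiplier, and baseline window are illustrative, not the study's parameterisation.

    ```python
    import numpy as np

    def ewma_alerts(counts, lam=0.4, k=3.0, baseline_days=56):
        """Flag days whose EWMA statistic exceeds a baseline-derived limit."""
        counts = np.asarray(counts, dtype=float)
        mu = counts[:baseline_days].mean()
        sigma = counts[:baseline_days].std(ddof=1)
        # asymptotic EWMA control limit: mu + k * sigma * sqrt(lam / (2 - lam))
        limit = mu + k * sigma * np.sqrt(lam / (2 - lam))
        z = mu                      # EWMA statistic, initialised at the baseline mean
        alerts = []
        for count in counts[baseline_days:]:
            z = lam * count + (1 - lam) * z
            alerts.append(z > limit)
        return np.array(alerts)

    # Toy example: a flat Poisson baseline with an injected outbreak surge.
    rng = np.random.default_rng(1)
    daily = rng.poisson(20, 120)
    daily[100:110] += np.linspace(5, 25, 10).astype(int)
    flags = ewma_alerts(daily)
    if flags.any():
        print("first alert on day", 56 + int(np.argmax(flags)))
    ```

    Note that a detector like this favours sustained or surging signals, which is consistent with the study's finding that all algorithms struggled with outbreaks that did not begin with a surge of cases.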

    A Hidden Markov Model for Analysis of Frontline Veterinary Data for Emerging Zoonotic Disease Surveillance

    Surveillance systems tracking health patterns in animals have potential for early warning of infectious disease in humans, yet many challenges remain before this can be realized. Specifically, there remains the challenge of detecting early warning signals for diseases that are not known or are not part of routine surveillance for named diseases. This paper reports on the development of a hidden Markov model for analysis of frontline veterinary sentinel surveillance data from Sri Lanka. Field veterinarians collected data on syndromes and diagnoses using mobile phones. A model for submission patterns accounts for both sentinel-related and disease-related variability. Models for commonly reported cattle diagnoses were estimated separately. Region-specific weekly average prevalence was estimated for each diagnosis and partitioned into normal and abnormal periods. Visualization of state probabilities was used to indicate areas and times of unusual disease prevalence. The analysis suggests that hidden Markov modelling is a useful approach for surveillance datasets from novel populations and/or with little historical baseline data.
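
    To illustrate the mechanics, the sketch below implements a two-state Poisson hidden Markov model filtered with the forward algorithm, yielding a weekly probability of an "abnormal" disease state. It is a minimal toy under my own assumptions; the paper's actual model, which also accounts for sentinel submission patterns, is richer. All rates, transition probabilities, and counts are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import poisson

    def forward_filter(counts, rates, trans, prior):
        """P(state_t | counts_1..t) for a two-state Poisson HMM."""
        probs = []
        belief = np.asarray(prior, dtype=float)
        for c in counts:
            belief = belief @ trans              # predict: apply transition matrix
            belief *= poisson.pmf(c, rates)      # update: weight by likelihood
            belief /= belief.sum()               # normalise to a distribution
            probs.append(belief.copy())
        return np.array(probs)

    rates = np.array([4.0, 15.0])        # expected weekly cases: normal, abnormal
    trans = np.array([[0.95, 0.05],      # states are sticky from week to week
                      [0.20, 0.80]])
    prior = np.array([0.9, 0.1])

    weekly = [3, 5, 4, 6, 14, 18, 16, 7, 4]  # toy weekly counts for one region
    for week, (p_norm, p_abn) in enumerate(forward_filter(weekly, rates, trans, prior), 1):
        flag = "ABNORMAL" if p_abn > 0.5 else "normal"
        print(f"week {week}: P(abnormal) = {p_abn:.2f} -> {flag}")
    ```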

    Transmission patterns of smallpox: systematic review of natural outbreaks in Europe and North America since World War II

    BACKGROUND: Because smallpox (variola major) may be used as a biological weapon, we reviewed outbreaks in post-World War II Europe and North America in order to understand smallpox transmission patterns. METHODS: A systematic review was used to identify papers from the National Library of Medicine, Embase, Biosis, Cochrane Library, Defense Technical Information Center, WorldCat, and reference lists of included publications. Two authors reviewed selected papers for smallpox outbreaks. RESULTS: 51 relevant outbreaks were identified from 1,389 publications. The median effective first-generation reproduction rate (initial R) was 2 (range 0–38). The majority of outbreaks were small (fewer than 5 cases) and contained within one generation. Outbreaks with few hospitalized patients had low initial R values (median of 1) and were prolonged if not initially recognized (median of 3 generations); outbreaks with mostly hospitalized patients had higher initial R values (median of 12) and were shorter (median of 3 generations). Index cases with an atypical presentation of smallpox were less likely to have been diagnosed with smallpox; outbreaks in which the index case was not correctly diagnosed were larger (median of 27.5 cases) and longer (median of 3 generations) than outbreaks in which the index case was correctly diagnosed (median of 3 cases and 1 generation). CONCLUSION: Patterns of spread during smallpox outbreaks varied with circumstances, but early detection and implementation of control measures are the most important influences on the magnitude of outbreaks. The majority of outbreaks studied in Europe and North America were controlled within a few generations if detected early.
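
    For concreteness, the headline statistic reduces to a simple ratio: the effective first-generation reproduction rate is the number of second-generation cases divided by the number of first-generation cases. The sketch below computes it from hypothetical outbreak records (not the 51 reviewed outbreaks) and summarises it by whether the index case was correctly diagnosed.

    ```python
    import statistics

    # Hypothetical records: (first-generation cases, second-generation cases,
    # index case correctly diagnosed?)
    outbreaks = [
        (1, 2, True), (1, 0, True), (1, 1, True),
        (1, 12, False), (2, 30, False), (1, 5, False),
    ]

    def initial_r(gen1, gen2):
        # effective first-generation reproduction rate
        return gen2 / gen1

    for diagnosed in (True, False):
        rs = [initial_r(g1, g2) for g1, g2, d in outbreaks if d is diagnosed]
        label = "index diagnosed" if diagnosed else "index missed"
        print(f"{label}: median initial R = {statistics.median(rs):.1f}")
    ```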

    Variability in school closure decisions in response to 2009 H1N1: a qualitative systems improvement analysis

    BACKGROUND: School closure was employed as a non-pharmaceutical intervention against pandemic 2009 H1N1, particularly during the first wave. More than 700 schools in the United States were closed. However, closure decisions reflected significant variation in rationales, decision triggers, and authority for closure. This variability presents an opportunity for improved efficiency and decision-making. METHODS: We identified media reports relating to school closure as a response to 2009 H1N1 by monitoring high-profile sources and searching Lexis-Nexis and Google news alerts, and reviewed reports for key themes. News stories were supplemented by observing conference calls and meetings with health department and school officials, and by discussions with decision-makers and community members. RESULTS: There was significant variation in the stated goals of closure decisions, including limiting community spread of the virus, protecting particularly vulnerable students, and responding to staff shortages or student absenteeism. Because the goal of closure is relevant to its timing, nature, and duration, unclear rationales for closure can undermine its effectiveness. There was also significant variation in the decision-making authority to close schools in different jurisdictions, which, in some instances, was reflected in open disagreement between school and public health officials. Finally, decision-makers did not appear to expect the level of scientific uncertainty encountered early in the pandemic, and they often expressed significant frustration over changing CDC guidance. CONCLUSIONS: The use of school closure as a public health response to epidemic disease can be improved by ensuring that officials clarify the goals of closure and tailor closure decisions to those goals. Additionally, authority to close schools should be clarified in advance, and decision-makers should expect to encounter uncertainty as disease emergencies unfold and plan accordingly.

    Assessing time series models for forecasting international migration: lessons from the United Kingdom

    Funding: This work was funded by the Migration Advisory Committee (MAC), UK Home Office, under the Home Office Science contract HOS/14/040, and also supported by the ESRC Centre for Population Change grant ES/K007394/1. Migration is one of the most unpredictable demographic processes. The aim of this article is to provide a blueprint for assessing various possible forecasting approaches in order to help safeguard producers and users of official migration statistics against misguided forecasts. To achieve that, we first evaluate the existing approaches to modelling and forecasting international migration flows. Subsequently, we present an empirical comparison of the ex post performance of various forecasting methods, applied to international migration to and from the United Kingdom. The overarching goal is to assess the uncertainty of forecasts produced by different forecasting methods, both in terms of their errors (biases) and the calibration of uncertainty. The empirical assessment, comparing the results of various forecasting models against past migration estimates, confirms the intuition about the weak predictability of migration, but also highlights varying levels of forecast error across migration streams. No single forecasting approach is well suited to all flows. We therefore recommend adopting a tailored approach to forecasting, and applying a risk management framework to the results, taking into account the levels of uncertainty of the individual flows, as well as the differences in their potential societal impact.
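
    As a schematic of the ex post exercise described, the sketch below holds out the last years of a synthetic series (not the UK migration estimates), forecasts them with two benchmark methods, and scores both error (bias, MAE) and a crude notion of calibration (coverage of nominal 80% intervals built from historical year-on-year changes). The models actually assessed in the article are more sophisticated.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    flows = 200 + np.cumsum(rng.normal(2, 15, 30))   # toy annual migration series
    train, test = flows[:-5], flows[-5:]             # hold out the last five years

    def naive(history, h):
        # random walk: repeat the last observed value
        return np.full(h, history[-1])

    def drift(history, h):
        # random walk with drift: extrapolate the average historical change
        slope = (history[-1] - history[0]) / (len(history) - 1)
        return history[-1] + slope * np.arange(1, h + 1)

    # crude 80% interval half-width from historical year-on-year changes
    half_width = np.quantile(np.abs(np.diff(train)), 0.8)

    for name, method in [("naive", naive), ("drift", drift)]:
        forecast = method(train, len(test))
        errors = test - forecast
        coverage = np.mean(np.abs(errors) <= half_width)
        print(f"{name}: bias={errors.mean():+.1f}  MAE={np.abs(errors).mean():.1f}  "
              f"80% interval coverage={coverage:.0%}")
    ```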