922 research outputs found

    Public health interventions to control the spread of a directly transmitted human pathogen within and between Hong Kong and Guangzhou.

    1. The ability to detect and differentiate between fast and slow spatial spread of infectious disease depends on the density of the surveillance network. 2. The results of this study suggest that more concentrated surveillance networks are required in Guangzhou than in other regions, such as Thailand and Europe, because long-distance travel is less frequent.

    Reducing the impact of the next influenza pandemic using household-based public health interventions.

    Household-based public health interventions can effectively mitigate the impact of an influenza pandemic, and the resource and compliance requirements are realistic and feasible.

    Statistical algorithms for early detection of the annual influenza peak season in Hong Kong using sentinel surveillance data


    Transmission of Japanese encephalitis virus in Hong Kong

    1. Pigs are likely to be the main amplifying host for Japanese encephalitis virus. 2. The success of a swine vaccination programme depends on the timing of the loss of maternal antibody protection and the seasonal dynamics of the vector population. 3. Vaccination may be ineffective in the face of strong natural infection because of the variability in timing of the loss of maternal antibody protection. 4. Evidence in support of swine vaccination as a human health intervention was not found.

    Methods for monitoring influenza surveillance data

    Background: A variety of Serfling-type statistical algorithms requiring long series of historical data, exclusively from temperate climate zones, have been proposed for automated monitoring of influenza sentinel surveillance data. We evaluated three alternative statistical approaches where alert thresholds are based on recent data in both temperate and subtropical regions.
    Methods: We compared time series, regression, and cumulative sum (CUSUM) models on empirical data from Hong Kong and the US using a composite index (range = 0-1) consisting of the key outcomes of sensitivity, specificity, and time to detection (lag). The index was calculated based on alarms generated within the first 2 or 4 weeks of the peak season.
    Results: We found that the time series model was optimal in the Hong Kong setting, while both the time series and CUSUM models worked equally well on US data. For alarms generated within the first 2 weeks (4 weeks) of the peak season in Hong Kong, the maximum values of the index were: time series 0.77 (0.86); regression 0.75 (0.82); CUSUM 0.56 (0.75). In the US data the maximum values of the index were: time series 0.81 (0.95); regression 0.81 (0.91); CUSUM 0.90 (0.94).
    Conclusions: Automated influenza surveillance methods based on short-term data, including time series and CUSUM models, can generate sensitive, specific, and timely alerts, and can offer a useful alternative to Serfling-like methods that rely on long-term, historically based thresholds. © Copyright 2006 Oxford University Press.
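
    To make the CUSUM idea above concrete, here is a minimal sketch, not the study's implementation, of a one-sided CUSUM alert on weekly surveillance rates whose baseline comes from a short trailing window of recent data; the window length, reference value k, and decision threshold h are illustrative assumptions only.

        import numpy as np

        def cusum_alert_weeks(rates, baseline_weeks=8, k=0.5, h=4.0):
            """Return indices of weeks where a one-sided (upper) CUSUM of
            standardised surveillance rates exceeds the threshold h.

            rates          : weekly consultation rates (e.g. influenza-like illness)
            baseline_weeks : trailing window used to estimate the recent baseline
            k, h           : reference value and decision threshold, in SD units
            """
            rates = np.asarray(rates, dtype=float)
            s, alerts = 0.0, []
            for t in range(baseline_weeks, len(rates)):
                window = rates[t - baseline_weeks:t]
                mu, sigma = window.mean(), window.std(ddof=1)
                if sigma == 0:
                    sigma = 1.0  # avoid division by zero on flat baselines
                s = max(0.0, s + (rates[t] - mu) / sigma - k)
                if s > h:
                    alerts.append(t)
            return alerts

    In the study's framing, alerts of this kind would then be scored against the observed peak-season onset to obtain the sensitivity, specificity, and detection lag that feed the composite index.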

    Viral evolution from one generation of human influenza infection to the next


    Viral shedding, clinical history and transmission of influenza


    Individual participant data validation of the PICNICC prediction model for febrile neutropenia

    BACKGROUND: Risk-stratified approaches to managing cancer therapies and their consequent complications rely on accurate predictions to work effectively. The risk-stratified management of fever with neutropenia is one such very common area of management in paediatric practice. Such rules are frequently produced and promoted without adequate confirmation of their accuracy.
    METHODS: An individual participant data meta-analytic validation of the 'Predicting Infectious ComplicatioNs In Children with Cancer' (PICNICC) prediction model for microbiologically documented infection in paediatric fever with neutropenia was undertaken. Pooled estimates were produced using random-effects meta-analysis of the area under the receiver operating characteristic curve (AUC-ROC), calibration slope and ratios of expected versus observed cases (E/O).
    RESULTS: The PICNICC model was poorly predictive of microbiologically documented infection (MDI) in these validation cohorts. The pooled AUC-ROC was 0.59, 95% CI 0.41 to 0.78, τ²=0, compared with the derivation value of 0.72, 95% CI 0.71 to 0.76. There was poor discrimination (pooled slope estimate 0.03, 95% CI -0.19 to 0.26) and calibration in the large (pooled E/O ratio 1.48, 95% CI 0.87 to 2.1). Three different simple recalibration approaches failed to improve performance meaningfully.
    CONCLUSION: This meta-analysis shows the PICNICC model should not be used at admission to predict MDI. Further work should focus on validating alternative prediction models. Validation across multiple cohorts from diverse locations is essential before widespread clinical adoption of such rules, to avoid overtreating or undertreating children with fever with neutropenia.
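
    As a rough sketch of the random-effects pooling described above, assuming a DerSimonian-Laird estimator applied to per-cohort estimates and their standard errors (the cohort values shown are placeholders, not the PICNICC validation data), the pooled AUC-ROC and its 95% CI could be computed as follows:

        import numpy as np

        def random_effects_pool(estimates, std_errors):
            """DerSimonian-Laird random-effects pooling of per-cohort estimates
            (e.g. AUC-ROC); returns the pooled value, a 95% CI, and tau^2."""
            y = np.asarray(estimates, dtype=float)
            v = np.asarray(std_errors, dtype=float) ** 2

            w = 1.0 / v                                  # fixed-effect (inverse-variance) weights
            y_fe = np.sum(w * y) / np.sum(w)
            q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q
            c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-cohort variance

            w_re = 1.0 / (v + tau2)                      # random-effects weights
            pooled = np.sum(w_re * y) / np.sum(w_re)
            se = np.sqrt(1.0 / np.sum(w_re))
            return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

        # Hypothetical cohort AUCs and standard errors, for illustration only
        pooled_auc, ci, tau2 = random_effects_pool([0.55, 0.62, 0.58], [0.05, 0.06, 0.04])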

    Transparent reporting of multivariable prediction models developed or validated using clustered data (TRIPOD-Cluster): explanation and elaboration

    The TRIPOD-Cluster (transparent reporting of multivariable prediction models developed or validated using clustered data) statement comprises a 19-item checklist, which aims to improve the reporting of studies developing or validating a prediction model in clustered data, such as individual participant data meta-analyses (clustering by study) and electronic health records (clustering by practice or hospital). This explanation and elaboration document describes the rationale; clarifies the meaning of each item; and discusses why transparent reporting is important, with a view to assessing risk of bias and clinical usefulness of the prediction model. Each checklist item of the TRIPOD-Cluster statement is explained in detail and accompanied by published examples of good reporting. The document also serves as a reference of factors to consider when designing, conducting, and analysing prediction model development or validation studies in clustered data. To aid the editorial process and help peer reviewers and, ultimately, readers and systematic reviewers of prediction model studies, authors are recommended to include a completed checklist in their submission.

    Transparent reporting of multivariable prediction models developed or validated using clustered data: TRIPOD-Cluster checklist

    The increasing availability of large combined datasets (or big data), such as those from electronic health records and from individual participant data meta-analyses, provides new opportunities and challenges for researchers developing and validating (including updating) prediction models. These datasets typically include individuals from multiple clusters (such as multiple centres, geographical locations, or different studies). Accounting for clustering is important to avoid misleading conclusions and enables researchers to explore heterogeneity in prediction model performance across multiple centres, regions, or countries, to better tailor or match models to these different clusters, and thus to develop prediction models that are more generalisable. However, this requires prediction model researchers to adopt more specific design, analysis, and reporting methods than standard prediction model studies that do not have any inherent substantial clustering. Therefore, prediction model studies based on clustered data need to be reported differently so that readers can appraise the study methods and findings, further increasing the use and implementation of such prediction models developed or validated from clustered datasets.
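
    One analysis pattern that such clustered designs enable, and that clustering-aware reporting is meant to make appraisable, is examining how model performance varies across clusters. The sketch below illustrates one common approach, internal-external (leave-one-cluster-out) cross-validation; the dataset, column names, and predictors are hypothetical, and this is an illustration rather than a procedure prescribed by TRIPOD-Cluster.

        import pandas as pd
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        def leave_one_cluster_out(df, outcome, predictors, cluster_col):
            """Fit the model on all clusters except one and evaluate discrimination
            (AUC) in the held-out cluster, repeating for every cluster."""
            aucs = {}
            for cluster in df[cluster_col].unique():
                train = df[df[cluster_col] != cluster]
                test = df[df[cluster_col] == cluster]
                model = LogisticRegression(max_iter=1000)
                model.fit(train[predictors], train[outcome])
                pred = model.predict_proba(test[predictors])[:, 1]
                if test[outcome].nunique() > 1:   # AUC is undefined if only one class is present
                    aucs[cluster] = roc_auc_score(test[outcome], pred)
            return aucs

        # Hypothetical usage: clusters identified by a 'centre' column
        # cluster_aucs = leave_one_cluster_out(data, 'outcome', ['age', 'biomarker'], 'centre')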