
    Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study

    BACKGROUND: Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), quantitative reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. RESULTS: The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a Bayesian mixed-model approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replicate QMM assays, improving reliability and inter-assay homogeneity and providing an accurate appraisal of quantitative and diagnostic performance. CONCLUSIONS: Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to substantially improve the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
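    As a rough illustration of the two calibration strategies contrasted above, the Python sketch below simulates a QT-NASBA-style standard curve and compares a single pooled ("traditional") inverse prediction against per-assay fits that absorb inter-assay offsets, loosely in the spirit of the mixed-model approach. The log-linear curve, the variance terms and all numbers are assumptions for illustration, not the authors' actual model or data.

# Minimal sketch: traditional vs per-assay calibration of a quantitative
# molecular method (QMM). All values are simulated and illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated standards: measured signal (e.g. time-to-positivity) assumed
# linear in log10 gametocyte density, with an assay-specific offset.
log10_density = np.tile(np.arange(1.0, 6.0), 3)     # 5 standards x 3 assays
assay = np.repeat(np.arange(3), 5)
offsets = rng.normal(0.0, 0.3, size=3)              # inter-assay variability
signal = 20.0 - 2.5 * log10_density + offsets[assay] + rng.normal(0.0, 0.2, size=15)

new_signal = 12.8   # hypothetical measurement from a field sample

# "Traditional" calibration: one pooled least-squares standard curve,
# ignoring which assay each standard came from, then inverse prediction.
slope, intercept = np.polyfit(log10_density, signal, 1)
print(f"pooled estimate: 10^{(new_signal - intercept) / slope:.2f} gametocytes/uL")

# Per-assay calibration (a crude stand-in for assay-level random effects):
# fit each assay's own standards and calibrate the sample within its assay.
for a in range(3):
    m = assay == a
    s, i = np.polyfit(log10_density[m], signal[m], 1)
    print(f"assay {a}: 10^{(new_signal - i) / s:.2f} gametocytes/uL")

    The paper's Bayesian mixed model goes further than this sketch, sharing information across replicate assays and letting precision vary with density, which is what improves reliability and inter-assay homogeneity.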

    Genotypic Susceptibility Scores and HIV Type 1 RNA Responses in Treatment-Experienced Subjects with HIV Type 1 Infection

    This study evaluated genotypic susceptibility scores (GSS) as predictors of virologic response in a group (n = 234) of HIV-infected, protease inhibitor (PI)-experienced subjects. Two scoring methods were developed: a discrete genotypic susceptibility score (dGSS) and a continuous genotypic susceptibility score (cGSS). To calculate the dGSS, each drug in the subject's regimen was given a binary susceptibility score using Stanford inferred drug resistance scores. In contrast to the dGSS, the cGSS model was designed to reflect partial susceptibility to a drug. Both GSS were independent predictors of week 16 virologic response. We also compared the GSS to a phenotypic susceptibility score (PSS) model in the subset of subjects who had both GSS and PSS performed, and found that both models were predictive of virologic response. Genotypic analyses at enrollment showed that subjects who were virologic nonresponders at week 16 were enriched for several mutated codons associated with nucleoside reverse transcriptase inhibitor (NRTI) resistance (codons 67, 69, 70, 118, 215, and 219) or PI resistance (codons 10, 24, 71, 73, and 88) compared to subjects who were virologic responders. Regression analyses revealed that protease mutations at codons 24 and 90 were most predictive of poor virologic response, whereas mutations at codon 82 were associated with enhanced virologic response. Certain non-nucleoside reverse transcriptase inhibitor (NNRTI)-associated mutations, such as K103N, were rapidly selected in the absence of NRTIs. These data indicate that GSS may be a useful tool for selecting drug regimens in HIV-1-infected subjects to maximize virologic response and improve treatment outcomes.
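    The distinction between the two scoring methods can be sketched roughly as follows. This Python snippet shows one plausible way to derive a discrete and a continuous GSS for a regimen from Stanford-style penalty scores; the penalty values, cut-offs and linear partial-credit mapping are illustrative assumptions, not the study's exact scoring rules.

# Illustrative dGSS vs cGSS computation from Stanford HIVdb-style total
# penalty scores per drug (higher penalty = more resistant). Hypothetical.
penalty = {"ZDV": 15, "3TC": 60, "LPV/r": 5}

def dgss(penalties, cutoff=30):
    # Discrete GSS: each drug counts 1 if its penalty is below the cut-off
    # (treated as fully active), otherwise 0.
    return sum(1 if p < cutoff else 0 for p in penalties.values())

def cgss(penalties, low=15, high=60):
    # Continuous GSS: full credit below `low`, no credit above `high`,
    # linear partial credit in between (reflecting partial susceptibility).
    score = 0.0
    for p in penalties.values():
        if p <= low:
            score += 1.0
        elif p < high:
            score += (high - p) / (high - low)
    return score

print("dGSS:", dgss(penalty))            # 2 fully active drugs in this example
print("cGSS:", round(cgss(penalty), 2))  # partial credit would show for mid-range penalties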

    Dried Blood Spots for Viral Load Monitoring in Malawi: Feasible and Effective

    We evaluated the feasibility and effectiveness of dried blood spot (DBS) use for viral load (VL) monitoring, describing patient outcomes and programmatic challenges relevant to DBS implementation in sub-Saharan Africa. We recruited adult antiretroviral therapy (ART) patients from five district hospitals in Malawi. Eligibility reflected anticipated Ministry of Health VL monitoring criteria. Testing was conducted at a central laboratory. Virological failure was defined as >5000 copies/ml. Primary outcomes were program feasibility (timely result availability and patient receipt) and effectiveness (second-line therapy initiation). We enrolled 1,498 participants; 5.9% were failing at baseline. Median time from enrollment to receipt of results was 42 days; 79.6% of participants received results within 3 months. Among participants with confirmed elevated VL, 92.6% initiated second-line therapy; 90.7% were switched within 365 days of VL testing. Nearly one-third (30.8%) of participants with elevated baseline VL had a suppressed VL on confirmatory testing. Participants on ART for more than 4 years were more likely to be failing than participants on therapy for 1-4 years (RR 1.7, 95% CI 1.0-2.8); older participants were less likely to be failing (RR 0.95, 95% CI 0.92-0.98). There was no difference in likelihood of failure based on clinical symptoms (RR 1.17, 95% CI 0.65-2.11). DBS for VL monitoring is feasible and effective in real-world clinical settings. Centralized DBS testing may increase access to VL monitoring in remote settings. Programmatic outcomes are encouraging, especially the proportion of eligible participants switched to second-line therapy.
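    For readers unfamiliar with the relative risks quoted above, the short Python sketch below computes an RR and its 95% confidence interval from a 2x2 table on the log scale; the counts are hypothetical and are not the study's data.

# Relative risk and 95% CI from a 2x2 table (hypothetical counts).
import math

def relative_risk(a, b, c, d):
    # a = failing & exposed, b = not failing & exposed,
    # c = failing & unexposed, d = not failing & unexposed
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical failure counts: participants on ART >4 years vs 1-4 years.
rr, lo, hi = relative_risk(a=40, b=360, c=48, d=752)
print(f"RR {rr:.1f}, 95% CI {lo:.1f}-{hi:.1f}")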