
    Spatio-temporal modelling of weekly malaria incidence in children under 5 for early epidemic detection in Mozambique

    Malaria is a major cause of morbidity and mortality in Mozambique. We present a malaria early warning system (MEWS) for Mozambique informed by seven years of weekly case reports of malaria in children under 5 years of age from 142 districts. A spatio-temporal model was developed based on explanatory climatic variables to map exceedance probabilities, defined as the predictive probability that the relative risk of malaria incidence in a given district for a particular week will exceed a predefined threshold. Unlike most spatially discrete models, our approach accounts for the geographical extent of each district in the derivation of the spatial covariance structure, to allow for changes in administrative boundaries over time. The MEWS can thus be used to predict areas that may experience increases in malaria transmission beyond expected levels, early enough that prevention and response measures can be implemented before the onset of outbreaks. The framework we present is also applicable to other climate-sensitive diseases.
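
    The exceedance probabilities described above are, in general terms, the share of posterior predictive draws of a district-week relative risk that lie above the chosen threshold. The sketch below illustrates only that final step, with simulated draws standing in for the fitted spatio-temporal model's output; the threshold value and the distribution of draws are assumptions for illustration, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for posterior predictive draws of the relative risk for one
# district-week; a real MEWS would produce these from the fitted
# spatio-temporal model rather than a simple lognormal.
relative_risk_draws = rng.lognormal(mean=0.2, sigma=0.4, size=4000)

threshold = 1.5  # hypothetical epidemic threshold on the relative-risk scale

# Exceedance probability: fraction of posterior draws above the threshold.
exceedance_prob = np.mean(relative_risk_draws > threshold)
print(f"P(relative risk > {threshold}) = {exceedance_prob:.3f}")
```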

    Validity and reliability of the South African Triage Scale in prehospital providers

    Background The South African Triage Scale (SATS) is a validated in-hospital triage tool that has been innovatively adopted for use in the prehospital setting by Western Cape Government (WCG) Emergency Medical Services (EMS) in South Africa. The performance of SATS by EMS providers has not been formally assessed. This study sought to assess the validity and reliability of SATS when used by WCG EMS prehospital providers for single-patient triage. Methods This was a prospective, assessment-based validation study among WCG EMS providers conducted from March to September 2017 in Cape Town, South Africa. Participants completed an assessment containing 50 clinical vignettes, calculating the three SATS components: the triage early warning score (TEWS), discriminators (pre-defined clinical conditions), and a final SATS triage color. Responses were scored against gold-standard answers. Validity was assessed by calculating over- and under-triage rates relative to the gold standard. Inter-rater reliability was assessed by calculating agreement among EMS providers' responses. Results A total of 102 EMS providers completed the assessment. The final SATS triage color was accurately determined in 56.5%, under-triaged in 29.5%, and over-triaged in 13.1% of vignette responses. TEWS was calculated correctly in 42.6% of vignettes, under-calculated in 45.0%, and over-calculated in 10.9%. Discriminators were correctly identified in only 58.8% of vignettes. There was substantial inter-rater and gold-standard agreement for both the TEWS component and the final SATS color, but lower inter-rater agreement for the clinical discriminators. Conclusion This is the first assessment of SATS as used by EMS providers for prehospital triage. We found that SATS generally under-performed as a triage tool, mainly because of the clinical discriminators. We found good inter-rater reliability but poor validity. The under-triage rate of 30% was higher than previous reports from the in-hospital setting; the over-triage rate of 13% was acceptable. Further clinically based and qualitative studies are needed. Trial registration: not applicable.
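
    For readers who want to reproduce the validity calculation, over- and under-triage can be tallied by comparing each assigned SATS color with the gold-standard color on an ordinal urgency scale. The following is a minimal sketch using hypothetical vignette responses; the color ordering, function name, and example data are illustrative assumptions, not the study's scoring code.

```python
# Ordinal ranking of SATS colors from least to most urgent (assumed ordering).
SATS_ORDER = {"green": 0, "yellow": 1, "orange": 2, "red": 3}

def triage_rates(responses, gold_standard):
    """Return (correct, under_triage, over_triage) proportions.

    Under-triage: assigned color less urgent than the gold standard.
    Over-triage: assigned color more urgent than the gold standard.
    """
    correct = under = over = 0
    for resp, gold in zip(responses, gold_standard):
        diff = SATS_ORDER[resp] - SATS_ORDER[gold]
        if diff == 0:
            correct += 1
        elif diff < 0:
            under += 1
        else:
            over += 1
    n = len(gold_standard)
    return correct / n, under / n, over / n

# Hypothetical example: five vignette responses scored against the gold standard.
responses = ["red", "yellow", "orange", "green", "red"]
gold      = ["red", "orange", "orange", "yellow", "orange"]
print(triage_rates(responses, gold))  # (0.4, 0.4, 0.2)
```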

    A proposed framework for the systematic review and integrated assessment (SYRINA) of endocrine disrupting chemicals

    Background - The issue of endocrine disrupting chemicals (EDCs) is receiving wide attention from both the scientific and regulatory communities. Recent analyses of the EDC literature have been criticized for failing to use transparent and objective approaches to draw conclusions about the strength of evidence linking EDC exposures to adverse health or environmental outcomes. Systematic review methodologies are ideal for addressing this issue as they provide transparent and consistent approaches to study selection and evaluation. Objective methods are needed for integrating the multiple streams of evidence (epidemiology, wildlife, laboratory animal, in vitro, and in silico data) that are relevant in assessing EDCs. Methods - We have developed a framework for the systematic review and integrated assessment (SYRINA) of EDC studies. The framework was designed for use with the International Program on Chemical Safety (IPCS) and World Health Organization (WHO) definition of an EDC, which requires appraisal of evidence regarding 1) association between exposure and an adverse effect, 2) association between exposure and endocrine disrupting activity, and 3) a plausible link between the adverse effect and the endocrine disrupting activity. Results - Building from existing methodologies for evaluating and synthesizing evidence, the SYRINA framework includes seven steps: 1) Formulate the problem; 2) Develop the review protocol; 3) Identify relevant evidence; 4) Evaluate evidence from individual studies; 5) Summarize and evaluate each stream of evidence; 6) Integrate evidence across all streams; 7) Draw conclusions, make recommendations, and evaluate uncertainties. The proposed method is tailored to the IPCS/WHO definition of an EDC but offers flexibility for use in the context of other definitions of EDCs. Conclusions - When using the SYRINA framework, the overall objective is to provide the evidence base needed to support decision making, including any action to avoid/minimise potential adverse effects of exposures. This framework allows for the evaluation and synthesis of evidence from multiple evidence streams. Finally, a decision regarding regulatory action depends not only on the strength of evidence but also on the consequences of action/inaction; for example, limited or weak evidence may be sufficient to justify action if the consequences are serious or irreversible. The workshops that supported the writing of this manuscript were funded by the Swedish Foundation for Strategic Environmental Research "Mistra". LNV was funded by Award Number K22ES025811 from the National Institute of Environmental Health Sciences of the National Institutes of Health. TJW was funded by The Clarence Heller Foundation (A123547), the Passport Foundation, the Forsythia Foundation, the National Institute of Environmental Health Sciences (grants ES018135 and ES022841), and U.S. EPA STAR grants (RD83467801 and RD83543301). JT was funded by the Academy of Finland and the Sigrid Jusélius Foundation. UH was funded by the Danish EPA. KAK was funded by the Canada Research Chairs program, grant number 950-230607.

    Neighbors-based prediction of physical function after total knee arthroplasty

    The purpose of this study was to develop and test personalized predictions of functional recovery after Total Knee Arthroplasty (TKA) surgery, using a novel neighbors-based prediction approach. We used data from 397 patients with TKA to develop the prediction methodology and then tested the predictions in a temporally distinct sample of 202 patients. The Timed Up and Go (TUG) test was used to assess physical function. Neighbors-based predictions were generated by estimating an index patient's prognosis from the observed recovery data of previous similar patients (the index patient's "matches"). Matches were determined by an adaptation of predictive mean matching. Matching characteristics included preoperative TUG time, age, sex, and body mass index. The optimal number of matches was determined to be m = 35, based on low bias (−0.005 standard deviations), accurate coverage (50% of the realized observations fell within the 50% prediction interval), and acceptable precision (the average width of the 50% prediction interval was 2.33 s). Predictions were well calibrated in out-of-sample testing. These predictions have the potential to inform care decisions both before and after TKA surgery.
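
    As a rough illustration of the neighbors-based approach (predictive mean matching followed by an empirical prediction interval built from the matches' observed outcomes), the sketch below uses simulated data and a plain linear model. The matching characteristics, m = 35 default, and 50% interval come from the abstract, but all modelling details, variable names, and data are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Simulated historical cohort: preoperative TUG time (s), age, sex, BMI,
# and the observed postoperative TUG time at some follow-up point.
n = 400
X_hist = np.column_stack([
    rng.normal(10, 2, n),      # preoperative TUG time
    rng.normal(65, 8, n),      # age
    rng.integers(0, 2, n),     # sex (0/1)
    rng.normal(30, 5, n),      # BMI
])
y_hist = 0.6 * X_hist[:, 0] + 0.05 * X_hist[:, 1] + rng.normal(0, 1.5, n)

# Predictive mean matching: regress the outcome on the matching
# characteristics, then match patients on the fitted (predicted) values.
model = LinearRegression().fit(X_hist, y_hist)
hist_pred = model.predict(X_hist)

def neighbors_prediction(x_new, m=35, interval=(25, 75)):
    """Predict an index patient's outcome from the m most similar historical
    patients, where similarity is closeness of the predicted means."""
    pred_new = model.predict(x_new.reshape(1, -1))[0]
    idx = np.argsort(np.abs(hist_pred - pred_new))[:m]
    matches = y_hist[idx]
    lo, hi = np.percentile(matches, interval)  # empirical 50% prediction interval
    return np.median(matches), (lo, hi)

# Hypothetical index patient: TUG 11.0 s, age 70, sex 1, BMI 32.
point, pi50 = neighbors_prediction(np.array([11.0, 70.0, 1.0, 32.0]))
print(point, pi50)
```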

    Differences in the Epstein-Barr Virus gp350 IgA Antibody Response Are Associated With Increased Risk for Coinfection With a Second Strain of Epstein-Barr Virus

    BACKGROUND: The Epstein-Barr virus (EBV) glycoprotein gp350 has been proposed as a candidate antigen for an EBV vaccine. However, the proposed formulations of these vaccines have not taken into account the presence of 2 unique EBV strains (EBV-1 and EBV-2) in areas with a high incidence of the EBV-associated cancer Burkitt lymphoma. METHODS: In this study, we analyzed the kinetics of EBV-1 and EBV-2 infection in an asymptomatic infant cohort from Kisumu, Kenya. We also analyzed the kinetics of the antibody response against 5 EBV antigens: gp350 (IgG and IgA), VCA (IgG), EBNA-1 (IgG), EAd (IgG), and Zta (IgG). RESULTS: We observed a high frequency of coinfection with both EBV types over time; the only observable defect in the antibody response of coinfected infants was a significantly lower level of anti-gp350 IgA at peak response. Gp350 IgA levels were also significantly lower in coinfected infants 2.5 months postinfection and at the time of coinfection. CONCLUSIONS: These results suggest that anti-gp350 IgA antibodies may be important for sterilizing immunity against secondary infection. These findings have implications for the development of an efficacious EBV vaccine to prevent both EBV-1 and EBV-2 infection in a population at high risk for Burkitt lymphoma.

    Comparing neoadjuvant chemotherapy with or without radiation therapy for pancreatic ductal adenocarcinoma: National Cancer Database cohort analysis

    Background: Neoadjuvant treatment is important for improving the rate of R0 surgical resection and overall survival in patients with pancreatic ductal adenocarcinoma (PDAC). However, the true efficacy of radiotherapy (RT) as part of neoadjuvant treatment of PDAC is uncertain. This retrospective study evaluated the outcome of neoadjuvant RT in the treatment of PDAC. Methods: Information on patients with PDAC who underwent neoadjuvant chemotherapy (NAC) and pancreatectomy between 2010 and 2016 was collected from the National Cancer Database. Short- and long-term outcomes were compared between patients who received neoadjuvant chemoradiotherapy (NACRT) and those who received NAC alone. Results: The study included 6936 patients, of whom 3185 received NACRT and 3751 received NAC. The groups showed no difference in overall survival (NACRT 16.1 months versus NAC 17.4 months; P = 0.054). NACRT was associated with more frequent margin-negative resection (86.1 versus 80.0 per cent; P < 0.001) but higher 90-day mortality than NAC (6.4 versus 3.6 per cent; P < 0.001). The odds of 90-day mortality were higher in the radiotherapy group (odds ratio 1.81; P < 0.001), even after adjusting for significant covariates. Patients who received NACRT received single-agent chemotherapy more often than those who received NAC (31.5 versus 10.7 per cent; P < 0.001). Conclusion: This study failed to show a survival benefit for NACRT over NAC alone, despite its association with margin-negative resection. The significantly higher mortality with NACRT warrants further investigation into its efficacy in the treatment of pancreatic cancer.
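
    An adjusted odds ratio of this kind is typically obtained from a multivariable logistic regression of 90-day mortality on treatment group plus covariates. The sketch below shows that general recipe on simulated data with a single hypothetical covariate (age); it is not the study's model, dataset, or covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 2000

# Simulated cohort: treatment indicator (1 = NACRT, 0 = NAC) and age.
df = pd.DataFrame({
    "nacrt": rng.integers(0, 2, n),
    "age": rng.normal(65, 10, n),
})

# Simulate 90-day death with a built-in treatment effect for illustration.
logit_p = -3.5 + 0.6 * df["nacrt"] + 0.02 * (df["age"] - 65)
df["death90"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of 90-day mortality on treatment, adjusted for age.
fit = smf.logit("death90 ~ nacrt + age", data=df).fit(disp=0)
print(np.exp(fit.params["nacrt"]))          # adjusted odds ratio for NACRT
print(np.exp(fit.conf_int().loc["nacrt"]))  # 95% CI on the odds-ratio scale
```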

    A high force of Plasmodium vivax blood-stage infection drives the rapid acquisition of immunity in Papua New Guinean children

    When both parasite species are co-endemic, Plasmodium vivax incidence peaks in younger children than P. falciparum incidence. To identify differences in the number of blood-stage infections of these species and their potential link to the acquisition of immunity, we estimated the molecular force of blood-stage infection of P. vivax (molFOB, i.e. the number of genetically distinct blood-stage infections acquired over time) and compared it to previously reported values for P. falciparum. P. vivax molFOB was estimated by high-resolution genotyping of parasites in samples collected over 16 months in a cohort of 264 Papua New Guinean children living in an area highly endemic for P. falciparum and P. vivax. In this cohort, P. vivax episodes decreased three-fold over the age range of 1-4.5 years. On average, children acquired 14.0 new P. vivax blood-stage clones/child/year-at-risk. While the incidence of clinical P. vivax illness was strongly associated with molFOB (incidence rate ratio (IRR) = 1.99, 95% confidence interval (CI95) [1.80, 2.19]), molFOB did not change with age. The incidence of P. vivax decreased faster with age in children with high exposure (IRR = 0.49, CI95 [0.38, 0.64], p < 0.001) than in those with low exposure (IRR = 0.63, CI95 [0.43, 0.93], p = 0.02). P. vivax molFOB is considerably higher than P. falciparum molFOB (5.5 clones/child/year-at-risk). The high number of P. vivax clones that infect children in early childhood contributes to the rapid acquisition of immunity against clinical P. vivax malaria.
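
    Incidence rate ratios such as the IRR for molFOB are commonly estimated with Poisson regression of episode counts on the exposure, using log person-time at risk as an offset. The sketch below illustrates that recipe on simulated data; the variable names, effect sizes, and data are assumptions, not the study's cohort or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 264

# Simulated child-level data: new blood-stage clones acquired (molFOB),
# person-time at risk, and counts of clinical P. vivax episodes.
df = pd.DataFrame({
    "mol_fob": rng.poisson(14, n),
    "years_at_risk": rng.uniform(0.8, 1.3, n),
})
rate = 0.1 * np.exp(0.05 * df["mol_fob"])
df["episodes"] = rng.poisson(rate * df["years_at_risk"])

# Poisson regression with a log person-time offset; exp(coefficient) is the
# incidence rate ratio per additional clone.
fit = smf.poisson(
    "episodes ~ mol_fob", data=df, offset=np.log(df["years_at_risk"])
).fit(disp=0)
print(np.exp(fit.params["mol_fob"]))  # incidence rate ratio
```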

    Ductal Dilatation of ≥5 mm in Intraductal Papillary Mucinous Neoplasm Should Trigger the Consideration for Pancreatectomy: A Meta-Analysis and Systematic Review of Resected Cases

    Intraductal papillary mucinous neoplasms (IPMNs) are common but difficult to manage, since accurate tools for diagnosing malignancy are unavailable. This study tests the diagnostic value of the main pancreatic duct (MPD) diameter for detecting IPMN malignancy using a meta-analysis of published data on resected IPMNs. Articles identified in a comprehensive literature search were included if they reported malignancy cases (high-grade dysplasia (HGD) and invasive carcinoma (IC)) and MPD diameter, so that two MPD cut-offs could be created. The sensitivity, specificity, and odds ratios of the two cut-offs for predicting malignancy were calculated. A review of 1493 articles yielded 20 retrospective studies with 3982 resected cases. A cut-off of ≥5 mm is more sensitive than the ≥10 mm cut-off, with pooled sensitivities of 72.20% and 75.60% for classification of HGD and IC, respectively. Both MPD cut-offs of ≥5 mm and ≥10 mm were associated with malignancy (OR = 4.36 (95% CI: 2.82, 6.75) vs. OR = 3.18 (95% CI: 2.25, 4.49), respectively). The odds of HGD and IC for patients with MPD ≥5 mm were 5.66 (95% CI: 3.02, 10.62) and 7.40 (95% CI: 4.95, 11.06), respectively. The odds ratios of HGD and IC for the MPD ≥10 mm cut-off were 4.36 (95% CI: 3.20, 5.93) and 4.75 (95% CI: 2.39, 9.45), respectively. An IPMN with an MPD of ≥5 mm may well be malignant, and in selected IPMN patients pancreatectomy should be considered when the MPD is ≥5 mm.
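
    Pooled odds ratios like those reported here are usually obtained by inverse-variance weighting of the study-level log odds ratios. The sketch below shows a fixed-effect version of that calculation on hypothetical study estimates; it is not the review's extracted data, and a random-effects model would add a between-study variance term to the weights.

```python
import numpy as np

# Hypothetical per-study odds ratios and 95% confidence intervals.
ors = np.array([3.1, 5.2, 4.0])
ci_low = np.array([1.8, 2.9, 2.2])
ci_high = np.array([5.4, 9.3, 7.3])

# Inverse-variance (fixed-effect) pooling on the log-OR scale.
log_or = np.log(ors)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from the CI width
w = 1 / se**2
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

print(np.exp(pooled))                                        # pooled OR
print(np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se))  # 95% CI
```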