
    Evaluation of Urine CCA Assays for Detection of Schistosoma mansoni Infection in Western Kenya

    Although accurate assessment of the prevalence of Schistosoma mansoni is important for the design and evaluation of control programs, the most widely used diagnostic tools are limited by suboptimal sensitivity, slow turnaround time, or inability to distinguish current from former infections. Recently, two tests that detect circulating cathodic antigen (CCA) in the urine of patients with schistosomiasis became commercially available. As part of a larger study on schistosomiasis prevalence in young children, we evaluated the performance and diagnostic accuracy of these tests: the carbon test strip designed for use in the laboratory and the cassette-format test intended for field use. In comparison to six Kato-Katz exams, the carbon and cassette CCA tests had sensitivities of 88.4% and 94.2% and specificities of 70.9% and 59.4%, respectively. However, because of the known limitations of the Kato-Katz assay, we also utilized latent class analysis (LCA) incorporating the CCA, Kato-Katz, and schistosome-specific antibody results to determine their sensitivities and specificities. By LCA, the laboratory-based CCA test had a sensitivity of 91.7% and a specificity of 89.4%, while the cassette test had a sensitivity of 96.3% and a specificity of 74.7%. The intensity of the reaction in both urine CCA tests reflected stool egg burden, and their performance was not affected by the presence of soil-transmitted helminth infections. Our results suggest that urine-based assays for CCA may be valuable in screening for S. mansoni infections.
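
    For readers unfamiliar with the figures quoted above, sensitivity and specificity against a reference standard such as Kato-Katz reduce to simple ratios of confusion-matrix counts. The sketch below shows that calculation in Python; the counts are placeholders for illustration, not data from the study.

    ```python
    # Minimal sketch: sensitivity and specificity of a urine CCA test scored
    # against Kato-Katz stool microscopy as the reference standard.
    # The counts below are placeholders, not values from the study.

    def sensitivity_specificity(tp, fp, fn, tn):
        """Return (sensitivity, specificity) from 2x2 confusion-matrix counts."""
        sensitivity = tp / (tp + fn)   # true positives among reference-positives
        specificity = tn / (tn + fp)   # true negatives among reference-negatives
        return sensitivity, specificity

    # Hypothetical counts: CCA result cross-tabulated against Kato-Katz result
    tp, fp, fn, tn = 76, 39, 10, 95   # placeholder values for illustration only
    sens, spec = sensitivity_specificity(tp, fp, fn, tn)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
    ```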

    Use of latent class models to accommodate inter-laboratory variation in assessing genetic polymorphisms associated with disease risk

    Background: Researchers wanting to study the association of genetic factors with disease may encounter variability in the laboratory methods used to establish genotypes or other traits. Such variability leads to uncertainty in determining the strength of a genotype as a risk factor. This problem is illustrated using data from a case-control study of cervical cancer in which some subjects were independently assessed by different laboratories for the presence of a genetic polymorphism. Inter-laboratory agreement was only moderate, which led to a very wide range of empirical odds ratios (ORs) with the disease, depending on how disagreements were treated. This paper illustrates the use of latent class models (LCMs) to estimate the OR while taking laboratory accuracy into account. Possible LCMs are characterised in terms of the number of laboratory measurements available and whether their error rates are assumed to be differential or non-differential by disease status and/or laboratory.
    Results: The LCM results give maximum likelihood estimates of the laboratory accuracy rates and of the OR between the genetic variable and disease, and avoid the ambiguities of the empirical results. Having allowed for possible measurement error in the exposure, the LCM estimates of exposure-disease associations are typically stronger than their empirical equivalents. The LCM estimates also exploit all the available data and hence have relatively low standard errors.
    Conclusion: Our approach provides a way to evaluate the association of a polymorphism with disease while taking laboratory measurement error into account. Ambiguities in the empirical data arising from disagreements between laboratories are avoided, and the estimated polymorphism-disease association is typically enhanced.
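
    As a rough illustration of the latent class idea used in both abstracts, the sketch below fits a generic two-class latent class model to several imperfect binary measurements by EM under conditional independence. It is not the authors' model (which also conditions on disease status and allows differential error rates); all names, parameters, and the simulated data are assumptions made for this example.

    ```python
    import numpy as np

    # Minimal sketch of a two-class latent class model for binary measurements,
    # fit by EM under conditional independence. Each row of `data` holds the
    # 0/1 results of several imperfect assays on one subject; the latent class
    # is the subject's true, unobserved status. Illustrative only.

    def fit_lca(data, n_iter=500, tol=1e-8, seed=0):
        n, J = data.shape
        rng = np.random.default_rng(seed)
        pi = 0.5                              # prevalence of latent class 1
        p = rng.uniform(0.6, 0.9, size=J)     # P(test=1 | class 1), ~sensitivity
        q = rng.uniform(0.05, 0.3, size=J)    # P(test=1 | class 0), ~(1 - specificity)
        ll_old = -np.inf
        for _ in range(n_iter):
            # E-step: posterior probability each subject belongs to class 1
            like1 = pi * np.prod(p**data * (1 - p)**(1 - data), axis=1)
            like0 = (1 - pi) * np.prod(q**data * (1 - q)**(1 - data), axis=1)
            resp = like1 / (like1 + like0)
            # M-step: update prevalence and conditional response probabilities
            pi = resp.mean()
            p = (resp @ data) / resp.sum()
            q = ((1 - resp) @ data) / (1 - resp).sum()
            ll = np.log(like1 + like0).sum()
            if ll - ll_old < tol:
                break
            ll_old = ll
        return pi, p, q

    # Hypothetical usage with simulated assay results (placeholder data)
    rng = np.random.default_rng(1)
    truth = rng.random(300) < 0.4
    sim = np.column_stack([
        rng.random(300) < np.where(truth, se, 1 - sp)
        for se, sp in [(0.92, 0.88), (0.85, 0.95), (0.75, 0.97)]
    ]).astype(int)
    prev, sens, one_minus_spec = fit_lca(sim)
    print(f"estimated prevalence = {prev:.2f}")
    print("estimated sensitivities:", np.round(sens, 2))
    print("estimated specificities:", np.round(1 - one_minus_spec, 2))
    ```

    The estimated accuracy rates could then feed into an error-corrected odds ratio, which is the step the paper's LCMs carry out jointly with the class assignment.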