
    Glucosylsphingosine Is a Highly Sensitive and Specific Biomarker for Primary Diagnostic and Follow-Up Monitoring in Gaucher Disease in a Non-Jewish, Caucasian Cohort of Gaucher Disease Patients

    Gaucher disease (GD) is the most common lysosomal storage disorder (LSD). It is caused by a deficiency of β-glucocerebrosidase, which leads to an accumulation of glucosylceramide. Standard diagnostic procedures include measurement of enzyme activity, genetic testing, and analysis of chitotriosidase and CCL18/PARC as biomarkers. Although chitotriosidase is the best-established biomarker in GD, it is not specific for the disease. Furthermore, it may be falsely negative in a significant percentage of GD patients owing to mutations in the chitotriosidase gene, and it reflects changes in the course of the disease only belatedly. This underscores the need for a reliable biomarker, especially for monitoring the disease and the impact of potential treatments. Here, we evaluated the sensitivity and specificity of the previously reported biomarker Glucosylsphingosine against different control groups (healthy controls vs. GD carriers vs. other LSDs). Only GD patients displayed Glucosylsphingosine levels above 12 ng/ml, whereas the comparison control groups showed concentrations below this pathological cut-off, verifying the specificity of Glucosylsphingosine as a biomarker for GD. In addition, we evaluated the biomarker before and during enzyme replacement therapy (ERT) in 19 patients, demonstrating a decrease in Glucosylsphingosine over time, with the most pronounced reduction within the first 6 months of ERT. Furthermore, our data reveal a correlation between the clinical consequences of specific mutations and Glucosylsphingosine levels. In summary, Glucosylsphingosine is a very promising, reliable and specific biomarker for GD.
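
    A cut-off-based evaluation of this kind can be reproduced in a few lines. The sketch below, in Python, uses hypothetical concentration values together with the 12 ng/ml cut-off quoted above to compute sensitivity and specificity; the data and variable names are illustrative only, not the study's.

        import numpy as np

        # Hypothetical plasma Glucosylsphingosine concentrations (ng/ml); not study data.
        gd_patients = np.array([45.0, 120.0, 18.5, 230.0, 64.0])
        controls    = np.array([1.2, 0.8, 3.5, 2.1, 0.6, 4.9])  # healthy, carriers, other LSDs

        CUTOFF = 12.0  # pathological cut-off reported in the abstract (ng/ml)

        sensitivity = np.mean(gd_patients > CUTOFF)   # fraction of patients above the cut-off
        specificity = np.mean(controls <= CUTOFF)     # fraction of controls at or below it

        print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")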

    What do we know and when do we know it?

    Two essential aspects of virtual screening are considered: experimental design and performance metrics. In the design of any retrospective virtual screen, choices have to be made as to the purpose of the exercise. Is the goal to compare methods? Is the interest in a particular type of target or in all targets? Are we simulating a ‘real-world’ setting, or teasing out the distinguishing features of a method? What are the confidence limits for the results? What should be reported in a publication? In particular, what criteria should be used to decide between different performance metrics? Comparing the field of molecular modeling to other endeavors, such as medical statistics, criminology, or computer hardware evaluation, indicates some clear directions. Taken together, these suggest the modeling field has a long way to go to provide effective assessment of its approaches, either to itself or to a broader audience, but that there are no technical reasons why progress cannot be made.
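
    One of the questions raised above, the confidence limits on a retrospective screen's result, can be approached with a simple bootstrap. The Python sketch below is an illustrative approach, not one prescribed by the article: it resamples a labelled score list with replacement to obtain a percentile confidence interval for ROC AUC, using made-up scores and labels.

        import numpy as np

        def roc_auc(scores, labels):
            """Rank-based AUC (Mann-Whitney); assumes ties are negligible."""
            order = np.argsort(scores)
            ranks = np.empty(len(scores))
            ranks[order] = np.arange(1, len(scores) + 1)
            pos = labels == 1
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        rng = np.random.default_rng(0)
        # Toy data: 50 actives scored higher on average than 950 decoys.
        labels = np.r_[np.ones(50), np.zeros(950)].astype(int)
        scores = np.r_[rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 950)]

        boot = []
        for _ in range(2000):
            idx = rng.integers(0, len(labels), len(labels))  # resample with replacement
            if labels[idx].sum() in (0, len(idx)):           # need both classes present
                continue
            boot.append(roc_auc(scores[idx], labels[idx]))

        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"AUC = {roc_auc(scores, labels):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")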

    Bias in trials comparing paired continuous tests can cause researchers to choose the wrong screening modality

    Background: To compare the diagnostic accuracy of two continuous screening tests, a common approach is to test the difference between the areas under the receiver operating characteristic (ROC) curves. After study participants are screened with both screening tests, the disease status is determined as accurately as possible, either by an invasive, sensitive and specific secondary test, or by a less invasive, but less sensitive approach. For most participants, disease status is approximated through the less sensitive approach. The invasive test must be limited to the fraction of the participants whose results on either or both screening tests exceed a threshold of suspicion, or who develop signs and symptoms of the disease after the initial screening tests. The limitations of this study design lead to a bias in the ROC curves we call paired screening trial bias. This bias reflects the synergistic effects of inappropriate reference standard bias, differential verification bias, and partial verification bias. The absence of a gold reference standard leads to inappropriate reference standard bias. When different reference standards are used to ascertain disease status, it creates differential verification bias. When only suspicious screening test scores trigger a sensitive and specific secondary test, the result is a form of partial verification bias.
    Methods: For paired screening tests with bivariate normally distributed scores, we give formulae and programs to quantify the effect of paired screening trial bias on a paired comparison of areas under the curves. We fix the prevalence of disease and the chance that a diseased subject manifests signs and symptoms. We derive the formulas for true sensitivity and specificity, and those for the sensitivity and specificity observed by the study investigator.
    Results: The observed area under the ROC curves is quite different from the true area under the ROC curves. The typical direction of the bias is a strong inflation in sensitivity, paired with a concomitant slight deflation of specificity.
    Conclusion: In paired trials of screening tests, when area under the ROC curve is used as the metric, bias may lead researchers to make the wrong decision as to which screening test is better.
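
    A small simulation makes the mechanism concrete. The Python sketch below is an illustrative model, not the authors' formulae: it draws correlated (bivariate normal) screening scores, verifies disease status with the gold standard only for suspicious or symptomatic participants, assigns everyone else the result of an imperfect reference, and compares the true and observed AUC of one screening test. All parameter values are assumptions chosen for illustration.

        import numpy as np

        def roc_auc(scores, labels):
            """Rank-based AUC (Mann-Whitney)."""
            order = np.argsort(scores)
            ranks = np.empty(len(scores))
            ranks[order] = np.arange(1, len(scores) + 1)
            pos = labels == 1
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        rng = np.random.default_rng(1)
        n, prevalence, p_symptoms = 100_000, 0.05, 0.3
        disease = rng.random(n) < prevalence

        # Paired screening scores: correlated, shifted upward in diseased participants.
        cov = [[1.0, 0.5], [0.5, 1.0]]
        scores = rng.multivariate_normal([0.0, 0.0], cov, n) + disease[:, None] * 1.0

        # Gold standard only for suspicious scores or symptomatic diseased participants.
        suspicious = (scores > 1.5).any(axis=1)
        symptomatic = disease & (rng.random(n) < p_symptoms)
        verified = suspicious | symptomatic

        # Unverified participants get an imperfect reference that misses 40% of disease.
        imperfect = disease & (rng.random(n) < 0.6)
        observed = np.where(verified, disease, imperfect)

        print(f"true AUC     = {roc_auc(scores[:, 0], disease.astype(int)):.3f}")
        print(f"observed AUC = {roc_auc(scores[:, 0], observed.astype(int)):.3f}")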

    Investigating portable fluorescent microscopy (CyScope®) as an alternative rapid diagnostic test for malaria in children and women of child-bearing age

    Background: Prompt and correct diagnosis of malaria is crucial for accurate epidemiological assessment and better case management, and while the gold standard of light microscopy is often available, it requires both expertise and time. Portable fluorescent microscopy using the CyScope® offers a potentially quicker, easier and more field-applicable alternative. This article reports on the strengths and limitations of this methodology and its diagnostic performance in cross-sectional surveys of young children and women of child-bearing age.
    Methods: 552 adults (99% women of child-bearing age) and 980 children (99% ≤ 5 years of age) from rural and peri-urban regions of Uganda were examined for malaria using light microscopy (Giemsa stain), a lateral-flow test (Paracheck-Pf®) and the CyScope®. Results from the surveys were used to calculate diagnostic performance (sensitivity and specificity) and to perform receiver operating characteristic (ROC) analyses, using light microscopy as the gold standard.
    Results: Fluorescent microscopy (qualitative reads) showed reduced specificity (<40%), resulting in higher community prevalence levels than those reported by light microscopy, particularly in adults (+180% in adults and +20% in children). Diagnostic sensitivity was 92.1% in adults and 86.7% in children, with an area under the ROC curve of 0.63. Importantly, optimum performance was achieved for higher parasitaemia (>400 parasites/μL blood): sensitivity of 64.2% and specificity of 86.0%. Overall, the diagnostic performance of the CyScope® was found to be inferior to that of Paracheck-Pf®.
    Discussion: Fluorescent microscopy using the CyScope® is certainly a field-applicable and relatively affordable solution for malaria diagnosis, especially in areas where electrical supplies may be lacking. While it is unlikely to miss higher parasitaemia, its application in cross-sectional community-based studies leads to many false positives (i.e. small fluorescent bodies of presently unknown origin mistaken for malaria parasites). Without recourse to other technologies, arbitration of these false positives is presently equivocal, which could ultimately lead to over-treatment; something that should be further explored in future investigations if the CyScope® is to be more widely implemented.
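
    Diagnostic performance figures of this kind come from a simple cross-tabulation against the gold standard. The short Python sketch below uses toy 2x2 counts, not the survey data, to show how sensitivity, specificity, and the apparent inflation of prevalence by a low-specificity test are computed.

        # Hypothetical 2x2 counts versus light microscopy (gold standard); not study data.
        tp, fn = 92, 8     # gold-standard positives: detected / missed by the index test
        fp, tn = 120, 380  # gold-standard negatives: falsely positive / correctly negative

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        prevalence_gold  = (tp + fn) / (tp + fn + fp + tn)
        prevalence_index = (tp + fp) / (tp + fn + fp + tn)  # inflated when specificity is low

        print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
        print(f"prevalence: gold standard = {prevalence_gold:.1%}, index test = {prevalence_index:.1%}")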

    A statistical framework to evaluate virtual screening

    Background: The receiver operating characteristic (ROC) curve is widely used to evaluate virtual screening (VS) studies. However, the method fails to address the "early recognition" problem specific to VS. Although many other metrics that emphasize "early recognition", such as RIE, BEDROC, and pROC, have been proposed, there are no rigorous statistical guidelines for determining the thresholds and performing significance tests. Also, no comparisons have been made between these metrics under a statistical framework to better understand their performance.
    Results: We have proposed a statistical framework to evaluate VS studies in which the threshold for deciding whether a ranking method is better than random ranking can be derived by bootstrap simulations, and two ranking methods can be compared by a permutation test. We found that different metrics emphasize "early recognition" differently. BEDROC and RIE are two statistically equivalent metrics. Our newly proposed metric, SLR, is superior to pROC. Through extensive simulations, we observed a "seesaw effect": overemphasizing early recognition reduces the statistical power of a metric to detect true early recognitions.
    Conclusion: The statistical framework developed and tested by us is applicable to any other metric as well, even if its exact distribution is unknown. Under this framework, a threshold can easily be selected according to a pre-specified type I error rate, and statistical comparisons between two ranking methods become possible. The theoretical null distribution of the SLR metric is available, so the threshold for SLR can be determined exactly without resorting to bootstrap simulations, which makes it easy to use in practical virtual screening studies.
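
    The paired permutation test described here can be sketched in a few lines of Python. The example below is illustrative, not the authors' code, and uses plain ROC AUC rather than SLR or BEDROC: two ranking methods scored on the same compounds are compared by randomly swapping each compound's pair of scores to build the null distribution of the AUC difference.

        import numpy as np

        def roc_auc(scores, labels):
            """Rank-based AUC (Mann-Whitney)."""
            order = np.argsort(scores)
            ranks = np.empty(len(scores))
            ranks[order] = np.arange(1, len(scores) + 1)
            pos = labels == 1
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        rng = np.random.default_rng(2)
        labels = np.r_[np.ones(30), np.zeros(970)].astype(int)             # 30 actives, 970 decoys
        method_a = np.r_[rng.normal(1.2, 1.0, 30), rng.normal(0, 1, 970)]  # toy scores, method A
        method_b = np.r_[rng.normal(0.8, 1.0, 30), rng.normal(0, 1, 970)]  # toy scores, method B

        observed = roc_auc(method_a, labels) - roc_auc(method_b, labels)

        null = []
        for _ in range(5000):
            swap = rng.random(len(labels)) < 0.5           # swap the pair for ~half the compounds
            a = np.where(swap, method_b, method_a)
            b = np.where(swap, method_a, method_b)
            null.append(roc_auc(a, labels) - roc_auc(b, labels))

        p_value = np.mean(np.abs(null) >= abs(observed))   # two-sided permutation p-value
        print(f"delta AUC = {observed:.3f}, p = {p_value:.4f}")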

    The Genetic Interpretation of Area under the ROC Curve in Genomic Profiling

    Genome-wide association studies in human populations have facilitated the creation of genomic profiles that combine the effects of many associated genetic variants to predict risk of disease. The area under the receiver operating characteristic (ROC) curve is a well-established measure of how well a test correctly classifies diseased and non-diseased individuals. We use quantitative genetics theory to provide insight into the genetic interpretation of the area under the ROC curve (AUC) when the test classifier is a predictor of genetic risk. Even when the proportion of genetic variance explained by the test is 100%, there is a maximum value of AUC that depends on the genetic epidemiology of the disease, i.e. either the sibling recurrence risk or the heritability and disease prevalence. We derive an equation relating maximum AUC to heritability and disease prevalence. The expression can be reversed to calculate the proportion of genetic variance explained given AUC, disease prevalence, and heritability. We use published estimates of disease prevalence and sibling recurrence risk for 17 complex genetic diseases to calculate the proportion of genetic variance that a test must explain to achieve AUC = 0.75; this varied from 0.10 to 0.74. We provide a genetic interpretation of AUC for use with predictors of genetic risk based on genomic profiles, together with a strategy to estimate the proportion of genetic variance explained on the liability scale from estimates of AUC, disease prevalence, and heritability (or sibling recurrence risk), available as an online calculator.
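
    The ceiling on AUC under the liability threshold model can be illustrated without reproducing the authors' closed-form equation. The Python sketch below estimates, by Monte Carlo, the maximum achievable AUC for a predictor that captures all of the genetic variance, given assumed values of heritability and disease prevalence; the parameter values are examples, not the paper's results.

        import numpy as np
        from scipy.stats import norm

        def max_auc(h2, prevalence, n=500_000, seed=3):
            """Monte Carlo estimate of maximum AUC when the predictor explains all genetic variance.

            Liability = genetic value + environment, with total variance 1;
            cases are individuals whose liability exceeds the threshold set by prevalence.
            """
            rng = np.random.default_rng(seed)
            g = rng.normal(0.0, np.sqrt(h2), n)              # genetic values, variance h2
            liability = g + rng.normal(0.0, np.sqrt(1 - h2), n)
            case = liability > norm.ppf(1 - prevalence)      # liability threshold model

            # Rank-based AUC of the genetic value as the classifier.
            order = np.argsort(g)
            ranks = np.empty(n)
            ranks[order] = np.arange(1, n + 1)
            n_pos, n_neg = case.sum(), (~case).sum()
            return (ranks[case].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        for h2, k in [(0.3, 0.01), (0.5, 0.01), (0.8, 0.10)]:  # example heritabilities / prevalences
            print(f"h2 = {h2}, prevalence = {k}: maximum AUC ~ {max_auc(h2, k):.3f}")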