150 research outputs found

    Advantages of the nested case-control design in diagnostic research

    Abstract Background Despite its benefits, the nested case-control design is rarely applied in diagnostic research. We aim to show the advantages of this design for diagnostic accuracy studies. Methods We used data from a full cross-sectional diagnostic study comprising a cohort of 1295 consecutive patients selected on the basis of suspected deep vein thrombosis (DVT). We drew nested case-control samples from the full study population with case:control ratios of 1:1, 1:2, 1:3 and 1:4 (100 samples per ratio). We calculated diagnostic accuracy estimates for two tests used to detect DVT in clinical practice. Results Estimates of diagnostic accuracy in the nested case-control samples were very similar to those in the full study population. For example, for each case:control ratio, the positive predictive value of the D-dimer test was 0.30 in the full study population and 0.30 in the nested case-control samples (median of the 100 samples). As expected, variability of the estimates decreased with increasing sample size. Conclusion Our findings support the view that the nested case-control study is a valid and efficient design for diagnostic studies and should be (re)appraised in current guidelines on diagnostic accuracy research.
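The sampling scheme the abstract describes can be sketched in a few lines. This is a minimal illustration with made-up counts (not the DVT data): sensitivity and specificity are directly estimable from a case-control sample, while the positive predictive value must be re-weighted by the control sampling fraction to refer back to the full cohort.

```python
import random

def nested_case_control(cohort, ratio, rng=random):
    """Draw all cases plus `ratio` sampled controls per case.

    `cohort` is a list of (test_positive, diseased) tuples.
    Returns the cases, the sampled controls, and the control
    sampling fraction needed to re-weight predictive values.
    """
    cases = [s for s in cohort if s[1]]
    controls = [s for s in cohort if not s[1]]
    sampled = rng.sample(controls, ratio * len(cases))
    return cases, sampled, len(sampled) / len(controls)

def accuracy(cases, controls, sampling_fraction):
    """Sensitivity and specificity are unbiased under case-control
    sampling; PPV uses the sampling fraction to scale the false
    positives back up to the full cohort."""
    tp = sum(1 for test_pos, _ in cases if test_pos)
    fn = len(cases) - tp
    fp = sum(1 for test_pos, _ in controls if test_pos)
    tn = len(controls) - fp
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp / sampling_fraction)
    return sensitivity, specificity, ppv
```

With a synthetic cohort of 200 diseased and 800 non-diseased subjects and a 1:4 ratio, every control is sampled and the re-weighted PPV equals the full-cohort PPV exactly; at smaller ratios it is recovered in expectation.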

    Diagnostic Accuracy of Molecular Amplification Tests for Human African Trypanosomiasis—Systematic Review

    A range of molecular amplification techniques has been developed for the diagnosis of HAT, with polymerase chain reaction (PCR) at the forefront. As laboratory strengthening in endemic areas increases, the applicability of molecular tests is expected to increase as well. However, careful evaluation of these tests against the current reference standard, microscopy, must precede implementation. Therefore, we have investigated the published diagnostic accuracy of molecular amplification tests for HAT compared to microscopy for both initial diagnosis and disease staging.

    Approaching the diagnosis of growth-restricted neonates: a cohort study

    Abstract Background The consequences of in utero growth restriction have attracted scholarly attention for the past two decades. Nevertheless, the diagnosis of growth-restricted neonates remains an unresolved issue. The aim of this study is to evaluate the performance of simple, common indicators of nutritional status used to identify growth-restricted neonates. Methods In a cohort of 418 consecutively born term and near-term neonates, four widely used anthropometric indices of body proportionality and subcutaneous fat accretion were applied, singly and in combination, as diagnostic markers for the detection of growth-restricted babies. The concordance of the indices was assessed in terms of positive and negative percent agreement and Cohen's kappa. Results Agreement between the anthropometric indices was overall poor, with positive percent agreement ranging from a low of 27.9% to a high of 62.5% and kappa ranging between 0.19 and 0.58. Moreover, 6% to 32% of babies with abnormal values in just one index were apparently well-grown, and the median birth weight centile of babies with abnormal values of either of two indices was as high as the 46th centile for gestational age (95% CI 35.5 to 60.4 and 29.8 to 63.9, respectively). By contrast, the combination of anthropometric indices distinguished better between apparently and not apparently well-grown babies: the median birth weight centile of babies with abnormal values in two or more indices was the 11th centile for gestational age (95% CI 6.3 to 16.3). Conclusions Clinical assessment and anthropometric indices in combination can define a reference standard with better performance than the same indices used in isolation. This approach offers an easy-to-use tool for bedside diagnosis of in utero growth restriction.
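The concordance measures named in the abstract (positive and negative percent agreement, Cohen's kappa) all derive from one 2x2 cross-classification of two binary indices. A minimal sketch, with the cell labels (a, b, c, d) as an assumed convention rather than taken from the paper:

```python
def concordance(a, b, c, d):
    """Concordance of two binary indices from a 2x2 table:
    a = abnormal on both, b = abnormal on index 1 only,
    c = abnormal on index 2 only, d = normal on both."""
    n = a + b + c + d
    ppa = 2 * a / (2 * a + b + c)   # positive percent agreement
    npa = 2 * d / (2 * d + b + c)   # negative percent agreement
    po = (a + d) / n                # observed agreement
    # agreement expected by chance, from the marginal totals
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return ppa, npa, kappa
```

For example, a table with a = 40, b = 10, c = 10, d = 40 gives 80% positive and negative agreement but kappa of only 0.6, illustrating how kappa discounts chance agreement.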

    The performance of the World Rugby Head Injury Assessment Screening Tool: a diagnostic accuracy study

    Abstract Background Off-field screening tools, such as the Sports Concussion Assessment Tool (SCAT), have been recommended to identify possible concussion following a head impact where the consequences are unclear. However, real-life performance, and the diagnostic accuracy of constituent sub-tests, have not been well characterized. Methods A retrospective cohort study was performed in elite Rugby Union competitions between September 2015 and June 2018. The study population comprised consecutive players identified with a head impact event undergoing off-field assessments with the World Rugby Head Injury Assessment (HIA01) screening tool, an abridged version of the SCAT3. Off-field screening performance was investigated by evaluating real-life removal-from-play outcomes and determining the theoretical diagnostic accuracy of the HIA01 tool, and of individual sub-tests, if player-specific baseline or normative sub-test thresholds were strictly applied. The reference standard was clinically diagnosed concussion determined by serial medical assessments. Results One thousand one hundred and eighteen head impact events requiring off-field assessments were identified, resulting in 448 concussions. Real-life removal-from-play decisions demonstrated a sensitivity of 76.8% (95% CI 72.6-80.6) and a specificity of 86.6% (95% CI 83.7-89.1) for concussion (AUROC 0.82, 95% CI 0.79-0.84). Theoretical HIA01 tool performance worsened if pre-season baseline values (sensitivity 89.6%, specificity 33.9%, AUROC 0.62, p < 0.01) or normative thresholds (sensitivity 80.4%, specificity 69.0%, AUROC 0.75, p < 0.01) were strictly applied. Symptoms and clinical signs were the HIA01 screening tool sub-tests most predictive for concussion, with immediate memory and tandem gait providing little additional diagnostic value. Conclusions These findings support expert recommendations that clinical judgement should be used in the assessment of athletes following head impact events. Substitution of the tandem gait and 5-word immediate memory sub-tests with alternative modes could potentially improve screening tool performance.
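The headline numbers hang together arithmetically: for a single binary removal-from-play decision, the AUROC is simply the mean of sensitivity and specificity. A sketch with counts reconstructed approximately from the abstract (448 concussions among 1118 events; sensitivity 76.8% implies roughly 344 true positives, specificity 86.6% of the remaining 670 implies roughly 580 true negatives) — these cell counts are back-calculated, not reported:

```python
def screen_performance(tp, fn, fp, tn):
    """Sensitivity, specificity, and AUROC of a single binary
    screening decision. With only one operating point, the ROC
    curve is two line segments, so AUROC = (sens + spec) / 2."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auroc = (sensitivity + specificity) / 2
    return sensitivity, specificity, auroc
```

Plugging in the reconstructed counts (344, 104, 90, 580) reproduces sensitivity 76.8%, specificity 86.6%, and an AUROC of about 0.82, matching the reported values.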

    First Isolation of Hepatitis E Virus Genotype 4 in Europe through Swine Surveillance in the Netherlands and Belgium

    Hepatitis E virus (HEV) genotypes 3 and 4 are a cause of human hepatitis, and swine are considered the main reservoir. To study HEV prevalence and characterize circulating HEV strains, fecal samples from swine in the Netherlands and Belgium were tested by RT-PCR. HEV prevalence in swine was 7-15%. The Dutch strains were characterized as genotype 3, subgroups 3a, 3c and 3f, closely related to sequences found earlier in humans and swine. The HEV strains found in Belgium belonged to genotypes 3f and 4b. The genotype 4 strain was the first ever reported in swine in Europe, and an experimental infection in pigs was performed to isolate the virus. The genotype 4 strain readily infected piglets and caused fever and virus shedding. Since HEV4 infections have been reported to run a more severe clinical course in humans, this observation may have public health implications.

    Bias in the physical examination of patients with lumbar radiculopathy

    Abstract Background No prior studies have examined systematic bias in the musculoskeletal physical examination. The objective of this study was to assess the effects of bias due to prior knowledge of lumbar spine magnetic resonance imaging (MRI) findings on the perceived diagnostic accuracy of the physical examination for lumbar radiculopathy. Methods This was a cross-sectional comparison of the performance characteristics of the physical examination with blinding to MRI results (the 'independent group') against performance when the physical examination was not blinded to MRI results (the 'non-independent group'). The reference standard was the final diagnostic impression of nerve root impingement by the examining physician. Subjects were recruited from a hospital-based outpatient specialty spine clinic. All adults aged 18 and older presenting with lower extremity radiating pain of duration ≤ 12 weeks were evaluated for participation; 154 consecutively recruited subjects with lumbar disk herniation confirmed by lumbar spine MRI were included in this study. Sensitivities and specificities with 95% confidence intervals were calculated in the independent and non-independent groups for the four components of the radiculopathy examination: 1) provocative testing, 2) motor strength testing, 3) pinprick sensory testing, and 4) deep tendon reflex testing. Results The perceived sensitivity of sensory testing was higher with prior knowledge of MRI results (20% vs. 36%; p = 0.05). Sensitivities and specificities for the other exam components showed no statistically significant differences between groups. Conclusions Prior knowledge of lumbar MRI results may introduce bias into the pinprick sensory testing component of the physical examination for lumbar radiculopathy. No statistically significant effect of bias was seen for other components of the physical examination. The effect of bias due to prior knowledge of lumbar MRI results should be considered when an isolated sensory deficit on examination is used in medical decision-making. Further studies of bias should include surgical clinic populations and other common diagnoses, including shoulder, knee and hip pathology.

    Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study

    BACKGROUND: Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. METHODS: Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. For each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. RESULTS: A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule itself were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. CONCLUSION: Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.
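The summary statistics in this abstract build on the diagnostic odds ratio and its ratio between study designs. A minimal sketch with invented 2x2 counts (not from the review) showing how a DOR and a relative DOR (RDOR) are computed:

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN): the odds of testing positive in the
    diseased divided by the odds of testing positive in the
    non-diseased. A single number summarizing discriminative power."""
    return (tp * tn) / (fp * fn)

def relative_dor(dor_flawed, dor_sound):
    """RDOR > 1 means studies with the design flaw report larger
    (i.e., overestimated) diagnostic odds ratios."""
    return dor_flawed / dor_sound
```

For instance, a table of TP = 90, FP = 20, FN = 10, TN = 80 yields a DOR of 36; if flawed-design studies pooled to a DOR of 44 while sound designs pooled to 20, the RDOR would be 2.2, the size of the case-control effect reported above.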

    Sensitivity is not an intrinsic property of a diagnostic test: empirical evidence from histological diagnosis of Helicobacter pylori infection

    Abstract Background We aimed to provide empirical evidence of how spectrum effects can affect the sensitivity of histological assessment of Helicobacter pylori infection, which may help explain the heterogeneity in prevalence estimates across populations with expectedly similar prevalence. Methods Cross-sectional evaluation of dyspeptic subjects undergoing upper digestive endoscopy, including collection of biopsy specimens from the greater curvature of the antrum for assessment of H. pylori infection by histopathological study and polymerase chain reaction (PCR), from Portugal (n = 106) and Mozambique (n = 102), following the same standardized protocol. Results In the Portuguese sample the prevalence of infection was 95.3% by histological assessment and 98.1% by PCR; in the Mozambican sample the prevalence was 63.7% and 93.1%, respectively. Among those classified as infected by PCR, the sensitivity of histological assessment was 96.2% among the Portuguese and 66.3% among the Mozambicans. Among those testing positive by both methods, 5.0% of the Portuguese and 20.6% of the Mozambicans had mild density of colonization. Conclusions This study shows a lower sensitivity of histological assessment of H. pylori infection in Mozambican dyspeptic patients compared to the Portuguese, which may be explained by differences in the density of colonization and may help explain the heterogeneity in prevalence estimates across African settings.

    Comparing urine samples and cervical swabs for Chlamydia testing in a female population by means of Strand Displacement Assay (SDA)

    Abstract Background There has been an increasing number of diagnosed cases of Chlamydia trachomatis in many countries, in particular among young people. The present study was based on a growing request to examine urine as a supplementary or primary specimen in screening for Chlamydia trachomatis in women with the Becton Dickinson ProbeTec (BDPT) Strand Displacement Assay (SDA). Urine samples may be particularly important in screening young people who are asymptomatic. Methods A total of 603 women aged 15 and older were enrolled from the Sexually Transmitted Infection (STI) clinic at Haukeland University Hospital, Norway, in 2007. Only 31 women were older than 35 years. Cervical swabs and urine samples were tested with BDPT for all participants. In cases of discrepant test results from a given patient, both samples were retested by Cobas TaqMan CT and an in-house polymerase chain reaction (PCR) method. Prevalence of C. trachomatis, sensitivity, and specificity were estimated by latent class analysis using all test results available. Bootstrap BC confidence intervals (10 000 computations) were estimated for sensitivity and specificity, and for their differences between cervical and urine tests. Results A total of 1809 specimens were collected from 603 patients; 80 women (13.4%) were positive for C. trachomatis. Among these, BDPT identified 72 and 73 as positive in cervix and urine samples, respectively. Of the 523 C. trachomatis-negative women, BDPT identified 519 as negative based on cervical swabs and 514 based on urine samples. Sensitivity for cervical swabs and urine samples with the BDPT was 89.0% (95% CI 78.8, 98.6) and 90.2% (95% CI 78.1, 95.5), respectively. The corresponding values for specificity were 99.2% (95% CI 98.3, 100) and 98.3% (95% CI 96.4, 100). Conclusions This study indicates that urine specimens are adequate for screening high-risk groups for C. trachomatis by the SDA method (BDPT). Such an approach may facilitate early detection and treatment of the target groups for screening, and be cost-effective for patients and the health services.