
    A novel, nondestructive, dried blood spot-based hematocrit prediction method using noncontact diffuse reflectance spectroscopy

    Dried blood spot (DBS) sampling is recognized as a valuable alternative sampling strategy both in research and in clinical routine. Although many advantages are associated with DBS sampling, its more widespread use is hampered by several issues, of which the hematocrit effect on DBS-based quantitation remains undoubtedly the most widely discussed. Previously, we developed a method to derive the approximate hematocrit from a nonvolumetrically applied DBS based on its potassium content. Although this method yielded good results and was straightforward to perform, it was also destructive and required sample preparation. Therefore, we have now developed a nondestructive method which allows the hematocrit of a DBS to be predicted from its hemoglobin content, measured via noncontact diffuse reflectance spectroscopy. The developed method was thoroughly validated. A linear calibration curve was established after log/log transformation. The bias and the intraday and interday imprecision of quality controls at three hematocrit levels and at the lower and upper limits of quantitation (0.20 and 0.67, respectively) were less than 11%. In addition, the influence of storage and of the volume spotted was evaluated, as well as DBS homogeneity. Application of the method to venous DBSs prepared from whole blood patient samples (n = 233) revealed a good correlation between the actual and the predicted hematocrit. Limits of agreement obtained after Bland and Altman analysis were -0.076 and +0.018. Incurred sample reanalysis demonstrated good method reproducibility. In conclusion, mere scanning of a DBS suffices to derive its approximate hematocrit, one of the most important variables in DBS analysis.
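    The bias and limits of agreement quoted above follow the standard Bland and Altman recipe: mean difference plus or minus 1.96 times the SD of the differences. A minimal sketch in Python, using made-up hematocrit pairs rather than the study's patient data:

```python
# Sketch of a Bland and Altman agreement analysis between predicted and
# actual hematocrit. The paired values below are made up for illustration;
# they are not the study's patient measurements.
import statistics

actual    = [0.38, 0.42, 0.45, 0.33, 0.50, 0.29, 0.41]
predicted = [0.36, 0.40, 0.44, 0.30, 0.47, 0.27, 0.39]

diffs = [p - a for p, a in zip(predicted, actual)]
mean_diff = statistics.mean(diffs)   # bias (here predicted minus actual)
sd_diff = statistics.stdev(diffs)    # sample SD of the differences

# 95% limits of agreement: bias +/- 1.96 * SD of the differences
loa_lower = mean_diff - 1.96 * sd_diff
loa_upper = mean_diff + 1.96 * sd_diff
print(f"bias = {mean_diff:.3f}, limits of agreement = ({loa_lower:.3f}, {loa_upper:.3f})")
```

    Applied to the study's n = 233 pairs, the same recipe yields the reported limits of -0.076 and +0.018.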

    Blurred digital mammography images: an analysis of technical recall and observer detection performance

    Background: Blurred images in Full Field Digital Mammography (FFDM) are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on the lower resolution monitors used for review. Objectives: This study assesses the visual detection of blurring on a 2.3 megapixel (MP) monitor and a 5 MP reporting-grade monitor and proposes an observer standard for the visual detection of blurring on a 5 MP reporting-grade monitor. Method: Twenty-eight observers assessed 120 images for blurring; 20 had no blurring present whilst 100 had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. The technical recall rate for both monitors and the angular size at each level of motion were calculated. Chi-squared (χ²) tests were used to test whether significant differences in blurring detection existed between the 2.3 and 5 MP monitors. Results: The technical recall rates for the 2.3 and 5 MP monitors were 20.3% and 9.1%, respectively. Angular size for 0.2 to 1 mm motion varied from 55 to 275 arc seconds. The minimum amount of motion for visual detection of blurring in this study was 0.4 mm. For 0.2 mm simulated motion, there was no significant difference (χ²(1, N = 1095) = 1.61, p = 0.20) in blurring detection between the 2.3 and 5 MP monitors. Conclusion: According to this study, monitors at or below 2.3 MP are not suitable for technical review of FFDM images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring. Key words: simulated motion; technical recall; monitor resolution; observer standard; blurring detection.
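    The 55 to 275 arc-second figures follow from the small-angle relation between blur size and viewing distance. A sketch, assuming a 750 mm viewing distance (not stated in the abstract, but chosen so the output matches the reported range):

```python
# Sketch of the angular-size calculation for simulated motion blur.
# The 750 mm viewing distance is an assumption made so the results
# match the 55-275 arc-second range reported in the abstract.
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600   # about 206265

def angular_size_arcsec(blur_mm, viewing_distance_mm=750):
    # small-angle approximation: angle (rad) ~= size / distance
    return blur_mm / viewing_distance_mm * ARCSEC_PER_RAD

for blur in (0.2, 0.4, 0.6, 0.8, 1.0):
    print(f"{blur:.1f} mm -> {angular_size_arcsec(blur):.0f} arcsec")
```

    At this assumed distance, the 0.4 mm motion that observers could reliably detect subtends about 110 arc seconds.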

    Development and comparison of a real-time PCR assay for detection of Dichelobacter nodosus with culturing and conventional PCR: harmonisation between three laboratories

    Background: Ovine footrot is a contagious disease with worldwide occurrence in sheep. The main causative agent is the fastidious bacterium Dichelobacter nodosus. In Scandinavia, footrot was first diagnosed in Sweden in 2004 and later also in Norway and Denmark. Clinical examination of sheep feet is fundamental to the diagnosis of footrot, but D. nodosus should also be detected to confirm the diagnosis. PCR-based detection using conventional PCR has been used at our institutes, but the method was laborious and there was a need for a faster, easier-to-interpret method. The aim of this study was to develop a TaqMan-based real-time PCR assay for detection of D. nodosus and to compare its performance with culturing and conventional PCR. Methods: A D. nodosus-specific TaqMan-based real-time PCR assay targeting the 16S rRNA gene was designed. The inclusivity and exclusivity (specificity) of the assay were tested using 55 bacterial and two fungal strains. To evaluate the sensitivity and harmonisation of results between different laboratories, aliquots of a single DNA preparation were analysed at three Scandinavian laboratories. The developed real-time PCR assay was compared to culturing by analysing 126 samples, and to a conventional PCR method by analysing 224 samples. A selection of PCR products was cloned and sequenced in order to verify that they had been identified correctly. Results: The developed assay had a detection limit of 3.9 fg of D. nodosus genomic DNA. This result was obtained at all three laboratories and corresponds to approximately three copies of the D. nodosus genome per reaction. The assay showed 100% inclusivity and 100% exclusivity for the strains tested. The real-time PCR assay found 54.8% more positive samples than culturing and 8% more than conventional PCR. Conclusions: The developed real-time PCR assay has good specificity and sensitivity for detection of D. nodosus, and the results are easy to interpret. The method is less time-consuming than either culturing or conventional PCR.
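    The conversion from a 3.9 fg detection limit to a genome copy number can be checked with a back-of-envelope calculation. The ~1.4 Mb D. nodosus genome size and the 650 g/mol average mass per base pair are commonly used approximations, not values taken from the abstract:

```python
# Back-of-envelope check that 3.9 fg of genomic DNA corresponds to
# roughly three D. nodosus genome copies. The genome size and mass per
# base pair below are common approximations (assumptions, not from the study).
AVOGADRO = 6.022e23          # molecules per mole
BP_MASS_G_PER_MOL = 650      # average mass of one double-stranded base pair
GENOME_SIZE_BP = 1.4e6       # approximate D. nodosus genome size

grams_per_genome = GENOME_SIZE_BP * BP_MASS_G_PER_MOL / AVOGADRO
detection_limit_g = 3.9e-15  # 3.9 fg
copies = detection_limit_g / grams_per_genome
print(f"~{copies:.1f} genome copies per reaction")
```

    The result, roughly 2.6 copies, is consistent with the abstract's "approximately three copies of the genome per reaction".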

    Cocaine in surface waters: a new evidence-based tool to monitor community drug abuse

    BACKGROUND: Cocaine use seems to be increasing in some urban areas worldwide, but it is not straightforward to determine the real extent of this phenomenon. Trends in drug abuse are currently estimated indirectly, mainly by large-scale social, medical, and crime statistics that may be biased or too generic. We thus tested a more direct approach based on 'field' evidence of cocaine use by the general population. METHODS: Cocaine and its main urinary metabolite (benzoylecgonine, BE) were measured by mass spectrometry in water samples collected from the River Po and from urban waste water treatment plants of medium-size Italian cities. Drug concentration, water flow rate, and population at each site were used to estimate local cocaine consumption. RESULTS: We showed that cocaine and BE are present, and measurable, in surface waters of populated areas. The largest Italian river, the Po, with a five-million-people catchment basin, steadily carried the equivalent of about 4 kg cocaine per day. This would imply an average daily use of at least 27 ± 5 doses (100 mg each) for every 1000 young adults, an estimate that greatly exceeds official national figures. Data from waste water treatment plants serving medium-size Italian cities were consistent with this figure. CONCLUSION: This paper shows for the first time that an illicit drug, cocaine, is present in the aquatic environment, namely untreated urban waste water and a major river. We used environmental cocaine levels to estimate collective consumption of the drug, an approach with the unique potential to monitor local drug abuse trends in real time while preserving the anonymity of individuals. The method tested here, in principle extendable to other drugs of abuse, might be further refined to become a standardized, objective tool for monitoring drug abuse.
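    The dose estimate can be reconstructed from the reported river load. In this sketch the 30% young-adult share of the catchment population is an illustrative assumption; the load, dose size and population figures come from the abstract:

```python
# Sketch of the back-calculation from river cocaine load to per-capita use.
# The young-adult fraction is an illustrative assumption; the other
# figures are taken from the abstract.
daily_load_mg = 4.0e6            # ~4 kg cocaine equivalents carried per day
dose_mg = 100                    # average dose size (from the abstract)
population = 5.0e6               # catchment basin population
young_adult_fraction = 0.30      # assumed share of young adults

doses_per_day = daily_load_mg / dose_mg
young_adults = population * young_adult_fraction
doses_per_1000 = doses_per_day / (young_adults / 1000)
print(f"~{doses_per_1000:.0f} doses per day per 1000 young adults")
```

    With these inputs the calculation lands at about 27 doses per 1000 young adults per day, matching the reported central estimate.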

    Detection of Central Visual Field Defects in Early Glaucomatous Eyes: comparison of Humphrey and Octopus perimetry

    Purpose: To compare the detection rate of central visual field defect (CVFD) between the 30-degree Octopus G1 program (Dynamic strategy) and the HFA 10–2 SITA-Standard test in early glaucoma eyes not showing any CVFD on the HFA 24–2 SITA-Standard test. Methods: One eye of each of 41 early glaucoma patients without CVFD in the central 10° on the HFA 24–2 test was tested with both the HFA 10–2 test and the Octopus G1 program 15 minutes apart, in random order. The primary outcome measure was the comparison of CVFD detection rates. Secondary outcome measures comprised the agreement in detecting CVFD, and the comparison of test durations and of the numbers of depressed test points outside the central 10-degree area between the HFA 24–2 test and the Octopus G1 program. Results: The mean age of the population was 65.2±10.1 years, and the mean deviation with HFA 24–2 was -3.26±2.6 dB. The mean test duration was not significantly different between the tests (p = 0.13). A CVFD was present in 33 (80.4%) HFA 10–2 tests and in 23 (56.0%) Octopus G1 tests (p = 0.002). The overall agreement between the HFA 10–2 and Octopus G1 examinations in classifying eyes as having or not having CVFD was moderate (Cohen's kappa 0.47). The Octopus G1 program showed 69.6% sensitivity and 100% specificity for detecting CVFD in eyes where the HFA 10–2 test revealed a CVFD. The number of depressed test points (p<5%) outside the central 10° area detected with the Octopus G1 program (19.68±10.6) was significantly higher than that detected with the HFA 24–2 program (11.95±5.5, p<0.001). Conclusion: Both the HFA 10–2 and Octopus G1 programs showed CVFD not present on the HFA 24–2 test, although the agreement was moderate. The use of a single Octopus G1 examination may represent a practical compromise for the assessment of both the central and the peripheral visual field up to 30° eccentricity without additional testing or increased total investigation time.
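    The reported kappa of 0.47 can be reproduced from a 2x2 table reconstructed from the abstract's own figures (33 HFA-positive eyes, 23 of them also Octopus-positive, 100% specificity, n = 41):

```python
# Minimal sketch of Cohen's kappa for two tests classifying the same eyes
# as CVFD-present or CVFD-absent. The counts passed below are reconstructed
# from the abstract's figures (33 HFA-positive, 23 Octopus-positive,
# no Octopus-only positives, 41 eyes in total).
def cohens_kappa(a, b, c, d):
    """a = both positive, b = test1 only, c = test2 only, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # chance agreement expected from the marginal totals
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

print(f"kappa = {cohens_kappa(23, 10, 0, 8):.2f}")
```

    With these counts the function returns about 0.473, matching the study's reported kappa of 0.47.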

    The equity dimension in evaluations of the quality and outcomes framework: A systematic review

    Background: Pay-for-performance systems raise concerns regarding inequity in health care because providers might select patients for whom targets can easily be reached. This paper aims to describe the evolution of pre-existing (in)equity in health care in the period after the introduction of the Quality and Outcomes Framework (QOF) in the UK and to describe (in)equities in exception reporting. In this evaluation, a theory-based framework conceptualising equity in terms of equal access, equal treatment and equal treatment outcomes for people in equal need is used to guide the work. Methods: A systematic MEDLINE and Econlit search identified 317 studies. Of these, 290 were excluded because they were not related to the evaluation of QOF, they lacked an equity dimension in the evaluation, their qualitative research focused on experiences or on the nature of the consultation, or unsuitable methodology was used to pronounce upon equity after the introduction of QOF. Results: None of the publications (n = 27) assessed equity in access to health care. Concerning equity in treatment and (intermediate) treatment outcomes, overall quality scores generally improved. For the majority of the observed indicators, all citizens benefit from this improvement, yet the extent to which different patient groups benefit tends to vary and to be highly dependent on the type and complexity of the indicator(s) under study, the observed patient group(s) and the characteristics of the study. In general, the introduction of QOF was favourable for the aged and for males. Total QOF scores did not seem to vary according to ethnicity. For deprivation, small but significant residual differences were observed after the introduction of QOF, favouring less deprived groups. These differences are mainly due to differences at the practice level. The variance in exception reporting according to gender and socio-economic position is low. Conclusions: Although QOF seems not to be socially selective at first glance, this does not mean QOF does not contribute to the inverse care law. Introducing different targets for specific patient groups and including appropriate, non-disease-specific and patient-centred indicators that grasp the complexity of primary care might refine the equity dimension of the evaluation of QOF. Also, information on the actual uptake of care, information at the patient level and monitoring of individuals' health care utilisation tracks could make large contributions to an in-depth evaluation. Finally, evaluating pay-for-quality initiatives in a broader health systems impact assessment strategy with equity as a full assessment criterion is of utmost importance.

    High Affinity Antibodies to Plasmodium falciparum Merozoite Antigens Are Associated with Protection from Malaria

    Malaria kills almost 1 million people every year, but the mechanisms behind protective immunity against the disease are still largely unknown. In this study, surface plasmon resonance technology was used to evaluate the affinity (measured as the dissociation rate constant, kd) of naturally acquired antibodies to the Plasmodium falciparum antigens MSP2 and AMA1. Antibodies in serum samples from residents of endemic areas bound with higher affinities to AMA1 than to MSP2, and with higher affinities to the 3D7 allele of MSP2 than to the FC27 allele. The affinities against AMA1 and MSP2-3D7 increased with age, and were usually within a similar range as the affinities of the monoclonal antibodies also examined in this study. The finding of MSP2-3D7-type parasites in the blood was associated with a tendency for higher-affinity antibodies to both forms of MSP2 and to AMA1, but this was significant only when analyzing antibodies against MSP2-FC27, and individuals infected with both allelic forms of MSP2 at the same time showed the highest affinities. Individuals with the highest antibody affinities for MSP2-3D7 at baseline had a prolonged time to clinical malaria during 40 weeks of follow-up, and among individuals who were parasite positive at baseline, higher antibody affinities to all antigens were seen in the individuals that did not experience febrile malaria during follow-up. This study contributes important information for understanding how immunity against malaria arises. The findings suggest that antibody affinity plays an important role in protection against disease, and differs between antigens. In light of this information, antibody affinity measurements would be a key assessment in future evaluation of malaria vaccine formulations.
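    In surface plasmon resonance, the dissociation rate constant kd is typically extracted from the exponential decay of the bound response after analyte injection stops, R(t) = R0 * exp(-kd * t). A minimal sketch with synthetic data; the rate constant and time points are illustrative, not sensorgrams from the study:

```python
# Sketch of recovering a dissociation rate constant (kd) from an SPR
# dissociation phase. The synthetic trace below assumes a known kd and
# then re-estimates it from the log-linear decay of the response.
import math

kd_true = 1e-3                      # 1/s, assumed for the synthetic trace
times = [0, 100, 200, 300, 400]     # seconds after injection stops
response = [100 * math.exp(-kd_true * t) for t in times]

# estimate kd from the slope of ln(R) versus t
kd_est = (math.log(response[0]) - math.log(response[-1])) / (times[-1] - times[0])
print(f"estimated kd = {kd_est:.1e} 1/s")
```

    A smaller kd means slower dissociation, i.e. higher-affinity binding, which is why kd serves as the affinity measure in the study.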