41 research outputs found

    The logic of equivalence testing and its use in laboratory medicine

    Hypothesis testing is a methodological paradigm widely popularized outside the field of pure statistics, and is nowadays more or less familiar to most biomedical researchers. Conversely, equivalence testing remains somewhat obscure and misunderstood, although it represents a conceptual mainstay of some biomedical fields such as pharmacology. In order to appreciate how it can suit laboratory medicine, it is necessary to understand the philosophy behind it, and in turn how it emerged from and diverged within the history of classical hypothesis testing. Here we present the framework of equivalence testing, describe the various tests used to assess equivalence, and discuss their applicability to research issues in laboratory medicine.
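As an illustrative sketch (not the procedure from the paper itself), the two one-sided tests (TOST) approach to mean equivalence can be coded as follows; the paired data and the equivalence margin are invented for the example:

```python
# Minimal TOST sketch for paired measurements; data and margin are
# hypothetical, chosen only to illustrate the logic of equivalence testing.
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """Paired TOST: H0 is |mean(x - y)| >= margin; equivalence is
    declared only when BOTH one-sided tests reject at level alpha."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_lower = (d.mean() + margin) / se   # tests: mean difference > -margin
    t_upper = (d.mean() - margin) / se   # tests: mean difference < +margin
    p_lower = stats.t.sf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)         # overall TOST p value

rng = np.random.default_rng(1)
x = rng.normal(10.0, 1.0, 40)            # e.g. analyte measured by method A
y = x + rng.normal(0.0, 0.5, 40)         # method B, differing only by noise
p = tost_paired(x, y, margin=0.5)        # small p -> methods are equivalent
```

Note the reversal with respect to classical hypothesis testing: here the null hypothesis is non-equivalence, so a small p value supports equivalence within the stated margin.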

    Blood alcohol concentration in the clinical laboratory: a narrative review of the preanalytical phase in diagnostic and forensic testing

    The analysis of blood alcohol concentration (BAC), a pivotal toxicological test, concerns acute alcohol intoxication (AAI) and driving under the influence (DUI). As such, BAC presents an organizational challenge for clinical laboratories, with unique complexities due to the need for forensic defensibility as part of the diagnostic process. Unfortunately, a significant number of scientific investigations dealing with the subject present discrepancies that make it difficult to identify optimal practices in sample collection, transportation, handling, and preparation. This review provides a systematic analysis of the preanalytical phase of BAC that aims to identify and explain the chemical, physiological, and pharmacological mechanisms underlying controllable operational factors. Furthermore, it seeks evidence concerning the necessity to separate preanalytical processes for diagnostic and forensic BAC testing. In this regard, the main finding of this review is that no literature evidence supports the necessity to differentiate preanalytical procedures for AAI and DUI, except for traceability throughout the chain of custody. In fact, adhering to the correct preanalytical procedures provided by official bodies such as the European Federation of Clinical Chemistry and Laboratory Medicine for routine phlebotomy ensures both diagnostic accuracy and forensic defensibility of BAC. This is shown to depend on the capability of modern pre-evacuated sterile collection tubes to control the major factors influencing BAC, namely non-enzymatic oxidation and microbial contamination. While certain restrictions become obsolete with such devices, such as the use of sodium fluoride (NaF) for the specific preservation of forensic BAC, this review reinforces the recommendation to use non-alcoholic disinfectants as a means to achieve “error-proof” procedures in challenging operational environments like the emergency department.

    Understanding the effect size and its measures.

    The evidence-based medicine paradigm demands scientific reliability, but modern research sometimes seems to overlook it. Power analysis represents a way to show the meaningfulness of findings, regardless of the emphasis placed on statistical significance. Within this statistical framework, the estimation of the effect size represents a means to show the relevance of the evidence produced through research. In this regard, this paper presents and discusses the main procedures for estimating the size of an effect with respect to the specific statistical test used for hypothesis testing. Thus, this work can be seen as an introduction and guide for readers interested in using effect size estimation in their scientific endeavours.
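To give a concrete flavour of effect size estimation (a generic sketch, not a procedure taken from the paper), Cohen's d for two independent samples can be computed from the pooled standard deviation; the group data below are invented:

```python
# Cohen's d with the pooled SD for two independent samples.
# The two groups are hypothetical illustrative data.
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = a.size, b.size
    pooled_var = ((na - 1) * a.var(ddof=1)
                  + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

group1 = [5.1, 4.9, 5.3, 5.0, 5.2]
group2 = [4.6, 4.4, 4.8, 4.5, 4.7]
d = cohens_d(group1, group2)   # ~3.16: a very large standardized effect
```

Unlike a p value, d does not shrink or grow with sample size, which is exactly why it complements significance testing when judging the relevance of a finding.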

    Confidence interval of percentiles in skewed distribution: The importance of the actual coverage probability in practical quality applications for laboratory medicine

    Introduction: Quality indicators (QI) based on percentiles are widely used for managing quality in laboratory medicine nowadays. Due to their statistical nature, their estimation is affected by sampling, so they should always be presented together with the confidence interval (CI). Since no methodological recommendation has been issued to date, our aim was to investigate the suitability of the parametric method (LP-CI), the non-parametric binomial method (NP-CI) and the bootstrap procedure (BCa-CI) for estimating the CI of the 2.5th, 25th, 50th, 75th and 97.5th percentiles in skewed data sets. Materials and methods: Skewness was reproduced by numeric simulation of a lognormal distribution in order to obtain samples with different degrees of right-tailing (moderate, heavy and very heavy) and sizes (20, 60 and 120). Performance was assessed with respect to the actual coverage probability (ACP, accuracy) against the confidence level 1-α with α = 0.05, and the median interval length (MIL, precision). Results: The parametric method was accurate for sample sizes N ≥ 20, whereas both NP-CI and BCa-CI required N ≥ 60. However, for extreme percentiles of heavily right-tailed data, the required sample size increased to 60 and 120 units, respectively. A case study also demonstrated the possibility of estimating the ACP from a single sample of real-life laboratory data. Conclusions: No method should be applied blindly to the estimation of a CI, especially in small-sized and skewed samples. To this end, the accuracy of the method should be investigated through a numeric simulation that reproduces the same conditions as the real-life sample.
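The simulation logic described above can be sketched in a few lines: draw repeated samples from a skewed (lognormal) population, build a CI for the median with the nonparametric binomial (order-statistic) method, and count how often the CI covers the true value. The parameters, sample size and index convention below are illustrative assumptions, not the paper's exact protocol:

```python
# Estimating the actual coverage probability (ACP) of a nonparametric
# binomial CI for the median under a lognormal population. All settings
# (n = 60, 2000 replicates, lognormal(0, 1)) are illustrative.
import numpy as np
from scipy import stats

def binomial_ci_median(sample, alpha=0.05):
    """Order-statistic CI for the median via binomial rank bounds."""
    x = np.sort(sample)
    n = x.size
    lo_rank = int(stats.binom.ppf(alpha / 2, n, 0.5))
    hi_rank = int(stats.binom.ppf(1 - alpha / 2, n, 0.5))
    return x[lo_rank], x[min(hi_rank, n - 1)]

rng = np.random.default_rng(0)
true_median = np.exp(0.0)            # lognormal(0, 1) has median exp(0) = 1
n_sim, hits = 2000, 0
for _ in range(n_sim):
    s = rng.lognormal(0.0, 1.0, 60)  # heavily right-tailed sample
    lo, hi = binomial_ci_median(s)
    hits += lo <= true_median <= hi
acp = hits / n_sim                   # should sit near the nominal 0.95
```

Comparing the resulting ACP against the nominal 1-α is precisely the kind of accuracy check the abstract recommends before trusting any CI method on skewed data.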

    Preanalytical investigations of phlebotomy: methodological aspects, pitfalls and recommendations

    Phlebotomy is often identified as a crucial process of the pre-analytical phase, in which a large share of laboratory errors take place, but to date there is no consolidated methodological paradigm for investigating it. Searching the literature, we found 36 suitable investigations, issued between 1996 and April 2016, dealing with pre-analytical factors related to phlebotomy. Most studies enrolled a cohort of healthy volunteers (22/36) or outpatients (11/36), with the former group showing a significantly smaller median sample size (N = 20, IQR: 17.5-30 versus N = 88, IQR: 54.5-220.5, P < 0.001). Moreover, most investigated a single pre-analytical factor (26/36) and considered more than one laboratory test (29/36); authors mostly used the paired Student’s t-test (17/36) or Wilcoxon’s test (11/36), but calibration (i.e. sample size calculation for a detectable effect) was addressed in only one manuscript. The Bland-Altman plot was the method most often used to estimate bias (12/36), as was the Passing-Bablok regression for agreement (8/36). However, papers often assessed neither bias (12/36) nor agreement (24/36). Clinical significance of bias was preferably assessed by comparison with a database value (16/36), and it was uncorrelated with the size of the effect produced by the factor (P = 0.142). However, the median effect size (ES) was significantly larger when the associated factor was clinically significant rather than non-significant (ES = 1.140, IQR: 0.815-1.700 versus ES = 0.349, IQR: 0.228-0.531, P < 0.001). Based on this evidence, we discuss some recommendations for improving methodological consistency, delivering reliable results, and ensuring access to practical evidence.

    Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics

    Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental quality dimension for the clinical laboratory. Several indicators are available to date for assessing and reporting quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six Sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool that expresses quality on a universal, dimensionless scale and can handle non-normal data. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found the Z-score method to be a valuable and easy-to-use tool for assessing and communicating the quality level of laboratory timeliness, showing good correspondence with the actual change in efficiency that was observed retrospectively.
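One common way to attach a sigma level to non-normal turnaround-time data is to convert the observed defect rate (results exceeding the target) into a Z value; the sketch below uses that defect-rate conversion, which may differ in detail from the Z-score method the authors applied, and the 90-minute target and simulated TATs are invented:

```python
# Hypothetical sketch: sigma level of STAT turnaround time from the
# fraction of results exceeding a target; target and data are invented.
import numpy as np
from scipy import stats

def sigma_level(tat_minutes, target, shift=1.5):
    """Defect rate -> normal quantile -> sigma level.
    The conventional 1.5 SD shift makes 3.4 DPMO correspond to 6 sigma."""
    tat = np.asarray(tat_minutes, float)
    defect_rate = np.mean(tat > target)
    if defect_rate == 0:
        return float("inf")          # no observed defects: level unbounded
    return stats.norm.isf(defect_rate) + shift

rng = np.random.default_rng(42)
# Skewed (lognormal) TATs, median ~37 min, occasionally exceeding target.
tat = rng.lognormal(mean=3.6, sigma=0.35, size=5000)
s = sigma_level(tat, target=90.0)
```

Because only the empirical proportion over target is used, this conversion makes no normality assumption about the TAT distribution itself, which is the practical appeal noted in the abstract.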

    Six Sigma revisited: We need evidence to include a 1.5 SD shift in the extraanalytical phase of the total testing process

    The Six Sigma methodology has been widely implemented in industry, healthcare, and laboratory medicine since the mid-1980s. The performance of a process is evaluated by the sigma metric (SM), and 6 sigma represents world-class performance, implying that only 3.4 or fewer defects (or errors) per million opportunities (DPMO) are expected to occur. Statistically, however, 6 sigma corresponds to 0.002 DPMO rather than 3.4 DPMO. The reason for this difference is the introduction of a 1.5 standard deviation (SD) shift to account for the random variation of the process around its target. Although the 1.5 SD shift is justified only for normally distributed data, such as those of the analytical phase of the total testing process, in practice it has been included in all types of SM calculations, including those on non-normally distributed data. This causes the SM to deviate greatly from its actual level. To ensure that the SM value accurately reflects process performance, we conclude that the 1.5 SD shift should be used only where it is necessary and formally appropriate, and should not be treated as a constant parameter automatically included in every SM calculation.
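The two DPMO figures quoted above (0.002 without the shift, 3.4 with it) follow directly from the normal tail probabilities, as this short check shows:

```python
# Reproducing the 6-sigma DPMO figures from normal tail areas.
from scipy import stats

# Unshifted process: defects fall in BOTH tails beyond +/- 6 SD.
dpmo_no_shift = 2 * stats.norm.sf(6.0) * 1e6        # ~0.002 DPMO

# With the conventional 1.5 SD shift, the near tail dominates:
# defects lie beyond 6 - 1.5 = 4.5 SD on one side.
dpmo_shifted = stats.norm.sf(6.0 - 1.5) * 1e6       # ~3.4 DPMO
```

The roughly 1700-fold gap between the two numbers is why applying the shift where it is not warranted, as the abstract argues, badly distorts the reported sigma metric.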

    Phlebotomy, a bridge between laboratory and patient

    The evidence-based paradigm has changed and advanced medical practice. Phlebotomy, which dates back to ancient Greece, has matured along with the evolution of medicine to become a fundamental diagnostic tool. Nowadays it connects the patient with the clinical laboratory, building a bridge between the two. However, all too often there is a gap between laboratory and phlebotomist that causes misunderstandings and burdens patient safety. Therefore, the scope of this review is to deliver a view of modern phlebotomy that “bridges” patient and laboratory. In this regard, the paper describes devices, tools and procedures in the light of the most recent scientific findings, also discussing their impact on both the quality of blood testing and patient safety. It also addresses issues concerning the medical aspects of venipuncture, such as the practical approach to superficial vein anatomy and the management of the patient’s compliance with the blood draw. Clinical, technical and practical issues are thereby treated with equal relevance throughout the paper.

    Confidence interval for quantiles and percentiles

    Quantiles and percentiles are useful statistical tools for describing the distribution of results and for deriving reference intervals and performance specifications in laboratory medicine. They are commonly intended as sample estimates of a population parameter, and therefore need to be presented with a confidence interval (CI). In this work we discuss three methods for estimating the CI of quantiles and percentiles, based on parametric, nonparametric and resampling (bootstrap) approaches. Our numerical simulations show that parametric methods are always more accurate, regardless of sample size, when the procedure is appropriate for the distribution of results, for both extreme (2.5th and 97.5th) and central (25th, 50th and 75th) percentiles and the corresponding quantiles. We also show that both nonparametric and bootstrap methods suit well the CI of central percentiles, which are used to derive performance specifications through quality indicators of laboratory processes whose underlying distribution is unknown.
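As a minimal sketch of the resampling approach (using SciPy's generic bootstrap rather than any implementation from the paper), a bias-corrected and accelerated (BCa) CI for a central percentile, the median, of skewed data can be obtained as follows; the lognormal sample is invented:

```python
# BCa bootstrap CI for the median of a skewed sample via scipy.stats.bootstrap.
# The lognormal data stand in for right-tailed laboratory results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sample = rng.lognormal(0.0, 1.0, 120)   # true median of lognormal(0, 1) is 1

res = stats.bootstrap((sample,), np.median,
                      confidence_level=0.95, method='BCa')
lo, hi = res.confidence_interval         # CI bracketing the sample median
```

For extreme percentiles (2.5th, 97.5th) the same call applies, but, as the abstract notes, the resampling CI becomes unreliable at small sample sizes because few observations populate the tails.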

    Monitoring the infection of SARS-CoV-2 and the development of diagnostic tools

    Several issues remain unclear about the COVID-19 pandemic. The spread of the infection throughout the world shows striking differences. The present survey describes the prevalence of SARS-CoV-2 infection as reported in internationally updated online registers, comparing reported cases and deaths per million inhabitants. Analysis of the data reveals a wide range among the continents, and within each geographic area there are important differences among countries. A focus on the Italian regions describes significant differences in the number of cases between North and South Italy in August 2020, a situation that reflects the diffusion of SARS-CoV-2 infection in the period of February-April 2020. The scenario became completely different in October: the number of cases and hospitalized patients showed a 20-fold increase with respect to August 2020. Tools for the diagnosis of SARS-CoV-2 infection have become pivotal in the efforts to control the infection and monitor infected subjects. The present report describes the different tests currently available, their usefulness in the present situation, and their potential use once the campaign for mass vaccination is effective.