
    The logic of equivalence testing and its use in laboratory medicine

    Hypothesis testing is a methodological paradigm widely popularized outside the field of pure statistics and nowadays more or less familiar to most biomedical researchers. Conversely, equivalence testing remains somewhat obscure and misunderstood, although it represents a conceptual mainstay of some biomedical fields such as pharmacology. To appreciate how it could suit laboratory medicine, it is necessary to understand the philosophy behind it and, in turn, how it emerged from and diverged over the history of classical hypothesis testing. Here we present the framework of equivalence testing and the various tests used to assess equivalence, and discuss their applicability to laboratory medicine research and its issues.
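
    As a minimal illustration of the equivalence-testing logic described above, the sketch below implements the two one-sided tests (TOST) procedure for two independent means; the equivalence margin and the simulated data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the two one-sided tests (TOST) procedure for equivalence
# of two independent means. The equivalence margin `delta` and the simulated
# data are illustrative assumptions, not values taken from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=10.0, scale=1.0, size=30)   # e.g. results from method A
b = rng.normal(loc=10.1, scale=1.0, size=30)   # e.g. results from method B
delta = 0.5                                    # pre-specified equivalence margin

n1, n2 = len(a), len(b)
diff = a.mean() - b.mean()
sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

# Lower test: H01: diff <= -delta, rejected for large t
p_lower = 1 - stats.t.cdf((diff + delta) / se, df)
# Upper test: H02: diff >= +delta, rejected for small t
p_upper = stats.t.cdf((diff - delta) / se, df)

# Equivalence is declared only if BOTH one-sided tests reject at level alpha,
# i.e. if the larger of the two p-values is below alpha.
p_tost = max(p_lower, p_upper)
print(f"mean difference = {diff:.3f}, TOST p-value = {p_tost:.4f}")
```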

    Blood alcohol concentration in the clinical laboratory: a narrative review of the preanalytical phase in diagnostic and forensic testing

    The analysis of blood alcohol concentration (BAC), a pivotal toxicological test, concerns acute alcohol intoxication (AAI) and driving under the influence (DUI). As such, BAC presents an organizational challenge for clinical laboratories, with unique complexities due to the need for forensic defensibility as part of the diagnostic process. Unfortunately, a significant number of scientific investigations dealing with the subject present discrepancies that make it difficult to identify optimal practices in sample collection, transportation, handling, and preparation. This review provides a systematic analysis of the preanalytical phase of BAC testing that aims to identify and explain the chemical, physiological, and pharmacological mechanisms underlying controllable operational factors. Furthermore, it seeks evidence regarding the necessity of separating preanalytical processes for diagnostic and forensic BAC testing. In this regard, the main finding of this review is that no literature evidence supports the necessity of differentiating preanalytical procedures for AAI and DUI, except for traceability throughout the chain of custody. In fact, adhering to the correct preanalytical procedures for routine phlebotomy provided by official bodies such as the European Federation of Clinical Chemistry and Laboratory Medicine ensures both the diagnostic accuracy and the forensic defensibility of BAC. This is shown to depend on the capability of modern pre-evacuated sterile collection tubes to control the major factors influencing BAC, namely non-enzymatic oxidation and microbial contamination. While certain restrictions become obsolete with such devices, such as the use of sodium fluoride (NaF) for the specific preservation of forensic BAC, this review reinforces the recommendation to use non-alcoholic disinfectants as a means to achieve “error-proof” procedures in challenging operational environments like the emergency department.

    Understanding the effect size and its measures.

    The evidence-based medicine paradigm demands scientific reliability, but modern research seems to overlook it at times. Power analysis represents a way to show the meaningfulness of findings, regardless of the often-emphasized aspect of statistical significance. Within this statistical framework, the estimation of the effect size represents a means to show the relevance of the evidence produced through research. In this regard, this paper presents and discusses the main procedures for estimating the size of an effect with respect to the specific statistical test used for hypothesis testing. Thus, this work can be seen as an introduction and a guide for readers interested in the use of effect size estimation in their scientific endeavours.
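
    As an illustration of one of the common measures discussed in this context, the sketch below computes Cohen's d for two independent groups; the data and group sizes are simulated for illustration and are not taken from the paper.

```python
# Minimal sketch: Cohen's d for two independent groups, one of the standard
# effect size measures for a two-sample comparison. Data are simulated for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(loc=100.0, scale=15.0, size=40)
treated = rng.normal(loc=110.0, scale=15.0, size=40)

# Pooled standard deviation of the two groups
n1, n2 = len(control), len(treated)
sp = np.sqrt(((n1 - 1) * control.var(ddof=1) + (n2 - 1) * treated.var(ddof=1))
             / (n1 + n2 - 2))

d = (treated.mean() - control.mean()) / sp
# Conventional benchmarks: ~0.2 small, ~0.5 medium, ~0.8 large
print(f"Cohen's d = {d:.2f}")
```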

    Confidence interval of percentiles in skewed distribution: The importance of the actual coverage probability in practical quality applications for laboratory medicine

    Introduction: Quality indicators (QI) based on percentiles are widely used for managing quality in laboratory medicine. Due to their statistical nature, their estimation is affected by sampling variability, so they should always be presented together with a confidence interval (CI). Since no methodological recommendation has been issued to date, our aim was to investigate the suitability of the parametric method (LP-CI), the non-parametric binomial method (NP-CI) and the bootstrap procedure (BCa-CI) for estimating the CI of the 2.5th, 25th, 50th, 75th and 97.5th percentiles in skewed sets of data. Materials and methods: Skewness was reproduced by numeric simulation of a lognormal distribution in order to obtain samples with different right-tailing (moderate, heavy and very heavy) and size (20, 60 and 120). Performance was assessed with respect to the actual coverage probability (ACP, accuracy) against the confidence level 1 − α with α = 0.05, and the median interval length (MIL, precision). Results: The parametric method was accurate for sample size N ≥ 20, whereas both NP-CI and BCa-CI required N ≥ 60. However, for extreme percentiles of heavily right-tailed data, the required sample size increased to 60 and 120 units, respectively. A case study also demonstrated the possibility of estimating the ACP from a single sample of real-life laboratory data. Conclusions: No method should be applied blindly to the estimation of a CI, especially in small-sized and skewed samples. To this end, the accuracy of the method should be investigated through a numeric simulation that reproduces the same conditions as the real-life sample.
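
    A rough sketch of the kind of simulation described above: estimating the actual coverage probability of a percentile CI on right-tailed (lognormal) data. Note that a plain percentile bootstrap stands in here for the BCa procedure, and all parameters are illustrative assumptions rather than the study's settings.

```python
# Sketch: estimate the actual coverage probability (ACP) of a nonparametric
# bootstrap CI for a percentile on right-tailed (lognormal) data. A plain
# percentile bootstrap is used instead of BCa; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
q = 0.975            # target percentile (97.5th)
n = 60               # sample size
n_boot = 1000        # bootstrap resamples per simulated sample
n_sim = 500          # simulated samples used to estimate coverage
alpha = 0.05         # nominal 95% CI

# True 97.5th percentile of a lognormal(0, 1) distribution: exp(z_0.975)
true_value = np.exp(1.959964)

covered = 0
for _ in range(n_sim):
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    boot_stats = np.array([
        np.quantile(rng.choice(sample, size=n, replace=True), q)
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boot_stats, [alpha / 2, 1 - alpha / 2])
    covered += (lo <= true_value <= hi)

print(f"actual coverage probability ≈ {covered / n_sim:.3f} (nominal {1 - alpha:.2f})")
```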

    Preanalytical investigations of phlebotomy: methodological aspects, pitfalls and recommendations

    Phlebotomy is often addressed as a crucial process in the pre-analytical phase, in which a large part of laboratory errors takes place, but to date no consolidated methodological paradigm exists. Searching the literature, we found 36 suitable investigations published between 1996 and April 2016 dealing with pre-analytical factors related to phlebotomy. We found that most studies had a cohort of healthy volunteers (22/36) or outpatients (11/36), with the former group showing a significantly smaller median sample size (N = 20, IQR: 17.5-30 and N = 88, IQR: 54.5-220.5 respectively, P < 0.001). Moreover, most investigated a single pre-analytical factor (26/36) and regarded more than one laboratory test (29/36), and authors preferentially used the paired Student’s t-test (17/36) or Wilcoxon’s test (11/36), but calibration (i.e. sample size calculation for a detectable effect) was addressed in only one manuscript. The Bland-Altman plot was the preferred method for estimating bias (12/36), as was the Passing-Bablok regression for agreement (8/36). However, papers often assessed neither bias (12/36) nor agreement (24/36). Clinical significance of bias was most often assessed by comparison with a database value (16/36), and it was uncorrelated with the size of the effect produced by the factor (P = 0.142). However, the median effect size (ES) was significantly larger when the associated factor was clinically significant rather than non-significant (ES = 1.140, IQR: 0.815-1.700 and ES = 0.349, IQR: 0.228-0.531 respectively, P < 0.001). Based on this evidence, we discuss recommendations for improving methodological consistency, delivering reliable results, and ensuring access to practical evidence.
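
    For readers unfamiliar with the bias estimation mentioned above, the sketch below computes a basic Bland-Altman bias and limits of agreement for paired measurements; the data are simulated for illustration and do not reproduce any of the reviewed studies.

```python
# Sketch: Bland-Altman estimate of bias between two paired measurement
# conditions (e.g. a laboratory test measured before and after a preanalytical
# factor is applied). The paired data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(7)
baseline = rng.normal(loc=5.0, scale=0.6, size=25)              # reference condition
altered = baseline + rng.normal(loc=0.15, scale=0.2, size=25)   # factor applied

diff = altered - baseline
bias = diff.mean()
sd_diff = diff.std(ddof=1)
loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)            # 95% limits of agreement

print(f"bias = {bias:.3f}, 95% limits of agreement = ({loa[0]:.3f}, {loa[1]:.3f})")
```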

    The survey of the Basilica di Collemaggio in L’Aquila with a system of terrestrial imaging and most proven techniques

    The proposed work concerns the evaluation of a series of surveys carried out in the context of a campaign of studies begun in 2015, with the objective of comparing the accuracies obtainable with terrestrial imaging systems against those of unmanned aerial vehicle imaging and laser scanner surveying. In particular, the authors test the applicability of an imaging rover (IR), an innovative terrestrial imaging system consisting of a multi-camera unit with an integrated global positioning system (GPS)/global navigation satellite system (GNSS) receiver; it is a very recently released technique, and only a few literature references exist on the specific subject. In detail, the IR consists of a total of 12 calibrated cameras – seven “panorama” and five downward-looking – providing complete site documentation that can potentially be used to make photogrammetric measurements. The data acquired in this experimentation were then processed with various software packages in order to obtain point clouds and a three-dimensional model in the different cases, and a comparison of the various results obtained was carried out. Finally, the case study of the Basilica di Santa Maria di Collemaggio in L’Aquila is reported; the basilica is a UNESCO world heritage site, it was damaged during the seismic event of 2009, and its restoration is still in progress.

    Timeliness “at a glance”: assessing the turnaround time through the six sigma metrics

    Almost thirty years of systematic analysis have proven the turnaround time to be a fundamental dimension of quality for the clinical laboratory. Several indicators are available to date to assess and report quality with respect to timeliness, but they sometimes lack communicative immediacy and accuracy. Six Sigma is a paradigm developed within the industrial domain for assessing quality and addressing goals and issues. The sigma level computed through the Z-score method is a simple and straightforward tool which expresses quality on a universal dimensionless scale and allows non-normal data to be handled. Herein we report our preliminary experience in using the sigma level to assess the change in urgent (STAT) test turnaround time due to the implementation of total automation. We found that the Z-score method is a valuable and easy-to-use method for assessing and communicating the quality level of laboratory timeliness, showing good correspondence with the actual change in efficiency that was retrospectively observed.
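
    A brief sketch of how a sigma level can be derived from a turnaround time defect rate via the normal quantile (Z-score) approach; the simulated TAT data, the 60-minute goal, and the inclusion of the conventional 1.5 SD shift are illustrative assumptions, not the study's settings.

```python
# Sketch: sigma level of turnaround time (TAT) from the observed defect rate,
# i.e. the proportion of results exceeding a TAT goal. The TAT data and the
# 60-minute goal are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
tat_minutes = rng.lognormal(mean=3.5, sigma=0.35, size=5000)  # simulated STAT TATs
goal = 60.0                                                   # TAT target in minutes

defect_rate = np.mean(tat_minutes > goal)
dpmo = defect_rate * 1_000_000

# Short-term sigma level; the conventional industrial formula adds a 1.5 SD
# shift, whose appropriateness for non-normal laboratory data is debated
# (see the following abstract).
sigma_level = stats.norm.ppf(1 - defect_rate) + 1.5

print(f"DPMO = {dpmo:.0f}, sigma level ≈ {sigma_level:.2f}")
```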

    Six Sigma revisited: We need evidence to include a 1.5 SD shift in the extraanalytical phase of the total testing process

    The Six Sigma methodology has been widely implemented in industry, healthcare, and laboratory medicine since the mid-1980s. The performance of a process is evaluated by the sigma metric (SM), and 6 sigma represents world-class performance, implying that only 3.4 or fewer defects (or errors) per million opportunities (DPMO) are expected to occur. However, statistically, 6 sigma corresponds to 0.002 DPMO rather than 3.4 DPMO. The reason for this difference is the introduction of a 1.5 standard deviation (SD) shift to account for the random variation of the process around its target. While a 1.5 SD shift should be taken into account for normally distributed data, such as those of the analytical phase of the total testing process, in practice this shift has been included in all types of calculations related to the SM, including those on non-normally distributed data. This causes the SM to deviate greatly from the actual performance level. To ensure that the SM value accurately reflects process performance, we concluded that a 1.5 SD shift should be used only where it is necessary and formally appropriate. Additionally, the 1.5 SD shift should not be considered a constant parameter to be automatically included in all calculations related to the SM.
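
    The following sketch reproduces the arithmetic behind the figures quoted above, showing how 6 sigma corresponds to roughly 0.002 DPMO for a centered normal process and to 3.4 DPMO once a 1.5 SD shift is applied.

```python
# Sketch: the arithmetic behind the 3.4 vs 0.002 DPMO figures. For a centered
# normal process, the area beyond ±6 SD is about 0.002 per million; shifting
# the mean by 1.5 SD leaves a tail beyond 4.5 SD on the nearer side, ~3.4 DPMO.
from scipy import stats

dpmo_centered = 2 * stats.norm.sf(6.0) * 1_000_000     # both tails, no shift
dpmo_shifted = stats.norm.sf(6.0 - 1.5) * 1_000_000    # dominant tail with 1.5 SD shift

print(f"6 sigma, no shift:     {dpmo_centered:.4f} DPMO")
print(f"6 sigma, 1.5 SD shift: {dpmo_shifted:.2f} DPMO")
```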