
    Our research’s breadth lives on convenience samples: A case study of the online respondent pool “SoSci Panel”

    Convenience samples have been a substantial driver of empirical social research for decades. Undergraduate students are still researchers’ favorite subjects, but the importance of respondents recruited on the Internet is on the rise. This paper deals with the fuzzy concept of convenience samples, outlining their reasonable uses and limitations. To bolster the theoretical discussion of convenience samples with empirical evidence, findings from the non-commercial SoSci Panel, a large-scale volunteer respondent pool, are presented. Convenience pools allow for larger samples than traditional convenience samples, more heterogeneity, and better long-term availability of respondents. This paper discusses the conditions of setting up a respondent pool and the methodological and practical implications, such as software, tasks, respondent activity, and panel loyalty.

    Too Fast, too Straight, too Weird: Non-Reactive Indicators for Meaningless Data in Internet Surveys

    Practitioners use various indicators to screen for meaningless, careless, or fraudulent responses in Internet surveys. This study employs an experiment-like design to empirically test the ability of non-reactive indicators to identify records with low data quality. Findings suggest that careless responses are most reliably identified by questionnaire completion time, but the tested indicators do not allow for detecting intended faking. The article introduces various indicators with their benefits and drawbacks, proposes a completion speed index for common application in data cleaning, and discusses whether meaningless records should be removed at all.
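The abstract does not spell out how the proposed completion speed index is computed. A minimal sketch of a speed-based screening indicator, assuming completion times in seconds and an illustrative cutoff at half the median duration (the function names and the 0.5 threshold are hypothetical, not from the paper):

```python
from statistics import median

def completion_speed_index(durations):
    """Relative completion speed: each respondent's duration divided by
    the median duration. Values well below 1.0 indicate speeding."""
    med = median(durations)
    return [d / med for d in durations]

def flag_speeders(durations, threshold=0.5):
    """Return indices of respondents faster than `threshold` times the
    median completion time (illustrative cutoff, not from the paper)."""
    return [idx for idx, speed in enumerate(completion_speed_index(durations))
            if speed < threshold]

times = [310, 295, 120, 330, 90, 305]   # seconds to complete the survey
print(flag_speeders(times))             # → [2, 4]
```

A relative index like this makes the cutoff questionnaire-independent: the same threshold can be reused across surveys of different lengths.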

    The impact of dose reduction on the quantification of coronary artery calcifications and risk categorization: A systematic review

    Multiple dose reduction techniques have been introduced for coronary artery calcium (CAC) computed tomography (CT), but few have emerged into clinical practice while an increasing number of patients undergo CAC scanning. We sought to determine to what extent the radiation dose in CAC CT can be safely reduced without a significant impact on cardiovascular disease (CVD) risk stratification. A systematic review of articles published from 2002 until February 2018 was performed in PubMed, Web of Science, and Embase. Eligible studies reported radiation dose reduction for CAC CT, calcium scores, and/or risk stratification for phantom or patient studies. Twenty-eight studies were included, of which 17 were patient studies, 10 were phantom/ex-vivo studies, and 1 evaluated both phantoms and patients. Dose was reduced by tube voltage reduction, tube current reduction with and without iterative reconstruction (IR), and tin-filter spectral shaping. The different dose reduction techniques resulted in varying final radiation doses and had varying impact on CAC scores and CVD risk stratification. In 78% of the studies the radiation dose was reduced by ≥ 50%, with CTDIvol ranging from 0.6 to 5.5 mGy, leading to reclassification rates between 3% and 21%, depending on the acquisition technique. Specific dose-reduced protocols with low reclassification rates, using either tube current reduction with IR or spectral shaping with tin filtration, may potentially be used in CAC scanning and in future population-based screening for CVD risk stratification.
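A reclassification rate, as reported in the review, counts scans whose risk category changes between the reference-dose and reduced-dose acquisitions. A minimal sketch, assuming one common Agatston-based categorization (0, 1–100, 101–400, >400); the included studies may use different cut-offs, and the scores below are illustrative:

```python
def risk_category(agatston):
    """Map an Agatston score to a CVD risk category.
    Cut-offs (0, 1-100, 101-400, >400) are one common scheme."""
    if agatston == 0:
        return "very low"
    if agatston <= 100:
        return "low"
    if agatston <= 400:
        return "moderate"
    return "high"

def reclassification_rate(reference_scores, reduced_dose_scores):
    """Fraction of scans whose risk category changes between the
    reference-dose and reduced-dose acquisitions."""
    changed = sum(risk_category(ref) != risk_category(red)
                  for ref, red in zip(reference_scores, reduced_dose_scores))
    return changed / len(reference_scores)

# Scores near a category boundary (here 100) drive reclassification.
print(reclassification_rate([0, 50, 150, 500], [0, 110, 140, 480]))  # → 0.25
```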

    Fully automated quantification method (FQM) of coronary calcium in an anthropomorphic phantom

    Objective: The coronary artery calcium (CAC) score is a strong predictor of future adverse cardiovascular events. Anthropomorphic phantoms are often used for CAC studies on computed tomography (CT) to allow for evaluation or variation of scanning or reconstruction parameters within or across scanners against a reference standard. This often results in a large number of datasets. Manual assessment of these large datasets is time-consuming and cumbersome. Therefore, this study aimed to develop and validate a fully automated, open-source quantification method (FQM) for coronary calcium in a standardized phantom. Materials and Methods: A standard, commercially available anthropomorphic thorax phantom was used with an insert containing nine calcifications of different sizes and densities. To simulate two different patient sizes, an extension ring was used. Image data were acquired with four state-of-the-art CT systems using routine CAC scoring acquisition protocols. For interscan variability, each acquisition was repeated five times with small translations and/or rotations. Vendor-specific CAC scores (Agatston, volume, and mass) were calculated as reference scores using vendor-specific software. Both the international standard CAC quantification methods and vendor-specific adjustments were implemented in FQM. Reference and FQM scores were compared using Bland-Altman analysis, intraclass correlation coefficients, risk reclassifications, and Cohen’s kappa. Also, robustness of FQM was assessed using varied acquisition and reconstruction settings and validation on a dynamic phantom. Further, image quality metrics were implemented: noise power spectrum, task transfer function, and contrast- and signal-to-noise ratios, among others. Results were validated using imQuest software.
Results: Three parameters in CAC scoring methods varied among the different vendor-specific software packages: the Hounsfield unit (HU) threshold, the minimum area used to designate a group of voxels as calcium, and the usage of isotropic voxels for the volume score. FQM was in high agreement with vendor-specific scores, and ICCs (median [95% CI]) were excellent (1.000 [0.999-1.000] to 1.000 [1.000-1.000]). Excellent interplatform reliability was found (κ = 0.969 and κ = 0.973). TTF results gave a maximum deviation of 3.8%, and NPS results were comparable to imQuest. Conclusions: We developed a fully automated, open-source, robust method to quantify CAC on CT scans in a commercially available phantom. The automated algorithm also contains image quality assessment for fast comparison of differences in acquisition and reconstruction parameters.
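The three vendor-dependent parameters named in the results (HU threshold, minimum area, voxel handling) slot directly into the textbook Agatston calculation. A minimal per-slice sketch, assuming a NumPy array of Hounsfield units and treating all above-threshold voxels as a single lesion; a real implementation such as FQM uses connected-component analysis and vendor-specific minimum-area rules:

```python
import numpy as np

def agatston_slice_score(slice_hu, pixel_area_mm2, hu_threshold=130,
                         min_area_mm2=1.0):
    """Agatston score contribution of one axial slice (simplified:
    all above-threshold voxels are treated as one lesion)."""
    mask = slice_hu >= hu_threshold
    area = mask.sum() * pixel_area_mm2
    if area < min_area_mm2:          # below the minimum-area rule: ignore
        return 0.0
    peak = slice_hu[mask].max()
    # Density weight factor from the lesion's peak attenuation:
    # 130-199 HU -> 1, 200-299 -> 2, 300-399 -> 3, >= 400 -> 4.
    if peak < 200:
        weight = 1
    elif peak < 300:
        weight = 2
    elif peak < 400:
        weight = 3
    else:
        weight = 4
    return area * weight

slice_hu = np.array([[100, 210, 220],
                     [120, 250, 135],
                     [ 90,  80,  70]])
# 4 voxels >= 130 HU, 0.5 mm^2 each -> 2.0 mm^2; peak 250 HU -> weight 2.
print(agatston_slice_score(slice_hu, pixel_area_mm2=0.5))  # → 4.0
```

The sketch makes the sources of inter-vendor disagreement visible: change `hu_threshold` or `min_area_mm2` and the same voxel data yields a different score.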

    Coronary Artery Calcium Scoring: Toward a New Standard

    OBJECTIVES: Although the Agatston score is a commonly used quantification method, rescan reproducibility is suboptimal, and different CT scanners result in different scores. In 2007, McCollough et al (Radiology 2007;243:527-538) proposed a standard for coronary artery calcium quantification. Advancements in CT technology over the last decade, however, allow for improved acquisition and reconstruction methods. This study aims to investigate the feasibility of a reproducible, reduced-dose alternative to the standardized approach for coronary artery calcium quantification on state-of-the-art CT systems from 4 major vendors. MATERIALS AND METHODS: An anthropomorphic phantom containing 9 calcifications and 2 extension rings were used. Images were acquired with 4 state-of-the-art CT systems using routine protocols and a variety of tube voltages (80-120 kV), tube currents (100% to 25% dose levels), slice thicknesses (3/2.5 and 1/1.25 mm), and reconstruction techniques (filtered back projection and iterative reconstruction). Every protocol was scanned 5 times after repositioning the phantom to assess reproducibility. Calcifications were quantified as Agatston scores. RESULTS: Reducing tube voltage to 100 kV, dose to 75%, and slice thickness to 1 or 1.25 mm, combined with higher iterative reconstruction levels, resulted in, on average, 36% lower intrascanner variability (interquartile range) compared with the standard 120 kV protocol. Interscanner variability per phantom size decreased by 34% on average. With the standard protocol, on average, 6.2 ± 0.4 calcifications were detected, whereas 7.0 ± 0.4 were detected with the proposed protocol. Pairwise comparisons of Agatston scores between scanners within the same phantom size demonstrated 3 significantly different comparisons at the standard protocol (P < 0.05).
CONCLUSIONS: On state-of-the-art CT systems of 4 different vendors, a 25% reduced-dose, thin-slice calcium scoring protocol led to improved intrascanner and interscanner reproducibility and increased detectability of small and low-density calcifications in this phantom. The protocol should be extensively validated before clinical use, but it could potentially improve clinical interscanner/interinstitutional reproducibility and enable more consistent risk assessment and treatment strategies.
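The intrascanner variability quoted above is the interquartile range of Agatston scores across the 5 repeated scans of one protocol. A minimal sketch with illustrative numbers (not the study's data):

```python
from statistics import quantiles

def intrascanner_variability(repeat_scores):
    """Interquartile range of Agatston scores from repeated scans of the
    same phantom under one protocol: the study's spread measure."""
    q1, _, q3 = quantiles(repeat_scores, n=4)
    return q3 - q1

# Five repeated acquisitions per protocol (illustrative scores).
standard = [410, 432, 398, 445, 420]   # standard 120 kV protocol
proposed = [415, 422, 411, 425, 418]   # reduced-dose, thin-slice protocol
reduction = 1 - intrascanner_variability(proposed) / intrascanner_variability(standard)
print(f"variability reduced by {reduction:.0%}")
```

Note that `statistics.quantiles` defaults to the exclusive method; the study's exact quartile convention is not stated in the abstract.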