
    Reliability of brain atrophy measurements in multiple sclerosis using MRI: an assessment of six freely available software packages for cross-sectional analyses

    PURPOSE: Volume measurement using MRI is important to assess brain atrophy in multiple sclerosis (MS). However, differences between scanners, acquisition protocols, and analysis software introduce unwanted variability in volumes. To quantify these effects, we compared within-scanner repeatability and between-scanner reproducibility of three different MR scanners for six brain segmentation methods. METHODS: Twenty-one people with MS underwent scanning and rescanning on three 3 T MR scanners (GE MR750, Philips Ingenuity, Toshiba Vantage Titan) to obtain 3D T1-weighted images. FreeSurfer, FSL, SAMSEG, FastSurfer, CAT12, and SynthSeg were used to quantify brain, white matter, and (deep) gray matter volumes from both lesion-filled and non-lesion-filled 3D T1-weighted images. We used the intra-class correlation coefficient (ICC) to quantify agreement; repeated-measures ANOVA to analyze systematic differences; and variance component analysis to quantify the standard error of measurement (SEM) and smallest detectable change (SDC). RESULTS: For all six software packages, both between-scanner agreement (ICCs ranging from 0.4 to 1) and within-scanner agreement (ICCs ranging from 0.6 to 1) were typically good, and good to excellent (ICC > 0.7) for large structures. No clear differences were found between lesion-filled and non-filled images. However, gray and white matter volumes did differ systematically between scanners for all software packages (p < 0.05). Variance component analysis yielded within-scanner SDCs ranging from 1.02% (SAMSEG, whole brain) to 14.55% (FreeSurfer, CSF), and between-scanner SDCs ranging from 4.83% (SynthSeg, thalamus) to 29.25% (CAT12, thalamus). CONCLUSION: Volume measurements of brain, gray matter, and white matter showed high repeatability and high reproducibility despite substantial differences between scanners. The smallest detectable change was large, especially between different scanners, which hampers the clinical implementation of atrophy measurements.
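
    The SEM and SDC figures above follow standard reliability formulas: the SEM is the square root of the error variance from the variance component analysis, and the SDC at the 95% level is 1.96 × √2 × SEM. A minimal Python sketch under those assumptions (the function name and the toy numbers are illustrative, not taken from the study):

        import math

        def sdc_percent(error_variance, mean_volume):
            """Smallest detectable change as a percentage of the mean volume.

            SEM = sqrt(error variance); SDC = 1.96 * sqrt(2) * SEM.
            """
            sem = math.sqrt(error_variance)       # standard error of measurement
            sdc = 1.96 * math.sqrt(2.0) * sem     # 95% smallest detectable change
            return 100.0 * sdc / mean_volume

        # Toy values only: error variance in mL^2, mean whole-brain volume in mL.
        print(sdc_percent(error_variance=25.0, mean_volume=1400.0))  # ~0.99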

    Inter-rater agreement and reliability of the COSMIN (COnsensus-based Standards for the selection of health status Measurement Instruments) Checklist

    Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study is to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles, using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 were rated by four to six participants, and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items had above 80% agreement), and the kappa coefficients for the COSMIN items were low (61% were below 0.40, 6% were above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology, and definitions. Conclusions: The results indicated that raters often choose the same response option, but that it is difficult to distinguish between articles at the item level. When using the COSMIN checklist in a systematic review, we recommend that raters obtain some training and experience, that the checklist be completed by two independent raters, and that consensus be reached on one final rating. The instructions for using the checklist have been improved.
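
    The two statistics reported here are complementary: percentage agreement shows how often raters choose the same response option, while quadratic-weighted kappa corrects for chance and penalizes larger ordinal disagreements more heavily. A minimal sketch of both, assuming two raters and integer-coded categories 0..k-1 (the function names are illustrative):

        import numpy as np

        def percentage_agreement(a, b):
            """Proportion of items on which both raters chose the same option."""
            return float(np.mean(np.asarray(a) == np.asarray(b)))

        def quadratic_weighted_kappa(a, b, k):
            """Quadratic-weighted kappa for two raters on a k-category ordinal scale."""
            observed = np.zeros((k, k))
            for x, y in zip(a, b):
                observed[x, y] += 1
            observed /= observed.sum()
            # Expected agreement under independence, from the marginals.
            expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
            i, j = np.indices((k, k))
            weights = (i - j) ** 2  # quadratic disagreement weights
            return 1.0 - (weights * observed).sum() / (weights * expected).sum()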

    Interobserver variability studies in diagnostic imaging: a methodological systematic review

    OBJECTIVES: To review the methodology of interobserver variability studies, including current practice and the quality of study conduct and reporting. METHODS: Interobserver variability studies published between January 2019 and January 2020 were included; extracted data comprised study characteristics, populations, variability measures, key results, and conclusions. Risk of bias was assessed using the COSMIN tool for assessing reliability and measurement error. RESULTS: Seventy-nine full-text studies were included, covering various imaging tests and clinical areas. The median number of patients was 47 (IQR: 23–88) and the median number of observers was 4 (IQR: 2–7), with the sample size justified in 12 (15%) studies. Most studies used static images (n = 75, 95%), and in most, all observers interpreted images for all patients (n = 67, 85%). Intraclass correlation coefficients (ICC) (n = 41, 52%), kappa (κ) statistics (n = 31, 39%), and percentage agreement (n = 15, 19%) were most commonly used. Interpretation of the variability estimates often did not correspond with the study conclusions. The COSMIN risk of bias tool gave a very good/adequate rating for 52 studies (66%), including any studies that used variability measures listed in the tool. For studies using static images, some study design standards were not applicable and did not contribute to the overall rating. CONCLUSIONS: Interobserver variability studies have diverse study designs and methods, the impact of which requires further evaluation. The sample size for patients and observers was often small and without justification. Most studies reported ICC and κ values, which did not always coincide with the study conclusions. High ratings were assigned to many studies using the COSMIN risk of bias tool, with certain standards scored 'not applicable' when static images were used. ADVANCES IN KNOWLEDGE: The sample size for both patients and observers was often small and without justification. For most studies, observers interpreted static images and did not evaluate the process of acquiring the imaging test, meaning it was not possible to assess many COSMIN risk of bias standards for studies with this design. Most studies reported intraclass correlation coefficients and κ statistics; study conclusions often did not correspond with the results.
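
    For reference, the most commonly reported statistic above, the ICC, is typically computed in its two-way random-effects, absolute-agreement, single-rater form (ICC(2,1) in the Shrout and Fleiss terminology). A minimal sketch, assuming a complete subjects-by-observers score matrix with no missing values:

        import numpy as np

        def icc_2_1(scores):
            """ICC(2,1): two-way random effects, absolute agreement, single rater.

            `scores` is an (n subjects x k observers) array.
            """
            scores = np.asarray(scores, dtype=float)
            n, k = scores.shape
            grand = scores.mean()
            # Mean squares for subjects (rows), observers (columns), and residual.
            ms_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            resid = (scores - scores.mean(axis=1, keepdims=True)
                     - scores.mean(axis=0, keepdims=True) + grand)
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (
                ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)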

    Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist

    Background: The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5–18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. Methods: The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Results: Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Conclusions: Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties.
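
    The "worst score counts" rule is easy to state precisely: every item in a box is rated on the four-level scale, and the quality score for that measurement property is the minimum of its item ratings, so a single fatal flaw caps the whole box at poor. A minimal sketch (the data structures are illustrative):

        # Ordinal quality levels used by the COSMIN scoring system.
        LEVELS = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

        def box_score(item_ratings):
            """'Worst score counts': the lowest item rating sets the box score."""
            return min(item_ratings, key=lambda rating: LEVELS[rating])

        # One 'fair' item caps the measurement property's score at 'fair'.
        print(box_score(["excellent", "good", "fair", "excellent"]))  # fair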

    Key Learning Outcomes for Clinical Pharmacology and Therapeutics Education in Europe: A Modified Delphi Study

    Harmonizing clinical pharmacology and therapeutics (CPT) education in Europe is necessary to ensure that the prescribing competency of future doctors is of a uniformly high standard. As there are currently no uniform requirements, our aim was to achieve consensus on key learning outcomes for undergraduate CPT education in Europe. We used a modified Delphi method consisting of three questionnaire rounds and a panel meeting. A total of 129 experts from 27 European countries were asked to rate 307 learning outcomes. In all, 92 experts (71%) completed all three questionnaire rounds, and 33 experts (26%) attended the meeting. In total, 232 learning outcomes from the original list, 15 newly suggested outcomes, and 5 rephrased outcomes were included. These 252 learning outcomes should be included in undergraduate CPT curricula to ensure that European graduates are able to prescribe safely and effectively. We provide a blueprint of a European core curriculum describing when and how the learning outcomes might be acquired.
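
    The abstract does not spell out the consensus rule used in the questionnaire rounds, so the sketch below assumes a hypothetical one: a learning outcome is retained when at least 80% of the responding experts rate it as essential. Both the threshold and the data structures are illustrative only:

        def retained_outcomes(ratings, threshold=0.80):
            """Keep an outcome when the share of 'essential' votes meets the
            (hypothetical) consensus threshold."""
            return [outcome for outcome, votes in ratings.items()
                    if sum(v == "essential" for v in votes) / len(votes) >= threshold]

        # 4 of 5 experts rate the first outcome essential (0.80 >= 0.80): retained.
        print(retained_outcomes({
            "report adverse drug reactions": ["essential"] * 4 + ["useful"],
            "perform dose calculations": ["essential", "useful", "useful", "useful"],
        }))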

    Methodological quality of 100 recent systematic reviews of health-related outcome measurement instruments: an overview of reviews

    PURPOSE: Systematic reviews evaluating and comparing the measurement properties of outcome measurement instruments (OMIs) play an important role in OMI selection. Earlier overviews of review quality (2007, 2014) evidenced substantial concerns with regard to alignment with scientific standards. This overview aimed to investigate whether the quality of recent systematic reviews of OMIs lives up to current scientific standards. METHODS: One hundred systematic reviews of OMIs published from June 1, 2021 onwards were randomly selected through a systematic literature search performed on March 17, 2022 in MEDLINE and EMBASE. The quality of the systematic reviews was appraised by two independent reviewers. An updated data extraction form was informed by the earlier studies, and results were compared with these earlier studies' findings. RESULTS: A quarter of the reviews had an unclear research question or aim, and in 22% of the reviews the search strategy did not match the aim. Half of the reviews had a search strategy that was not comprehensive, because relevant search terms were missing. In 63% of the reviews (compared to 41% in 2014 and 30% in 2007) a risk of bias assessment was conducted. In 73% of the reviews (some) measurement properties were evaluated (58% in 2014 and 55% in 2007). In 60% of the reviews the data were (partly) synthesized (42% in 2014 and 7% in 2007); in the majority of reviews, evaluation of measurement properties and data synthesis were not conducted separately for subscales. Certainty assessments of the quality of the total body of evidence were conducted in only 33% of reviews (not assessed in 2014 and 2007). The majority (58%) did not make any recommendations on which OMI to use or not to use. CONCLUSION: Despite clear improvements in risk of bias assessment, measurement property evaluation, and data synthesis, specification of the research question, conduct of the search strategy, and certainty assessment remain poor. To ensure that systematic reviews of OMIs meet current scientific standards, more consistent conduct and reporting of systematic reviews of OMIs are needed.
