128 research outputs found

    Important Topics for Fostering Research Integrity by Research Performing and Research Funding Organizations: A Delphi Consensus Study

    To foster research integrity (RI), it is necessary to address the institutional and system-of-science factors that influence researchers' behavior. Consequently, research performing and research funding organizations (RPOs and RFOs) could develop comprehensive RI policies outlining the concrete steps they will take to foster RI. So far, there is no consensus on which topics are important to address in RI policies. Therefore, we conducted a three-round Delphi survey study to explore which RI topics to address in institutional RI policies by seeking consensus from research policy experts and institutional leaders. A total of 68 RPO and 52 RFO experts, representing different disciplines, countries and genders, completed one, two or all rounds of the study. There was consensus among the experts on the importance of 12 RI topics for RPOs and 11 for RFOs. The topics that ranked highest for RPOs concerned education and training, supervision and mentoring, dealing with RI breaches, and supporting a responsible research process (e.g. through quality assurance). The highest ranked RFO topics concerned dealing with breaches of RI, conflicts of interest, and setting expectations on RPOs (e.g. about educating researchers about RI). Together with the research policy experts and institutional leaders, we developed a comprehensive overview of topics important for inclusion in the RI policies of RPOs and RFOs. The topics reflect a preference for a preventative approach to RI, coupled with procedures for dealing with RI breaches. RPOs and RFOs should address each of these topics in order to support researchers in conducting responsible research. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11948-021-00322-9.

    CIVIL JURISDICTION, INTELLECTUAL PROPERTY AND THE INTERNET

    At the core of the civil litigation system is the notion of jurisdiction. In a narrow sense it refers to whether a court has the authority to hear a case in relation to specific people and activities (subject matter), but in a broader sense it also encompasses what law should be applied (choice of law), whether the court is a suitable court to hear the case (choice of court), and the enforcement of judgements. The notion of jurisdiction provides a tool for efficiently managing litigation and traditionally has been based upon notions of connection to a particular territory. In the global transnational world of the Internet the concept of jurisdiction has struggled to find a sensible meaning.1 Does jurisdiction lie everywhere that the Internet runs, or is it more narrowly defined? In this chapter we examine recent cases concerning jurisdiction and the Internet before the courts of the People's Republic of China (PRC) in matters relating to intellectual property. We also consider decisions in Australia and the United States of America (US) and international developments in the area.

    Reliability of brain atrophy measurements in multiple sclerosis using MRI: an assessment of six freely available software packages for cross-sectional analyses

    PURPOSE: Volume measurement using MRI is important to assess brain atrophy in multiple sclerosis (MS). However, differences between scanners, acquisition protocols, and analysis software introduce unwanted variability of volumes. To quantify these effects, we compared within-scanner repeatability and between-scanner reproducibility of three different MR scanners for six brain segmentation methods. METHODS: Twenty-one people with MS underwent scanning and rescanning on three 3 T MR scanners (GE MR750, Philips Ingenuity, Toshiba Vantage Titan) to obtain 3D T1-weighted images. FreeSurfer, FSL, SAMSEG, FastSurfer, CAT-12, and SynthSeg were used to quantify brain, white matter and (deep) gray matter volumes both from lesion-filled and non-lesion-filled 3D T1-weighted images. We used the intra-class correlation coefficient (ICC) to quantify agreement; repeated-measures ANOVA to analyze systematic differences; and variance component analysis to quantify the standard error of measurement (SEM) and smallest detectable change (SDC). RESULTS: For all six software packages, both between-scanner agreement (ICC range: 0.4-1) and within-scanner agreement (ICC range: 0.6-1) were typically good, and good to excellent (ICC > 0.7) for large structures. No clear differences were found between filled and non-filled images. However, gray and white matter volumes did differ systematically between scanners for all software (p < 0.05). Variance component analysis yielded within-scanner SDC ranging from 1.02% (SAMSEG, whole-brain) to 14.55% (FreeSurfer, CSF); and between-scanner SDC ranging from 4.83% (SynthSeg, thalamus) to 29.25% (CAT12, thalamus). CONCLUSION: Volume measurements of brain, GM and WM showed high repeatability, and high reproducibility despite substantial differences between scanners. Smallest detectable change was high, especially between different scanners, which hampers the clinical implementation of atrophy measurements.
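
    The smallest detectable change reported in this abstract is conventionally derived from the standard error of measurement as SDC = 1.96 × √2 × SEM. A minimal sketch of that relationship (the SEM value below is illustrative, not taken from the study):

    ```python
    import math

    def sdc(sem: float) -> float:
        """Smallest detectable change at 95% confidence for a test-retest
        design: SDC = 1.96 * sqrt(2) * SEM. The sqrt(2) accounts for the
        measurement error of both the test and the retest."""
        return 1.96 * math.sqrt(2) * sem

    # Illustrative: an SEM of 0.5% of brain volume
    print(round(sdc(0.5), 3))  # -> 1.386
    ```

    Any measured volume change smaller than the SDC cannot be distinguished from measurement noise, which is why large between-scanner SDCs hamper clinical use.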

    Interobserver variability studies in diagnostic imaging: a methodological systematic review

    OBJECTIVES: To review the methodology of interobserver variability studies, including current practice and the quality of conducting and reporting studies. METHODS: Interobserver variability studies published between January 2019 and January 2020 were included; extracted data comprised study characteristics, populations, variability measures, key results, and conclusions. Risk of bias was assessed using the COSMIN tool for assessing reliability and measurement error. RESULTS: Seventy-nine full-text studies were included, covering various imaging tests and clinical areas. The median number of patients was 47 (IQR: 23-88) and of observers was 4 (IQR: 2-7), with the sample size justified in 12 (15%) studies. Most studies used static images (n = 75, 95%), where all observers interpreted images for all patients (n = 67, 85%). Intraclass correlation coefficients (ICC) (n = 41, 52%), kappa (κ) statistics (n = 31, 39%), and percentage agreement (n = 15, 19%) were most commonly used. Interpretation of variability estimates often did not correspond with study conclusions. The COSMIN risk of bias tool gave a very good/adequate rating for 52 studies (66%), including any studies that used variability measures listed in the tool. For studies using static images, some study design standards were not applicable and did not contribute to the overall rating. CONCLUSIONS: Interobserver variability studies have diverse study designs and methods, the impact of which requires further evaluation. Sample sizes for patients and observers were often small without justification. Most studies report ICC and κ values, which did not always coincide with the study conclusion. High ratings were assigned to many studies using the COSMIN risk of bias tool, with certain standards scored 'not applicable' when static images were used. ADVANCES IN KNOWLEDGE: The sample size for both patients and observers was often small without justification. For most studies, observers interpreted static images and did not evaluate the process of acquiring the imaging test, meaning it was not possible to assess many COSMIN risk of bias standards for studies with this design. Most studies reported intraclass correlation coefficient and κ statistics; study conclusions often did not correspond with results.
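
    Cohen's κ, one of the agreement measures tallied in this review, corrects raw percentage agreement for the agreement expected by chance. A minimal two-rater sketch with invented ratings (not data from the review):

    ```python
    from collections import Counter

    def cohens_kappa(r1, r2):
        """Cohen's kappa for two raters over the same items:
        kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
        and p_e is chance agreement from the raters' marginals."""
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n
        c1, c2 = Counter(r1), Counter(r2)
        p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Two hypothetical observers rating 10 images as positive/negative
    a = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "pos", "neg"]
    b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]
    print(round(cohens_kappa(a, b), 2))  # -> 0.6
    ```

    Here the raters agree on 80% of images, but because chance agreement is 50%, κ drops to 0.6, which illustrates why the review found that raw agreement figures and κ-based conclusions can diverge.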

    Inter-rater agreement and reliability of the COSMIN (COnsensus-based Standards for the selection of health status Measurement Instruments) Checklist

    Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study was to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles, using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 articles were rated by four to six participants, and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items had above 80% agreement), while the kappa coefficients for the COSMIN items were low (61% were below 0.40, 6% were above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology and definitions. Conclusions: The results indicate that raters often chose the same response option, but that at the item level it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend getting some training and experience, having two independent raters complete the checklist, and reaching consensus on one final rating. Instructions for using the checklist have been improved.
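
    The quadratic-weighted κ mentioned above extends Cohen's κ to ordinal items by penalizing disagreements in proportion to the squared distance between categories. A hedged sketch with invented ratings (the category labels mirror common COSMIN-style ordinal scales, not this study's data):

    ```python
    from collections import Counter

    def weighted_kappa(r1, r2, categories):
        """Quadratic-weighted kappa for ordinal ratings: the disagreement
        weight between categories i and j is ((i - j) / (k - 1))^2, so
        near-misses are penalized far less than distant disagreements."""
        k = len(categories)
        idx = {c: i for i, c in enumerate(categories)}
        n = len(r1)
        w = lambda a, b: ((idx[a] - idx[b]) / (k - 1)) ** 2
        obs = sum(w(a, b) for a, b in zip(r1, r2)) / n       # observed disagreement
        c1, c2 = Counter(r1), Counter(r2)
        exp = sum(c1[a] * c2[b] * w(a, b)
                  for a in c1 for b in c2) / (n * n)         # chance disagreement
        return 1 - obs / exp

    cats = ["poor", "fair", "good", "excellent"]
    a = ["good", "excellent", "fair", "good", "poor", "good"]
    b = ["good", "good", "fair", "excellent", "fair", "good"]
    print(round(weighted_kappa(a, b, cats), 2))  # -> 0.64
    ```

    Because every disagreement in this example is only one category apart, the weighted κ stays fairly high even though only half the ratings match exactly.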

    Evaluation of measurement properties of health-related quality of life instruments for burns: A systematic review

    BACKGROUND: Health-related quality of life (HRQL) is a key outcome in the evaluation of burn treatment. Health-related quality of life instruments with robust measurement properties are required to provide high-quality evidence to improve patient care. The aim of this review was to critically appraise the measurement properties of HRQL instruments used in burns. METHODS: A systematic search was conducted in Embase, MEDLINE, CINAHL, Cochrane, Web of Science, and Google Scholar to reveal articles on the development and/or validation of HRQL instruments in burns. Measurement properties were assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) methodology. A modified Grading of Recommendations, Assessment, Development, and Evaluation analysis was used to assess risk of bias (PROSPERO ID: CRD42016048065). RESULTS: Forty-three articles covering 15 HRQL instruments (12 disease-specific and 3 generic instruments) were included. Methodological quality and evidence on measurement properties varied widely. None of the instruments provided enough evidence on their measurement properties to be highly recommended for routine use; however, two instruments had somewhat more favorable measurement properties. The Burn-Specific Health Scale-Brief (BSHS-B) is easy to use, widely accessible, and demonstrated sufficient evidence for most measurement properties. The Brisbane Burn Scar Impact Profiles were the only instruments with high-quality evidence for content validity. CONCLUSION: The Burn Specific Health Scale-Brief (burn-specific HRQL) and the Brisbane Burn Scar Impact Profile (burn scar HRQL) instruments have the best measurement properties. There is only weak evidence on the measurement properties of generic HRQL instruments in burn patients. Results of this study form important input to reach consensus on a universally used instrument to assess HRQL in burn patients. LEVEL OF EVIDENCE: Systematic review, level III.

    Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist

    Background: The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5-18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. Methods: The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Results: Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Conclusions: Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties. © The Author(s) 2011
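
    The "worst score counts" rule described above can be expressed directly: a box's quality score is simply the lowest rating given to any of its items. A minimal illustration (the item ratings are invented, not drawn from the 46 tested articles):

    ```python
    # Ordinal ranking of the four COSMIN response options
    RANK = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

    def box_score(item_ratings):
        """COSMIN 'worst score counts': the quality score of a box is the
        lowest rating among all items in that box."""
        return min(item_ratings, key=RANK.__getitem__)

    # Hypothetical box with four item ratings
    print(box_score(["excellent", "good", "fair", "good"]))  # -> fair
    ```

    This rule is why the criteria reserve "poor" for fatal flaws only: a single poor item would otherwise drag down an entire box regardless of how the other items score.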

    Key Learning Outcomes for Clinical Pharmacology and Therapeutics Education in Europe: A Modified Delphi Study

    Harmonizing clinical pharmacology and therapeutics (CPT) education in Europe is necessary to ensure that the prescribing competency of future doctors is of a uniform high standard. As there are currently no uniform requirements, our aim was to achieve consensus on key learning outcomes for undergraduate CPT education in Europe. We used a modified Delphi method consisting of three questionnaire rounds and a panel meeting. A total of 129 experts from 27 European countries were asked to rate 307 learning outcomes. In all, 92 experts (71%) completed all three questionnaire rounds, and 33 experts (26%) attended the meeting. In total, 232 learning outcomes from the original list, 15 newly suggested outcomes, and 5 rephrased outcomes were included. These 252 learning outcomes should be included in undergraduate CPT curricula to ensure that European graduates are able to prescribe safely and effectively. We provide a blueprint of a European core curriculum describing when and how the learning outcomes might be acquired.