
    Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET)

    Abstract Background Most reporting guidelines assist researchers in reporting consistent information concerning study design; however, they contain limited information for describing study interventions. Using a three-stage development process, the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) checklist and an accompanying explanatory paper were developed to provide guidance for reporting educational interventions for evidence-based practice (EBP). The aim of this study was to complete the final development of the GREET checklist, incorporating psychometric testing to determine inter-rater reliability and criterion validity. Methods The final development of the GREET checklist incorporated the results of a prior systematic review and Delphi survey. Thirty-nine items, including all items from the prior systematic review, were proposed for inclusion in the GREET checklist and were considered over a series of consensus discussions to determine which items to include. The GREET checklist and explanatory paper were then developed and underwent psychometric testing with tertiary health professional students, who evaluated the completeness of reporting in a published study using the GREET checklist. For each checklist item, the consistency (%) of agreement was calculated both between participants and against the consensus criterion reference measure. Criterion validity and inter-rater reliability were analysed using intra-class correlation coefficients (ICC). Results Three consensus discussions were undertaken, with 14 items identified for inclusion in the GREET checklist. Following further expert review by the Delphi panelists, three items were added and minor wording changes were made, resulting in 17 checklist items. Psychometric testing of the updated GREET checklist was completed by 31 participants (n = 11 undergraduate, n = 20 postgraduate). The consistency of agreement between participant ratings for completeness of reporting and the consensus criterion ratings ranged from 19% for item 4 (Steps of EBP) to 94% for item 16 (Planned delivery). Overall agreement for criterion validity (ICC 0.73) and inter-rater reliability (ICC 0.96) was good to almost perfect. Conclusion The final GREET checklist comprises 17 items recommended for reporting EBP educational interventions. Further validation of the GREET checklist with experts in EBP research and education is recommended.
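The per-item consistency-of-agreement statistic described above can be sketched as follows. This is a minimal illustration of the calculation, not the study's actual data or code; the ratings and criterion values are hypothetical.

```python
# Sketch of the per-item consistency-of-agreement calculation: for each
# checklist item, the percentage of participant ratings that match the
# consensus criterion reference rating. All data below are illustrative.

def percent_agreement(participant_ratings, criterion):
    """Percentage of ratings that equal the criterion rating for one item."""
    matches = sum(1 for r in participant_ratings if r == criterion)
    return 100.0 * matches / len(participant_ratings)

# Hypothetical completeness ratings ("complete" = 1, "incomplete" = 0)
# from 10 raters, with the consensus criterion rating for each item.
item_ratings = {
    "item 4 Steps of EBP":      ([1, 0, 0, 0, 1, 0, 0, 0, 0, 0], 1),
    "item 16 Planned delivery": ([1, 1, 1, 1, 1, 1, 1, 1, 1, 0], 1),
}

for item, (ratings, criterion) in item_ratings.items():
    print(f"{item}: {percent_agreement(ratings, criterion):.0f}% agreement")
```

The ICC analyses reported above would additionally require a reliability model (e.g. a two-way random-effects ICC), which is beyond this sketch.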

    Protocol for development of the guideline for reporting evidence based practice educational interventions and teaching (GREET) statement

    BACKGROUND: An increasing number of studies report the efficacy of educational strategies to facilitate the development of knowledge and skills underpinning evidence based practice (EBP). To date there is no standardised guideline for describing the teaching, evaluation, context or content of EBP educational strategies, and the heterogeneity in the reporting of EBP educational interventions makes comparisons between studies difficult. The aim of this program of research is to develop the Guideline for Reporting EBP Educational interventions and Teaching (GREET) statement and an accompanying explanation and elaboration (E&E) paper. METHODS/DESIGN: Three stages are planned for the development process. Stage one will comprise a systematic review to identify features commonly reported in descriptions of EBP educational interventions. In stage two, corresponding authors of articles included in the systematic review and the editors of the journals in which these studies were published will be invited to participate in a Delphi process to reach consensus on items to be considered when reporting EBP educational interventions. The final stage of the project will include the development and pilot testing of the GREET statement and E&E paper. OUTCOME: The final outcome will be the creation of a Guideline for Reporting EBP Educational interventions and Teaching (GREET) statement and E&E paper. DISCUSSION: The reporting of health research, including EBP educational research interventions, has been criticised for a lack of transparency and completeness. The development of the GREET statement will enable the standardised reporting of EBP educational research and will provide a guide for researchers, reviewers and publishers reporting EBP educational interventions.

    A systematic review of how studies describe educational interventions for evidence-based practice: Stage 1 of the development of a reporting guideline

    Abstract Background The aim of this systematic review was to identify which information is included when reporting educational interventions used to facilitate foundational skills and knowledge of evidence-based practice (EBP) training for health professionals. This systematic review comprised the first stage in the three-stage development process for a reporting guideline for educational interventions for EBP. Methods The review question was ‘What information has been reported when describing educational interventions targeting foundational evidence-based practice knowledge and skills?’ MEDLINE, Academic Search Premier, ERIC, CINAHL, Scopus, Embase, Informit health, Cochrane Library and Web of Science databases were searched from inception until October to December 2011. Randomised and non-randomised controlled trials reporting original data on educational interventions specific to developing foundational knowledge and skills of evidence-based practice were included. Studies were not appraised for methodological bias; however, reporting frequency and item commonality were compared between a random selection of studies included in the systematic review and a random selection of studies excluded because they were not controlled trials. Twenty-five data items were extracted by two independent reviewers (consistency > 90%). Results Sixty-one studies met the inclusion criteria (n = 29 randomised, n = 32 non-randomised). The most consistently reported items were the learner’s stage of training, professional discipline and the evaluation methods used (100%). The least consistently reported items were the instructor(s)’ previous teaching experience (n = 8, 13%) and student effort outside face-to-face contact (n = 1, 2%). Conclusion This systematic review demonstrates inconsistencies in describing educational interventions for EBP in randomised and non-randomised trials. To enable educational interventions to be replicated and compared, improvements in the reporting of educational interventions for EBP are required. In the absence of a specific reporting guideline, a range of items are reported with variable frequency, and the items most important for describing educational interventions for facilitating foundational knowledge and skills in EBP have yet to be determined. The findings of this systematic review will inform the next stage in the development of a reporting guideline for educational interventions for EBP.

    A Delphi survey to determine how educational interventions for evidence-based practice should be reported: Stage 2 of the development of a reporting guideline

    BACKGROUND: Undertaking a Delphi exercise is recommended during the second stage of the development process for a reporting guideline. To continue the development of the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET), a Delphi survey was undertaken to determine the consensus opinion of researchers, journal editors and educators in evidence-based practice (EBP) regarding the information items that should be reported when describing an educational intervention for EBP. METHODS: A four-round online Delphi survey was conducted from October 2012 to March 2013. The Delphi panel comprised international researchers, educators and journal editors in EBP. Commencing with an open-ended question, participants were invited to volunteer information considered important when reporting educational interventions for EBP. Over three subsequent rounds, participants were invited to rate the importance of each Delphi item on an 11-point Likert scale (low 0 to 4, moderate 5 to 6, high 7 to 8 and very high >8). Consensus agreement was set a priori as at least 80 per cent participant agreement. Consensus agreement was initially calculated within the four categories of importance (low to very high), before these four categories were merged into two (<7 and ≥7). Descriptive statistics for each item were computed, including the mean Likert score, standard deviation (SD), range and median participant score. The mean absolute deviation from the median (MAD-M) was also calculated as a measure of participant disagreement. RESULTS: Thirty-six experts agreed to participate and 27 (79%) completed all four rounds. A total of 76 information items were generated across the four survey rounds. Thirty-nine items (51%) were specific to describing the intervention (as opposed to other elements of study design), and consensus agreement was achieved for two of these items (5%). When the four rating categories were merged into two (<7 and ≥7), 18 intervention items achieved consensus agreement. CONCLUSION: This Delphi survey identified 39 items for describing an educational intervention for EBP. These items will provide the groundwork for the subsequent consensus discussion to determine the final inclusion of items in the GREET, the first reporting guideline for educational interventions in EBP.
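The two summary statistics described above, the a priori consensus rule and the MAD-M disagreement measure, can be sketched as follows. This is an illustrative reading of the methods as stated in the abstract, not the survey's actual analysis code, and the scores are hypothetical.

```python
# Sketch of the Delphi summary statistics described above: consensus is
# reached when at least 80% of participants rate an item in the merged
# high category (>= 7 on the 0-10 Likert scale), and the mean absolute
# deviation from the median (MAD-M) measures participant disagreement.
# The scores below are illustrative, not the survey's actual data.
from statistics import median

CONSENSUS_THRESHOLD = 0.80  # a priori agreement level from the protocol
HIGH_RATING = 7             # merged "high importance" category (>= 7)

def consensus_reached(scores):
    """True if at least 80% of Likert scores fall in the >= 7 category."""
    high = sum(s >= HIGH_RATING for s in scores)
    return high / len(scores) >= CONSENSUS_THRESHOLD

def mad_m(scores):
    """Mean absolute deviation from the median: larger = more disagreement."""
    m = median(scores)
    return sum(abs(s - m) for s in scores) / len(scores)

scores = [8, 9, 7, 8, 10, 7, 9, 8, 6, 8]  # hypothetical ratings for one item
print(consensus_reached(scores))  # 9 of 10 ratings are >= 7, so consensus holds
print(round(mad_m(scores), 2))
```

A higher MAD-M flags items on which panelists diverged even when the mean looks moderate, which is why it complements the mean and SD reported per item.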

    A descriptive analysis of child-relevant systematic reviews in the Cochrane Database of Systematic Reviews

    Abstract Background Systematic reviews (SRs) are considered an important tool for decision-making. There has been no recent comprehensive identification or description of child-relevant SRs. A description of existing child-relevant SRs would help to identify the extent of the child-relevant evidence available in SRs and the gaps in the evidence base where SRs are required. The objective of this study was to describe child-relevant SRs from the Cochrane Database of Systematic Reviews (CDSR, Issue 2, 2009). Methods SRs were assessed for relevance using pre-defined criteria. Data were extracted and entered into an electronic form. Univariate analyses were performed to describe the SRs overall and by topic area. Results The search yielded 1666 SRs; 793 met the inclusion criteria. 38% of SRs were last assessed as up-to-date prior to 2007. Corresponding authors were most often from the UK (41%). Most SRs (59%) examined pharmacological interventions. 53% had at least one external source of funding. SRs included a median of 7 studies (IQR 3, 15) and 679 participants (IQR 179, 2833). Of all studies, 48% included only children and 27% only adults. 94% of studies were published in peer-reviewed journals. Primary outcomes were specified in 72% of SRs. Allocation concealment and the Jadad scale were used in 97% and 25% of SRs, respectively. Adults and children were analyzed separately in 12% of SRs and as a subgroup analysis in 14%. Publication bias was assessed in only 14% of SRs. A meta-analysis was conducted in 68% of SRs, with a median of 5 trials (IQR 3, 9) each. Variations in these characteristics were observed across topic areas. Conclusions We described the methodological characteristics and rigour of child-relevant reviews in the CDSR. Many SRs are not up-to-date according to Cochrane criteria. Our study describes variation in conduct and reporting across SRs, which may limit clinicians' ability to access child-specific data.
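The median (IQR) summaries used throughout these results, e.g. "a median of 7 studies (IQR 3, 15)", can be computed as sketched below. The sample values are hypothetical, not the study's data, and the percentile method shown is one of several conventions for small samples.

```python
# Sketch of the median (IQR) descriptive summaries reported above.
# The values below are illustrative review sizes, not the study's data.
from statistics import median, quantiles

def median_iqr(values):
    """Return the median with the 25th and 75th percentiles (IQR bounds)."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    return median(values), q1, q3

studies_per_review = [3, 3, 5, 7, 9, 15, 20]  # hypothetical trial counts
med, q1, q3 = median_iqr(studies_per_review)
print(f"median {med} (IQR {q1}, {q3})")
```

The IQR is reported alongside the median because counts of trials and participants per review are strongly right-skewed, so a mean and SD would be misleading.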

    Do health care institutions value research? A mixed methods study of barriers and facilitators to methodological rigor in pediatric randomized trials

    BACKGROUND: Pediatric randomized controlled trials (RCTs) are susceptible to a high risk of bias. We examined the barriers and facilitators that pediatric trialists face in the design and conduct of unbiased trials. METHODS: We used a mixed methods design, with semi-structured interviews building upon the results of a quantitative survey. We surveyed Canadian (n=253) and international (n=600) pediatric trialists regarding their knowledge and awareness of bias and their perceived barriers and facilitators in conducting clinical trials. We then interviewed 13 participants from different subspecialties and geographic locations to gain a more detailed description of how their experiences and attitudes towards research interacted with trial design and conduct. RESULTS: The survey response rate was 23.0% (186/807). Of respondents, 68.1% agreed that bias is a problem in pediatric RCTs and 72.0% felt that there is sufficient evidence to support changing some aspects of how trials are conducted. Knowledge related to bias was variable, with inconsistent awareness of study design features that may introduce bias into a study. Interview participants highlighted a lack of formal training in research methods, a negative research culture, and the pragmatics of trial conduct as barriers. Facilitators included contact with knowledgeable and supportive colleagues and infrastructure for research. CONCLUSIONS: A lack of awareness of bias and negative attitudes towards research present significant barriers to conducting methodologically rigorous pediatric RCTs. Knowledge translation efforts must focus on these issues to ensure the relevance and validity of trial results.

    Assessing the quality of reports of systematic reviews in pediatric complementary and alternative medicine

    OBJECTIVE: To examine the quality of reports of complementary and alternative medicine (CAM) systematic reviews in the pediatric population, and to examine whether the quality of reports of a subset of CAM reviews differed from that of reviews of conventional interventions. METHODS: We assessed the quality of reports of 47 CAM systematic reviews and 19 reviews evaluating a conventional intervention. The quality of each report was assessed using a validated 10-point scale. RESULTS: Authors were particularly good at reporting the eligibility criteria for including primary studies, combining the primary studies appropriately for quantitative analysis, and basing their conclusions on the data included in the review. Reviewers were weak in reporting how they avoided bias in the selection of primary studies and how they evaluated the validity of the primary studies. Overall, the reports achieved 43% (median = 3) of the maximum possible total score. The overall quality of reporting was similar for CAM reviews and reviews of conventional therapies. CONCLUSIONS: Evidence-based health care continues to make important contributions to the well-being of children. To ensure the pediatric community can make maximal use of these interventions, it is important that systematic reviews are conducted and reported at the highest possible quality. Such reviews will benefit a broad spectrum of interested stakeholders.