
    Systems validation: application to statistical programs

    BACKGROUND: In 2003, the United States Food and Drug Administration (FDA) released a guidance document on the scope of "Part 11" enforcement. In this guidance document, the FDA indicates an expectation of a risk-based approach to determining which systems should undergo validation. Since statistical programs manage and manipulate raw data, their implementation should be critically reviewed to determine whether they should undergo validation. However, the concepts of validation are not often discussed in biostatistics curricula. DISCUSSION: This paper summarizes a "Plan, Do, Say" approach to validation that can be incorporated into statistical training so that biostatisticians can understand and implement validation principles in their research. SUMMARY: Validation is a process that requires dedicated attention. The process of validation can be easily understood in the context of the scientific method.

    The ADEA at the Top of the Food Chain: Who's Protecting the Higher-Salaried Employees?

    After the ruling in Hazen Paper, older workers who are terminated will have a more difficult burden proving their termination was based on age. Previously used disparate impact and disparate treatment theories will be more difficult to utilize, and employers may be allowed wider use of factors other than age as reasons for termination, without the fear that these factors will be considered age proxies.

    Development and validation of the guideline for reporting evidence-based practice educational interventions and teaching (GREET)

    BACKGROUND: The majority of reporting guidelines assist researchers to report consistent information concerning study design; however, they contain limited information for describing study interventions. Using a three-stage development process, the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET) checklist and accompanying explanatory paper were developed to provide guidance for the reporting of educational interventions for evidence-based practice (EBP). The aim of this study was to complete the final development of the GREET checklist, incorporating psychometric testing to determine inter-rater reliability and criterion validity. METHODS: The final development of the GREET checklist incorporated the results of a prior systematic review and Delphi survey. Thirty-nine items, including all items from the prior systematic review, were proposed for inclusion in the GREET checklist. These 39 items were considered over a series of consensus discussions to determine their inclusion in the GREET checklist. The GREET checklist and explanatory paper were then developed and underwent psychometric testing with tertiary health professional students, who evaluated the completeness of the reporting in a published study using the GREET checklist. For each GREET checklist item, the consistency (%) of agreement between participant ratings and the consensus criterion reference measure was calculated. Criterion validity and inter-rater reliability were analysed using intra-class correlation coefficients (ICC). RESULTS: Three consensus discussions were undertaken, with 14 items identified for inclusion in the GREET checklist. Following further expert review by the Delphi panelists, three items were added and minor wording changes were completed, resulting in 17 checklist items. Psychometric testing for the updated GREET checklist was completed by 31 participants (n = 11 undergraduate, n = 20 postgraduate). The consistency of agreement between the participant ratings for completeness of reporting and the consensus criterion ratings ranged from 19% for item 4 (Steps of EBP) to 94% for item 16 (Planned delivery). The overall consistency of agreement, for criterion validity (ICC 0.73) and inter-rater reliability (ICC 0.96), was good to almost perfect. CONCLUSION: The final GREET checklist comprises 17 items which are recommended for reporting EBP educational interventions. Further validation of the GREET checklist with experts in EBP research and education is recommended.
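
    The psychometric analysis above reports intra-class correlation coefficients for criterion validity and inter-rater reliability. As a rough illustration of how such a coefficient is computed, the sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater) following Shrout and Fleiss; it is a minimal sketch only, and the toy ratings matrix is hypothetical, not the study's data.

    ```python
    import numpy as np

    def icc_2_1(scores: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, absolute agreement, single rater."""
        n, k = scores.shape                      # n targets (items), k raters
        grand = scores.mean()
        row_means = scores.mean(axis=1)          # per-target means
        col_means = scores.mean(axis=0)          # per-rater means

        # Mean squares from the two-way ANOVA decomposition.
        ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
        ss_total = np.sum((scores - grand) ** 2)
        ms_error = (ss_total - ms_rows * (n - 1) - ms_cols * (k - 1)) / ((n - 1) * (k - 1))

        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    # Hypothetical example: five checklist items rated by three raters.
    ratings = np.array([[4, 4, 5], [2, 3, 2], [5, 5, 5], [1, 2, 1], [3, 3, 4]])
    print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
    ```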

    Protocol for development of the guideline for reporting evidence based practice educational interventions and teaching (GREET) statement

    BACKGROUND: There are an increasing number of studies reporting the efficacy of educational strategies to facilitate the development of knowledge and skills underpinning evidence based practice (EBP). To date there is no standardised guideline for describing the teaching, evaluation, context or content of EBP educational strategies. The heterogeneity in the reporting of EBP educational interventions makes comparisons between studies difficult. The aim of this program of research is to develop the Guideline for Reporting EBP Educational interventions and Teaching (GREET) statement and an accompanying explanation and elaboration (E&E) paper. METHODS/DESIGN: Three stages are planned for the development process. Stage one will comprise a systematic review to identify features commonly reported in descriptions of EBP educational interventions. In stage two, corresponding authors of articles included in the systematic review and the editors of the journals in which these studies were published will be invited to participate in a Delphi process to reach consensus on items to be considered when reporting EBP educational interventions. The final stage of the project will include the development and pilot testing of the GREET statement and E&E paper. OUTCOME: The final outcome will be the Guideline for Reporting EBP Educational interventions and Teaching (GREET) statement and E&E paper. DISCUSSION: The reporting of health research, including EBP educational research interventions, has been criticised for a lack of transparency and completeness. The development of the GREET statement will enable the standardised reporting of EBP educational research, providing a guide for researchers, reviewers and publishers when reporting EBP educational interventions.

    A systematic review of how studies describe educational interventions for evidence-based practice: Stage 1 of the development of a reporting guideline

    BACKGROUND: The aim of this systematic review was to identify which information is included when reporting educational interventions used to facilitate foundational skills and knowledge of evidence-based practice (EBP) training for health professionals. This systematic review comprised the first stage in the three-stage development process for a reporting guideline for educational interventions for EBP. METHODS: The review question was 'What information has been reported when describing educational interventions targeting foundational evidence-based practice knowledge and skills?' MEDLINE, Academic Search Premier, ERIC, CINAHL, Scopus, Embase, Informit Health, Cochrane Library and Web of Science databases were searched from inception until October to December 2011. Randomised and non-randomised controlled trials reporting original data on educational interventions specific to developing foundational knowledge and skills of evidence-based practice were included. Studies were not appraised for methodological bias; however, reporting frequency and item commonality were compared between a random selection of studies included in the systematic review and a random selection of studies excluded because they were not controlled trials. Twenty-five data items were extracted by two independent reviewers (consistency > 90%). RESULTS: Sixty-one studies met the inclusion criteria (n = 29 randomised, n = 32 non-randomised). The most consistently reported items were the learner's stage of training, professional discipline and the evaluation methods used (100%). The least consistently reported items were the instructor(s)' previous teaching experience (n = 8, 13%) and student effort outside face-to-face contact (n = 1, 2%). CONCLUSION: This systematic review demonstrates inconsistencies in describing educational interventions for EBP in randomised and non-randomised trials. To enable educational interventions to be replicable and comparable, improvements in the reporting of educational interventions for EBP are required. In the absence of a specific reporting guideline, a range of items is reported with variable frequency. The items that are important for describing educational interventions for facilitating foundational knowledge and skills in EBP remain to be determined. The findings of this systematic review will be used to inform the next stage in the development of a reporting guideline for educational interventions for EBP.
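
    To illustrate the kind of frequency tabulation reported above (items reported in 100% of studies down to 2%), the sketch below computes per-item reporting frequency from a study-by-item extraction table of 1/0 flags. The item names and data are hypothetical, not the review's extraction set.

    ```python
    import pandas as pd

    # Rows: included studies; columns: extraction items (1 = reported, 0 = not).
    extraction = pd.DataFrame(
        {
            "learner_stage_of_training":      [1, 1, 1, 1, 1],
            "evaluation_methods":             [1, 1, 1, 1, 1],
            "instructor_teaching_experience": [0, 1, 0, 0, 0],
            "student_effort_outside_contact": [0, 0, 0, 0, 1],
        },
        index=[f"study_{i:02d}" for i in range(1, 6)],
    )

    # Percentage of studies reporting each item, most to least consistent.
    reporting_frequency = extraction.mean().mul(100).round(0)
    print(reporting_frequency.sort_values(ascending=False))
    ```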

    A Delphi survey to determine how educational interventions for evidence-based practice should be reported: Stage 2 of the development of a reporting guideline

    BACKGROUND: Undertaking a Delphi exercise is recommended during the second stage in the development process for a reporting guideline. To continue the development of the Guideline for Reporting Evidence-based practice Educational interventions and Teaching (GREET), a Delphi survey was undertaken to determine the consensus opinion of researchers, journal editors and educators in evidence-based practice (EBP) regarding the information items that should be reported when describing an educational intervention for EBP. METHODS: A four-round online Delphi survey was conducted from October 2012 to March 2013. The Delphi panel comprised international researchers, educators and journal editors in EBP. Commencing with an open-ended question, participants were invited to volunteer information considered important when reporting educational interventions for EBP. Over three subsequent rounds, participants were invited to rate the importance of each of the Delphi items using an 11-point Likert rating scale (low 0 to 4, moderate 5 to 6, high 7 to 8 and very high >8). Consensus agreement was set a priori as at least 80 per cent participant agreement. Consensus agreement was initially calculated within the four categories of importance (low to very high), prior to these four categories being merged into two (<7 and ≥7). Descriptive statistics for each item were computed, including the mean Likert score, standard deviation (SD), range and median participant score. The mean absolute deviation from the median (MAD-M) was also calculated as a measure of participant disagreement. RESULTS: Thirty-six experts agreed to participate and 27 (79%) participants completed all four rounds. A total of 76 information items were generated across the four survey rounds. Thirty-nine items (51%) were specific to describing the intervention (as opposed to other elements of study design) and consensus agreement was achieved for two of these items (5%). When the four rating categories were merged into two (<7 and ≥7), 18 intervention items achieved consensus agreement. CONCLUSION: This Delphi survey identified 39 items for describing an educational intervention for EBP. These Delphi intervention items will provide the groundwork for the subsequent consensus discussion to determine the final inclusion of items in the GREET, the first reporting guideline for educational interventions in EBP.
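
    The per-item analysis described above is straightforward to reproduce. The sketch below computes the reported descriptive statistics (mean, SD, range, median, MAD-M) and applies the a priori consensus rule (at least 80 per cent of panellists rating the item ≥7) to hypothetical ratings; it illustrates the stated definitions and is not the survey's actual analysis code.

    ```python
    import numpy as np

    def delphi_item_summary(ratings, high_cutoff=7, consensus_threshold=0.80):
        """Summarise one Delphi item's 0-10 Likert ratings."""
        r = np.asarray(ratings, dtype=float)
        median = np.median(r)
        prop_high = np.mean(r >= high_cutoff)    # share of panellists rating >= 7
        return {
            "mean": r.mean(),
            "sd": r.std(ddof=1),
            "range": (r.min(), r.max()),
            "median": median,
            # MAD-M: mean absolute deviation from the median (disagreement).
            "mad_m": np.mean(np.abs(r - median)),
            "consensus": prop_high >= consensus_threshold,
        }

    # Hypothetical ratings from a 27-member panel for one candidate item.
    item_ratings = [9, 8, 7, 7, 10, 6, 8, 9, 7, 8, 7, 9, 10, 8,
                    7, 6, 8, 9, 7, 8, 9, 7, 8, 10, 7, 8, 9]
    print(delphi_item_summary(item_ratings))
    ```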

    A surveillance system to assess the need for updating systematic reviews.

    BACKGROUND: Systematic reviews (SRs) can become outdated as new evidence emerges over time. Organizations that produce SRs need a surveillance method to determine when reviews are likely to require updating. This report describes the development and initial results of a surveillance system to assess SRs produced by the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) Program. METHODS: Twenty-four SRs were assessed using existing methods that incorporate limited literature searches, expert opinion, and quantitative methods to detect signals triggering the need for updating. The system was designed to begin surveillance six months after the release of the original review, and then every six months thereafter for any review not classified as a high priority for updating. The outcome of each round of surveillance was a classification of the SR as a low, medium or high priority for updating. RESULTS: Twenty-four SRs underwent surveillance at least once, and ten underwent surveillance a second time during the 18 months of the program. Two SRs were classified as high, five as medium, and 17 as low priority for updating. The time lapse between the searches conducted for the original reports and the updated searches (search time lapse, STL) ranged from 11 months to 62 months: the STLs for the high-priority reports were 29 and 54 months; those for medium-priority reports ranged from 19 to 62 months; and those for low-priority reports ranged from 11 to 33 months. Neither the STL nor the number of new relevant articles was perfectly associated with a signal for updating. Challenges in implementing the surveillance system included determining what constituted the actual conclusions of an SR that required assessment, and sometimes poor response rates from experts. CONCLUSION: In this system of regular surveillance of 24 systematic reviews on a variety of clinical interventions produced by a leading organization, about 70% of reviews were determined to have a low priority for updating. Evidence suggests that a yearly surveillance interval is more appropriate than the six months used in this project.
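
    The abstract does not spell out the classification rules, but the surveillance loop it describes can be sketched roughly as follows. The signal fields and the mapping to low/medium/high priority below are illustrative assumptions, not the actual AHRQ EPC criteria.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SurveillanceSignals:
        new_relevant_trials: int     # from the limited literature search
        experts_flag_change: bool    # expert opinion suggests conclusions changed
        quantitative_signal: bool    # e.g. updated pooling would shift an estimate

    def classify_update_priority(s: SurveillanceSignals) -> str:
        """Map surveillance signals to a low/medium/high updating priority
        (hypothetical rules for illustration only)."""
        if s.experts_flag_change or s.quantitative_signal:
            return "high"    # high-priority reviews leave the six-monthly cycle
        if s.new_relevant_trials > 0:
            return "medium"
        return "low"

    print(classify_update_priority(SurveillanceSignals(2, False, False)))  # medium
    ```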

    Are bisphosphonates effective in the treatment of osteoarthritis pain? A meta-analysis and systematic review.

    Osteoarthritis (OA) is the most common form of arthritis worldwide. Pain and reduced function are the main symptoms in this prevalent disease. There are currently no treatments for OA that modify disease progression; therefore, analgesic drugs and joint replacement for larger joints are the standard of care. In light of several recent studies reporting the use of bisphosphonates for OA treatment, our work aimed to evaluate the published literature to assess the effectiveness of bisphosphonates in the treatment of OA.

    Evidence Synthesis for Complex Interventions Using Meta-Regression Models

    This study was funded by the Canadian Institutes of Health Research (grants FDN-143269 and FRN-123345) and a research fellowship held by K.J.K. (Frederick Banting and Charles Best Canada Graduate Scholarship GSD-134936). N.M.I. holds a Canada Research Chair (Tier 2) in Implementation of Evidence Based Practice and a Clinician Scientist Award from the Department of Family and Community Medicine at the University of Toronto (Toronto, Ontario, Canada). J.M.G. held a Canada Research Chair in Health Knowledge Transfer and Uptake during the time of the study's conduct and was supported by a Foundation Grant from the Canadian Institutes of Health Research. D.M. was supported by a University of Ottawa Research Chair during the time of study conduct.

    Evidence summaries: the evolution of a rapid review approach

    BACKGROUND: Rapid reviews have emerged as a streamlined approach to synthesizing evidence, typically for informing emergent decisions faced by decision makers in health care settings. Although there is growing use of rapid review 'methods', and proliferation of rapid review products, there is a dearth of published literature on rapid review methodology. This paper outlines our experience with rapidly producing, publishing and disseminating evidence summaries in the context of our Knowledge to Action (KTA) research program. METHODS: The KTA research program is a two-year project designed to develop and assess the impact of a regional knowledge infrastructure that supports evidence-informed decision making by regional managers and stakeholders. As part of this program, we have developed evidence summaries, our form of rapid review, which have come to be a flagship component of this project. Our eight-step approach for producing evidence summaries has been developed iteratively, based on evidence (where available), experience and knowledge user feedback. The aim of our evidence summary approach is to deliver quality evidence that is both timely and user-friendly. RESULTS: From November 2009 to March 2011 we produced 11 evidence summaries on a diverse range of questions identified by our knowledge users. Topic areas have ranged from questions of clinical effectiveness to questions on health systems and/or health services. Knowledge users have reported evidence summaries to be of high value in informing their decisions and initiatives. We continue to experiment with incorporating more of the established methods of systematic reviews, while maintaining our capacity to deliver a final product in a timely manner. CONCLUSIONS: The evolution of the KTA rapid review evidence summaries has been a positive one. We have developed an approach that appears to be addressing a need by knowledge users for timely, user-friendly, and trustworthy evidence, and have transparently reported these methods here for the wider rapid review and scientific community.