
    Reliability and validity of the AGREE instrument used by physical therapists in assessment of clinical practice guidelines

    BACKGROUND: The AGREE instrument has been validated for evaluating clinical practice guidelines (CPGs) pertaining to medical care. This study evaluated the reliability and validity of physical therapists using the AGREE to assess the quality of CPGs relevant to physical therapy practice. METHODS: A total of 69 physical therapists participated and were classified as generalists, specialists or researchers. Pairs of appraisers within each category independently evaluated a set of 6 CPGs selected at random from a pool of 55 CPGs. RESULTS: Reliability between pairs of appraisers ranged from low to high depending on the domain and the number of appraisers (0.17–0.81 for a single appraiser; 0.30–0.96 when scores were averaged across a pair of appraisers). The highest reliability was achieved for Rigour of Development, which exceeded an ICC of 0.79 when scores from pairs of appraisers were pooled. Adding more than 3 appraisers did not consistently improve reliability. Appraiser type did not determine reliability scores. End-users, including study participants and a separate sample of 102 physical therapy students, found the AGREE useful to guide critical appraisal. The construct validity of the AGREE was supported in that expected differences on the Rigour of Development domain were observed between guidelines developed by expert panels and those with no/uncertain expertise (differences of 10–21%, p = 0.09–0.001). Factor analysis with varimax rotation produced a 4-factor solution that was similar to, although not in exact agreement with, the AGREE domains. Validity was also supported by the correlation observed (Kendall's tau = 0.69) between the Overall Assessment and the Rigour of Development domain. CONCLUSION: These findings suggest that the AGREE instrument is reliable and valid when used by physiotherapists to assess the quality of CPGs pertaining to physical therapy health services.
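
    The reliability and validity statistics reported above (ICC for single versus pooled appraisers, and Kendall's tau between the Overall Assessment and Rigour of Development scores) can be illustrated with a short sketch. The data below are hypothetical, and the ICC model (two-way random effects, Shrout & Fleiss) is an assumption; the abstract does not state which variant the authors used.

```python
# Minimal sketch (hypothetical data): two-way random-effects ICC for pairs of
# appraisers, plus Kendall's tau between Overall Assessment and Rigour of
# Development scores.
import numpy as np
from scipy.stats import kendalltau

def icc_two_way_random(ratings: np.ndarray) -> tuple[float, float]:
    """ratings: n_guidelines x k_appraisers matrix of domain scores.
    Returns ICC(2,1) (single appraiser) and ICC(2,k) (averaged appraisers)."""
    n, k = ratings.shape
    grand_mean = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
    ss_error = ((ratings - grand_mean) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                    # between-guideline mean square
    msc = ss_cols / (k - 1)                    # between-appraiser mean square
    mse = ss_error / ((n - 1) * (k - 1))       # residual mean square
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average

# Hypothetical Rigour of Development scores (%) for 6 guidelines, 2 appraisers.
rigour = np.array([[62, 70], [35, 41], [78, 83], [55, 60], [22, 30], [68, 66]])
icc_single, icc_pair = icc_two_way_random(rigour)

# Hypothetical Overall Assessment ratings for the same 6 guidelines.
overall = [3, 2, 4, 3, 1, 4]
tau, p_value = kendalltau(overall, rigour.mean(axis=1))
print(f"ICC single={icc_single:.2f}, pooled pair={icc_pair:.2f}, Kendall tau={tau:.2f}")
```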

    Development of evidence-based clinical practice guidelines (CPGs): comparing approaches

    BACKGROUND: While the potential of clinical practice guidelines (CPGs) to support implementation of evidence has been demonstrated, it is not currently being achieved. CPGs are both poorly developed and ineffectively implemented. To improve clinical practice and health outcomes, both well-developed CPGs and effective methods of CPG implementation are needed. We sought to establish whether there is agreement on the fundamental characteristics of an evidence-based CPG development process and to explore whether the level of guidance provided in CPG development handbooks is sufficient for people using these handbooks to be able to apply it. METHODS: CPG development handbooks were identified through a broad search of published and grey literature. Documents published in English produced by national or international organisations purporting to support development of evidence-based CPGs were included. A list of 14 key elements of a CPG development process was developed. Two authors read each handbook. For each handbook a judgement was made as to how it addressed each element, assigned as: 'mentioned and clear guidance provided', 'mentioned but limited practical detail provided', or 'not mentioned'. RESULTS: Six CPG development handbooks were included. These were produced by the Council of Europe, the National Health and Medical Research Council of Australia, the National Institute for Health and Clinical Excellence in the UK, the New Zealand Guidelines Group, the Scottish Intercollegiate Guideline Network, and the World Health Organization (WHO). There was strong concordance between the handbooks on the key elements of an evidence-based CPG development process. All six of the handbooks require and provide guidance on establishment of a multidisciplinary guideline development group, involvement of consumers, identification of clinical questions or problems, systematic searches for and appraisal of research evidence, a process for drafting recommendations, consultation with others beyond the guideline development group, and ongoing review and updating of the CPG. CONCLUSION: The key elements of an evidence-based CPG development process are addressed with strong concordance by existing CPG development handbooks. Further research is required to determine why these key elements are often not addressed by CPG developers.
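
    As a rough illustration of the appraisal process described above (two reviewers judging how each of 14 key elements is addressed in each handbook), the sketch below tabulates element-level judgements. The specific element names, handbook labels, and judgements are hypothetical placeholders, not the authors' actual coding sheet.

```python
# Illustrative sketch only: tabulating how each handbook addresses each key
# element, using the three judgement categories from the study. The entries
# below are hypothetical placeholders.
from collections import Counter

RATINGS = ("clear guidance", "limited detail", "not mentioned")

# judgements[element][handbook] -> one of RATINGS
judgements = {
    "multidisciplinary development group": {"NICE": "clear guidance", "SIGN": "clear guidance", "WHO": "limited detail"},
    "consumer involvement":                {"NICE": "clear guidance", "SIGN": "limited detail", "WHO": "not mentioned"},
    "systematic evidence search":          {"NICE": "clear guidance", "SIGN": "clear guidance", "WHO": "clear guidance"},
}

for element, by_handbook in judgements.items():
    counts = Counter(by_handbook.values())
    summary = ", ".join(f"{rating}: {counts.get(rating, 0)}" for rating in RATINGS)
    print(f"{element} -> {summary}")
```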

    The GuideLine Implementability Appraisal (GLIA): development of an instrument to identify obstacles to guideline implementation

    BACKGROUND: Clinical practice guidelines are not uniformly successful in influencing clinicians' behaviour toward best practices. Implementability refers to a set of characteristics that predict ease of (and obstacles to) guideline implementation. Our objective was to develop and validate a tool for appraisal of the implementability of clinical guidelines. METHODS: Indicators of implementability were identified from the literature and used to create the items and dimensions of the GuideLine Implementability Appraisal (GLIA). GLIA consists of 31 items, arranged into 10 dimensions. Questions from 9 of the 10 dimensions are applied individually to each recommendation of the guideline. Decidability and Executability are critical dimensions. The other dimensions are Global, Presentation and Formatting, Measurable Outcomes, Apparent Validity, Flexibility, Effect on Process of Care, Novelty/Innovation, and Computability. We conducted a series of validation activities, including validation of the construct of implementability, expert review of content for clarity, relevance, and comprehensiveness, and assessment of the construct validity of the instrument. Finally, GLIA was applied to a draft guideline under development by national professional societies. RESULTS: Evidence of content validity and preliminary support for construct validity were obtained. GLIA proved useful in identifying barriers to implementation in the draft guideline, and the guideline was revised accordingly. CONCLUSION: GLIA may be useful to guideline developers, who can apply the results to remedy defects in their guidelines. Likewise, guideline implementers may use GLIA to select implementable recommendations and to devise implementation strategies that address identified barriers. By aiding the design and operationalization of highly implementable guidelines, application of GLIA may help to improve health outcomes, but further evaluation will be required to support this potential benefit.
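
    A minimal sketch of how recommendation-level GLIA appraisal data might be organised is given below. The item wording, the answers, and the rule that a "No" answer flags a barrier are illustrative assumptions; the published GLIA scoring instructions are not reproduced here.

```python
# Minimal sketch (hypothetical items and answers): organising a recommendation-
# level GLIA appraisal and flagging dimensions with "No" answers as potential
# implementation barriers.
from dataclasses import dataclass

@dataclass
class DimensionResult:
    dimension: str
    answers: dict[str, str]  # item text -> "Yes" / "No" / "N/A"

    def barriers(self) -> list[str]:
        # Treat items answered "No" as potential obstacles to implementation.
        return [item for item, answer in self.answers.items() if answer == "No"]

# Hypothetical appraisal of a single draft recommendation.
appraisal = [
    DimensionResult("Decidability", {"Conditions for action are stated precisely": "No"}),
    DimensionResult("Executability", {"Recommended action is stated specifically": "Yes"}),
    DimensionResult("Computability", {"Required data are available electronically": "No"}),
]

for result in appraisal:
    if result.barriers():
        print(f"{result.dimension}: possible barriers: {'; '.join(result.barriers())}")
```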

    Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews

    BACKGROUND: Our objective was to develop an instrument to assess the methodological quality of systematic reviews, building upon previous tools, empirical evidence and expert consensus. METHODS: A 37-item assessment tool was formed by combining 1) the enhanced Overview Quality Assessment Questionnaire (OQAQ), 2) a checklist created by Sacks, and 3) three additional items recently judged to be of methodological importance. This tool was applied to 99 paper-based and 52 electronic systematic reviews. Exploratory factor analysis was used to identify underlying components. The results were considered by methodological experts using a nominal group technique aimed at item reduction and design of an assessment tool with face and content validity. RESULTS: The factor analysis identified 11 components. From each component, one item was selected by the nominal group. The resulting instrument was judged to have face and content validity. CONCLUSION: A measurement tool for the 'assessment of multiple systematic reviews' (AMSTAR) was developed. The tool consists of 11 items and has good face and content validity for measuring the methodological quality of systematic reviews. Additional studies are needed, with a focus on the reproducibility and construct validity of AMSTAR, before strong recommendations can be made on its use.
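
    The item-reduction step described above combined exploratory factor analysis with a nominal group process. The sketch below illustrates the factor-analysis part only, on simulated item scores, using scikit-learn's FactorAnalysis; the study's actual software, item data, and selection criteria are not stated in the abstract.

```python
# Illustrative sketch only (simulated item scores): exploratory factor analysis
# in the spirit of the item-reduction step described above.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_reviews, n_items, n_components = 151, 37, 11   # 99 paper + 52 electronic reviews

# Simulated binary item scores (1 = criterion met) for the appraised reviews.
items = (rng.random((n_reviews, n_items)) > 0.5).astype(float)

fa = FactorAnalysis(n_components=n_components, random_state=0)
fa.fit(items)

# For each component, the most strongly loading item would be a candidate to
# retain, mirroring the nominal-group selection of one item per component.
loadings = fa.components_                        # shape: (n_components, n_items)
candidate_items = np.abs(loadings).argmax(axis=1)
print("Candidate item indices (one per component):", sorted(set(candidate_items.tolist())))
```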

    Adapting a generic tuberculosis control operational guideline and scaling it up in China: a qualitative case study

    BACKGROUND: The TB operational guideline (the deskguide) is a detailed action guide for county TB doctors aiming to improve the quality of DOTS, while the China national TB policy guide is a guide to TB control that is comprehensive but lacks operational usability for frontline TB doctors. This study reports the process of deskguide adaptation, its scale-up, and lessons learnt for policy implications. METHODS: The deskguide was translated, reviewed, and revised in a working group process. Details of the eight adaptation steps are reported here. An operational study was embedded in the adaptation process. Two comparable prefectures were chosen as pilot and control sites in each of two participating provinces. In the pilot sites, the deskguide was used with the national policy guide in routine in-service training and supervisory trips; in the control sites, only the national policy guide was used. In-depth interviews and focus groups were conducted with 16 county TB doctors, 16 township doctors, 17 village doctors, 63 TB patients and 57 patient family members. Following piloting, the deskguide was incorporated into the national TB guidelines for county TB dispensary use. RESULTS: Qualitative research identified that the deskguide was useful in the daily practice of county TB doctors. Patients in the pilot sites had better knowledge of TB and better treatment support compared with those in the control sites. CONCLUSION: The adaptation process highlighted a number of general strategies for adapting generic guidelines into country-specific ones: 1) local policy-makers and practitioners should have a leading role; 2) a systematic working process should be employed with capable focal persons; and 3) the guideline should be embedded within current programmes so that it is sustainable and replicable for further scale-up.

    Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)

    Objectives: To describe the development, validation and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids). Design: Scale development study, involving construct, item and scale development, validation and reliability testing. Setting: There has been increasing use of decision support technologies – adjuncts to the discussions clinicians have with patients about difficult decisions. A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias and method of field testing and evaluation. Participants: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies. Results: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed that had very similar mean scores to IPDASi and a high correlation between the short score and the overall score (0.87; CI 0.79 to 0.92). Conclusions: This work demonstrates that IPDASi has the ability to assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
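
    Cronbach's alpha across raters, one of the summary statistics reported for IPDASi, can be computed as in the sketch below. The ratings are simulated and the rater-noise model is an assumption made purely for illustration.

```python
# Minimal sketch (simulated ratings): Cronbach's alpha across raters for a
# 0-100 overall quality score. The data and noise model below are illustrative
# assumptions, not the study's data.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: n_observations x k_raters (or k_items) matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
true_quality = rng.uniform(33, 82, size=(30, 1))          # underlying quality of 30 DSTs
ratings = true_quality + rng.normal(0, 5, size=(30, 8))   # 8 raters with random error
print(f"Cronbach's alpha across raters: {cronbach_alpha(ratings):.2f}")
```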