
    Barriers and attitudes influencing non-engagement in a peer feedback model to inform evidence for GP appraisal

    Abstract

    Background: The UK general practitioner (GP) appraisal system is deemed to be an inadequate source of performance evidence to inform a future medical revalidation process. A long-running voluntary model of external peer review in the west of Scotland provides feedback by trained peers on the standard of GP colleagues' core appraisal activities and may 'add value' in strengthening the robustness of the current system in support of revalidation. A significant minority of GPs has participated in the peer feedback model, but a clear majority has yet to engage with it. We aimed to explore the views of non-participants to identify barriers to engagement and attitudes to external peer review as a means to inform the current appraisal system.

    Methods: We conducted semi-structured interviews with a sample of west of Scotland GPs who had yet to participate in the peer review model. A thematic analysis of the interview transcriptions was conducted using a constant comparative approach.

    Results: Thirteen GPs were interviewed, of whom nine were male. Four core themes were identified in relation to the perceived and experienced 'value' placed on the topics discussed and their relevance to routine clinical practice and professional appraisal: (1) value of the appraisal improvement activity; (2) value of external peer review; (3) value of the external peer review model and host organisation; and (4) attitudes to external peer review.

    Conclusions: GPs in this study questioned the 'value' of participation in the external peer review model and the national appraisal system over the standard of internal feedback received from immediate work colleagues. There was a limited understanding of the concept, context and purpose of external peer review, and some distrust of the host educational provider. Future engagement with the model by these GPs is likely to be influenced by policy to improve the standard of appraisal and contract-related activities, rather than by a self-directed recognition of learning needs.

    A review of significant events analysed in general practice: implications for the quality and safety of patient care

    Abstract

    Background: Significant event analysis (SEA) is promoted as a team-based approach to enhancing patient safety through reflective learning. Evidence of SEA participation is required for appraisal and contractual purposes in UK general practice. A voluntary educational model in the west of Scotland enables general practitioners (GPs) and doctors-in-training to submit SEA reports for feedback from trained peers. We reviewed reports to identify the range of safety issues analysed, learning needs raised and actions taken by GP teams.

    Method: Content analysis of SEA reports submitted in an 18-month period between 2005 and 2007.

    Results: 191 SEA reports were reviewed. 48 (25.1%) described patient harm. A further 109 reports (57.1%) outlined circumstances that had the potential to cause patient harm. Individual 'error' was cited as the most common reason for event occurrence (32.5%). Learning opportunities were identified in 182 reports (95.3%) but were often non-specific professional issues not shared with the wider practice team. 154 SEA reports (80.1%) described actions taken to improve practice systems or professional behaviour. However, non-medical staff were less likely to be involved in the changes resulting from event analyses describing patient harm (p < 0.05).

    Conclusion: The study provides some evidence of the potential of SEA to improve healthcare quality and safety. If applied rigorously, GP teams and doctors in training can use the technique to investigate and learn from a wide variety of quality issues, including those resulting in patient harm. This leads to reported change, but it is unclear whether such improvement is sustained.
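    The staff-involvement comparison above (p < 0.05) is the kind of association usually tested with a chi-squared test on a 2x2 contingency table (harm versus no-harm events against staff involved versus not involved). The sketch below is illustrative only: the counts are invented, since the abstract does not publish the underlying table.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table; the paper's actual counts are not given here.
# Rows: event described patient harm (yes / no)
# Columns: non-medical staff involved in the resulting change (yes / no)
table = [[12, 36],   # harm events: involved / not involved
         [62, 47]]   # no-harm events: involved / not involved
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```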

    Significant events in general practice: issues involved in grading, reporting, analyses and peer review

    General practitioners (GPs) and their teams in the United Kingdom (UK) are encouraged to identify and analyse significant health care events. Additionally, there is an expectation that specific significant events should be notified to reporting and learning systems where these exist. Policy initiatives, such as clinical governance, GP appraisal and the new General Medical Services (nGMS) contract, attempt to ensure that significant event analysis (SEA) is a frequent educational activity for GP teams. The presumption from policymakers and healthcare authorities is that GP teams are demonstrating a commitment to reflect on, learn from and resolve issues which impact on the quality and safety of patient care. However, there is minimal evidence to support these assumptions, and there is no uniform mechanism to ensure consistency in the quality assurance of SEA reports. One potential method of enhancing both the learning from and the quality of SEA is through peer review. In the west of Scotland an educational model to facilitate the peer review of SEA reports has existed since 1998. However, knowledge and understanding of the role and impact of this process are limited. With the potential of peer review of SEA to contribute to GP appraisal and the nGMS contract, there was a need to develop a more evidence-based approach to the peer review of SEA. The main aims of this thesis are therefore:

    • To identify and explore the issues involved if the identification, analysis and reporting of significant events are to be associated with quality improvement in general practice.
    • To investigate whether a peer feedback model can enhance the value of SEA so that its potential as a reflective learning technique can be maximised within the current educational and contractual requirements for GPs.

    To achieve these aims, a series of mixed-methods research studies was undertaken.

    To examine attitudes to the identification and reporting of significant events, a postal questionnaire survey of 617 GP principals in NHS Greater Glasgow was undertaken. Of the 466 (76%) individuals who responded, 81 (18%) agreed that the reporting of such events should be mandatory, while 317 (73%) indicated that they would be selective in what they notified to a potential reporting system. Any system was likely to be limited by the difficulty many GPs (41%) reported in determining when an event was 'significant'.

    To examine levels of agreement on the grading, analysis and reporting of standardised significant event scenarios between different west of Scotland GP groups (e.g. GP appraisers, GP registrar trainers, SEA peer reviewers), a further postal questionnaire survey was conducted. 122 GPs (77%) responded. No difference was found between the groups in the grading severity of significant event scenarios (range of p values = 0.30-0.79). Increased grading severity was linked to the willingness of each group to analyse and report that event (p < 0.05). The strong levels of agreement suggest that GPs can prioritise relevant significant events for formal analysis and reporting.

    To identify the range of patient safety issues addressed, learning needs raised and actions taken by GP teams, a sample of 191 SEA reports submitted to the west of Scotland peer review model was subjected to content analysis. 48 (25%) described incidents in which patients were harmed. A further 109 reports (57%) outlined circumstances which had the potential to cause patient harm. Learning opportunities were identified in 182 reports (95%) but were often non-specific professional issues, such as the general diagnosis and management of patients or communication issues within the practice team. 154 (80%) described actions taken to improve practice systems or professional behaviour. Overall, the study provided some proxy evidence of the potential of SEA to improve healthcare quality and safety.

    To improve the quality of SEA peer review, a more detailed instrument was developed and tested for aspects of its validity and reliability. Content validity was quantified by application of a content validity index and was demonstrated, with at least 8 out of 10 experts endorsing all 10 items of the proposed instrument. Reliability testing involved numerical marking exercises of 20 SEA reports by 20 trained SEA peer reviewers. Generalisability (G) theory was used to investigate the ability of the instrument to discriminate among SEA reports. The overall instrument G coefficient was moderate to good (G=0.73), indicating that it can provide consistent information on the standard achieved by individual reports. There was moderate inter-rater reliability (G=0.64) when four raters were used to judge SEA quality. After further training of reviewers, inter-rater reliability improved to G>0.8, with a decision study indicating that two reviewers analysing the same report would give the model sufficient reliability for the purposes of formative assessment.

    In a pilot study to examine the potential of NHS clinical audit specialists to give feedback on SEA reports using the updated review instrument, a comparison of the numerical grading given to reports by this group and by established peer reviewers was undertaken. Both groups gave similar feedback scores when judging the reports (p=0.14), implying that audit specialists could potentially support this system.

    To investigate the acceptability and educational impact associated with a peer-reviewed SEA report, semi-structured interviews were undertaken with nine GPs who had participated in the model. The findings suggested that external peer feedback is acceptable to participants and enhanced the appraisal process. The feedback imparted technical knowledge on how to analyse significant events. Suggestions to enhance the educational gain from the process were given, such as prompting reviewers to offer advice on how they would address the specific significant event described. There was disagreement over whether this type of feedback could or should be used as supporting evidence of the quality of doctors' work to educational and regulatory authorities.

    In a focus group study to explore the experiences of GP peer reviewers, it was found that acting as a reviewer was perceived to be an important professional duty. Consensus on the value of feedback in improving SEA attempts by colleagues was apparent, but there was disagreement and discomfort about making a 'satisfactory' or an 'unsatisfactory' judgement. Some concern was expressed about professional and legal obligations to colleagues and to patients seriously harmed as a result of significant events. Regular training of peer reviewers was thought to be integral to maintaining their skills.

    The findings presented contribute to the limited evidence on the analysis and reporting of significant events in UK general practice. Additionally, aspects of the utility of the peer review model outlined were investigated and support its potential to enhance the application of SEA. The issues identified and the interpretation of findings could inform GPs, professional bodies and healthcare organisations of some of the strengths and limitations of SEA and the aligned educational peer review model.
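    The G coefficients quoted above come from generalisability (G) theory, which partitions score variance into components for reports, raters and residual error, and then asks how reliably a given number of raters would rank the reports. As a minimal, hypothetical sketch (not the thesis's analysis code), the following estimates variance components from a fully crossed reports x raters marking design and runs a simple decision study for one, two and four raters; the synthetic data and all names are assumptions.

```python
import numpy as np

def g_study(scores):
    """Estimate variance components for a fully crossed
    reports x raters design with one score per cell."""
    n_rep, n_rat = scores.shape
    grand = scores.mean()
    rep_means = scores.mean(axis=1)
    rat_means = scores.mean(axis=0)

    # Mean squares from the two-way ANOVA decomposition
    ms_rep = n_rat * np.sum((rep_means - grand) ** 2) / (n_rep - 1)
    ms_rat = n_rep * np.sum((rat_means - grand) ** 2) / (n_rat - 1)
    resid = scores - rep_means[:, None] - rat_means[None, :] + grand
    ms_res = np.sum(resid ** 2) / ((n_rep - 1) * (n_rat - 1))

    # Expected-mean-square solutions (negative estimates clipped to 0)
    var_res = ms_res
    var_rep = max((ms_rep - ms_res) / n_rat, 0.0)
    var_rat = max((ms_rat - ms_res) / n_rep, 0.0)
    return var_rep, var_rat, var_res

def g_coefficient(var_rep, var_res, n_raters):
    """Relative G coefficient for scores averaged over n_raters raters
    (a simple decision study)."""
    return var_rep / (var_rep + var_res / n_raters)

# Example: 20 reports marked by 4 raters (synthetic scores)
rng = np.random.default_rng(0)
scores = rng.normal(14, 2, size=(20, 1)) + rng.normal(0, 1.5, size=(20, 4))
var_rep, var_rat, var_res = g_study(scores)
for n in (1, 2, 4):
    print(n, "rater(s): G =", round(g_coefficient(var_rep, var_res, n), 2))
```

    With components like these, averaging over more raters shrinks the error term var_res / n, which is why a decision study can show that two trained reviewers per report give sufficient reliability for formative purposes.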

    External feedback in general practice: a focus group study of trained peer reviewers of significant event analyses

    Background and aims: Peer feedback is well placed to play a key role in satisfying educational and governance standards in general practice. Although the participation of general practitioners (GPs) as reviewers of evidence will be crucial to the process, the professional, practical and emotional issues associated with peer review are largely unknown. This study explored the experiences of GP reviewers who make educational judgements on colleagues' significant event analyses (SEAs) in an established peer feedback system.

    Methods: Focus groups of trained GP peer reviewers in the west of Scotland. Interviews were taped, transcribed and analysed for content.

    Results: Consensus on the value of feedback in improving SEA attempts by colleagues was apparent, but there was disagreement and discomfort about making a dichotomous 'satisfactory' or 'unsatisfactory' judgement. Differing views on how peer feedback should be used to complement the appraisal process were described. Some concern was expressed about professional and legal obligations to colleagues and to patients seriously harmed as a result of significant events. Regular training of peer reviewers, using several different educational methods, was thought essential to enhancing or maintaining their skills. Involvement of the participants in the development of the feedback instrument and the peer review system was highly valued and motivating.

    Conclusions: Acting as a peer reviewer is perceived by this group of GPs to be an important professional duty. However, the difficulties, emotions and tensions they experience when making professional judgements on aspects of colleagues' work need to be considered when developing a feasible and rigorous system of educational feedback. This is especially important if peer review is to facilitate the 'external verification' of evidence for appraisal and governance.

    Judging the quality of clinical audit by general practitioners: a pilot study comparing the assessments of medical peers and NHS audit specialists

    Clinical audit informs general practitioner (GP) appraisal and will provide evidence of performance for revalidation in the UK. However, objective evidence is now required. An established peer assessment system may offer an educational solution for making objective judgements on clinical audit quality. National Health Service (NHS) clinical audit specialists could potentially support this system if their audit assessments were comparable with those of established medical peer assessors. The study aimed to quantify differences between clinical audit specialists and medical peer assessors in their assessments of clinical audit projects. A comparison study of the assessment outcomes of clinical audit reports by the two groups, using appropriate assessment instruments, was conducted. Mean scores were compared, and 95% confidence intervals (CIs) and limits of agreement were calculated; a mean score difference of two points was taken as the threshold for a practically relevant difference. Twelve significant event analysis (SEA) reports and 12 criterion audit projects were assessed by 11 experienced GP assessors and 10 NHS audit specialist novice assessors. For SEA, the mean score difference between groups was < 1.0. The 95% CI for bias was -0.1 to 0.5 (p = 0.14), and limits of agreement ranged from -0.7 to 1.2. For criterion audit, a mean score difference of <= 1.0 was calculated for seven projects, and differences between 1.1 and 1.9 for four. The 95% CI for bias was 0.8 to 1.5 (p < 0.001), and limits of agreement ranged from -2.5 to 0.0. The study findings suggest that a sample of NHS clinical audit specialists can give feedback scores on the quality of GPs' clinical audit activity that are numerically comparable with those of established peer assessors, as part of the model outlined.
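    The bias, 95% CI and limits of agreement reported above are the standard Bland-Altman quantities for comparing paired scores from two assessor groups. The sketch below is a minimal illustration under that assumption, with synthetic scores; it is not the study's analysis code, and all names are hypothetical.

```python
import numpy as np
from scipy import stats

def limits_of_agreement(scores_a, scores_b):
    """Bland-Altman comparison of paired scores from two assessor
    groups: mean difference (bias), its 95% CI, and the 95% limits
    of agreement."""
    d = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    n = d.size
    bias, sd = d.mean(), d.std(ddof=1)
    se = sd / np.sqrt(n)
    t = stats.t.ppf(0.975, n - 1)
    ci = (bias - t * se, bias + t * se)          # 95% CI for the bias
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, ci, loa

# Example: mean scores for 12 reports from each assessor group (synthetic)
peer = [5.1, 4.8, 5.6, 4.2, 5.9, 5.0, 4.5, 5.3, 4.9, 5.7, 4.4, 5.2]
audit = [4.9, 4.9, 5.4, 4.1, 5.6, 5.1, 4.4, 5.0, 4.8, 5.5, 4.6, 5.0]
bias, ci, loa = limits_of_agreement(peer, audit)
print(f"bias = {bias:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"LoA = ({loa[0]:.2f}, {loa[1]:.2f})")
```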