
    Using standard setters to improve reliability for the Objective Structured Clinical Examination (OSCE) in clinical year medical students

    Objective: The OSCE has been shown to be a reliable method of assessing both clinical competency and higher levels of clinical reasoning. The aim of the study was to assess the correlation of marking between the examiner and the standard setter in the clinical year OSCE. Methods: The School examined 110 graduate-entry medical students in Year 3 (2010) using an OSCE across 2 sites. There were 8 stations examining history taking and clinical examination skills on simulated patients, real patients or plastic models, with a total examination time of 100 minutes. The content of each station had been standardised by a multi-disciplinary panel. Detailed criterion-based marking assessing various clinical competencies was employed. Participating examiners and standard setters were trained before the examination. An examiner and a standard setter were assigned to each station; the two markers did not confer or discuss during the process. The Intra-Class Correlation (ICC) coefficient between the examiner and standard setter and the overall reliability score (Cronbach’s alpha) were calculated using the SPSS® statistical package. Results: The ICC correlations between the examiner and standard setter were significant (p<0.001) (Table 1). There was a low correlation in Station 7 at the Melbourne School. The overall Cronbach’s alpha reliability score was 0.7. Conclusions: The significant correlations between examiners and standard setters revealed that pre-examination training and standardisation reduce inter-rater variability and improve overall reliability. The analysis could also pick up potential discrepancies in marking (eg Station 7) and allow appropriate score adjustments to be made. The standard setter also serves as a reserve/backup examiner.
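The reliability statistic reported above, Cronbach's alpha, can be computed directly from a students-by-stations score matrix. The sketch below is illustrative only: the study used SPSS, and the data here are invented.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students, n_stations) score matrix.

    alpha = k/(k-1) * (1 - sum(per-station variances) / variance of totals)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of stations
    item_vars = scores.var(axis=0, ddof=1)   # per-station variance
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-student x 4-station score matrix (not the study's data)
demo = [[7, 5, 8, 6],
        [5, 6, 6, 4],
        [9, 7, 9, 8],
        [4, 6, 5, 5],
        [6, 7, 7, 6]]
print(round(cronbach_alpha(demo), 2))
```

Alpha rises as stations correlate more strongly with one another; the 0.7 reported above is a commonly cited threshold for acceptable internal consistency.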

    Running Targeted Question Development Workshops to Improve the Competency of Clinicians and Quality of Written Assessment Questions for Clinical Year Medical Students

    Objective: Medical schools have always strived to develop high-quality assessment questions that test the higher-order thinking of clinical year medical students. While clinicians are enthusiastic contributors to writing these questions, they often lack confidence in their ability to write good-quality items. The benefits of running targeted question development workshops have not been adequately investigated. The aim of the study was to evaluate whether targeted workshops would improve the confidence and competency of clinicians in writing assessment questions and therefore improve the overall quality of assessment items. Methods: The School of Medicine, Sydney, University of Notre Dame is a new medical school with its first graduating cohort in 2011. Clinical year written assessment items are written by affiliate or adjunct clinicians of the School. Targeted question development workshops were conducted in one subschool site (Melbourne) to facilitate and train the clinicians in written question development. At the beginning of each workshop, a pre-workshop questionnaire was distributed to each participant, collecting anonymous data on their self-perceived level of competency in developing quality multiple choice questions (MCQ) and short answer questions (SAQ) on a 1-5 Likert scale. A 3-hour workshop then followed. The first part was a training session on the format and characteristics of high-quality assessment items, pitfalls in writing questions, ways to improve an item to test higher-order thinking, and methods to counter test-wise strategies used by students. In the second part, draft items from participants were shown to the multi-disciplinary panel for further discussion and critique of question quality. This provided the added advantage that each item could be modified and improved at the time, giving immediate and ongoing feedback to the question writers. At the conclusion of the workshop, the same questionnaire was re-administered and participants rated their perceived competencies after the exercise. The Wilcoxon signed-rank test for paired data was used to determine whether there was any improvement in post-workshop competencies. The quality improvement of assessment items was analysed by determining the percentage of questions that needed further modification at the subsequent review committee. Results: A total of 11 clinicians participated in the workshop, with a 100% response rate for the questionnaires. There was a highly significant (p=0.005) improvement in perceived competency in writing both MCQ and SAQ items after the workshop (Table 1). The items developed post-workshop also required significantly fewer modifications when compared to question items developed by non-workshop participants. Conclusions: By delivering training and facilitating multi-disciplinary review and feedback on draft assessment items, the targeted question development workshops could improve the quality of assessment items as well as increase the competency of the clinicians, as part of the faculty development program.
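The paired pre/post Likert comparison described above maps onto SciPy's paired Wilcoxon test. A minimal sketch with invented ratings for 11 participants (not the study's data):

```python
from scipy.stats import wilcoxon

# Hypothetical paired pre/post self-rated competency (1-5 Likert),
# one pair per participating clinician -- not the study's data
pre  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 3]
post = [4, 4, 3, 3, 4, 4, 3, 4, 3, 3, 4]

# Non-parametric test for paired samples
stat, p = wilcoxon(pre, post)
print(f"W={stat}, p={p:.4f}")
```

Because every invented participant rated themselves higher afterwards, the signed-rank statistic is 0 and the test is significant; real questionnaire data would rarely be this one-sided.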

    Evaluating the Guideline Enhancement Tool (GET): an innovative clinical training tool to enhance the use of hypertension guidelines in general practice

    Background: This project aims to evaluate the effectiveness of an innovative educational intervention in enhancing clinical decision making related to the management of hypertension in general practice. The relatively low uptake of clinical practice guidelines by clinicians is widely recognised as a problem that impacts on clinical outcomes. This project addresses this problem with a focus on hypertension guidelines. Hypertension is the most frequently managed problem in general practice, but evidence suggests that its management in general practice is sub-optimal. Methods/design: This study will explore the effectiveness of an educational intervention named the ‘Guideline Enhancement Tool’ (GET). The intervention is designed to guide clinicians through a systematic process of considering key decision points related to the management of hypertension and provides a mechanism for clinicians to engage with the hypertension clinical guidelines. The intervention will be administered within the Australian General Practice Training program, via one of the regional training providers. Two cohorts of trainees will participate, as the intervention and delayed-intervention groups. This process is expected to improve clinicians’ engagement with the hypertension guidelines in particular, and to enhance their clinical reasoning abilities in general. The effectiveness of the intervention in improving clinical reasoning will be evaluated using the ‘Script Concordance Test’. Discussion: The study design presented in this protocol aims to achieve two major outcomes. Firstly, the trial and evaluation of the educational intervention can lead to the development of a validated clinical education strategy that can be used in GP training to enhance decision-making processes related to the management of hypertension. This has the potential to be adapted to other clinical conditions and training programs and can benefit clinicians in their clinical decision-making. Secondly, the study explores features that influence the effective use of clinical practice guidelines. The study thus addresses a significant problem in clinical education.

    Using Script Concordance Testing (SCT) as a new modality of assessment for graduate entry medical students – a pilot study.

    Background: SCT is a new modality of assessment in medical education. A clinical scenario is presented, additional pieces of information are given, and the students are asked to assess whether this information increases or decreases the probability of the diagnosis. This reflects the clinician’s everyday real-world script determination. It tests clinical reasoning and problem-solving ability and how students apply their learned knowledge in the context of uncertainty. Their decisions are compared to those of a panel of clinicians. Studies have shown that the scores correlate with the level of training and predict future performance in clinical reasoning. Methods: The School of Medicine, Sydney, University of Notre Dame examined a cohort of 113 graduate-entry Year 3 medical students using 20 SCT questions in the formative mid-year examination. Their answers were compared with those of a panel of 15 general practitioners (clinical tutors). This pilot study also correlated the medical students’ scores with their Year 2 summative performance. Results: The mean and standard deviation (SD) of the students’ and panel’s scores were 15/20 (SD 2) and 17/20 (SD 2) respectively. The correlations with Year 2 summative written and total scores were 0.44 and 0.38 respectively (p Conclusions: SCT is a new modality of assessment looking at the concordance of clinical thinking and decision making between the students and the panel. The pilot study showed a moderate correlation with other assessment scores. Further studies to investigate the correlation between SCT and the clinical reasoning components of the clinical OSCE examination would be very useful.
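SCT items are conventionally scored with the aggregate ("partial credit") method: a response earns credit in proportion to how many panellists chose it, with the modal panel answer worth full marks. The abstract does not state which scoring rule the School used, so the sketch below shows the conventional aggregate method with an invented panel:

```python
from collections import Counter

def sct_item_score(student_answer, panel_answers):
    """Aggregate SCT scoring: a response earns the fraction of panellists
    who chose it, scaled so the modal panel answer is worth 1 point."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return counts.get(student_answer, 0) / modal

# Hypothetical item: 15 panellists rate the effect of a new finding on a
# diagnosis from -2 (much less likely) to +2 (much more likely)
panel = [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 0, 0, 0, -1]
print(sct_item_score(1, panel))    # modal panel answer: full credit
print(sct_item_score(2, panel))    # minority panel answer: partial credit
print(sct_item_score(-2, panel))   # answer no panellist gave: no credit
```

A student's test score is the sum of item scores, which is why concordance with the panel, not a single "correct" key, drives the result.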

    Using Competency Based marking in the Objective Structured Clinical Examination (OSCE) for clinical year medical students

    Background: The aim of the study was to assess the degree of correlation between competency-based marking and global scoring in the OSCE and to determine the reliability of this assessment. Summary of work: The School examined 110 graduate-entry Year 3 medical students. There were 10 stations (total examination time 120 minutes). Competency-based marking was developed to assess the students’ abilities to: demonstrate respect and compassion, elicit a systematic history/physical examination, formulate a working diagnosis and differential diagnoses, develop and interpret investigations, and formulate management plans. A score of ‘0’ (failed), ‘1’ (achieved) or ‘2’ (achieved well) was given for each competency assessed. A separate, independent global score was also given. The Pearson’s correlation coefficient between the competency-based and global scoring marks and the reliability score (Cronbach’s alpha) were calculated. Summary of results: The correlation between the competency-based and global scoring marks was highly significant (p Conclusions: The competency-based method of assessment is reliable. The examiners gave highly valued feedback on this new marking scheme. Take-home messages: Competency-based marking in the OSCE is reliable and can be considered for use in assessing clinical year medical students.

    Improving reliability of the objective structured clinical examination (OSCE) in first year medical students

    The objective structured clinical examination (OSCE) can be used for formative and summative assessment in medical education. The reliability score (internal consistency) for 60 minutes of OSCE has been shown to be 0.47 (Petrusa, 2002). The aim of the study was to investigate whether we could improve the reliability of a 60-minute OSCE by utilising various standardisation strategies in the implementation of the examination. The School of Medicine, Sydney, University of Notre Dame examined a cohort of 112 graduate-entry medical students (Year 1, 2008) using an OSCE. There were eight stations with a total examination duration of 64 minutes. In preparation for the OSCE, we employed the following strategies to improve reliability: setting up a multidisciplinary panel to standard-set all the questions, utilising a detailed criterion-based marking guide for each station, conducting training sessions to ensure consistency in both the examiners and actors, and having external standard setters to standardise the marking. The reliability score was calculated using the SPSS statistical package (SPSS Inc., Chicago, Illinois, USA). The overall reliability of the 64-minute OSCE was 0.69 (Cronbach’s alpha). This was higher than the historical data and comparable to a 120-minute OSCE (Petrusa, 2002). By employing strategies to standardise the OSCE, we were able to improve the reliability of the OSCE examination. This would be particularly useful where there are constraints on running longer examinations. Increasing the number of stations and the duration of future OSCEs could further improve reliability.
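The closing suggestion, that adding stations would further improve reliability, can be quantified with the classical Spearman-Brown prophecy formula. Using the reported alpha of 0.69 for the 8-station OSCE (the projection below is an illustration, not a result from the study):

```python
def spearman_brown(alpha, length_factor):
    """Predicted reliability when a test is lengthened by `length_factor`
    with comparable items: k*a / (1 + (k-1)*a)."""
    return (length_factor * alpha) / (1 + (length_factor - 1) * alpha)

alpha_64 = 0.69  # reported reliability of the 8-station, 64-minute OSCE
# Predicted reliability if the OSCE were doubled to 16 comparable stations
print(round(spearman_brown(alpha_64, 2.0), 2))
```

The formula assumes the added stations behave like the existing ones, so it gives an upper-bound style projection rather than a guarantee.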

    Comparison of criterion-based checklist scoring and global rating scales for the Objective Structured Clinical Examination (OSCE) in pre- clinical year medical students

    Introduction: Different scoring systems have been employed in the marking of the OSCE. The aim of the study was to compare the use of criterion-based checklist scoring with the examiners’ global rating scales in the OSCE. Methods: We examined 2 cohorts of 221 graduate-entry medical students across 2 years (Year 1 in 2008 and Years 1 & 2 in 2009) using OSCEs. Each station’s marking guide consisted of a detailed criterion-based checklist of the items examined in the particular case. The passing mark for each station was determined using the modified Angoff method. A separate global rating scale (Fail, Borderline, Pass, Very Satisfactory) was included for the examiner to record a global assessment. The Spearman correlation coefficient between the criterion-based marking and the global rating was calculated. Results: The correlations between the criterion-based marking and the global rating for all stations in the 3 OSCEs across the 2 years were highly significant (p Conclusion: This revealed that a global rating might be another reliable assessment method in the OSCE. Examiner experience in the OSCE is an important factor to be considered before full implementation. Limitations of using this global rating scale in the OSCE include difficulty in defending the marking in the case of an appeal, especially for high-stakes exit examinations.
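The rank correlation described in the methods is a one-liner with SciPy. The sketch below uses invented station data (checklist percentages alongside the 4-point global scale), not the study's:

```python
from scipy.stats import spearmanr

# Hypothetical data for 8 students at one station: checklist percentage
# scores and the examiner's global rating
# (1=Fail, 2=Borderline, 3=Pass, 4=Very Satisfactory)
checklist = [45, 52, 58, 63, 70, 74, 81, 88]
global_rating = [1, 2, 2, 3, 3, 3, 4, 4]

# Spearman's rho: rank-based, so it tolerates the ordinal global scale
rho, p = spearmanr(checklist, global_rating)
print(f"rho={rho:.2f}, p={p:.4f}")
```

Spearman's rho is the natural choice here because the global rating is ordinal; a monotone relationship like the one in this invented data yields a rho near 1 despite the tied ratings.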

    Improving cultural respect to improve Aboriginal health in general practice : a multi-methods and multi-perspective pragmatic study

    Background: To address the gap in access to healthcare between Aboriginal people and other Australians, we developed Ways of Thinking, Ways of Doing (WoTWoD) to embed cultural respect into routine clinical practice. WoTWoD includes a workshop, a toolkit and cultural mentors in a partnership of general practices and Aboriginal organisations. The aim of this study was to examine the impact of WoTWoD on cultural respect, health checks and risk factor management for Aboriginal patients in general practice. Methods: A multi-methods and multi-perspective pre- and post-intervention pragmatic study with 10 general practices was undertaken, using information from medical records, practice staff, cultural mentors and patients. Results: Cultural respect, service and clinical measures improved after implementing WoTWoD. Qualitative information confirmed and explained the improvements. Knowledge of Aboriginal history needed further improvement. Discussion: WoTWoD may improve culturally appropriate care in general practice. Further research requires adequately powered randomised controlled trials.