Effects of observer on the diagnostic accuracy of low-field MR imaging for detecting canine meniscal tears
Low-field MRI (lfMRI) has become increasingly accepted as a method for diagnosing canine meniscal tears in clinical practice, but observer effects on its diagnostic accuracy have not been previously reported. In this study, 50 consecutive stifle joints with clinical and radiologic evidence of cranial cruciate ligament insufficiency were investigated by lfMRI and arthroscopy. Fifteen observers with varying levels of experience, who were unaware of the arthroscopic findings, independently reviewed the lfMRI studies and recorded whether lateral and medial meniscal tears were present. Diagnostic accuracy (sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV)) was determined for each observer, and median values were calculated across all observers, using arthroscopy as the reference standard. Interrater agreement was assessed with intraclass correlation coefficient (ICC) analysis, and observer level of experience was compared with diagnostic sensitivity and specificity using correlation analysis. Based on pooled data for all observers, median sensitivity, specificity, PPV, and NPV for lfMRI diagnosis of lateral meniscal tears were 0.00, 0.94, 0.05, and 0.94, respectively; for medial meniscal tears they were 0.74, 0.89, 0.83, and 0.79, respectively. Interrater agreement for all menisci was fair (ICC = 0.51); agreement was lower for menisci scored as having no tears (ICC = 0.13) than for those scored as having tears (ICC = 0.50). No significant correlations between observer experience and diagnostic sensitivity or specificity were identified. Findings indicated that the accuracy of lfMRI for diagnosing canine meniscal tears was poor to fair and observer-dependent. Future studies are needed to develop standardized and widely accepted lfMRI criteria for diagnosing meniscal tears.
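For reference, the accuracy measures reported above follow the standard 2×2 confusion-matrix definitions, with arthroscopy as the reference standard. The sketch below shows one way per-observer values and pooled medians could be computed; the observer calls and arthroscopic results are hypothetical illustrations, not the study's data, and the variable names are invented for this example.

```python
# Sketch: per-observer diagnostic accuracy against a binary reference
# standard, with medians pooled across observers. Example data only.
from statistics import median

def diagnostic_accuracy(calls, reference):
    """Sensitivity, specificity, PPV, and NPV of binary calls
    versus a binary reference standard (True = tear present)."""
    tp = sum(c and r for c, r in zip(calls, reference))          # true positives
    tn = sum(not c and not r for c, r in zip(calls, reference))  # true negatives
    fp = sum(c and not r for c, r in zip(calls, reference))      # false positives
    fn = sum(not c and r for c, r in zip(calls, reference))      # false negatives
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
        "ppv": tp / (tp + fp) if tp + fp else float("nan"),
        "npv": tn / (tn + fn) if tn + fn else float("nan"),
    }

# Hypothetical data: arthroscopic findings for 5 stifles and the calls
# of 3 observers (the study itself used 50 stifles and 15 observers).
arthroscopy = [True, False, True, False, False]
observers = [
    [True, False, False, False, False],
    [True, True, True, False, False],
    [False, False, True, False, True],
]

per_observer = [diagnostic_accuracy(obs, arthroscopy) for obs in observers]
for metric in ("sensitivity", "specificity", "ppv", "npv"):
    print(metric, round(median(m[metric] for m in per_observer), 2))
```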
Programmatic assessment of competency-based workplace learning: when theory meets practice
BACKGROUND: In competency-based medical education, emphasis has shifted towards outcomes, capabilities, and learner-centeredness. Together with a focus on sustained evidence of professional competence, this calls for new methods of teaching and assessment. Medical educators have recently advocated a holistic, programmatic approach to assessment. Besides maximally facilitating learning, such an approach should improve the validity and reliability of measurements and the documentation of competence development. We explored how, in a competency-based curriculum, current theories on programmatic assessment interacted with educational practice.

METHODS: In a development study including evaluation, we investigated the implementation of a theory-based programme of assessment. Between April 2011 and May 2012, quantitative evaluation data were collected and used to guide group interviews that explored the experiences of students and clinical supervisors with the assessment programme. We coded the transcripts, and emerging topics were organised into a list of lessons learned.

RESULTS: The programme mainly focuses on integrating learning and assessment by motivating and supporting students to seek and accumulate feedback. The assessment instruments were aligned to cover predefined competencies so that information could be aggregated in a structured and meaningful way. Assessments designed as formative learning experiences were increasingly perceived as summative by students. Peer feedback was experienced as a valuable method of formative feedback. Social interaction and external guidance appeared crucial for scaffolding self-directed learning. Aggregating data from individual assessments into a holistic portfolio judgement required expertise and extensive training and supervision of judges.

CONCLUSIONS: A programme of assessment in which low-stakes assessments simultaneously provide formative feedback and input for summative decisions proved difficult to implement. Careful preparation and guidance of the implementation process were crucial. Assessment for learning requires meaningful feedback with each assessment, so special attention should be paid to the quality of feedback at individual assessment moments. Comprehensive attention to faculty development and training for students is essential for the successful implementation of an assessment programme.