
    Assessment, surgeon, and society

    An increasing public demand to monitor and assure the quality of care provided by physicians and surgeons has been accompanied by a deepening appreciation within the profession of the demands of self-regulation and the need for accountability. To respond to these developments, the public and the profession have turned increasingly to assessment, both to establish initial competence and to ensure that it is maintained throughout a career. Fortunately, this comes at a time of significant advances in the breadth and quality of the assessment tools available. This article provides an overview of the drivers of change in assessment, which include the educational outcomes movement, the development of technology, and advances in assessment itself. It then outlines the factors that are important in selecting assessment devices, as well as a system for classifying the methods available. Finally, the drivers of change have spawned a number of trends in the assessment of competence as a surgeon. Three are of particular note: simulation, workplace-based assessment, and the assessment of new competences. Each is reviewed with a focus on its potential.

    Peer assessment of competence

    Objective This instalment in the series on professional assessment summarises how peers are used in the evaluation process and whether their judgements are reliable and valid. Method The nature of the judgements peers can make, the aspects of competence they can assess and the factors limiting the quality of the results are described with reference to the literature. The steps in implementation are also provided. Results Peers are asked to make judgements about structured tasks or to provide their global impressions of colleagues. Judgements are gathered on whether certain actions were performed, the quality of those actions and/or their suitability for a particular purpose. Peers are used to assess virtually all aspects of professional competence, including technical and nontechnical aspects of proficiency. Factors influencing the quality of those assessments are reliability, relationships, stakes and equivalence. Conclusion Given the broad range of ways peer evaluators can be used and the sizeable number of competencies they can be asked to judge, generalisations are difficult to derive, and this form of assessment can be good or bad depending on how it is carried out.

    Virtual patients in health professions education

    Clinical practice today, with short hospital stays, legal liability risks, demanding accreditation standards, and reduced availability of teaching faculty, has led to a search for new options for educating students in the health professions. Virtual patients (VPs) are computer programs that simulate a real patient and are designed for teaching and assessing clinical reasoning and knowledge, depending on their design. They offer a safe environment for our students' learning. They can be used in summative assessment, but 6-9 cases are required for the assessment to be valid. The design of the VP determines the kind of evaluation being carried out. VPs are complex to design, which entails a high development cost. Despite this, they are an ideal tool for developing countries, since their design can be adjusted to the particular needs of each country or institution. We present two successful experiences with this technology, on two different continents. Another option is to repurpose an existing VP, making the adjustments needed for the new setting in which it will be applied; we describe this process, which is faster and less costly than building a VP from scratch. The European Union's eVIP bank holds approximately 340 cases that can be adapted to the needs of an institution or country. In summary, VPs are an excellent tool for promoting clinical reasoning, accessible to everybody.

    Assessing clinical reasoning skills using Script Concordance Test (SCT) and extended matching questions (EMQs): A pilot for urology trainees

    Introduction: Clinical reasoning is the core of medical competence. Commonly used assessment methods for medical competence have limited ability to evaluate critical thinking and reasoning skills. The Script Concordance Test (SCT) and Extended Matching Questions (EMQs) are evolving tests considered to be valid and reliable tools for assessing clinical reasoning and judgement. We performed this pilot study to determine whether SCT and EMQs can differentiate clinical reasoning ability among urology residents, interns and medical students. Methods: This was a cross-sectional study in which an examination with 48 SCT-based items on eleven clinical scenarios and four themed EMQs with 21 items was administered to a total of 27 learners at three levels of experience: 9 urology residents, 6 interns and 12 fifth-year medical students. Non-probability convenience sampling was used. The SCTs and EMQs were developed from clinical situations representative of urological practice by 5 content experts (urologists) and reviewed by a medical education expert. Learners' responses were scored using both the standard and the graduated key. A one-way analysis of variance (ANOVA) was conducted to compare mean scores across levels of experience. A p-value of < 0.05 was considered statistically significant. Test reliability was estimated by Cronbach's α. A focus group discussion with the candidates was held to assess their perceptions of the test. Results: Both SCT and EMQs successfully differentiated residents from interns and students. A statistically significant difference in mean score was found for both SCT and EMQs among the 3 groups using both the standard and the graduated key. Mean scores were higher for all groups under the graduated key than under the standard key. Internal consistency (Cronbach's α) was 0.53 and 0.6 for EMQs and SCT, respectively.
The majority of participants were satisfied with the time, environment, instructions provided and content covered, and nearly all felt that the test helped their thinking process, particularly clinical reasoning. Conclusions: Our data suggest that both SCT and EMQs are capable of discriminating between learners according to their clinical experience in urology. Given their wide acceptability among candidates, these tests could be used to assess and enhance clinical reasoning skills. More research is needed to establish the validity of these tests.

    Changes in standard of candidates taking the MRCP(UK) Part 1 examination, 1985 to 2002: Analysis of marker questions

    The maintenance of standards is a problem for postgraduate medical examinations, particularly if they use norm-referencing as the sole method of standard setting. In each of its diets, the MRCP(UK) Part 1 Examination includes a number of marker questions, which are unchanged from their use in a previous diet. This paper describes two complementary studies of marker questions from 52 diets of the MRCP(UK) Part 1 Examination over the years 1985 to 2001, to assess whether standards have changed.

    Standard setting: Comparison of two methods

    BACKGROUND: The outcome of assessments is determined by the standard-setting method used. There is a wide range of standard-setting methods; the two used most extensively in undergraduate medical education in the UK are the norm-reference and the criterion-reference methods. The aims of the study were to compare these two standard-setting methods for a multiple-choice question examination and to estimate the test-retest and inter-rater reliability of the modified Angoff method. METHODS: The norm-reference method of standard setting (mean minus 1 SD) was applied to the 'raw' scores of 78 4th-year medical students on a multiple-choice question (MCQ) examination. Two panels of raters also set the standard using the modified Angoff method for the same multiple-choice question paper on two occasions (6 months apart). We compared the pass/fail rates derived from the norm-reference and the Angoff methods and also assessed the test-retest and inter-rater reliability of the modified Angoff method. RESULTS: The pass rate with the norm-reference method was 85% (66/78) and that with the Angoff method was 100% (78/78). The percentage agreement between the Angoff and norm-reference methods was 78% (95% CI 69%–87%). The modified Angoff method had an inter-rater reliability of 0.81–0.82 and a test-retest reliability of 0.59–0.74. CONCLUSION: There were significant differences in the outcomes of these two standard-setting methods, as shown by the difference in the proportion of candidates that passed and failed the assessment. The modified Angoff method was found to have good inter-rater reliability and moderate test-retest reliability.
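The two cut-score computations being compared are simple to state: norm-referencing sets the pass mark at the cohort mean minus one standard deviation, while the Angoff method averages, over judges, each judge's summed per-item probability that a borderline candidate answers correctly. A minimal sketch with hypothetical raw scores and judge estimates (not the study's data):

```python
from statistics import mean, pstdev

def norm_reference_cutoff(scores):
    """Norm-reference standard: pass mark = cohort mean minus one SD."""
    return mean(scores) - pstdev(scores)

def angoff_cutoff(judgements):
    """Angoff standard: judgements[j][i] is judge j's estimated probability
    that a borderline candidate answers item i correctly; the cut score is
    the mean over judges of the summed item probabilities."""
    return mean(sum(per_item) for per_item in judgements)

# Hypothetical data: ten raw scores (out of 10) and two judges' estimates.
scores = [8, 7, 9, 6, 5, 8, 7, 9, 4, 7]
judges = [
    [0.8, 0.6, 0.7, 0.5, 0.9, 0.6, 0.7, 0.8, 0.5, 0.6],
    [0.7, 0.7, 0.6, 0.6, 0.8, 0.5, 0.8, 0.7, 0.6, 0.5],
]
nr, an = norm_reference_cutoff(scores), angoff_cutoff(judges)
print(f"norm-reference cutoff {nr:.2f}: {sum(s >= nr for s in scores)}/10 pass")
print(f"Angoff cutoff {an:.2f}: {sum(s >= an for s in scores)}/10 pass")
```

The study's headline result (85% vs 100% pass) is just this comparison, computed on the real examination data; the norm-reference cutoff fails roughly a fixed share of any cohort by construction, which is why the two methods can disagree so sharply.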

    The reliability of in-training assessment when performance improvement is taken into account

    During in-training assessment, students are frequently assessed over a longer period of time, so it can be expected that their performance will improve. We studied whether there really is a measurable performance improvement when students are assessed over an extended period of time, and how this improvement affects the reliability of the overall judgement. In-training assessment results were obtained from 104 students on rotation at our university hospital or at one of the six affiliated hospitals. Generalisability theory was used in combination with multilevel analysis to obtain reliability coefficients and to estimate the number of assessments needed for a reliable overall judgement, both including and excluding performance improvement. Students' clinical performance ratings improved significantly, from a mean of 7.6 at the start to a mean of 7.8 at the end of their clerkship. When performance improvement was taken into account, reliability coefficients were higher, and the number of assessments needed to achieve a reliability of 0.80 or higher decreased from 17 to 11. Therefore, performance improvement should be considered when studying the reliability of in-training assessment.
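The study derived its "assessments needed" figures from generalisability theory with multilevel analysis; a much simpler way to see how the required number of observations falls as per-assessment reliability rises is the Spearman-Brown prophecy formula. A sketch with illustrative reliabilities, not the study's variance components:

```python
import math

def assessments_needed(single_rel, target=0.80):
    """Spearman-Brown prophecy: smallest number of parallel assessments
    whose averaged judgement reaches the target composite reliability."""
    return math.ceil(target * (1 - single_rel) / (single_rel * (1 - target)))

# Illustrative single-assessment reliabilities (assumed, not from the paper):
# modelling the improvement separately removes it from the error term, raising
# per-assessment reliability and cutting the number of assessments required.
print(assessments_needed(0.19))   # improvement left in the error term → 18
print(assessments_needed(0.30))   # improvement modelled separately → 10
```

The same qualitative effect drives the paper's 17-to-11 drop: treating a genuine learning trend as measurement noise deflates reliability and inflates the number of observations a defensible overall judgement appears to require.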

    Interprofessional communication in a sociohierarchical culture: development of the TRI-O guide

    Objectives: Interprofessional education (IPE) and collaborative practice are essential for patient safety. Effective teamwork, starting with partnership-based communication, should be introduced early in the educational process. Many societies have a socio-hierarchical culture with a wide power distance, which makes collaboration among health professionals challenging. Since an appropriate communication framework for this context was not yet available, this study filled that gap by developing a guide for interprofessional communication suited to socio-hierarchical and socio-cultural contexts. Materials and methods: A draft of the guide was constructed on the basis of previous studies of communication in health care in a socio-hierarchical context, with reference to the international IPE literature, and refined through focus group discussions among various health professionals. A nominal group technique, together with comments from national and international experts in health care communication skills, was used to validate the guide. A pilot study with a pre-posttest design was conducted with 53 first-year and 107 fourth-year undergraduate medical, nursing, and health nutrition students. Results: We developed the "TRI-O" guide of interprofessional communication skills, emphasizing "open for collaboration, open for information, open for discussion", and found that applying the guide during training was feasible and positively influenced students' perceptions. Conclusion: The findings suggest that the TRI-O guide helps students initiate partnership-based communication and mutual collaboration among health professionals in socio-hierarchical and socio-cultural contexts.