
    Learning physical examination skills outside timetabled training sessions: what happens and why?

    The lack of published studies on students’ practice behaviour regarding physical examination skills outside timetabled training sessions inspired this study into the activities medical students undertake to improve their skills and the factors influencing this. Six focus groups with a total of 52 students from Years 1–3 were conducted using a pre-established interview guide. Interviews were recorded, transcribed and analyzed using qualitative methods. The interview guide was based on questionnaire results; the overall response rate for Years 1–3 was 90% (n = 875). Students report a variety of activities to improve their physical examination skills. On average, students devote 20% of self-study time to skill training, with Year 1 students practising significantly more than Year 3 students. Practice patterns shift from just-in-time learning to a longitudinal, self-directed approach. Factors influencing this change are assessment methods and simulated/real patients. Learning resources used include textbooks, examination guidelines, scientific articles, the Internet, videos/DVDs and scoring forms from previous OSCEs. Practising skills on fellow students takes place in university rooms or at home; family and friends were also mentioned as helping. Simulated/real patients stimulated students to practise physical examination skills, initially causing confusion and anxiety about skill performance but leading to increased feelings of competence. Difficult or enjoyable skills stimulate students to practise. The strategies students adopt to master physical examination skills outside timetabled training sessions are self-directed. OSCE assessment does have an influence, but learning also takes place when there is no upcoming assessment. Simulated and real patients provide strong incentives to work on skills. Early patient contacts make students feel more prepared for clinical practice.

    The need for national licensing examinations


    Panel expertise for an Angoff standard setting procedure in progress testing: item writers compared to recently graduated students

    Introduction: An earlier study showed that an Angoff procedure with 10 or more recently graduated students as judges can be used to estimate the passing score of a progress test. As the acceptability and feasibility of this approach are questionable, we conducted an Angoff procedure with test item writers as judges. This paper reports on the reliability and credibility of this procedure and compares the standards set by the two panels.
    Methods: Fourteen item writers judged 146 test items. Recently graduated students had assessed these items in a previous study. Generalizability was investigated as a function of the number of items and judges. Credibility was judged by comparing the pass/fail rates associated with the Angoff standard, a relative standard and a fixed standard. The Angoff standards obtained by item writers and graduates were compared.
    Results: The variance associated with consistent variability of item writers across items was 1.5%; for graduate students it was 0.4%. An acceptable error score required 39 judges. The item-level Angoff estimates of the two panels and the item P-values correlated highly. Failure rates of 57%, 55% and 7% were associated with the item writers' standard, the fixed standard and the graduates' standard, respectively.
    Conclusion: The graduates' and the item writers' standards differed substantially, as did the associated failure rates. A panel of 39 item writers is not feasible. The item writers' passing score appears to be less credible. The credibility of the graduates' standard needs further evaluation. The acceptability and feasibility of a panel consisting of both students and item writers may be worth investigating.
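    For readers unfamiliar with the method discussed above, the short Python sketch below illustrates the generic Angoff computation: each judge estimates the probability that a borderline examinee answers each item correctly, and the passing score is the mean of those estimates across judges and items. The panel and item counts are taken from the abstract, but the ratings, the candidate score and all variable names are invented for illustration; this is not the study's data or its generalizability analysis.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical Angoff ratings: each entry is one judge's estimate of the
        # probability that a borderline examinee answers one item correctly.
        n_judges, n_items = 14, 146   # counts mentioned in the abstract
        ratings = rng.uniform(0.3, 0.8, size=(n_judges, n_items))

        # Per-item Angoff estimate: the mean rating across judges.
        item_estimates = ratings.mean(axis=0)

        # Passing score expressed as a percentage of items correct.
        cut_score_pct = item_estimates.mean() * 100
        print(f"Angoff passing score: {cut_score_pct:.1f}% correct")

        # A candidate passes if their percentage-correct score reaches the cut score.
        candidate_pct_correct = 62.0  # invented example score
        print("pass" if candidate_pct_correct >= cut_score_pct else "fail")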

    Cross-institutional collaboration in assessment: a case on progress testing

    The practice of assessment is governed by an interesting paradox. On the one hand, good assessment requires substantial resources, which may exceed the capacity of a single institution, and we have reason to doubt the quality of our in-house examinations. On the other hand, parsimony with regard to our resources makes us reluctant to pool efforts and share our test material. This paper reports on an initiative to share test material across different medical schools. Three medical schools in The Netherlands have successfully set up a partnership for a specific testing method: progress testing. At present, these three schools collaboratively produce high-quality test items. The jointly produced progress tests are administered concurrently by these three schools and one other school, which buys the test. The steps taken in establishing this partnership are described, and results are presented to illustrate the unique sort of information that is obtained by cross-institutional assessment. In addition, plans to improve test content and procedure and to expand the partnership are outlined. Eventually, the collaboration may even extend to other test formats. This article is intended to give evidence of the feasibility and exciting potential of between-school collaboration in test development and test administration. Our experiences have demonstrated that such collaboration has excellent potential to combine economic benefits with educational advantages that exceed what is achievable by individual schools.