    Multiple tutorial-based assessments: a generalizability study

    BACKGROUND: Tutorial-based assessment, commonly used in problem-based learning (PBL), is thought to provide information about students that is different from that gathered with traditional assessment strategies such as multiple-choice questions or short-answer questions. Although multiple observations within units in an undergraduate medical education curriculum foster more reliable scores, that evaluation design is not always practically feasible. Thus, this study investigated the overall reliability of a tutorial-based program of assessment, namely the Tutotest-Lite. METHODS: More specifically, scores from multiple units were used to profile clinical domains for the first two years of a system-based PBL curriculum. RESULTS: G-study analysis revealed an acceptable level of generalizability, with g-coefficients of 0.84 and 0.83 for Years 1 and 2, respectively. Interestingly, D-studies suggested that as few as five observations over one year would yield sufficiently reliable scores. CONCLUSIONS: Overall, the results from this study support the use of the Tutotest-Lite to judge clinical domains over different PBL units.
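    The D-study projection described above follows the standard one-facet generalizability model, in which the G-coefficient is the true between-person variance divided by that variance plus residual error spread over the number of observations. Below is a minimal sketch of such a projection in Python, using illustrative variance components rather than values estimated in the study:

        # One-facet D-study projection (persons x observations design).
        # Variance components are illustrative placeholders, not estimates from the paper.

        def g_coefficient(var_person, var_residual, n_obs):
            """Relative G-coefficient for a crossed p x o design with n_obs observations."""
            return var_person / (var_person + var_residual / n_obs)

        var_person = 0.40    # hypothetical true between-student variance
        var_residual = 0.35  # hypothetical person-x-observation interaction + error

        # Project reliability for increasing numbers of tutorial observations.
        for n_obs in range(1, 11):
            print(n_obs, round(g_coefficient(var_person, var_residual, n_obs), 2))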

    Graduates of different UK medical schools show substantial differences in performance on MRCP(UK) Part 1, Part 2 and PACES examinations

    Background: The UK General Medical Council has emphasized the lack of evidence on whether graduates from different UK medical schools perform differently in their clinical careers. Here we assess the performance of UK graduates who have taken MRCP(UK) Part 1 and Part 2, which are multiple-choice assessments, and PACES, an assessment of clinical examination skills and communication skills using real and simulated patients, and we explore the reasons for the differences between medical schools. Method: We performed a retrospective analysis of the performance of 5827 doctors graduating from UK medical schools who took Part 1, Part 2 or PACES for the first time between 2003/2 and 2005/3, and of 22453 candidates taking Part 1 from 1989/1 to 2005/3. Results: Graduates of UK medical schools performed differently in the MRCP(UK) examination between 2003/2 and 2005/3. Part 1 and Part 2 performance of Oxford, Cambridge and Newcastle-upon-Tyne graduates was significantly better than average, and the performance of Liverpool, Dundee, Belfast and Aberdeen graduates was significantly worse than average. In the PACES (clinical) examination, Oxford graduates performed significantly above average, and Dundee, Liverpool and London graduates significantly below average. About 60% of the medical school variance was explained by differences in pre-admission qualifications, although the remaining variance was still significant, with graduates from Leicester, Oxford, Birmingham, Newcastle-upon-Tyne and London overperforming at Part 1, and graduates from Southampton, Dundee, Aberdeen, Liverpool and Belfast underperforming relative to pre-admission qualifications. The ranking of schools at Part 1 in 2003/2 to 2005/3 correlated 0.723, 0.654, 0.618 and 0.493 with performance in 1999-2001, 1996-1998, 1993-1995 and 1989-1992, respectively. Conclusion: Candidates from different UK medical schools perform differently in all three parts of the MRCP(UK) examination, with the ordering consistent across the parts of the exam and with the differences in Part 1 performance being consistent from 1989 to 2005. Although pre-admission qualifications explained some of the medical school variance, the remaining differences do not seem to result from career preference or other selection biases, and are presumed to result from unmeasured differences in ability at entry to medical school or from differences between medical schools in teaching focus, content and approaches. Exploration of causal mechanisms would be enhanced by results from a national medical qualifying examination.
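    The ranking stability reported above amounts to correlating school-level mean performance across examination periods. A minimal sketch of that kind of check is shown below, with invented school means rather than the MRCP(UK) data; the abstract does not state whether the reported correlations are Pearson or rank-based, and Pearson is assumed here:

        # Correlate hypothetical school means across two examination periods.
        from scipy.stats import pearsonr

        schools = ["A", "B", "C", "D", "E"]             # anonymised, invented schools
        means_2003_05 = [0.62, 0.55, 0.58, 0.49, 0.51]  # hypothetical mean marks per school
        means_1999_01 = [0.60, 0.53, 0.59, 0.47, 0.52]

        r, p = pearsonr(means_2003_05, means_1999_01)
        print(f"stability of school performance across periods: r = {r:.3f} (p = {p:.3f})")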

    Informed consent for MRI and fMRI research: Analysis of a sample of Canadian consent documents

    Background: Research ethics and the measures deployed to ensure ethical oversight of research (e.g., informed consent forms, ethics review) are vested with extremely important ethical and practical goals. Accordingly, these measures need to function effectively in real-world research and to follow high-level standards. Methods: We examined consent forms for Magnetic Resonance Imaging (MRI) and functional Magnetic Resonance Imaging (fMRI) studies approved by Canadian research ethics boards (REBs). Results: We found evidence of variability in consent forms in matters of physical and psychological risk reporting. Approaches used to tackle the emerging issue of incidental findings exposed extensive variability between and within research sites. Conclusion: The causes of variability in approved consent forms and studies need to be better understood. However, mounting evidence of administrative and practical hurdles within current ethics governance systems, combined with potentially sub-optimal provision of information to and protection of research subjects, supports other calls for greater scrutiny of research ethics practices and applicable revisions.

    Online clinical reasoning assessment with the Script Concordance test: a feasibility study

    BACKGROUND: The script concordance (SC) test is an assessment tool that measures the capacity to solve ill-defined problems, that is, reasoning in a context of uncertainty. This tool has so far been used mainly in medicine. The purpose of this pilot study is to assess the feasibility of the test delivered on the Web to French urologists. METHODS: The principles of SC test construction and the development of the Web site are described. A secure Web site was created with two sequential modules: (a) the first for the reference panel (n = 26), with two sub-tasks: to validate the content of the test and to elaborate the scoring system; (b) the second for candidates with different levels of experience in urology: board-certified urologists, residents and medical students (5th or 6th year). The minimum expected numbers of participants were 150 urologists, 100 residents and 50 medical students. Each candidate is provided with an individual access code to the Web site and may complete the Script Concordance test several times during his or her curriculum. RESULTS: The Web site has been operational since April 2004. The reference panel validated the test in June of the same year during the annual seminar of the French Society of Urology. The Web site has been available to candidates since September 2004. In six months, 80% of the target figure for urologists, 68% of the target figure for residents and 20% of the target figure for students had passed the test online. During these six months, no technical problem was encountered. CONCLUSION: The Web-based SC test proved feasible, as two-thirds of the expected number of participants were included within six months. The psychometric properties (validity, reliability) of the test will be evaluated on a larger scale (N = 300). If positive, this assessment tool will be useful in helping urologists acquire clinical reasoning skills during their curriculum, which is crucial for professional competence.
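    The abstract notes that the reference panel's answers were used to elaborate the scoring system but does not spell out the rule. SC tests are commonly scored with an aggregate scheme in which a candidate's answer earns credit in proportion to the number of panellists who chose it, relative to the modal panel answer. A minimal sketch assuming that scheme, with invented panel data:

        # Aggregate scoring of one Script Concordance item (assumed standard scheme).
        from collections import Counter

        def sc_item_score(panel_answers, candidate_answer):
            """Credit = panellists choosing the answer / panellists choosing the modal answer."""
            counts = Counter(panel_answers)
            return counts.get(candidate_answer, 0) / max(counts.values())

        # 26 hypothetical panellists rating one item on a -2..+2 Likert scale.
        panel = [1] * 12 + [0] * 8 + [2] * 4 + [-1] * 2
        print(sc_item_score(panel, 1))    # modal answer -> 1.0
        print(sc_item_score(panel, 0))    # 8 / 12 -> about 0.67
        print(sc_item_score(panel, -2))   # never chosen by the panel -> 0.0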

    On line clinical reasoning assessment with Script Concordance test in urology: results of a French pilot study

    BACKGROUND: The Script Concordance (SC) test is an assessment tool that measures the capacity to solve ill-defined problems, that is, reasoning in a context of uncertainty. This study assesses the feasibility, reliability and validity of the SC test made available on the Web to French urologists. METHODS: A 97-item SC test was developed based on the major educational objectives of French urology training programmes. A secure Web site was created with two sequential modules: (a) the first for the reference panel, to elaborate the scoring system; (b) the second for candidates with different levels of experience in urology: board-certified urologists, chief residents, residents and medical students. All participants were recruited on a voluntary basis. Statistical analysis included descriptive statistics of the participants' scores and factorial analysis of variance (ANOVA) to study differences between group means. Reliability was evaluated with Cronbach's alpha coefficient. RESULTS: The online SC test has been operational since June 2004. Twenty-six faculty members constituted the reference panel. During the following 10 months, 207 participants took the test online (124 urologists, 29 chief residents, 38 residents, 16 students). No technical problem was encountered. Forty-five percent of the participants completed the test only partially. Differences between the mean scores of the four groups were statistically significant (P = 0.0123). The Bonferroni post-hoc correction indicated significant differences between students and chief residents, and between students and urologists. There were no differences between chief residents and urologists. The reliability coefficient was 0.734 for the total group of participants. CONCLUSION: The feasibility of the Web-based SC test was demonstrated by the large number of participants who took it within a few months. The Web site made it possible to quickly confirm the reliability of the SC test and to develop a strategy for improving its construct validity when applied in the field of urology. Nevertheless, the SC test content will need to be optimised, with a smaller number of items. Virtual medical education initiatives such as this SC test delivered on the Internet warrant consideration in the current context of the national pre-residency certification examination in France.
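    The analyses named above (Cronbach's alpha for reliability, ANOVA with Bonferroni-corrected pairwise comparisons between experience groups) can be sketched on simulated data; the sketch below uses a one-way ANOVA, and the group means, item counts and score distributions are invented rather than taken from the study:

        # Reliability and group-comparison sketch on simulated scores.
        import numpy as np
        from scipy.stats import f_oneway, ttest_ind

        rng = np.random.default_rng(0)

        def cronbach_alpha(items):
            """items: (n_candidates, n_items) matrix of item scores."""
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        # Simulated 97-item scores for 100 candidates sharing a latent ability.
        ability = rng.normal(0, 1, size=(100, 1))
        scores = ability + rng.normal(0, 1, size=(100, 97))
        print("Cronbach's alpha:", round(cronbach_alpha(scores), 3))

        # Simulated total scores for the four experience groups.
        groups = {"students": rng.normal(55, 5, 30), "residents": rng.normal(60, 5, 30),
                  "chief residents": rng.normal(64, 5, 30), "urologists": rng.normal(65, 5, 30)}
        print("ANOVA:", f_oneway(*groups.values()))

        # Bonferroni-corrected pairwise t-tests (6 pairs).
        names = list(groups)
        n_pairs = len(names) * (len(names) - 1) // 2
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                p = ttest_ind(groups[names[i]], groups[names[j]]).pvalue
                print(names[i], "vs", names[j], "adjusted p =", round(min(1.0, p * n_pairs), 4))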

    Epidemiologic studies of modifiable factors associated with cognition and dementia: systematic review and meta-analysis
