
    Influence of open- and closed-book tests on medical students' learning approaches

    CONTEXT Two learning approaches are consistently distinguished in the literature: deep and surface learning. The deep learning approach is considered preferable. Open-book tests are expected to stimulate deep learning and to offer a possible way of handling the substantial growth in medical knowledge. In this study we test the hypothesis that open-book tests stimulate deep learning more than closed-book tests. METHODS Medical students in Years 2 (n = 423) and 3 (n = 306) participated in this study. They evaluated their preparation for open- and closed-book tests using the Deep Information Processing (DIP) questionnaire. This questionnaire consists of 24 items divided into three subscales: Critical Reading, Broaden One's Context, and Structuring. A paired t-test was used to analyse the data. RESULTS Both cohorts scored significantly higher on the overall DIP score and on the Broaden One's Context and Structuring subscales when preparing for closed-book tests. Year 3 students also scored significantly higher on the Critical Reading subscale when preparing for closed-book tests. Gender differences were found: women used deeper learning approaches than men. CONCLUSIONS Our hypothesis was not supported. In fact, the opposite was found: closed-book tests stimulated a deep learning approach more than open-book tests. Three possible explanations are: deep learning is particularly necessary for remembering and recalling knowledge; students feel more confident when preparing for closed-book tests; and students are more motivated to study for closed-book tests. The debate on the concept of deep learning in higher education should probably be renewed.
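The paired t-test mentioned in the Methods compares each student's DIP score when preparing for closed-book versus open-book tests. A minimal sketch of that statistic, using only Python's standard library; the function name and the scores are illustrative, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(closed_scores, open_scores):
    """Paired t statistic: mean within-student difference divided by its standard error."""
    diffs = [c - o for c, o in zip(closed_scores, open_scores)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n))

# Hypothetical DIP scores for five students under both preparation conditions.
closed = [73, 68, 81, 77, 70]
open_book = [72, 66, 78, 75, 68]
t = paired_t(closed, open_book)  # t ≈ 6.32: closed-book preparation scores higher
```

The same students appear in both conditions, which is why a paired rather than an independent-samples test is appropriate here.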

    Influences of deep learning, need for cognition and preparation time on open- and closed-book test performance

    Objectives The ability to master discipline-specific knowledge is one of the competencies medical students must acquire. In this context, 'mastering' means being able to recall and apply knowledge. One way to assess this competency is to use both open- and closed-book tests. Student performance on both tests can be influenced by the way the student processes information. Deep information processing is expected to influence performance positively. The personal preferences of students in relation to how they process information in general (i.e. their level of need for cognition) may also be of importance. In this study, we examined the inter-relatedness of deep learning, need for cognition, preparation time, and scores on open- and closed-book tests. Methods This study was conducted at the University Medical Centre Groningen. Participants were Year 2 students (n = 423). They were asked to complete a questionnaire on deep information processing and a need for cognition scale taken from a questionnaire on intellectualism, and, additionally, to write down the time they spent on test preparation. We related these measures to the students' scores on two tests, both consisting of open- and closed-book components, and used structural equation modelling to analyse the data. Results Both questionnaires were completed by 239 students (57%). The results showed that need for cognition positively influenced both open- and closed-book test scores (beta coefficients 0.05 and 0.11, respectively). Furthermore, study outcomes measured by open-book tests predicted closed-book test results better than the other way around (beta coefficients 0.72 and 0.11, respectively). Conclusions Students with a high need for cognition performed better on open- as well as closed-book tests. Deep learning did not influence their performance. Adding open-book tests to the regularly used closed-book tests seems to improve the recall of knowledge that has to be known by heart. Need for cognition may provide a valuable addition to existing theories on learning.
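The structural equation modelling reported above estimates standardized path coefficients between predictors (such as need for cognition or open-book score) and an outcome. As a rough stand-in, a sketch that computes standardized regression coefficients via ordinary least squares on z-scored variables; real SEM requires dedicated software, and the data and variable roles here are hypothetical:

```python
import numpy as np

def standardized_betas(X, y):
    """OLS coefficients on z-scored predictors and outcome -- comparable in
    scale to the standardized path coefficients an SEM package would report."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

# Hypothetical: closed-book score driven mostly by open-book score and
# weakly by need for cognition (cf. the 0.72 vs 0.11 pattern above).
open_score = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
need_cog = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
closed_score = 0.7 * open_score + 0.1 * need_cog
betas = standardized_betas(np.column_stack([open_score, need_cog]), closed_score)
```

Because the coefficients are standardized, their relative sizes can be compared directly, which is how the beta coefficients in the abstract should be read.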

    Directing students to profound open-book test preparation: the relationship between deep learning and open-book test time

    BACKGROUND: Considering the growing amount of medical knowledge and the focus of medical education on acquiring competences, using open-book tests seems inevitable. A possible disadvantage of these tests is that students underestimate the preparation they require. AIMS: We examined whether students who used a deep learning approach needed less open-book test time, and how students performed on open-book questions asked in a closed-book setting. METHOD: Second-year (N = 491) and third-year (N = 325) students prepared half of the subject matter to be tested closed-book and half to be tested open-book. In agreement with the Board of Examiners, some questions in the closed-book test concerned open-book subject matter, and vice versa. Data were gathered about test time, deep learning and preparation time. Repeated measures analysis, t-tests and partial correlations were used to analyse the data. RESULTS: We found a negative relationship between deep learning and open-book test time for second-year students. Students scored lowest on closed-book questions about open-book subject matter. CONCLUSIONS: Reducing the available test time might force students to prepare longer and more deeply for open-book tests. Further research is needed to identify variables that influence open-book test time and to determine how restrictive this time should be.
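The partial correlations mentioned in the analysis isolate a relationship (for example, deep learning versus open-book test time) while controlling for a third variable such as preparation time. A first-order partial correlation can be computed from pairwise Pearson correlations with the standard library alone; the variable roles and data below are hypothetical:

```python
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def partial_corr(x, y, z):
    """Correlation of x and y with the linear effect of z removed from both."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz**2) * (1 - ryz**2))
```

If z explains much of the raw x-y association, the partial correlation shrinks toward zero; if x and y remain perfectly related after controlling for z, it stays at 1.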

    Which personal resources do honours students report using to achieve success?

    Poster presented at the Onderwijs Research Dagen (June 2017).

    Necessary steps in factor analysis: Enhancing validation studies of educational instruments. The PHEEM applied to clerks as an example

    Background: The validation of educational instruments, in particular the employment of factor analysis, can be improved in many instances. Aims: To demonstrate the superiority of a sophisticated method of factor analysis, integrating recommendations from the factor analysis literature, over the limited applications of factor analysis that are often employed. We demonstrate the essential steps, focusing on the Postgraduate Hospital Educational Environment Measure (PHEEM). Method: The PHEEM was completed by 279 clerks. We performed Principal Component Analysis (PCA) with varimax rotation. A combination of three psychometric criteria was applied: the scree plot, eigenvalues > 1.5, and a minimum of approximately 5% additionally explained variance per component. Furthermore, four interpretability criteria were used. Confirmatory factor analysis was performed to verify the original scale structure. Results: Our method yielded three interpretable and practically useful dimensions: learning content and coaching, beneficial affective climate, and external regulation. Additionally, combining several criteria reduced the risk of overfactoring and underfactoring. Furthermore, the resulting dimensions corresponded with three learning functions essential to high-quality learning, thus strengthening our findings. Confirmatory factor analysis disproved the original scale structure. Conclusions: Our sophisticated approach yielded several advantages over the methods applied in previous validation studies. We therefore recommend this method in validation studies to achieve best practice.
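The extraction-and-rotation procedure described here -- compute components from the item correlation matrix, retain those passing an eigenvalue threshold, then apply varimax rotation for interpretability -- can be sketched with NumPy alone. This is a generic illustration on synthetic data with two built-in factors, not the PHEEM analysis itself:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=50, tol=1e-8):
    """Orthogonal varimax rotation of a factor-loading matrix (standard algorithm)."""
    p, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0))))
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:   # converged
            break
    return loadings @ R

def pca_loadings(data, eig_threshold=1.5):
    """PCA on the correlation matrix; retain components with eigenvalue > threshold
    (the eigenvalues > 1.5 criterion used above)."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1]            # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > eig_threshold
    return eigvecs[:, keep] * np.sqrt(eigvals[keep]), eigvals

# Synthetic responses: six items, three loading on each of two latent factors.
rng = np.random.default_rng(0)
f = rng.normal(size=(200, 2))
noise = 0.3 * rng.normal(size=(200, 6))
data = np.column_stack([f[:, 0]] * 3 + [f[:, 1]] * 3) + noise
L, eigvals = pca_loadings(data)
rotated = varimax(L)
```

Because varimax is an orthogonal rotation, each item's communality (row sum of squared loadings) is unchanged; only the distribution of loading across components is simplified, which is what makes the rotated dimensions easier to name.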