
    The relevance of neuroscientific research for understanding clinical reasoning


    Team-Based Learning Analytics: An Empirical Case Study

    Many medical schools that have implemented team-based learning (TBL) have also incorporated an electronic learning architecture, commonly referred to as a learning management system (LMS), to support the instructional process. However, one LMS feature that is often overlooked is the LMS's ability to record data that can be used for further analysis. In this article, the authors present a case study illustrating how one medical school used data that are routinely collected via the school's LMS to make informed decisions. The case study started with one instructor's observation that some teams in one of the undergraduate medical education learning modules appeared to be struggling during one of the team activities; that is, some teams seemed unable to explain or justify their responses to items on the team readiness assurance test (tRAT). Following this observation, the authors conducted 4 analyses. Their analyses demonstrate how LMS-generated and recorded data can be used in a systematic manner to investigate issues in the real educational environment. The first analysis identified a team that performed significantly more poorly on the tRAT. A subsequent analysis investigated whether the weaker team's poorer performance was consistent over a whole module. Findings revealed that the weaker team performed more poorly on the majority of the TBL sessions. Further investigation using LMS data showed that the weaker performance was due to the lack of preparation of one individual team member (rather than a collective poor tRAT performance). Using the findings obtained from this case study, the authors hope to convey how LMS data are powerful and may form the basis of evidence-based educational decision making.
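    The drill-down the authors describe (weakest team first, then the weakest member within it) can be mimicked against any LMS export. A minimal sketch, assuming a hypothetical export shaped as team → member → per-session tRAT-phase scores; all team names and score values below are invented for illustration, not the study's data:

```python
# Hypothetical LMS export: team -> {member: scores across sessions}.
# Structure and values are invented; real exports would need cleaning first.
from statistics import mean

lms_scores = {
    "Team A": {"m1": [8, 9, 7], "m2": [9, 8, 8], "m3": [8, 8, 9]},
    "Team B": {"m1": [7, 8, 8], "m2": [8, 7, 9], "m3": [9, 8, 7]},
    "Team C": {"m1": [3, 2, 4], "m2": [8, 8, 7], "m3": [7, 8, 8]},
}

def team_means(scores):
    """Mean score per team, pooled over all members and sessions."""
    return {team: mean(s for member in members.values() for s in member)
            for team, members in scores.items()}

def weakest_member(members):
    """Member with the lowest mean score -- the 'lack of preparation' check."""
    return min(members, key=lambda m: mean(members[m]))

means = team_means(lms_scores)
weak_team = min(means, key=means.get)            # step 1: flag weakest team
culprit = weakest_member(lms_scores[weak_team])  # step 2: individual vs collective
```

    A real analysis would add a significance test at step 1 and check consistency across the whole module, as the case study did, but the two-step drill-down above is the core of how the under-prepared individual was surfaced.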

    COVID-19: designing and conducting an online mini-multiple interview (MMI) in a dynamic landscape

    Introduction: The COVID-19 pandemic presented numerous, significant challenges for medical schools, including how to select the best candidates from a pool of applicants when social distancing and other measures prevented "business as usual" admissions processes. However, selection into medical school is the gateway to medicine in many countries, and it is critical to use processes which are evidence-based, valid and reliable even under challenging circumstances. Our challenge was to plan and conduct a multiple-mini interview (MMI) in a dynamic and stringent safe distancing context. Methods: This paper reports a case study of how to plan, re-plan and conduct MMIs in an environment where substantially tighter safe distancing measures were introduced just before the MMI was due to be delivered. Results: We report on how to design and implement a fully remote, online MMI which ensured the safety of candidates and assessors. Discussion: We discuss the challenges of this approach and also reflect on broader issues associated with selection into medical school during a pandemic. The aim of the paper is to provide broadly generalizable guidance to other medical schools faced with the challenge of selecting future students under difficult conditions.

    A students’ model of team-based learning

    Background: Team-based learning (TBL) combines direct instruction with active, collaborative small group learning. This study aimed to elucidate, from the students’ perspective, the relations between different elements of TBL. This is expected to provide a better understanding of the inner workings of TBL in education. Method: Three hundred and thirteen first- and second-year medical students participated in the study. Data about TBL were collected at the end of six teaching blocks, by means of a questionnaire. The data were then combined and subjected to path analysis, which enabled testing of hypothesised relations between three layers of TBL-relevant variables. These were (1) input variables: prior knowledge, teamwork, challenging application exercise, content expert and facilitator; (2) process variables: preparation materials, individual readiness assurance test (iRAT), team readiness assurance test (tRAT); and (3) output variables: learning and topic interest. Results: Initial analysis resulted in amendments to the hypothesised model. An amended model fitted the data well and explained 43% of the variance in learning and 32% of the variance in topic interest. Content expert had a direct effect on topic interest, as did prior knowledge, teamwork, iRAT and application exercise. Learning was directly influenced by tRAT, application exercise and facilitator, but not content expert. Conclusions: The results of this study demonstrate the inter-relationships of different elements of TBL. The results provide new insights into how TBL works from the students’ perspective. Implications of these findings are discussed.
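    For a flavour of what the path analysis computes, here is a minimal sketch of one slice of such a model: two standardized predictors of learning, with path coefficients recovered from correlations via the normal equations. The correlations below are invented for illustration; the study's full three-layer model would be fit with dedicated path-analysis software.

```python
# One two-predictor slice of a path model: learning <- tRAT, application exercise.
# For standardized variables the path coefficients solve
#   b1 + r_12*b2 = r_y1  and  r_12*b1 + b2 = r_y2  (solved here by Cramer's rule).

def path_coefficients(r_y1, r_y2, r_12):
    """Standardized path coefficients and R^2 for two predictors of one outcome."""
    det = 1.0 - r_12 ** 2
    b1 = (r_y1 - r_12 * r_y2) / det
    b2 = (r_y2 - r_12 * r_y1) / det
    r_squared = b1 * r_y1 + b2 * r_y2  # variance in the outcome explained
    return b1, b2, r_squared

# Invented correlations: tRAT-learning, application-learning, tRAT-application.
b_trat, b_app, r2 = path_coefficients(0.45, 0.50, 0.30)
```

    The study's reported figures (43% of variance in learning, 32% in topic interest) are the `r_squared` of the corresponding outcome equations in the full fitted model.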

    Implementation of team-based learning on a large scale: three factors to keep in mind

    Team-based learning (TBL) is a structured form of small group learning that can be scaled up for delivery in large classes. The principles of successful TBL implementation are well established. TBL has become widely practiced in medical schools, but its use is typically limited to certain courses or parts of courses. Implementing TBL on a large scale, across different courses and disciplines, is the next logical step. The Lee Kong Chian School of Medicine (LKCMedicine), a partnership between Nanyang Technological University, Singapore and Imperial College London, admitted its first students in 2013. This new undergraduate medical program, developed collaboratively by faculty at both institutions, uses TBL as its main learning and teaching strategy, replacing all face-to-face lectures. TBL accounts for over 60% of the curriculum in the first two years, and there is continued learning through TBL during campus teaching in the remaining years. This paper describes our experience of rolling out TBL across all years of the medical curriculum, focusing on three success factors: (1) “team-centric” learning spaces, to foster active, collaborative learning; (2) an e-learning ecosystem, seamlessly integrated to support all phases of the TBL process; and (3) teaching teams in which experts in pedagogical process (TBL Facilitators) co-teach with experts in subject matter (Content Experts).

    Health professions digital education on clinical practice guidelines: a systematic review by Digital Health Education collaboration

    Background: Clinical practice guidelines are an important source of information, designed to help clinicians integrate research evidence into their clinical practice. Digital education is increasingly used for clinical practice guideline dissemination and adoption. Our aim was to evaluate the effectiveness of digital education in improving the adoption of clinical practice guidelines. Methods: We performed a systematic review and searched seven electronic databases from January 1990 to September 2018. Two reviewers independently screened studies, extracted data and assessed risk of bias. We included studies in any language evaluating the effectiveness of digital education on clinical practice guidelines compared to other forms of education or no intervention in healthcare professionals. We used the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) approach to assess the quality of the body of evidence. Results: Seventeen trials involving 2382 participants were included. The included studies were diverse with a largely unclear or high risk of bias. They mostly focused on physicians, evaluated computer-based interventions with limited interactivity and measured participants’ knowledge and behaviour. With regard to knowledge, studies comparing the effect of digital education with no intervention showed a moderate, statistically significant difference in favour of digital education intervention (SMD = 0.85, 95% CI 0.16, 1.54; I2 = 83%, n = 3, moderate quality of evidence). Studies comparing the effect of digital education with traditional learning on knowledge showed a small, statistically non-significant difference in favour of digital education (SMD = 0.23, 95% CI −0.12, 0.59; I2 = 34%, n = 3, moderate quality of evidence). Three studies measured participants’ skills and reported mixed results. Of four studies measuring satisfaction, three studies favoured digital education over traditional learning. Of nine studies evaluating healthcare professionals’ behaviour change, only one study comparing email-delivered, spaced education intervention to no intervention reported improvement in the intervention group. Of three studies reporting patient outcomes, only one study comparing email-delivered, spaced education games to non-interactive online resources reported modest improvement in the intervention group. The quality of evidence for outcomes other than knowledge was mostly judged as low due to risk of bias, imprecision and/or inconsistency. Conclusions: Health professions digital education on clinical practice guidelines is at least as effective as traditional learning and more effective than no intervention in terms of knowledge. Most studies report little or no difference in healthcare professionals’ behaviours and patient outcomes. The only intervention shown to improve healthcare professionals’ behaviour and, modestly, patient outcomes was email-delivered, spaced education. Future research should evaluate interactive, simulation-based and spaced forms of digital education and report on outcomes such as skills, behaviour, patient outcomes and cost.
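    Pooled effects such as "SMD = 0.85, 95% CI 0.16, 1.54" come from inverse-variance meta-analysis. A minimal random-effects sketch using the DerSimonian-Laird between-study variance estimator; the three (SMD, variance) pairs are invented stand-ins, not the review's study-level data:

```python
# Random-effects pooling of standardized mean differences (DerSimonian-Laird).
from math import sqrt

def pool_random_effects(studies):
    """studies: list of (smd, within-study variance) -> (pooled, ci_lo, ci_hi)."""
    w = [1.0 / v for _, v in studies]
    fixed = sum(wi * y for wi, (y, _) in zip(w, studies)) / sum(w)
    # Cochran's Q and the DL estimate of between-study variance tau^2.
    q = sum(wi * (y - fixed) ** 2 for wi, (y, _) in zip(w, studies))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(studies) - 1)) / c)
    # Re-weight with tau^2 added to each study's variance, then pool.
    w_star = [1.0 / (v + tau2) for _, v in studies]
    pooled = sum(wi * y for wi, (y, _) in zip(w_star, studies)) / sum(w_star)
    se = sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Invented study-level data: (SMD, variance) per trial.
pooled, lo, hi = pool_random_effects([(0.4, 0.05), (0.9, 0.08), (1.3, 0.10)])
```

    Heterogeneity statistics such as the I² values quoted in the abstract are derived from the same `q` computed above.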

    Effects of graded versus ungraded individual readiness assurance scores in team-based learning: a quasi-experimental study

    Pre-class preparation is a crucial component of team-based learning (TBL). Lack of preparation hinders both individual learning and team performance during TBL. The purpose of the present study was to explore how the grading of the individual readiness assurance test (iRAT) can affect pre-class preparation, iRAT performance and performance in the end-of-year examination. Using a quasi-experimental design, Year 1 and 2 students' download frequency for their pre-class materials, performance on the iRAT and performance in the examination were examined under two conditions: (1) one in which the iRAT was graded and (2) one in which it was ungraded. Medical students (N = 220) from three cohorts were included in the study. Differences between the conditions were tested by means of six separate ANCOVAs, using medical school entry test scores as the covariate to account for potential cohort effects. Results revealed that students downloaded more pre-class materials prior to their TBL sessions and performed significantly better on the iRAT when their performance was graded, even after controlling for cohort effects. Analysis of covariance also indicated that performance on the iRAT appeared to affect examination scores. The results of the study suggest that grading has a positive effect on students' iRAT scores. Implications for TBL are discussed.
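    The covariate adjustment at the heart of an ANCOVA can be sketched directly: each condition's iRAT mean is adjusted for the entry-test covariate using the pooled within-group regression slope. All scores below are invented for illustration; a real analysis would also test the adjusted difference for significance:

```python
# ANCOVA-style adjusted means: outcome means corrected for a covariate
# using the pooled within-group slope.
from statistics import mean

def adjusted_means(groups):
    """groups: {name: [(covariate x, outcome y), ...]} -> adjusted outcome means."""
    grand_x = mean(x for pts in groups.values() for x, _ in pts)
    # Pooled within-group slope: sum of within-group SP_xy over sum of SS_xx.
    sp, ss = 0.0, 0.0
    for pts in groups.values():
        mx, my = mean(x for x, _ in pts), mean(y for _, y in pts)
        sp += sum((x - mx) * (y - my) for x, y in pts)
        ss += sum((x - mx) ** 2 for x, _ in pts)
    slope = sp / ss
    return {name: mean(y for _, y in pts) - slope * (mean(x for x, _ in pts) - grand_x)
            for name, pts in groups.items()}

# Invented (entry-test score, iRAT score) pairs per condition.
adj = adjusted_means({
    "graded":   [(60, 75), (70, 82), (80, 90)],
    "ungraded": [(65, 68), (75, 74), (85, 81)],
})
```

    Because the ungraded group here happens to have the higher entry-test mean, adjustment widens the raw gap slightly: the comparison is made as if both conditions sat at the grand covariate mean.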
