9 research outputs found

    "Why do medical students fail in studies?" A case study

    Revealing the reasons why medical students struggle in their studies would help in developing corresponding student support. As early remedial actions could prevent further attrition and dropout, the purpose of this study was to investigate possible reasons for medical students failing their Year 1 studies.

    Representational competence of Form Four science students on basic chemical concepts / Sim Joong Hiong

    The general purpose of this study was to investigate Form Four science students’ representational competence on basic chemical concepts. The main aims of the study were: (i) to investigate students’ understanding of basic chemical concepts, (ii) to evaluate their understanding of chemical representations, (iii) to assess their representational competence in chemistry, and (iv) to examine the influence of selected cognitive variables on their representational competence. A total of 411 Form Four science students from seven urban secondary schools in Perak participated in this study. Data were obtained from seven instruments consisting of five paper-and-pencil tests, one questionnaire, and interviews. The Statistical Package for the Social Sciences (SPSS) was used to analyze the quantitative data collected.

    The main findings were as follows. Mean scores for the Test of Chemical Concepts (TCC), Test of Chemical Representations (TCR), and Test of Representational Competence (TRC) were 13.68 (45.60%), 18.63 (51.75%), and 16.90 (42.25%), respectively. Students with a high level of understanding of (a) chemical concepts and (b) chemical representations had a significantly higher overall level of representational competence than both the medium and the low groups, at p < 0.001. However, students with medium and low levels of understanding of (a) chemical concepts and (b) chemical representations showed no significant difference in their overall levels of representational competence. Percent alternative conceptions for 18 of the 30 items in the TCC exceeded 50%, and percent mean alternative conceptions for all five categories of the most basic chemical concepts exceeded 50%. Percent alternative conceptions for 13 of the 36 items in the TCR exceeded 50%; the content domain with the highest percent mean alternative conception was ‘the three levels of representation of matter’ (71.93%). Percent difficulty for 23 of the 40 items in the TRC exceeded 50%; the category with the highest percent mean difficulty was the ability to translate between different representations across levels (78.83%).

    All nine interview participants were unfamiliar with the term ‘chemical representations’. However, participants from the High group gave correct examples of chemical representations, whereas participants from the Low group had no idea at all about chemical representations. Participants from the Low group held a macroscopic view of matter, focused on the surface features of representations, and used representations as depictions; their ability to interpret or generate representations of chemical concepts, and to translate between representations, was limited. Participants from the Medium group had a microscopic view of matter; they used microscopic terms only when prompted and sometimes used chemical representations incorrectly. Participants from the High group had both a macroscopic and a microscopic view of matter, used microscopic terms appropriately and spontaneously, could generate submicroscopic representations using correct chemical representations, and translated fluently between representations. None of the nine participants in the semi-structured interviews could use multiple levels of representation in their descriptions. The representational competence levels of the nine participants were: three at Level 1, three at Level 2, two at Level 3, and one at Level 4.

    The regression model with three independent variables explained almost 71% of the variance in representational competence (prior knowledge ≈58%, developmental level ≈14%). The best predictor of representational competence was ‘understanding of chemical concepts’, or prior knowledge I, which alone accounted for 55.5% of the variance. The regression model was a good fit, and the overall relationship was significant, F(3, 188) = 156.405, p < 0.001. Arising from the findings, implications and recommendations were discussed, and further research was suggested.
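    As an illustration of the kind of model reported above, the sketch below fits a least-squares regression with three predictors on simulated data and reports R² and the overall F-test. The predictor names (prior_knowledge_1, prior_knowledge_2, developmental_level) and all generated values are assumptions for the sketch, not the study's data; only the sample size is chosen so that the residual degrees of freedom match the reported F(3, 188).

```python
# Illustrative sketch (not the author's SPSS analysis): a multiple regression
# with three predictors, reporting the fit of a one-predictor model and of the
# full model. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 192  # 192 cases -> residual df of 188 with three predictors and an intercept
prior_knowledge_1 = rng.normal(size=n)    # hypothetical: understanding of chemical concepts
prior_knowledge_2 = rng.normal(size=n)    # hypothetical: understanding of chemical representations
developmental_level = rng.normal(size=n)  # hypothetical cognitive variable
competence = (0.75 * prior_knowledge_1 + 0.30 * prior_knowledge_2
              + 0.35 * developmental_level + rng.normal(scale=0.6, size=n))

def fit(predictors):
    """Ordinary least squares on the given list of predictor arrays."""
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(competence, X).fit()

m1 = fit([prior_knowledge_1])
m3 = fit([prior_knowledge_1, prior_knowledge_2, developmental_level])

print(f"R^2 with prior knowledge alone: {m1.rsquared:.3f}")
print(f"R^2 with all three predictors:  {m3.rsquared:.3f}")
print(f"Overall F({int(m3.df_model)}, {int(m3.df_resid)}) = {m3.fvalue:.2f}, p = {m3.f_pvalue:.3g}")
```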

    Representational competence in chemistry: A comparison between students with different levels of understanding of basic chemical concepts and chemical representations

    No full text
    Representational competence is defined as “skills in interpreting and using representations”. This study compared the representational competence of students with high, medium, and low levels of understanding of (1) basic chemical concepts and (2) chemical representations. A total of 411 Form 4 science students (mean age = 16 years) from seven urban secondary schools in Malaysia participated in this study. Data were collected from three instruments, namely the test of chemical concepts, the test of chemical representations, and the test of representational competence. The Statistical Package for the Social Sciences was used to analyze the data. Findings showed that students with a high level of understanding of (1) chemical concepts and (2) chemical representations had a significantly higher overall level of representational competence than both the medium and the low groups, at p < 0.001. However, students with medium and low levels of understanding of (1) chemical concepts and (2) chemical representations showed no significant difference in their overall levels of representational competence. Findings also showed that students’ overall level of representational competence depended more on their level of understanding of chemical concepts than on their level of understanding of chemical representations.
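    In form, the comparison described above is an overall group test followed by pairwise post-hoc comparisons. The abstract does not name the exact procedure used, so the sketch below uses a one-way ANOVA with Tukey's HSD on simulated scores purely as an illustration; the group sizes, means, and spreads are assumptions.

```python
# Minimal sketch of a High/Medium/Low group comparison: an overall one-way
# ANOVA followed by Tukey HSD pairwise comparisons. All scores are simulated.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = {
    "High": rng.normal(28, 5, 140),    # hypothetical group sizes summing to 411
    "Medium": rng.normal(15, 5, 140),
    "Low": rng.normal(14, 5, 131),
}

f, p = stats.f_oneway(*groups.values())
print(f"One-way ANOVA: F = {f:.2f}, p = {p:.3g}")

scores = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(scores, labels))  # prints the pairwise comparison table
```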

    Thinking about thinking: changes in first-year medical students’ metacognition and its relation to performance

    No full text
    Background: Studies have shown the importance of metacognition in medical education. Metacognitive skills consist of two dimensions: knowledge of metacognition and regulation of metacognition. Aim: This study hypothesized that knowledge and regulation of metacognition differ significantly between the beginning and the end of the academic year, and that a correlation exists between the two dimensions of metacognitive skills and academic performance. Methods: The Metacognitive Skills Inventory, comprising 52 Likert-scale items, was administered to 159 first-year medical students at the University of Malaya. Students’ year-end results were used to measure their academic performance. Results: A paired-sample t-test indicated no significant difference in knowledge of metacognition between the beginning and end of the academic year, whereas a paired-sample t-test revealed a significant difference in regulation of metacognition between the beginning and end of the academic year. A very strong correlation was found between the two dimensions of metacognition. The correlations of knowledge and regulation of metacognition with students’ academic results were moderate. Conclusions: The improvement in students’ metacognitive regulation and the moderate correlation of knowledge and regulation of metacognition with academic performance at the end of the academic year indicate the probable positive influence of the teaching and learning activities in the medical program.
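    A minimal sketch of the analyses named in this abstract, on simulated pre/post scores: paired-sample t-tests for each metacognition dimension and Pearson correlations between the dimensions and a performance measure. The variable names and generated values are illustrative assumptions, not the study's instrument or data.

```python
# Illustrative sketch (simulated data): paired-sample t-tests for pre/post
# metacognition scores and Pearson correlations with a performance measure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 159
latent = rng.normal(0, 0.5, n)                              # shared trait so the two dimensions correlate
knowledge_pre = 3.5 + latent + rng.normal(0, 0.2, n)
knowledge_post = knowledge_pre + rng.normal(0.0, 0.2, n)    # little systematic change
regulation_pre = 3.3 + latent + rng.normal(0, 0.2, n)
regulation_post = regulation_pre + rng.normal(0.2, 0.2, n)  # modest improvement
performance = 50 + 8 * (knowledge_post + regulation_post) / 2 + rng.normal(0, 4, n)

t_k, p_k = stats.ttest_rel(knowledge_pre, knowledge_post)
t_r, p_r = stats.ttest_rel(regulation_pre, regulation_post)
r_dims, _ = stats.pearsonr(knowledge_post, regulation_post)
r_perf, _ = stats.pearsonr(regulation_post, performance)

print(f"knowledge pre vs post:  t = {t_k:.2f}, p = {p_k:.3f}")
print(f"regulation pre vs post: t = {t_r:.2f}, p = {p_r:.3f}")
print(f"r(knowledge, regulation) = {r_dims:.2f}; r(regulation, performance) = {r_perf:.2f}")
```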

    Development of an instrument to measure medical students’ perceptions of the assessment environment: initial validation

    No full text
    Introduction: The assessment environment, synonymous with climate or atmosphere, is multifaceted. Although there are valid and reliable instruments for measuring the educational environment, there is no validated instrument for measuring the assessment environment in medical programs. This study aimed to develop an instrument for measuring students’ perceptions of the assessment environment in an undergraduate medical program and to examine the psychometric properties of the new instrument. Method: The Assessment Environment Questionnaire (AEQ), a 40-item, four-point (1 = Strongly Disagree to 4 = Strongly Agree) Likert-scale instrument designed by the authors, was administered to medical undergraduates from the authors’ institution. The response rate was 626/794 (78.84%). To establish construct validity, exploratory factor analysis (EFA) with principal component analysis and varimax rotation was conducted. To examine the internal consistency reliability of the instrument, Cronbach's α was computed. Mean scores for the entire AEQ and for each factor/subscale were calculated, and mean AEQ scores of students from different academic years and sexes were examined. Results: Six hundred and eleven completed questionnaires were analysed. EFA extracted four factors: feedback mechanism (seven items), learning and performance (five items), information on assessment (five items), and assessment system/procedure (three items), which together explained 56.72% of the variance. Based on the four extracted factors/subscales, the AEQ was reduced to 20 items. Cronbach's α for the 20-item AEQ was 0.89, whereas Cronbach's α for the four factors/subscales ranged from 0.71 to 0.87. The mean score for the AEQ was 2.68/4.00. The ‘feedback mechanism’ factor/subscale recorded the lowest mean (2.39/4.00), whereas the ‘assessment system/procedure’ factor/subscale scored the highest (2.92/4.00). Significant differences were found among the AEQ scores of students from different academic years. Conclusions: The AEQ is a valid and reliable instrument, and initial validation supports its use to measure students’ perceptions of the assessment environment in an undergraduate medical program.
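    As a sketch of the internal-consistency step described above, the code below computes Cronbach's α from its standard definition for a simulated 611 × 20 matrix of four-point Likert responses; the simulated data are assumptions, and the real AEQ items are not reproduced. (The factor-analytic step could be run separately with a dedicated factor-analysis package offering varimax rotation, but it is not shown here.)

```python
# Minimal sketch: Cronbach's alpha for a respondents x items score matrix,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
# The Likert responses below are simulated, not the AEQ data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
n_respondents, n_items = 611, 20
trait = rng.normal(0, 1, n_respondents)              # shared construct driving all items
noise = rng.normal(0, 1, (n_respondents, n_items))   # item-specific error
responses = np.clip(np.rint(2.7 + 0.6 * trait[:, None] + 0.8 * noise), 1, 4)

print(f"Cronbach's alpha for the simulated 20-item scale: {cronbach_alpha(responses):.2f}")
```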

    Students’ performance in the different clinical skills assessed in OSCE: what does it reveal?

    No full text
    Introduction: The purpose of this study was to compare students’ performance in the different clinical skills (CSs) assessed in the objective structured clinical examination (OSCE). Methods: Data for this study were obtained from final-year medical students’ exit examination (n = 185). Retrospective analysis of the data was conducted using SPSS. Means for the six CSs assessed across the 16 stations were computed and compared. Results: Means for history taking, physical examination, communication skills, clinical reasoning skills (CRSs), procedural skills (PSs), and professionalism were 6.25±1.29, 6.39±1.36, 6.34±0.98, 5.86±0.99, 6.59±1.08, and 6.28±1.02, respectively. Repeated-measures ANOVA showed a significant difference in the means of the six CSs assessed [F(2.980, 548.332) = 20.253, p < 0.001]. Pairwise multiple comparisons revealed significant differences between the means of eight pairs of the CSs assessed, at p < 0.05. Conclusions: CRSs appeared to be the weakest and PSs the strongest among the six CSs assessed. Students’ unsatisfactory performance in CRSs needs to be addressed, as clinical reasoning is one of the core competencies in medical education and a critical skill to be acquired by medical students before entering the workplace. Despite its challenges, students must learn the skills of clinical reasoning, while clinical teachers should facilitate the clinical reasoning process and guide students’ clinical reasoning development.
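    A sketch of the within-subject comparison reported above, on simulated data: each student receives a score on six skills, and a repeated-measures ANOVA tests whether the skill means differ. The sketch uses statsmodels' AnovaRM, which reports the uncorrected test; the study quotes Greenhouse-Geisser-corrected degrees of freedom, a sphericity correction not shown here. The skill means from the abstract are used only to centre the simulation.

```python
# Repeated-measures ANOVA sketch: six clinical-skill scores per student.
# Scores are simulated; this is an illustration, not the study's analysis.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(4)
skills = ["history", "physical_exam", "communication",
          "clinical_reasoning", "procedural", "professionalism"]
skill_means = [6.25, 6.39, 6.34, 5.86, 6.59, 6.28]  # from the abstract, used only to centre the simulation
n_students = 185

rows = []
for student in range(n_students):
    ability = rng.normal(0, 0.8)                    # student-level effect shared across skills
    for skill, mean in zip(skills, skill_means):
        rows.append({"student": student, "skill": skill,
                     "score": mean + ability + rng.normal(0, 0.7)})
df = pd.DataFrame(rows)

res = AnovaRM(df, depvar="score", subject="student", within=["skill"]).fit()
print(res)  # prints the within-subject ANOVA table (uncorrected df)
```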

    Are first year medical students ready for OSCE?

    No full text
    Objective: This study aimed to assess first-year medical students' readiness for the OSCE. Design: This was a retrospective study in which secondary data, comprising both quantitative and qualitative data, were analysed. Materials and Methods: Three cohorts of first-year medical students (n = 454) took a five-station OSCE. Two categories of tasks were assessed: Category A assessed patient-doctor interaction, while Category B assessed clinical skills. A student had to be scored as satisfactory in at least four of the five stations to pass Category A and in at least three of the five stations to pass Category B; a pass in both Category A and Category B was required to pass the OSCE. For each cohort, the overall passing percentage, as well as the passing percentage for Category A and Category B at each station, was computed. Examiners' feedback on students' performance in each OSCE station was examined, and feedback from students regarding the OSCE was also sought. Results: For Cohort 2013, Cohort 2014, and Cohort 2015, 174/179 (97.21%), 118/129 (91.47%), and 140/147 (95.24%) of students passed the OSCE, respectively. The mean percent passes (Category A, Category B) for the three cohorts were (95.31%, 88.83%), (89.15%, 83.10%), and (98.36%, 84.52%), respectively. Examiners' feedback was generally favourable. Feedback from students was mixed but constructive and generally encouraging. Conclusions: Based on students' performance in the OSCE as well as feedback from both examiners and students, first-year medical students appeared to be ready for OSCE assessment.
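    The pass rule stated in the abstract is a simple decision rule, sketched below: satisfactory performance in at least four of five stations passes Category A, at least three of five passes Category B, and a pass in both categories is required to pass the OSCE. The function name and example flags are illustrative only.

```python
# Minimal sketch of the OSCE pass rule described in the abstract.
from typing import Sequence

def passes_osce(category_a: Sequence[bool], category_b: Sequence[bool]) -> bool:
    """Each argument holds five per-station satisfactory (True) / unsatisfactory (False) flags."""
    assert len(category_a) == 5 and len(category_b) == 5
    pass_a = sum(category_a) >= 4   # Category A: satisfactory in at least 4 of 5 stations
    pass_b = sum(category_b) >= 3   # Category B: satisfactory in at least 3 of 5 stations
    return pass_a and pass_b        # must pass both categories to pass the OSCE

# Example: 4/5 satisfactory in Category A and 3/5 in Category B -> pass
print(passes_osce([True, True, True, True, False],
                  [True, False, True, False, True]))  # True
```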