Integrating constructive feedback in personalised e-learning
When using e-learning material, some students progress readily while others have difficulties. In a traditional classroom the teacher would identify those with difficulties and direct them to additional resources. This support is not easily available within e-learning. A new approach to providing constructive feedback is developed that enables an e-learning system to identify areas of weakness and provide guidance on further study. The approach is based on tagging learning material with appropriate keywords that indicate its contents. Thus, if a student performs poorly on an assessment on topic X, the system can suggest further study of X and participation in activities related to X, such as forums. As well as supporting the learner, this type of constructive feedback can also inform other stakeholders: a tutor can monitor the progress of a cohort, and an instructional designer can monitor how well learning objects facilitate the appropriate knowledge across many learners.
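The keyword-tagging idea in this abstract can be sketched in a few lines: assessment items and learning resources share topic tags, and topics where performance falls below a threshold trigger suggestions. This is a minimal illustration under assumed names and thresholds; none of the resource names, topics, or the 0.5 cutoff come from the paper itself.

```python
from collections import defaultdict

# Hypothetical tagged resources: topic keyword -> further-study activities.
RESOURCES = {
    "loops": ["Reading: Iteration basics", "Forum: Loop pitfalls"],
    "recursion": ["Reading: Recursion intro", "Forum: Recursion Q&A"],
}

def weak_topics(results, threshold=0.5):
    """results: list of (topic, correct) pairs from tagged assessment items.
    Return topics whose proportion correct falls below the threshold."""
    totals = defaultdict(lambda: [0, 0])  # topic -> [correct, attempted]
    for topic, correct in results:
        totals[topic][1] += 1
        if correct:
            totals[topic][0] += 1
    return [t for t, (c, n) in totals.items() if c / n < threshold]

def suggest(results):
    """Map each weak topic to its tagged resources (constructive feedback)."""
    return {t: RESOURCES.get(t, []) for t in weak_topics(results)}

suggestions = suggest([("loops", False), ("loops", False),
                       ("recursion", True), ("recursion", True)])
# "loops" scored 0/2, below the threshold, so its resources are suggested;
# "recursion" scored 2/2 and is omitted.
```

The same per-topic tallies could equally feed the tutor and instructional-designer views mentioned above, aggregated across a cohort rather than a single learner.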
The method of educational assessment affects children's neural processing and performance: behavioural and fMRI evidence.
Standardised educational assessments are now widespread, yet their development has given comparatively more consideration to what to assess than to how to optimally assess students' competencies. Existing evidence from behavioural studies with children and neuroscience studies with adults suggests that the method of assessment may affect neural processing and performance, but current evidence remains limited. To investigate the impact of assessment methods on neural processing and performance in young children, we used functional magnetic resonance imaging to identify and quantify the neural correlates of performance across a range of current approaches to standardised spelling assessment. Results indicated that children's test performance declined as the cognitive load of the assessment method increased. Activation of neural nodes associated with working memory further suggests that this performance decline may be a consequence of higher cognitive load, rather than of the complexity of the content. These findings provide insights into principles of assessment (re)design, to ensure that assessment results are an accurate reflection of students' true levels of competency.
Joining the dots: Conditional pass and programmatic assessment enhances recognition of problems with professionalism and factors hampering student progress
Background: Programmatic assessment that looks across a whole year may contribute to better decisions than those made from isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system for handling student assessment results that is aligned not only with learning and remediation, but also with defensibility. The key components are standards-based assessments, use of "Conditional Pass", and regular progress meetings.

Methods: The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concerns staff had about students were clustered into themes alongside any interventions and outcomes for the students concerned. The likelihood of passing the year according to type of problem was compared before and after phasing in of the new assessment system.

Results: The new system was phased in over four years. In the fourth year of implementation, 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis of 1516 results available from 447 students revealed that the odds ratio (95% confidence interval) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2), p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3), p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2), NS). Compared with the previous system, more students failed the year under the new system on the basis of performance during the year (20 students, or 4.5%, compared with four, or 1.1%, under the previous system; p < 0.01).

Conclusions: The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state the conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly, it has helped detect and act on some of the more difficult areas to assess, such as professionalism.
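The odds ratios with 95% confidence intervals quoted in this abstract are of the standard 2x2-table form. As a minimal sketch of how such an estimate and its Wald confidence interval are computed (the counts below are illustrative only, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = failed, problem flagged;    b = passed, problem flagged;
    c = failed, no problem flagged; d = passed, no problem flagged."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is sqrt of summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Illustrative counts only (not figures from the study):
or_, lower, upper = odds_ratio_ci(10, 20, 5, 100)  # OR = 10.0
```

An OR above 1 with a CI excluding 1 (as for the professionalism and multi-part-of-course problems above) indicates a significantly elevated odds of failing the year; the assignments OR of 0.7 with CI 0.1-5.2 straddles 1, hence "NS".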