
    The Impact of a SIG on Assessment Literacy

    A major aim of professional associations is to provide opportunities for professionals to interact with others, share ideas and develop in their chosen profession. Professional associations exist to provide specialized networking and development opportunities to a specific profession, group of individuals or field of study. To promote and support specialized research and communication, smaller subgroups within an association are often chartered or developed. These subgroups are typically known as Special Interest Groups (SIGs). According to Jacob et al. (2013), association members join SIGs because they want to go deeper into a specialized content area and they enjoy networking with others who ‘speak the same language.’ The TESOL Arabia Testing, Assessment and Evaluation SIG (TAE SIG) has focused its professional development activities on an important trend in the field: language assessment literacy (LAL). Language assessment literacy has been a critical topic in English language teaching since the late 1990s, largely because so many English language teachers are not assessment literate. In other words, many English language teachers lack the knowledge and skills to write effective language tests, evaluate the effectiveness of their tests, and use their test results in meaningful ways. The purpose of this chapter is to critically examine the status of LAL in the Middle East and North Africa (MENA) region and report on activities that the TAE SIG has implemented to increase LAL.

    The method of educational assessment affects children’s neural processing and performance: behavioural and fMRI Evidence.

    Standardised educational assessments are now widespread, yet their development has given comparatively more consideration to what to assess than to how to optimally assess students’ competencies. Existing evidence from behavioural studies with children and neuroscience studies with adults suggests that the method of assessment may affect neural processing and performance, but current evidence remains limited. To investigate the impact of assessment methods on neural processing and performance in young children, we used functional magnetic resonance imaging to identify and quantify the neural correlates of performance across a range of current approaches to standardised spelling assessment. Results indicated that children’s test performance declined as the cognitive load of the assessment method increased. Activation of neural nodes associated with working memory further suggests that this performance decline may be a consequence of a higher cognitive load, rather than the complexity of the content. These findings provide insights into principles of assessment (re)design, to ensure assessment results are an accurate reflection of students’ true levels of competency.

    Joining the dots: Conditional pass and programmatic assessment enhances recognition of problems with professionalism and factors hampering student progress

    Background: Programmatic assessment that looks across a whole year may contribute to better decisions compared with those made from isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system for handling student assessment results that is aligned not only with learning and remediation, but also with defensibility. The key components are standards-based assessments, use of "Conditional Pass", and regular progress meetings.

    Methods: The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concerns staff had about students were clustered into themes alongside any interventions and outcomes for the students concerned. The likelihoods of passing the year according to type of problem were compared before and after phasing in of the new assessment system.

    Results: The new system was phased in over four years. In the fourth year of implementation, 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis of 1516 results available from 447 students revealed that the odds ratio (95% confidence intervals) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2), p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3), p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2), NS). Compared with the previous system, more students failed the year under the new system on the basis of performance during the year (20, or 4.5%, compared with four, or 1.1%, under the previous system; p < 0.01).

    Conclusions: The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state the conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly, it has helped detect and act on some of the more difficult areas to assess, such as professionalism.

    Test


    Assessment to Inform Science Education


    Student Learning Assessment
