12 research outputs found

    Dataset LESLLA Writing

    No full text
    These data show the results of six groups of learners of Dutch as a second language on the same high-stakes A2 language test:

    - Lower-educated learners in a slow track, at CEFR levels A1 (240 hours of instruction) and A2 (480 hours of instruction)
    - Learners with a secondary education background in a standard track, at CEFR levels A1 (120 hours of instruction) and A2 (240 hours of instruction)
    - Learners with a tertiary education background in a fast track, at CEFR levels A1 (60 hours of instruction) and A2 (120 hours of instruction)

    The test contains four tasks, corrected for content and (tasks 2, 3, and 4) for a number of linguistic criteria; the linguistic criteria are scored according to CEFR-based criteria. The data show that the different track types are differentially effective in terms of test score: the increase in instruction hours does not allow lower-educated learners to achieve equivalent test performance levels.

    THIS DATASET IS ARCHIVED AT DANS/EASY, BUT NOT ACCESSIBLE HERE. TO VIEW A LIST OF FILES AND ACCESS THE FILES IN THIS DATASET, CLICK ON THE DOI LINK ABOVE.
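
    For orientation, a minimal sketch of the six-group design described above, written in Python; the class and field names are assumptions for illustration, not the layout of the archived files:

        from dataclasses import dataclass

        @dataclass
        class Group:
            track: str       # "slow", "standard", or "fast"
            education: str   # prior education background (assumed label)
            cefr_level: str  # "A1" or "A2"
            hours: int       # hours of instruction before the test

        # The six learner groups reported in the dataset description.
        GROUPS = [
            Group("slow", "lower", "A1", 240),
            Group("slow", "lower", "A2", 480),
            Group("standard", "secondary", "A1", 120),
            Group("standard", "secondary", "A2", 240),
            Group("fast", "tertiary", "A1", 60),
            Group("fast", "tertiary", "A2", 120),
        ]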

    Revisiting rating scale development for rater-mediated language performance assessments: Modelling construct and contextual choices made by scale developers

    No full text
    Rating scale development in the field of language assessment is often considered in dichotomous terms: it is assumed to be guided either by expert intuition or by performance data. Even though quite a few authors have argued that rating scale development is rarely so easily classifiable, this dyadic view has dominated language testing research for over a decade. In this paper we refine the dominant model of rating scale development by drawing on a corpus of 36 studies identified in a systematic review. We present a model showing the different sources of scale construct in the corpus. In the discussion, we argue that rating scale designers, just like test developers more broadly, need to start by determining the purpose of the test, the relevant policies that guide test development and score use, and the intended score use before weighing the design choices available to them. These choices include the impact of such sources on the generalizability of the scores, the precision of the post-test predictions that can be made about test takers’ future performances, and scoring reliability. The model’s most important contribution is that it gives rating scale developers a framework to consider before starting scale development and validation activities.

    Low print literacy and its representation in research and policy

    Get PDF
    This paper constitutes an edited transcript of two online panels, conducted with four scholars whose complementary expertise regarding print literacy and migration offers a thought-provoking and innovative window on the representation of print literacy in applied linguistic research and in migration policy. The panel members are experts on language policy, literacy, proficiency, and human capital research. Together, they address a range of interrelated matters: the constructs of language proficiency and literacy (with significant implications for assessment), the idea of literacy as human capital or as a human right, the urgent need for policy literacy among applied linguists, and the responsibility of applied linguistics in the literacy debate.