    Linking tests of English for academic purposes to the CEFR: the score user’s perspective

    The Common European Framework of Reference for Languages (CEFR) is widely used in setting language proficiency requirements, including for international students seeking access to university courses taught in English. When different language examinations have been related to the CEFR, the process is claimed to help score users, such as university admissions staff, to compare and evaluate these examinations as tools for selecting qualified applicants. This study analyses the linking claims made for four internationally recognised tests of English widely used in university admissions. It uses the Council of Europe's (2009) suggested stages of specification, standard setting, and empirical validation to frame an evaluation of the extent to which, in this context, the CEFR has fulfilled its potential to "facilitate comparisons between different systems of qualifications." Findings show that testing agencies make little use of CEFR categories to explain test content; represent the relationships between their tests and the framework in different terms; and arrive at conflicting conclusions about the correspondences between test scores and CEFR levels. This raises questions about the capacity of the CEFR to communicate competing views of a test construct within a coherent overarching structure.

    Usability and digital inclusion: standards and guidelines

    This article discusses e-government website usability in relation to concerns about digital inclusion. E-government web design should consider all aspects of usability, including those that make sites more accessible to all. Traditional concerns about social exclusion are being superseded by fears that a lack of digital competence and information literacy may result in dangerous digital exclusion. Usability is considered a way to address this exclusion and should therefore incorporate inclusion and accessibility guidelines. This article makes an explicit link between usability guidelines and digital inclusion and reports on a survey of local government web presence in Portugal.

    Does a speaking task affect second language comprehensibility?

    The current study investigated task effects on listener perception of second language (L2) comprehensibility (ease of understanding). Sixty university-level adult speakers of English from four first language (L1) backgrounds (Chinese, Romance, Hindi, Farsi), with 15 speakers per group, were recorded performing two tasks (the IELTS long-turn speaking task and the TOEFL iBT integrated listening/reading and speaking task). The speakers' audio recordings were evaluated on continuous sliding scales by 10 native English listeners for comprehensibility as well as for 10 linguistic variables drawn from the domains of pronunciation, fluency, lexis, grammar, and discourse. In the IELTS task, comprehensibility was associated solely with pronunciation and fluency categories (specifically, segmentals, word stress, rhythm, and speech rate), with the Farsi group being the only exception. However, in the cognitively more demanding TOEFL iBT integrated task, in addition to pronunciation and fluency variables, comprehensibility was also linked to several categories at the level of grammar, lexis, and discourse for all groups. In both tasks, the relative strength of the obtained associations also varied as a function of the speakers' L1. Overall, the results suggest that both the task and the speakers' L1 play important roles in determining ease of understanding for the listener, with implications for pronunciation teaching in mixed-L1 classrooms and for operationalizing the construct of comprehensibility in assessments.