Exploring the role of phraseological knowledge in foreign language reading
Foreign language (FL) knowledge has been shown to contribute significantly to FL reading performance. Studies have contrasted the contribution of FL vocabulary and syntactic knowledge, following a dichotomous view of these components, producing mixed results. Despite the increasingly recognized formulaic nature of language, the contribution made by phraseological knowledge to reading ability has not been investigated systematically. This study examines the impact of a broader construct definition of linguistic knowledge – one that includes a phraseological component – in explaining variance in reading performance. Test scores of 418 learners of English as a foreign language (EFL) were modeled in a structural equation model, showing that a phraseological knowledge measure outperformed traditional syntactic and vocabulary measures in predicting reading comprehension variance. Additional insights into the role of phraseological knowledge were gained through verbal protocol analysis of 15 EFL learners answering reading comprehension items that targeted the understanding of phrasal expressions in written context. The findings hint at an underestimated but critical role of phraseological knowledge in FL reading, and are relevant to both the assessment and the teaching of EFL ability.
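The study itself used structural equation modeling; as a rough, simulated illustration of the underlying idea of a broader construct definition, one can compare the variance explained (R²) by a model with only traditional predictors against a model that adds a phraseology predictor. All data, coefficients, and variable names below are invented for illustration and do not reproduce the study's results:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 418  # mirrors the study's sample size; the data themselves are simulated

# Hypothetical, correlated test scores (invented generating coefficients)
vocab = rng.normal(size=n)
syntax = 0.6 * vocab + rng.normal(scale=0.8, size=n)
phraseology = 0.5 * vocab + 0.3 * syntax + rng.normal(scale=0.7, size=n)
reading = 0.2 * vocab + 0.2 * syntax + 0.5 * phraseology + rng.normal(scale=0.6, size=n)

# Traditional predictors only vs. broader construct including phraseology
X_trad = np.column_stack([vocab, syntax])
X_full = np.column_stack([vocab, syntax, phraseology])

r2_trad = LinearRegression().fit(X_trad, reading).score(X_trad, reading)
r2_full = LinearRegression().fit(X_full, reading).score(X_full, reading)
print(f"R2 traditional: {r2_trad:.2f}, with phraseology: {r2_full:.2f}")
```

In this simulation the fuller model necessarily explains more variance because the phraseology variable carries independent signal; the empirical question the study answers is whether real learner data behave the same way.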
Development and initial validation of a diagnostic computer-adaptive profiler of vocabulary knowledge
Vocabulary knowledge is key to the successful use of any language skill (Nation & Webb, 2011), and learning to map a particular meaning to an L2 form for a great number of words is therefore crucial for learners of a foreign language. Vocabulary assessments can play a facilitating role in this learning process, which is why there is now an abundance of assessment tools to measure lexical knowledge. However, few of these tests have undergone sophisticated validation, even after their release into the public domain. Although vocabulary tests are used in numerous pedagogical and research settings, there has been “relatively little progress in the development of new vocabulary tests” (Webb & Sasao, 2013, p. 263). Instead, conventionalized traditions are being reiterated without questioning. This PhD project set out to address this gap by developing a new diagnostic computer-adaptive measure of form-meaning link knowledge: the Vocabulary Knowledge Profiler.
The present test development project started from scratch by questioning the underlying assumptions and making design decisions based not only on theoretical considerations but also on empirical evidence. In a series of studies, three major weaknesses of existing vocabulary tests were problematized: (1) selection of item formats, (2) sampling in terms of unit of counting, frequency bands and representativeness, and (3) the general lack of validation evidence and validation models. These issues were explored across four studies in this thesis to design a novel instrument and gather initial validation evidence for it along the way.
The first set of studies presented in this thesis investigated the usefulness and informativeness of different item formats for vocabulary tests and found, in a comparison of four formats, that all show considerable measurement error, but that the multiple-choice (MC) format may be the most useful because it overestimates scores systematically. The second set of studies found support for the adoption of the lemma as an appropriate counting unit and for a new approach to frequency banding that takes into account the relative importance of frequency bands in terms of the coverage they provide. Based on these foundation studies, test specifications were drawn up and an item bank was created, which was subjected to a large-scale trial to admit functioning items to an item pool for creating a computer-adaptive test. A further study compared two different computer-adaptive algorithms for implementation in the test design, suggesting that a “floor first” design would generate more consistent and representative score profiles. For initial validation evidence, a final study related scores from the finished test to those of a reading comprehension measure. The findings of the studies presented throughout the thesis are then synthesized into an initial validation argument, structured according to Bachman and Palmer’s (2010) Assessment Use Argument, which outlines both the areas requiring further research before the launch of the test and the validation evidence collected to date. Together, these build a tentative argument that the Vocabulary Knowledge Profiler, and the diagnostic decisions made on the basis of its results, are beneficial to English as a foreign language (EFL) learners and teachers for classroom learning and teaching.
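The thesis does not spell out the “floor first” algorithm in this abstract, but one plausible reading is that testing proceeds band by band from the most frequent words upward until the learner's knowledge “floor” is located, rather than jumping adaptively across bands. The following is a minimal, hypothetical sketch under that assumption; the learner model, band count, item counts, and mastery threshold are all invented for illustration:

```python
import random

def simulate_learner(band):
    """Hypothetical learner: knowledge declines as word frequency drops."""
    p_known = max(0.05, 1.0 - 0.2 * (band - 1))
    return random.random() < p_known

def floor_first_profile(n_bands=8, items_per_band=10, mastery=0.8):
    """Sketch of a 'floor first' adaptive design: administer items from
    frequency bands in order from most to least frequent, stopping once
    a band's score falls below the mastery threshold.
    Returns per-band proportion-known scores."""
    profile = {}
    for band in range(1, n_bands + 1):
        correct = sum(simulate_learner(band) for _ in range(items_per_band))
        profile[band] = correct / items_per_band
        if profile[band] < mastery:
            break  # floor located; remaining, rarer bands left untested
    return profile
```

A design like this yields contiguous band-by-band profiles, which is one way such an algorithm could produce the “more consistent and representative score profiles” the study reports.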
Towards a comprehensive, empirical model of language assessment literacy across stakeholder groups: Developing the Language Assessment Literacy Survey
While scholars have proposed different models of language assessment literacy (LAL), these models have mostly comprised prescribed sets of components based on principles of good practice. As such, these models remain theoretical in nature, and represent the perspectives of language assessment researchers rather than stakeholders themselves. The project from which the current study is drawn was designed to address this issue through an empirical investigation of the LAL needs of different stakeholder groups. Central to this aim was the development of a rigorous and comprehensive survey which would illuminate the dimensionality of LAL and generate profiles of needs across these dimensions. This paper reports on the development of an instrument designed for this purpose: the Language Assessment Literacy Survey. We first describe the expert review and pretesting stages of survey development. Then we report on the results of an exploratory factor analysis based on data from a large-scale administration (N = 1086), where respondents from a range of stakeholder groups across the world judged the LAL needs of their peers. Finally, selected results from the large-scale administration are presented to illustrate the survey’s utility, specifically comparing the responses of language teachers, language testing/assessment developers and language testing/assessment researchers.
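The dimensionality analysis reported above rests on exploratory factor analysis. A minimal sketch of how such an analysis can be run on survey data follows; the responses are randomly generated and the two-factor structure is an assumption chosen purely for illustration, not the survey's actual result:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 1086, 8  # sample size mirrors the study; items invented

# Simulate Likert-style responses driven by two hypothetical latent dimensions
latent = rng.normal(size=(n_respondents, 2))
loadings = np.zeros((2, n_items))
loadings[0, :4] = 0.9   # items 1-4 load on factor 1
loadings[1, 4:] = 0.9   # items 5-8 load on factor 2
responses = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

# Fit a two-factor model with varimax rotation and inspect item loadings
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
print(np.round(fa.components_, 2))
```

In practice the number of factors is not fixed in advance but chosen from the data (e.g. via eigenvalues or parallel analysis), which is how a survey like this can reveal the dimensionality of LAL rather than presuppose it.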
The CEFR Companion Volume: Opportunities and challenges for language assessment
Over the last two decades, the Common European Framework of Reference for Languages (CEFR) has become the most influential tool of language policy-making in Europe and beyond. The publication of the companion volume (CEFR-CV) constitutes a new milestone for teaching, learning and assessing languages, and is a most timely reaction to common criticism of the framework. In addition to new scales, descriptors and competence levels, the CEFR-CV introduces new modalities and broadens the scope for mediation and plurilingual/cultural communication, thereby updating and extending previous construct definitions for increasingly digitized and diverse societies. Despite the CEFR’s major impact on the language testing industry, there is thus far scarce literature on how to operationalize the expanded framework of the CEFR-CV for assessment. In addition to the huge potential for innovative assessment tasks and formats, this raises questions with regard to construct definitions, task development, test quality assurance, and rating practices. This paper focuses on six noteworthy innovations of the CEFR-CV and discusses the opportunities and challenges each poses for assessment: (1) departure from the native-speaker norm, (2) stronger consideration of digital communication, (3) interlingual mediation, (4) intralingual mediation, (5) phonological awareness, and (6) the provision of richer descriptions of lower-level learner competencies.
When the Weather Determines the Price
Although the weather plays a central role in the utility an offering provides in several industries, this aspect has so far hardly been considered in pricing. Meteo-dynamic pricing takes this up by continuously varying the offer price on the basis of weather data. As the present study shows, this approach generates effective purchase impulses and meets with high customer acceptance.