The implications of a newly developed oral test in Business English: Are we heading in the right direction?

Abstract

The paper presents an overview of the testing and assessment standardization process at the Language Centre of Masaryk University. It exemplifies the process by analysing the two-year development of the C1 Business English oral test administered to students at the Faculty of Economics and Administration (FEA), which resulted in a completely new testing procedure. The transition from the original teacher–student interview format to a monological discourse and a peer-to-peer discussion, with the roles of interlocutor and rater split between two teachers using analytic rating scales to evaluate performance, is described, along with its implications for the validity and reliability of assessment. Students’ perception of the test’s importance is also examined. The second part analyses a feedback questionnaire completed by students who took the test in Spring 2014. A preliminary assessment of these efforts indicates a noticeable enhancement in the quality, reliability, validity and prestige of the oral test.