7 research outputs found

    Trek (Winnie Rust)

    No abstract.

    Koning van Katoren.


    Mieke rock uit.

    Mieke rock uit. Jana du Plessis. Kaapstad: Tafelberg, 2012. 224 pp. ISBN: 978-0-624-05457-3.
    Daan Dreyer se blou geranium. Derick B. van der Walt. Kaapstad: Kwela Boeke, 2012. 224 pp. ISBN: 978-0-7957-0416-1.

    Donatello en Volksie (Marion Erskine)


    Assessing spoken-language educational interpreting : measuring up and measuring right

    CITATION: Foster, L. & Cupido, A. 2017. Assessing spoken-language educational interpreting : measuring up and measuring right. Stellenbosch Papers in Linguistics Plus, 53:119-132, doi:10.5842/53-0-736.
    The original publication is available at http://spilplus.journals.ac.za/
    This article primarily presents a critical evaluation of the development and refinement of the assessment instrument used to formally assess the spoken-language educational interpreters at Stellenbosch University (SU). Research on interpreting quality has tended to produce varying perspectives on what quality might entail (cf. Pöchhacker 1994, 2001; Kurz 2001; Kalina 2002; Pradas Marcías 2006; Grbić 2008; Moser-Mercer 2008; Alonso Bacigalupe 2013). Consequently, there is no ready-made, universally accepted or applicable mechanism for assessing quality. The need for both an effective assessment instrument and regular assessments at SU is driven by two factors. Firstly, a link exists between the quality of the service provided and the extent to which that service remains sustainable: plainly put, if the educational interpreting service wishes to remain viable, the quality of the interpreting product needs to be more than merely acceptable. Secondly, and more importantly, educational interpreters play an integral role in students' learning experience at SU by relaying the content of lectures. Interpreting quality could potentially have serious ramifications for students, and quality assessment is therefore imperative. Two assessment formats are used within the interpreting service, each with a different focus. The development and refinement of the assessment instrument for formal assessments discussed in this article have been ongoing since 2011. The main aim has been to devise an instrument that can be used to assess spoken-language interpreting in the university classroom. Complicating factors have included the various ways in which communication occurs in the classroom and the different sociocultural backgrounds and levels of linguistic proficiency of users. The secondary focus is on the nascent system of peer assessment. This system and the various incarnations of the peer assessment instrument are discussed. Linkages (and the lack thereof) between the two systems are briefly described.
    http://spilplus.journals.ac.za/pub/article/view/736
    Publisher's version

    Quality-assessment expectations and quality-assessment reality in educational interpreting : an exploratory case study

    CITATION: Foster, L. 2014. Quality-assessment expectations and quality-assessment reality in educational interpreting : an exploratory case study. Stellenbosch Papers in Linguistics Plus, 43:87-102, doi:10.5842/43-0-207.
    The original publication is available at http://spilplus.journals.ac.za
    This article focuses on data obtained from three separate studies conducted over a four-year period at Stellenbosch University, a higher education institution in South Africa. All three studies centred on the simultaneous interpretation of undergraduate lectures. Various data sets were used to examine whether there would be a discrepancy between what lecturers in a particular academic department emphasised when they first considered the feasibility of this type of educational interpreting and what they actually focused on when assessing the interpreters' performance. Discrepancies and correlations in the quality criteria identified by lecturers were examined against a rubric taken from the existing literature on interpreter assessment (notably that of Kurz (2002)). Using this information, augmented with comments from a similar assessment of the same material undertaken by experienced interpreters, these discrepancies and correlations are briefly discussed. Given the exploratory nature of this case study, few recommendations are made. However, the fact that the data from this study seem, in broad terms, to agree with studies conducted in the field of conference interpreting would seem to indicate that the discrepancy between stated and actual quality-assessment criteria is real and will require much more detailed study in an educational interpreting setting.
    Publisher's version