
    Translation methods and experience: a comparative analysis of human translation and post-editing with students and professional translators

    While the benefits of using post-editing for technical texts have been more or less acknowledged, it remains unclear whether post-editing is a viable alternative to human translation for more general text types. In addition, we need a better understanding of both translation methods and how they are performed by students as well as professionals, so that pitfalls can be identified and translator training can be adapted accordingly. In this article, we aim to get a better understanding of the differences between human translation and post-editing for newspaper articles. Processes were registered by means of eye tracking and keystroke logging, allowing us to study translation speed, cognitive load, and the usage of external resources. We also look at the final quality of the product as well as translators' attitude towards both methods of translation.

    Eye-tracking as a measure of cognitive effort for post-editing of machine translation

    The three measurements for post-editing effort as proposed by Krings (2001) have been adopted by many researchers in subsequent studies and publications. These measurements comprise temporal effort (the speed or productivity rate of post-editing, often measured in words per second or per minute at the segment level), technical effort (the number of actual edits performed by the post-editor, sometimes approximated using the Translation Edit Rate metric (Snover et al. 2006), again usually at the segment level), and cognitive effort. Cognitive effort has been measured using Think-Aloud Protocols, pause measurement, and, increasingly, eye-tracking. This chapter provides a review of studies of post-editing effort using eye-tracking, noting the influence of publications by Danks et al. (1997) and O’Brien (2006, 2008), before describing a single study in detail. The detailed study examines whether predicted effort indicators affect post-editing effort; the results were previously published as Moorkens et al. (2015). Most of the eye-tracking data analysed were unused in the previous publication.
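
    As a rough illustration of the three effort measures described above, the sketch below computes temporal effort as words per minute, approximates technical effort with a plain word-level edit distance (a simplification of the TER metric of Snover et al. 2006, which additionally counts block shifts), and uses mean fixation duration as one possible eye-tracking proxy for cognitive effort. It is not taken from the chapter; the function names and the sample segment are invented for illustration.

```python
# Hedged sketch: rough per-segment approximations of Krings' three
# post-editing effort measures. All names and values are hypothetical.

def temporal_effort(word_count: int, seconds: float) -> float:
    """Processing speed in words per minute for one segment."""
    return word_count / (seconds / 60.0)

def technical_effort(mt_output: str, post_edited: str) -> float:
    """Simplified stand-in for TER: word-level edit distance divided by the
    post-edited length. Real TER (Snover et al. 2006) also counts block shifts."""
    hyp, ref = mt_output.split(), post_edited.split()
    # classic dynamic-programming Levenshtein distance over tokens
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(hyp)][len(ref)] / max(len(ref), 1)

def cognitive_effort(fixation_durations_ms: list) -> float:
    """One common eye-tracking proxy: mean fixation duration on the segment."""
    return sum(fixation_durations_ms) / len(fixation_durations_ms)

segment = {
    "mt_output": "the cat sat in the mat",
    "post_edited": "the cat sat on the mat",
    "seconds": 14.0,
    "fixations_ms": [210, 185, 240, 305],
}
print(temporal_effort(len(segment["post_edited"].split()), segment["seconds"]))
print(technical_effort(segment["mt_output"], segment["post_edited"]))
print(cognitive_effort(segment["fixations_ms"]))
```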

    Measuring the difficulty of text translation: The combination of text-focused and translator-oriented approaches

    This paper explores the impact of text complexity on translators’ subjective perception of translation difficulty and on their cognitive load. Twenty-six MA translation students from a UK university were asked to translate three English texts of different complexity into Chinese. Their eye movements were recorded by an eye-tracker, and their cognitive load was self-assessed with a Likert scale before translation and NASA-TLX scales after translation. The results show that: (i) the intrinsic complexity measured by readability, word frequency and non-literalness was in line with the informants’ subjective assessment of translation difficulty; (ii) moderate and positive correlations existed between most items in the self-assessments and the indicators (fixation and saccade durations) obtained from the eye-tracking measurements; and (iii) the informants’ cognitive load, as indicated by fixation and saccade durations (but not pupil size), increased significantly in two of the three texts along with the increase in source text complexity.
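
    The correlation analysis reported above can be approximated with a short script of the following kind. This is a hedged sketch rather than the authors' code: the column names, the toy observations and the choice of Spearman's rho are assumptions, with each row standing for one participant-text observation.

```python
# Hedged sketch: correlating self-assessed difficulty with eye-tracking
# indicators across participant-text observations. Column names are
# hypothetical placeholders for a tidy export of the data.

import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "tlx_mental_demand": [35, 55, 80, 40, 60, 85],   # 0-100 NASA-TLX item
    "mean_fixation_ms":  [205, 231, 268, 198, 240, 275],
    "mean_saccade_ms":   [31, 34, 39, 30, 35, 41],
    "mean_pupil_mm":     [3.1, 3.2, 3.2, 3.0, 3.1, 3.2],
})

for indicator in ["mean_fixation_ms", "mean_saccade_ms", "mean_pupil_mm"]:
    rho, p = spearmanr(df["tlx_mental_demand"], df[indicator])
    print(f"{indicator}: rho={rho:.2f}, p={p:.3f}")
```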

    Some thoughts about the conceptual / procedural distinction in translation: a key-logging and eye-tracking study of processing effort

    This article builds on the conceptual / procedural distinction postulated by Relevance Theory to investigate processing effort in translation task execution. Drawing on relevance-theoretic assumptions, it assumes that instances related to procedural encodings will require more effortful processing, not only in relation to the time spent on the task but also in terms of product indicators such as seconds per word and number of micro translation units per word. Drawing on key-logging and eye-tracking data, the article shows that there are statistically significant differences when conceptual and procedural encodings are analysed in selected areas of interest, with instances related to procedural encoding requiring more processing effort to be translated. The results are relevant for translation process research as they signal where processing effort is predominantly located. Additionally, the discussion also contributes to validating experimentally some claims postulated by Relevance Theory. Research funded by CNPq, the Brazilian Research Council (grant 307964/2011-6), and FAPEMIG, the Research Agency of the State of Minas Gerais (grants SHA/PPM-00495-12 and SHA/PPM-00087-12).
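
    A minimal sketch of how the two product indicators named above, seconds per word and micro translation units per word, could be computed per area of interest from key-logging data. The AOI structure, the pause-based definition of a micro translation unit and the sample values are assumptions for illustration, not the study's actual pipeline.

```python
# Hedged sketch: per-AOI effort indicators from key-logging data.
# The records below are invented for illustration.

from dataclasses import dataclass

@dataclass
class AOI:
    label: str            # "conceptual" or "procedural" encoding
    word_count: int       # source words inside the area of interest
    seconds: float        # total production time logged for the AOI
    micro_units: int      # micro translation units: typing bursts separated
                          # by pauses above a chosen threshold

aois = [
    AOI("conceptual", 7, 21.4, 3),
    AOI("procedural", 4, 19.8, 5),
]

for aoi in aois:
    print(aoi.label,
          round(aoi.seconds / aoi.word_count, 2), "s/word,",
          round(aoi.micro_units / aoi.word_count, 2), "micro TUs/word")
```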

    Translators’ Use of Digital Resources during Translation

    This paper presents the findings from a study on translators’ use of digital resources during the translation process. Eye tracking data and screen recording data from 18 professional translators are analysed in order to 1) examine how much time translators spend on digital resource consultation compared with translation drafting and translation revision, 2) examine how eye movements differ between translation drafting, revision and digital resource consultation and 3) investigate what types of digital resources are used by translators. The findings demonstrate that digital resource consultation accounts for a considerable share of the translation process. The findings also show longer fixations and larger pupils during resource consultation, indicating heavier cognitive load. Finally, the study identifies considerable variation in the use of resources between translators.
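
    A hedged sketch of the kind of summary behind these findings: given fixations labelled by task phase, it reports each phase's share of fixation time, mean fixation duration and mean pupil size. The phase labels, column names and sample rows are invented; the authors' own processing may differ.

```python
# Hedged sketch: per-phase gaze summary (drafting, revision, consultation).
# Sample rows are invented for illustration.

import pandas as pd

fixations = pd.DataFrame({
    "phase":       ["drafting", "drafting", "consultation", "revision",
                    "consultation", "drafting", "revision"],
    "duration_ms": [220, 240, 310, 200, 295, 230, 210],
    "pupil_mm":    [3.0, 3.1, 3.4, 3.0, 3.5, 3.1, 3.0],
})

summary = fixations.groupby("phase").agg(
    # share of total fixation time spent in each phase (proxy for time share)
    time_share=("duration_ms", lambda d: d.sum() / fixations["duration_ms"].sum()),
    mean_fixation_ms=("duration_ms", "mean"),
    mean_pupil_mm=("pupil_mm", "mean"),
)
print(summary.round(2))
```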

    A translation robot for each translator?: a comparative study of manual translation and post-editing of machine translations: process, quality and translator attitude

    To keep up with the growing need for translation in today's globalised society, post-editing of machine translation is increasingly being used as an alternative to regular human translation. While presumably faster than human translation, it remains unclear whether the quality of a post-edited text is comparable to the quality of a human translation, especially for general text types. In addition, there is a lack of understanding of the post-editing process, the effort involved, and the attitude of translators towards it. This dissertation contains a comparative analysis of post-editing and human translation by students and professional translators for general text types from English into Dutch. We study process, product, and translators' attitude in detail.
    We first conducted two pretests with student translators to try out possible experimental setups and to develop a translation quality assessment approach suitable for a fine-grained comparative analysis of machine-translated texts, post-edited texts, and human translations. For the main experiment, we examined students and professional translators, using a combination of keystroke logging tools, eye tracking, and surveys. We used both qualitative analyses and advanced statistical analyses (mixed effects models), allowing for a multifaceted analysis. For the process analysis, we looked at translation speed, cognitive processing by means of eye fixations, and the usage of external resources and its impact on overall time. For the product analysis, we looked at overall quality, frequent error types, and the impact of using external resources on quality. The attitude analysis contained questions about perceived usefulness, perceived speed, perceived quality of machine translation and post-editing, and the translation method that was perceived as least tiring. One survey was conducted before the experiment, the other after, so we could detect changes in attitude after participation. In two more detailed analyses, we studied the impact of machine translation quality on various types of post-editing effort indicators, and on the post-editing of multi-word units.
    We found that post-editing is faster than human translation, and that both translation methods lead to products of comparable overall quality. The more detailed error analysis showed that post-editing leads to somewhat better results regarding adequacy, and human translation leads to better results regarding acceptability. The most common errors for both translation methods are meaning shifts, logical problems, and wrong collocations. Fixation data indicated that post-editing was cognitively less demanding than human translation, and that more attention was devoted to the target text than to the source text. We found that fewer resources are consulted during post-editing than during human translation, although the overall time spent in external resources was comparable. The most frequently used external resources were Google Search, concordancers, and dictionaries. Spending more time in external resources, however, did not lead to an increase in quality. Translators indicated that they found machine translation useful, but they preferred human translation and found it more rewarding. Perceptions about speed and quality were mixed. Most participants believed post-editing to be at least as fast and as good as human translation, but hardly ever better.
    We further discovered that different types of post-editing effort indicators were impacted by different types of machine translation errors, with coherence issues, meaning shifts, and grammatical and structural issues having the greatest effect. HTER, though commonly used, does not correlate well with more process-oriented post-editing effort indicators. Regarding the post-editing of multi-word units, we suggest 'contrast with the target language' as a useful new way of classifying multi-word units, as contrastive multi-word units were much harder to post-edit. In addition, we noticed that research strategies for post-editing multi-word units lack efficiency. Consulting external resources did lead to an increased quality of post-edited multi-word units, but a lot of time was spent in external resources when this was not necessary.
    Interestingly, the differences between human translation and post-editing usually outweighed the differences between students and professionals. Students did cognitively process texts differently, having longer fixation durations on the source text during human translation, and more fixations on the target text during post-editing, whereas professional translators' fixation behaviour remained constant. For the usage of external resources, only the time spent in dictionaries was higher for students than for professional translators; the usage of other resources was comparable. Overall quality was comparable for students and professionals, but professionals made fewer adequacy errors. Deletions were more noticeable for students than for professional translators in both methods of translation, and word sense issues were more noticeable for professional translators than for students when translating from scratch. Surprisingly, professional translators were often more positive about post-editing than students, believing they could produce products of comparable quality with both methods of translation. Students in particular struggled with the cognitive processing of meaning shifts, and they spent more time in pauses than professional translators.
    Some of the key contributions of this dissertation to the field of translation studies are the fact that we compared students and professional translators, developed a fine-grained translation quality assessment approach, and used a combination of state-of-the-art logging tools and advanced statistical methods. The effects of experience in our study were limited, and we suggest looking at specialisation and translator confidence in future work. Our guidelines for translation quality assessment can be found in the appendix, and contain practical instructions for use with brat, an open-source annotation tool. The experiment described in this dissertation is also the first to integrate Inputlog and CASMACAT, making it possible to include information on external resources in the CASMACAT logging files, which can be added to the CRITT Translation Process Research Database.
    Moving beyond the methodological contributions, our findings can be integrated in translation teaching, machine translation system development, and translation tool development. Translators need hands-on post-editing experience to get acquainted with common machine translation errors, and students in particular need to be taught successful strategies to spot and solve adequacy issues. Post-editors would greatly benefit from machine translation systems that made fewer coherence errors, meaning shift errors, and grammatical and structural errors.
    If visual cues are included in a translation tool (e.g., potentially problematic passages or polysemous words), these should be added to the target text. Tools could further benefit from integration with commonly used external resources, such as dictionaries.
    In the future, we wish to study the translation and post-editing process in even more detail, taking pause behaviour and regressions into account, as well as look at the passages participants perceived as the most difficult to translate and post-edit. We further wish to gain an even better understanding of the usage of external resources, by looking at the types of queries and by linking queries back to source and target text words. While our findings are limited to the post-editing and human translation of general text types from English into Dutch, we believe our methodology can be applied to different settings, with different language pairs. It is only by studying both processes in many different situations and by comparing findings that we will be able to develop tools and create courses that better suit translators' needs. This, in turn, will make for better, and happier, future generations of translators.
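
    The dissertation abstract above mentions mixed effects models and the impact of machine translation error types on post-editing effort indicators. The sketch below shows, under stated assumptions, what such a model could look like: a linear mixed-effects regression of per-segment post-editing time on counts of three error types, with a random intercept per participant. The variable names and toy data are invented, and the model is illustrative only, not the dissertation's actual analysis.

```python
# Hedged sketch: a linear mixed-effects model of post-editing time as a
# function of MT error counts, with a random intercept per participant.
# Variable names and the toy data are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "participant":    ["p1", "p1", "p1", "p2", "p2", "p2",
                       "p3", "p3", "p3", "p4", "p4", "p4"],
    "pe_seconds":     [34.0, 52.0, 61.0, 29.0, 47.0, 66.0,
                       38.0, 55.0, 70.0, 31.0, 44.0, 58.0],
    "coherence_errs": [0, 1, 2, 0, 1, 2, 0, 1, 3, 0, 1, 2],
    "meaning_shifts": [1, 1, 2, 0, 2, 3, 1, 1, 2, 0, 1, 2],
    "grammar_errs":   [0, 1, 1, 1, 1, 2, 0, 2, 3, 1, 1, 1],
})

model = smf.mixedlm(
    "pe_seconds ~ coherence_errs + meaning_shifts + grammar_errs",
    data, groups=data["participant"],
)
result = model.fit()
print(result.summary())
```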

    An Eye-tracking and Key-logging Study

    This study is an empirical investigation of translators’ allocation of cognitive resources, and its specific aim is to identify predictable behaviours and patterns of uniformity in translators’ allocation of cognitive resources in translation. The study falls within the process-oriented translation paradigm and within the more general field of cognitive psychology. Based on models of working memory, attentional control, language comprehension and language production, a theoretical framework was developed on which hypotheses were formulated and evaluated. The study’s empirical investigation fell into three major analyses, each of which dealt with one aspect of translators’ allocation of cognitive resources: distribution of cognitive resources, management of cognitive resources and cognitive load. Three indicators were identified: total attention duration (TA duration, measured in seconds) indicates the distribution of cognitive resources; attention unit duration (AU duration, measured in milliseconds) indicates the amount of time allocated between two attention shifts; and pupil size (measured in millimetres) indicates cognitive load, i.e. workload on working memory...
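
    A small sketch of how the three indicators defined above could be derived from a fixation stream tagged with its attention target (source text vs target text). The tagging scheme and sample values are assumptions; the original study's operationalisation is more elaborate.

```python
# Hedged sketch: TA duration, AU duration and pupil size from a stream of
# fixations tagged with their attention target ("ST" or "TT"). Sample data
# are invented.

fixations = [  # (target, duration_ms, pupil_mm)
    ("ST", 220, 3.1), ("ST", 250, 3.2), ("TT", 300, 3.4),
    ("TT", 280, 3.5), ("ST", 210, 3.1), ("TT", 260, 3.3),
]

# Total attention (TA) duration per target, in seconds.
ta = {}
for target, dur, _ in fixations:
    ta[target] = ta.get(target, 0) + dur
print({t: round(ms / 1000, 2) for t, ms in ta.items()})

# Attention units (AU): runs of consecutive fixations on the same target;
# AU duration is the time between two attention shifts, in milliseconds.
units, current_target, current_dur = [], None, 0
for target, dur, _ in fixations:
    if current_target is not None and target != current_target:
        units.append(current_dur)
        current_dur = 0
    current_target, current_dur = target, current_dur + dur
units.append(current_dur)
print("mean AU duration (ms):", round(sum(units) / len(units), 1))

# Pupil size as a cognitive-load index: mean pupil diameter in millimetres.
pupils = [p for _, _, p in fixations]
print("mean pupil size (mm):", round(sum(pupils) / len(pupils), 2))
```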

    Tracking Eye Movements in Sight Translation – the comprehension process in interpreting

    While the three components of interpreting have been identified as comprehension, reformulation, and production, the process of how these components occur has remained relatively unexplored. The present study employed the eye-tracking method to investigate the process of sight translation, a mode of interpreting in which the input is written rather than oral. The research focused especially on the comprehension component in sight translation, addressed the validity of the horizontal and the vertical perspectives of interpreting, and ascertained whether reading ahead exists in sight translation. Eye movements of 18 interpreting students were recorded during silent reading of a Chinese speech, reading aloud a Chinese speech, and Chinese to English sight translation. Since silent reading consists of the comprehension component while reading aloud consists of the comprehension and production components, the two tasks served as a basis of comparison for investigating comprehension in sight translation. The findings suggested that sight translation and silent reading were no different in the initial stage of reading, as reflected by similar first fixation duration, single fixation duration, gaze duration, fixation probability, and refixation probability. Sight translation only began to demonstrate differences from silent reading after first-pass reading, as shown by higher rereading time and rereading rate. Also, reading ahead occurred in 72.8% of cases in this experiment, indicating the overlap between reading and oral production in Chinese to English sight translation. The results supported the vertical perspective in interpreting as well as the claim of reading ahead. Implications for interpreter training are to attach more importance to paraphrasing skills and to focus more on the similarities between sight translation and simultaneous interpreting.
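
    The early and late reading measures compared above (first fixation duration, gaze duration, rereading time) can be illustrated with a short word-level computation over an ordered fixation sequence. This is a simplified sketch with invented data, not the study's analysis; single fixation duration, fixation probability and refixation probability would follow the same bookkeeping.

```python
# Hedged sketch: word-level reading measures from an ordered list of
# (word_index, duration_ms) fixations. The fixation sequence is invented.

from collections import defaultdict

fixations = [(0, 210), (1, 190), (1, 160), (2, 230), (1, 250), (3, 200)]

first_fix = {}                 # first fixation duration per word
gaze = defaultdict(int)        # gaze duration: first-pass time per word
rereading = defaultdict(int)   # rereading time: fixation time after first pass
left_word = set()              # words whose first pass has ended

prev = None
for word, dur in fixations:
    if prev is not None and word != prev:
        left_word.add(prev)    # leaving a word closes its first pass
    if word not in first_fix:
        first_fix[word] = dur
    if word in left_word:
        rereading[word] += dur
    else:
        gaze[word] += dur
    prev = word

for w in sorted(first_fix):
    print(f"word {w}: FFD={first_fix[w]}ms, gaze={gaze[w]}ms, "
          f"rereading={rereading[w]}ms")
```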

    Working Styles of Student Translators in Revision and Post-editing: an Empirical-Experimental Study with Eye-tracking, Keylogging and Cue-based Retrospection

    In today’s translation profession, being skilful at revision (including self-revision and other-revision) and post-editing tasks is becoming essential for translators. The exploration of the working styles of student translators in the revision and post-editing processes is vital in helping us to understand the nature of these tasks, and may help in improving pedagogy. Drawing on theories from translation-related studies, cognitive psychology, and text comprehension and production, the aims of this research were to: (1) identify the basic types of reading and typing activity (physical activities) of student translators in the processes of revision and post-editing, and to measure statistically and compare the duration of these activities within and across tasks; (2) identify the underlying purposes (mental activities) behind each type of reading and typing activity; (3) categorise the basic types of working style of student translators and compare the frequency of use of each working style both within and across tasks; (4) identify the personal working styles of student translators in carrying out different tasks, and (5) identify the most efficient working style in each task. Eighteen student translators from Durham University, with Chinese as L1 and English as L2, were invited to participate in the experiment. They were asked to translate, self-revise, other-revise and post-edit three comparable texts in Translog-II with the eye-tracking plugin activated. A cue-based retrospective interview was carried out after each session to collect the student translators’ subjective and conscious data for qualitative analysis. The raw logging data were transformed into User Activity Data and were analysed both quantitatively and qualitatively. This study identified seven types of reading and typing activity in the processes of self-revision, other-revision and post-editing. Three revision phases were defined and four types of working style were recognised. The student translators’ personal working styles were compared in all three tasks. In addition, a tentative model of their cognitive processes in self-revision, other-revision and post-editing was developed, and the efficiency of the four working styles in each task was tested
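
    As a very rough illustration of how stretches of User Activity Data might be labelled as reading or typing activity, the sketch below buckets logged events into fixed time windows and checks whether keystrokes and fixations co-occur. The window size, event labels and toy data are assumptions; the thesis's own seven-way classification is considerably finer-grained.

```python
# Hedged sketch: labelling time windows of User Activity Data as reading or
# typing activity. Thresholds, labels and the toy event data are hypothetical.

events = [  # (timestamp_ms, kind): "fixation_ST", "fixation_TT" or "key"
    (0, "fixation_ST"), (300, "fixation_ST"), (900, "fixation_TT"),
    (1100, "key"), (1300, "key"), (1600, "fixation_TT"), (2400, "fixation_ST"),
]

WINDOW_MS = 1000
windows = {}
for ts, kind in events:
    windows.setdefault(ts // WINDOW_MS, set()).add(kind)

for w in sorted(windows):
    kinds = windows[w]
    typing = "key" in kinds
    reading = {k[-2:] for k in kinds if k.startswith("fixation")}
    if typing and reading:
        label = "typing while reading"
    elif typing:
        label = "typing only"
    else:
        label = "reading " + "/".join(sorted(reading))
    print(f"window {w}: {label}")
```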