Improving the translation environment for professional translators
When using computer-aided translation systems in a typical professional translation workflow, there are several stages at which there is room for improvement. The SCATE (Smart Computer-Aided Translation Environment) project investigated several of these aspects, both from a human-computer interaction point of view and from a purely technological side.
This paper describes the SCATE research with respect to improved fuzzy matching, parallel treebanks, the integration of translation memories with machine translation, quality estimation, terminology extraction from comparable texts, the use of speech recognition in the translation process, and human-computer interaction and interface design for the professional translation environment. For each of these topics, we describe the experiments performed and the conclusions drawn, providing an overview of the highlights of the entire SCATE project.
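Fuzzy matching, the first of the topics above, retrieves translation-memory entries whose source segment is merely similar (not identical) to the segment being translated. A minimal sketch of the idea, using character-level similarity from the Python standard library — the TM entries, query sentence and threshold are invented for illustration, and real CAT tools use more elaborate similarity measures than this:

```python
# Toy fuzzy matching against a translation memory (TM).
# TM contents and the 0.7 threshold are illustrative assumptions.
from difflib import SequenceMatcher

translation_memory = {
    "The patient should take the medicine daily.":
        "De patiënt moet het geneesmiddel dagelijks innemen.",
    "Store the vaccine at low temperature.":
        "Bewaar het vaccin bij lage temperatuur.",
}

def fuzzy_match(segment, tm, threshold=0.7):
    """Return (source, target, score) for the most similar TM source
    segment, or None if no entry reaches the threshold."""
    best = max(tm, key=lambda src: SequenceMatcher(None, segment, src).ratio())
    score = SequenceMatcher(None, segment, best).ratio()
    return (best, tm[best], score) if score >= threshold else None

match = fuzzy_match("The patient should take this medicine daily.",
                    translation_memory)
```

Here the query differs from the stored segment by one word, so the first TM entry is returned with a high score and its Dutch translation can be offered to the translator as a fuzzy match.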
Technology for large-scale translation of clinical practice guidelines : a pilot study of the performance of a hybrid human and computer-assisted approach
Background: The construction of EBMPracticeNet, a national electronic point-of-care information platform in Belgium, was initiated in 2011 to optimize quality of care by promoting evidence-based decision-making. The project involved, among other tasks, the translation of 940 EBM Guidelines of Duodecim Medical Publications from English into Dutch and French. Given the scale of the translation process, it was decided to use computer-aided translation performed by certified translators with limited expertise in medical translation. Our consortium used a hybrid approach in which a human translator was supported by a translation memory (using SDL Trados Studio), terminology recognition from medical termbases (using SDL Multiterm), and online machine translation. This has resulted in a validated translation memory that is now in use for the translation of new and updated guidelines.
Objective: The objective of this study was to evaluate the performance of the hybrid human and computer-assisted approach in comparison with translation unsupported by translation memory and terminology recognition. A comparison was also made with the translation efficiency of an expert medical translator.
Methods: We conducted a pilot trial in which two sets of 30 new and 30 updated guidelines were randomized to one of three groups. Comparable guidelines were translated (a) by certified junior translators without medical specialization using the hybrid method, (b) by an experienced medical translator without this support, and (c) by the same junior translators without the support of the validated translation memory. A medical proofreader, blinded to the translation procedure, evaluated the translated guidelines for acceptability and adequacy. Translation speed was measured by recording translation and post-editing time. The Human Translation Edit Rate was calculated as a metric to evaluate the quality of the translation. A further evaluation was made of translation acceptability and adequacy.
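The Human Translation Edit Rate used in the evaluation is, in essence, the minimum number of word-level edits needed to turn a draft translation into its human post-edited version, divided by the length of that reference. A minimal sketch of the computation — the example sentences are invented, and the study's own tooling may count edits somewhat differently (e.g. with shifts as a single operation):

```python
# Word-level HTER sketch: Levenshtein edits (insert, delete, substitute)
# between draft and post-edited reference, normalised by reference length.
def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def hter(draft, reference):
    hyp, ref = draft.split(), reference.split()
    return edit_distance(hyp, ref) / len(ref)

score = hter("the patient takes medicine daily",
             "the patient should take the medicine daily")
```

In this invented example the draft needs three edits (two insertions, one substitution) against a seven-word reference, giving an HTER of 3/7; lower values indicate a draft closer to the final translation.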
Results: The average number of words per guideline was 1,195 and the mean total translation time was 100.2 min/1,000 words. No meaningful differences were found in the translation speed for new guidelines. The translation of updated guidelines was 59 min/1,000 words faster (95% CI 2-115; P=.044) in the computer-aided group. Revisions due to terminology accounted for one third of the overall revisions by the medical proofreader.
Conclusions: Use of hybrid human and computer-aided translation by a non-expert translator makes the translation of updates of clinical practice guidelines faster and cheaper because of the benefits of translation memory. For the translation of new guidelines there was no apparent benefit in comparison with the efficiency of translation unsupported by translation memory, whether by an expert or a non-expert translator.
Language engineering - a champion for European culture
Language is key to culture. It is a direct cultural medium as well as a means of recording and providing access to non-lingual elements of culture. Language is also fundamental to a sense of cultural identity. For this reason, it is vital, in a changing Europe, that we preserve the multi-lingual character of our society in order to move successfully towards closer co-operation at a political, economic, and social level.
Language engineering is the application of knowledge of language to the development of computer software which can recognise, understand, interpret, and generate human language in all its forms.
The paper provides a high level view of the ‘state of the art’ in language engineering and indicates ways in which it will have a profound impact on our culture in the future. It shows how advances in language engineering are an important aid in maintaining cultural diversity in a multi-lingual European society, while enabling the development of social cohesion across cultural and national divides. It addresses issues raised by the prospect of the Multi-lingual Information Society, including education, human communication with technology and information management, as well as aspects of digital cities such as tele-presence in digital libraries, virtual art galleries and electronic museums. The paper raises the issue of language as a factor in cultural domination, showing the contribution that language engineering can make towards countering it.
The paper also raises a number of controversial issues concerning the benefits likely to arise from the ways in which language engineering will influence the culture of Europe.
Translation technologies. Scope, tools and resources
Translation technologies constitute an important new field of interdisciplinary study lying midway between computer science and translation. Its development in the professional world will largely depend on its academic progress and the effective introduction of translation technologies into the translator training curriculum. In this paper, different approaches to the subject are examined so as to provide a basis on which to conduct an internal analysis of the field of Translation technologies and to structure its content. Following criteria based on professional practice and on the idiosyncrasy of the computer tools and resources that play a part in translation activity, we present our definition of Translation technologies and a classification of the field into five blocks.
TectoMT – a deep-linguistic core of the combined Chimera MT system
Chimera is a machine translation system that combines the TectoMT deep-linguistic core with the phrase-based MT system Moses. For the English–Czech pair it also uses the Depfix post-correction system. All the components run on the Unix/Linux platform and are open source (available from the Perl repository CPAN and the LINDAT/CLARIN repository). The main website is https://ufal.mff.cuni.cz/tectomt. The development is currently supported by the QTLeap 7th FP project (http://qtleap.eu).
Translators' requirements for translation technologies: user study on translation tools
This dissertation investigates the needs of professional translators regarding translation technologies with the aim of suggesting ways to improve these technologies from the users' point of view. It mostly covers the topics of computer-assisted translation (CAT) tools, machine translation and terminology management. In particular, the work presented here examines three main questions: 1) what kind of tools do translators need to increase their productivity and income, 2) do existing translation tools satisfy translators' needs, and 3) how can translation tools be improved to cater to these needs. The dissertation is composed of nine previously published articles, which are included in the Appendix, while the methodology used and the results obtained in these studies are summarised in the main body of the dissertation.
The task of identifying user needs was approached from three different perspectives: 1) eliciting translators' needs by means of a user survey, 2) evaluation of existing CAT systems, and 3) analysis of the process of post-editing of machine translation. The data from the user survey were analysed using quantitative and qualitative data analysis techniques. The post-editing process was studied through quantitative measures of time and technical effort, as well as through a qualitative study of the actual edits.
The survey results demonstrated that the two crucial characteristics of CAT software are usability and functionality. The survey also helped to distinguish the features translators find most useful in their software, such as support for many different document formats, concordance search, autopropagation and autosuggest functions. Based on these preferences, an evaluation scheme for CAT software was developed. Various ways of improving CAT software usability and functionality were proposed, including making better use of textual corpora techniques and providing different versions of software with respect to the required level of functionality.
Another major concern of the survey respondents was the quality of machine translation and its usefulness for creating draft translations for post-editing. Accordingly, a part of this dissertation is dedicated to the evaluation of machine translation and to an investigation of the post-editing process. The findings of these studies showed which machine translation errors are easier to post-edit, which can be of practical use for improving the post-editing workflow.
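Concordance search, one of the CAT features the surveyed translators found most useful, lets a translator look up how a term has been translated across previously aligned segments. A toy sketch of the idea — the bilingual corpus and query are invented for illustration, and real CAT tools add ranking, highlighting and morphological matching on top of this:

```python
# Toy concordance search over an aligned bilingual corpus.
# The (source, target) segment pairs below are illustrative assumptions.
corpus = [
    ("clinical practice guideline", "klinische praktijkrichtlijn"),
    ("practice makes perfect", "oefening baart kunst"),
    ("guideline for translators", "richtlijn voor vertalers"),
]

def concordance(query, corpus):
    """Return all aligned segment pairs whose source side contains the query."""
    q = query.lower()
    return [(src, tgt) for src, tgt in corpus if q in src.lower()]

hits = concordance("guideline", corpus)
```

For the query "guideline", two of the three segment pairs are returned, showing the translator both contexts in which the term has previously been rendered.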