A Hybrid Machine Translation Framework for an Improved Translation Workflow
Over the past few decades, a continuing surge in the volume of content being translated, together with ever-increasing pressure to deliver high quality at high throughput, has led the translation industry to adopt advanced technologies such as machine translation (MT) and automatic post-editing (APE) in its workflows. Despite the progress of these technologies, the roles of humans and machines remain essentially intact, even as MT/APE moves from the periphery of the translation field towards collaborative human-machine MT/APE in modern translation workflows. Professional translators increasingly act as post-editors, correcting raw MT/APE output instead of translating from scratch, which in turn increases productivity in terms of translation speed. The last decade has seen substantial growth in research and development on improving MT, usually concentrating on selected aspects of the workflow, from training-data pre-processing techniques to core MT processes to post-editing methods. To date, however, complete MT workflows have been investigated far less than the core MT processes. In the research presented in this thesis, we investigate avenues towards improved MT workflows. We study how different MT paradigms can be utilized and integrated to best effect. We also investigate how different upstream and downstream component technologies can be hybridized to achieve overall improved MT. Finally, we investigate human-machine collaborative MT that keeps humans in the loop. In many (but not all) of the experiments presented in this thesis, we focus on the data scenarios of low-resource language settings.
Knowledge Patterns for the Web: extraction, transformation and reuse
This thesis investigates methods and software architectures for discovering the typical and frequently occurring structures used for organizing knowledge on the Web. We identify these structures as Knowledge Patterns (KPs). KP discovery needs to address two main research problems: the heterogeneity of sources, formats and semantics on the Web (i.e., the knowledge soup problem), and the difficulty of drawing a relevant boundary around data that captures the meaningful knowledge with respect to a certain context (i.e., the knowledge boundary problem). Hence, we introduce two methods that provide different solutions to these two problems by tackling KP discovery from two different perspectives: (i) the transformation of KP-like artifacts into KPs formalized as OWL2 ontologies; (ii) the bottom-up extraction of KPs by analyzing how data are organized in Linked Data. The two methods address the knowledge soup and boundary problems in different ways. The first method is based on a purely syntactic transformation of the original source to RDF, followed by a refactoring step that adds semantics by selecting meaningful RDF triples. The second method draws boundaries around RDF data in Linked Data by analyzing type paths. A type path is a possible route through an RDF graph that takes into account the types associated with the nodes along the path.
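The idea of abstracting a concrete path into a type path can be illustrated with a minimal sketch. The toy triples and helper names below are hypothetical illustrations, not from the thesis: each node in a path is replaced by its rdf:type values while the connecting properties are kept.

```python
# Toy RDF graph as (subject, predicate, object) triples (hypothetical data).
TRIPLES = [
    ("Bologna", "rdf:type", "City"),
    ("Italy", "rdf:type", "Country"),
    ("Bologna", "locatedIn", "Italy"),
    ("Marconi", "rdf:type", "Person"),
    ("Marconi", "bornIn", "Bologna"),
]

def types_of(node):
    """Return the rdf:type values attached to a node."""
    return {o for s, p, o in TRIPLES if s == node and p == "rdf:type"}

def type_path(path):
    """Abstract a concrete path [node, property, node, ...] into a type path:
    replace each node with its (sorted) types, keep the properties as-is."""
    out = []
    for i, item in enumerate(path):
        if i % 2 == 0:          # node position
            out.append(tuple(sorted(types_of(item))))
        else:                   # property position
            out.append(item)
    return out

# The concrete path Marconi -bornIn-> Bologna -locatedIn-> Italy
# abstracts to the type path Person -bornIn-> City -locatedIn-> Country.
print(type_path(["Marconi", "bornIn", "Bologna", "locatedIn", "Italy"]))
```

In the thesis's setting the paths come from Linked Data rather than a hard-coded triple list, but the abstraction step, nodes to types, is the same.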
We then present K~ore, a software architecture conceived as the basis for developing KP discovery systems, designed according to two software architectural styles: Component-based and REST.
Finally, we provide an example of KP reuse based on Aemoo, an exploratory search tool that exploits KPs for entity summarization.
Proceedings of the 17th Annual Conference of the European Association for Machine Translation (EAMT)
Enhancing Recommendations in Specialist Search Through Semantic-based Techniques and Multiple Resources
Information resources abound on the Internet, but mining them is a non-trivial task. This abundance has raised the need to enhance the services provided to users, such as recommendations. The purpose of this work is to explore how better recommendations can be provided to specialists in specific domains such as bioinformatics, by introducing semantic techniques that reason over different resources and by using specialist search techniques. Such techniques exploit semantic relations and hidden associations that arise from information overlapping among concepts in multiple bioinformatics resources such as ontologies, websites and corpora. This work therefore introduces a new method that reasons over different bioinformatics resources and then discovers and exploits relations and information that may not exist in the original resources. Such relations, for example sibling and semantic-similarity relations, may be discovered as a consequence of this information overlap and used to enhance the accuracy of recommendations on bioinformatics content (e.g. articles). In addition, this research introduces a set of semantic rules able to extract semantic information and relations inferred among various bioinformatics resources. The project delivers these semantic-based methods as part of a recommendation service within a content-based system. Moreover, it uses specialists' interests to enhance the recommendations, employing a method that collects user data implicitly and represents it as an adaptive ontological user profile for each user based on his/her preferences, contributing to more accurate recommendations for each specialist in the field of bioinformatics.
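One of the relations mentioned above, the sibling relation, can be sketched with a toy rule: two concepts are siblings if they share the same direct parent in an ontology's is-a hierarchy. The ontology terms and function names below are hypothetical illustrations, not taken from the thesis.

```python
# Toy bioinformatics-style ontology as child -> direct parent links
# (hypothetical terms, for illustration only).
IS_A = {
    "BLAST": "SequenceAlignmentTool",
    "ClustalW": "SequenceAlignmentTool",
    "SequenceAlignmentTool": "BioinformaticsTool",
    "GenBank": "SequenceDatabase",
}

def siblings(term):
    """Infer sibling terms: concepts that share the same direct parent."""
    parent = IS_A.get(term)
    if parent is None:
        return set()
    return {t for t, p in IS_A.items() if p == parent and t != term}

print(siblings("BLAST"))   # BLAST and ClustalW share a parent
print(siblings("GenBank")) # no other child of SequenceDatabase
```

A real system would apply such rules across several resources (ontologies, websites, corpora) and weight the inferred relations when ranking recommended articles; this sketch only shows the single-resource rule.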
Compositionality and Concepts in Linguistics and Psychology
cognitive science; semantics; language
EG-ICE 2021 Workshop on Intelligent Computing in Engineering
The 28th EG-ICE International Workshop 2021 brings together international experts working at the interface between advanced computing and modern engineering challenges. Many engineering tasks require open-world resolutions to support multi-actor collaboration, coping with approximate models, providing effective engineer-computer interaction, searching multi-dimensional solution spaces, accommodating uncertainty, including specialist domain knowledge, performing sensor-data interpretation and dealing with incomplete knowledge. While results from computer science provide much initial support for resolution, adaptation is unavoidable and, most importantly, feedback from addressing engineering challenges drives fundamental computer-science research. Competence and knowledge transfer goes both ways.