    Moving Boundaries in Translation Studies

    Translation is in motion. Both translation practice and translation studies (TS) have seen considerable innovation in recent decades, and we are currently witnessing a wealth of new approaches and concepts, some of which reflect new translation phenomena, whereas others mirror new scholarly foci. Volunteer translation, crowdsourcing, virtual translator networks, transediting, and translanguaging are only some examples of practices and notions that are emerging on the scene alongside a renewed focus on well-established concepts that have traditionally been considered peripheral to the practice and study of translation: intralingual and intersemiotic translation are cases in point. At the same time, technological innovation and global developments such as the spread of English as a lingua franca are affecting wide areas of translation and, with it, translation studies. These trends are currently pushing or even crossing our traditional understandings of translation (studies) and its boundaries. The question is: how to deal with these developments? Some areas of the translation profession seem to respond by widening their borders, adding new practices such as technical writing, localisation, transcreation, or post-editing to their job portfolios, whereas others seem to be closing ranks. The same trend can be observed in the academic discipline: some branches of translation studies are eager to embrace all new developments under the TS umbrella, whereas others tend to dismiss (some of) them as irrelevant or as merely reflecting new names for age-old practices. Technological developments, digitalisation and globalisation are among the many factors affecting and changing translation and, with it, translation studies. Moving Boundaries in Translation Studies offers a bird’s-eye view of recent developments and discusses their implications for the boundaries of the discipline. With 15 chapters written by leading translation scholars from around the world, the book analyses new translation phenomena, new practices and tools, new forms of organisation, new concepts and names as well as new scholarly approaches and methods. This is key reading for scholars, researchers and advanced students of translation and interpreting studies. The Open Access version of this book, available at http://www.taylorfrancis.com, has been made available under a Creative Commons Attribution-Non Commercial-No Derivatives 4.0 license.

    The Localisation of Video Games

    The present thesis is a study of the translation of video games, with a particular emphasis on the Spanish-English language pair, although other languages are brought into play when they offer a clearer illustration of a particular point in the discussion. On the one hand, it offers a descriptive analysis of the video game industry understood as a global phenomenon in entertainment, with the aim of understanding the norms governing present game development and publishing practices. On the other hand, it discusses particular translation issues that seem to be unique to these entertainment products due to their multichannel and polysemiotic nature, in which verbal and nonverbal signs are intimately interconnected in search of maximum game interactivity. Although this research positions itself within the theoretical framework of Descriptive Translation Studies, it goes beyond the mere accounting of current processes to propose changes whenever professional practice seems unable to rid itself of old, unsatisfactory habits. Multidisciplinary in nature, the thesis is greatly informed by various areas of knowledge, such as audiovisual translation, software localisation, computer-assisted translation and translation memory tools, comparative literature, and video game production and marketing, amongst others. The conclusions are an initial breakthrough for research into this new area, challenging some of the basic tenets currently held in translation studies thanks to the thesis's multidisciplinary approach and its solid grounding in current game localisation industry practice. The results can be useful for boosting professional quality and for promoting the training of translators in video game localisation in higher education centres.

    Semantic Systems. The Power of AI and Knowledge Graphs

    This open access book constitutes the refereed proceedings of the 15th International Conference on Semantic Systems, SEMANTiCS 2019, held in Karlsruhe, Germany, in September 2019. The 20 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They cover topics such as web semantics and linked (open) data; machine learning and deep learning techniques; semantic information management and knowledge integration; terminology, thesaurus and ontology management; data mining and knowledge discovery; and semantics in blockchain and distributed ledger technologies.

    Language technologies for a multilingual Europe

    This volume of the series “Translation and Multilingual Natural Language Processing” includes most of the papers presented at the Workshop “Language Technology for a Multilingual Europe”, held at the University of Hamburg on September 27, 2011, in the framework of the GSCL 2011 conference on “Multilingual Resources and Multilingual Applications”, along with several additional contributions. In addition to an overview article on Machine Translation and two contributions on the European initiatives META-NET and Multilingual Web, the volume includes six full research articles. Our intention with this workshop was to bring together various groups concerned with the umbrella topics of multilingualism and language technology, especially multilingual technologies. This encompassed, on the one hand, representatives from research and development in the field of language technologies and, on the other hand, users from diverse areas such as industry, administration and funding agencies, among others. The Workshop “Language Technology for a Multilingual Europe” was co-organised by the two GSCL working groups “Text Technology” and “Machine Translation” (http://gscl.info) as well as by META-NET (http://www.meta-net.eu).

    Text-detection and -recognition from natural images

    Text detection and recognition from images could have numerous functional applications for document analysis, such as assistance for visually impaired people; recognition of vehicle license plates; evaluation of articles containing tables, street signs, maps, and diagrams; keyword-based image exploration; document retrieval; recognition of parts within industrial automation; content-based extraction; object recognition; address block location; and text-based video indexing. This research exploited the advantages of artificial intelligence (AI) to detect and recognise text from natural images; machine learning and deep learning were used to accomplish this task.

    We conducted an in-depth literature review of current detection and recognition methods to identify the existing challenges: differences in text alignment, style, size, and orientation, combined with low image contrast and complex backgrounds, make automatic text extraction a considerably challenging task. Consequently, state-of-the-art approaches obtain low detection rates (often less than 80%) and recognition rates (often less than 60%), which has led to the development of new approaches. The aim of the study was to develop a robust method for text detection and recognition from natural images with high accuracy and recall, which was used as the target of the experiments. The method should detect all the text in scene images, despite the specific features associated with the text pattern. Furthermore, we aimed to find a solution to the two main problems of detecting and recognising arbitrarily shaped text (horizontal, multi-oriented, and curved) in low-resolution scenes and at various scales and sizes.

    We propose a methodology to handle the problem of text detection by using novel feature combination and selection to drive the classification of text/non-text regions (a minimal illustrative sketch of this pipeline follows the abstract). Text-region candidates were extracted from grey-scale images using the MSER technique, and a machine learning-based method was then applied to refine and validate the initial detection. The effectiveness of features based on the aspect ratio and on GLCM, LBP, and HOG descriptors was investigated, and text-region classifiers (MLP, SVM, and RF) were trained on selections of these features and their combinations. The publicly available ICDAR 2003 and ICDAR 2011 datasets were used for evaluation. The method achieved state-of-the-art performance on both databases, with significant improvements in Precision, Recall, and F-measure; the F-measure was 81% for ICDAR 2003 and 84% for ICDAR 2011. The results showed that a suitable feature combination and selection approach can significantly increase the accuracy of the algorithms.

    A new dataset is proposed to fill the gap in character-level annotation and in the availability of text in different orientations, including curved text. The dataset was created particularly for deep learning methods, which require a massive and varied range of training data; it includes 2,100 images annotated at the character and word levels, yielding 38,500 samples of English characters and 12,500 words. Furthermore, an augmentation tool is proposed to support the dataset: because no suitable augmentation tool for object detection was available, the proposed tool was built to update the positions of bounding boxes after transformations are applied to the images. This technique increases the number of samples in the dataset and reduces annotation time, since no re-annotation is required.

    The final part of the thesis presents a novel approach to text spotting: a framework for an end-to-end character detection and recognition system designed around an improved SSD convolutional neural network, in which layers are added to the SSD network and the aspect ratio of characters is taken into account, since it differs from that of other objects. Compared with the other methods considered, the proposed method detects and recognises characters by training the end-to-end model completely. Its performance was best on the proposed dataset, at 90.34; the F-measure on ICDAR 2015, ICDAR 2013, and SVT was 84.5, 91.9, and 54.8, respectively, and on ICDAR 2013 the method achieved the second-best accuracy. The proposed method can spot arbitrarily shaped (horizontal, oriented, and curved) scene text.
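    The following is a minimal, illustrative sketch of the candidate-extraction and classification pipeline named in the abstract above (MSER proposals, aspect-ratio and HOG features, a text/non-text classifier). The library choices (OpenCV, scikit-image, scikit-learn), the 32x32 patch size and the RBF-kernel SVM are assumptions made here for illustration; the thesis also evaluates GLCM and LBP descriptors and MLP/RF classifiers, which are omitted.

    import cv2
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    def candidate_regions(gray):
        # Extract text-region candidates from a grey-scale image with MSER.
        mser = cv2.MSER_create()
        _, bboxes = mser.detectRegions(gray)
        return bboxes  # each box is (x, y, w, h)

    def region_features(gray, box):
        # Describe one candidate by its aspect ratio plus a HOG descriptor
        # of the resized patch (GLCM and LBP features are omitted here).
        x, y, w, h = box
        patch = cv2.resize(gray[y:y + h, x:x + w], (32, 32))
        return np.hstack([w / float(h),
                          hog(patch, pixels_per_cell=(8, 8),
                              cells_per_block=(2, 2))])

    def detect_text(gray, clf):
        # Keep only the candidates the trained classifier validates as text.
        boxes = candidate_regions(gray)
        feats = np.array([region_features(gray, b) for b in boxes])
        return [b for b, is_text in zip(boxes, clf.predict(feats)) if is_text]

    # Training data (X, y with text = 1, non-text = 0) would be assembled from
    # character-level annotations such as ICDAR 2003 / 2011, e.g.:
    # clf = SVC(kernel="rbf").fit(X, y)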

    Cultural impacts on web: An empirical comparison of interactivity in websites of South Korea and the United Kingdom

    This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University.

    This thesis explores cultural differences in the interactive design features used in websites of South Korea and the United Kingdom, from the perspectives of both professional website designers and end-users. It also investigates how the use of interactive design features from different cultures changes over time. Four interaction types on websites were identified: User to Interface (U2I), User to Content (U2C), User to Provider (U2P), and User to User (U2U) interactivity; along with three interaction types on blogs: Blogger to Interface (B2I), Blogger to Content (B2C) and Blogger to Blogger (B2B) interactivity. Four cultural dimensions formed the theoretical base of the study, on which four hypotheses were proposed in relation to the interaction types identified above, in order to discover the effects of national cultures on interactivity in websites: (a) high- versus low-context cultures for U2I, (b) high versus low uncertainty avoidance for U2C, (c) high versus low power distance for U2P, and (d) individualism versus collectivism for U2U interactivity. We derived our own interactivity dimensions and mapped them to the four interaction types for websites and the three for blogs; interactive design features were derived from these interactivity dimensions and examined in our studies. The findings revealed some movement towards homogeneity in the use of interactive design features on charity websites between South Korea and the United Kingdom, although there is still evidence of some cultural differences. With regard to the end-users’ perspective, the results show that the use of interactive design features of blogs may be influenced by culture, but only within a certain context. The findings also provide a valuable indication that users interacting within the same blog service can be regarded as sharing concerns rather than a national location, thus creating a particular type of community in which bloggers are affected by social influence and adopt a shared set of values, preferences and styles that indicate an almost common social culture. As a result, the cultural differences derived from their country of origin do not have as much impact.

    Challenges to knowledge representation in multilingual contexts

    To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, it is evident that new forms of organisation are being adopted, fostering more intensive collaboration processes and sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly take place in multilingual settings, which implies overcoming the difficulties inherent in the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, and its methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques.

    In this workshop, six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in the development of ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to traverse Wikipedia optimally in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested.

    In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences (a small illustrative sketch of this family of measures follows this overview). The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used, so that the similarity measures could be verified against objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community.

    In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured-information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves.

    In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues, the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification.

    In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms that extract specialized language from textual data (legal documents); its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion.

    Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large, multilingual academic institutions, the model used by translators working within European Union institutions. The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting the conceptual linking and connections of words with concepts, presented according to their semantic and linguistic meaning.

    The organizers hope that the selection of papers presented here will be of interest to a broad audience, and will be a starting point for further discussion and cooperation.
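    As a concrete illustration of the family of feature-based similarity measures compared in Kano's paper, the following is a minimal sketch of Tversky's ratio model, a classic feature-based measure from the cognitive sciences. The function, the weights and the toy feature sets are illustrative assumptions only; they are not drawn from the paper's UIS datasets, and the Bayesian Model of Generalization that the author found most effective is not reproduced here.

    def tversky(a, b, alpha=0.5, beta=0.5):
        # Tversky's ratio model: similarity of two concepts represented as
        # feature sets; alpha and beta weight each side's distinctive features.
        common = len(a & b)
        return common / (common + alpha * len(a - b) + beta * len(b - a))

    # Toy example: map a source concept onto the closer of two candidate
    # concepts from another (hypothetical) educational ontology.
    primary_src = {"primary", "public", "starts_age_6"}
    primary_tgt = {"primary", "public", "starts_age_7"}
    secondary_tgt = {"secondary", "private", "starts_age_12"}
    print(tversky(primary_src, primary_tgt))    # ~0.67: mostly shared features
    print(tversky(primary_src, secondary_tgt))  # 0.0: no shared features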