33 research outputs found

    The Italian Retranslations of Virginia Woolf's To the Lighthouse: A Corpus-based Literary Analysis

    The research goal is to clarify how and to what degree the modernist style and features of Virginia Woolf’s To the Lighthouse are rendered in the eleven Italian retranslations of this novel, and whether these can be characterised as modernist novels themselves. A suitable methodology has been developed, drawing on existing corpus methods for descriptive translation studies. Empirical evidence of the differences between target texts has been found, which in many cases has been interpreted as due to the translators’ voice or thumb-prints. The present research conducts a systematic literary comparison of the retranslations by adopting a mixed-method, bottom-up (inductive) approach built on an empirical corpus. This corpus is specifically tailored to identify and study both linguistic and non-linguistic modernist features throughout the texts, such as stream of consciousness (indirect interior monologue) and free indirect speech. All occurrences are analysed in this thesis through computations of inferential and comparative statistics such as lexical variety and lexical frequency. The target texts were digitised, and the resulting text files were then analysed using a bespoke, novel computer program capable of functions not provided by commercially available software such as WordSmith Tools and WMatrix. Not only did this methodology enable in-depth explorations of micro- and macro-textual features, but it also allowed a mixed-method approach combining close-reading qualitative analysis with systematic quantitative comparisons. The empirical results identify a progressive source-text orientation of the retranslations of Woolf’s style in a few aspects of a few target texts. The translators’ presence affected the register and style of all eleven target texts, under the influence of the Italian translation norms usually attributed to the translation of literary classics.
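The lexical statistics mentioned above (lexical variety and lexical frequency) can be illustrated with a minimal sketch. The sample sentences, the simple tokeniser, and the type-token ratio as a measure of lexical variety are illustrative assumptions, not material from the thesis or its bespoke program.

```python
# Minimal sketch: lexical variety (type-token ratio) and relative word
# frequencies for comparing target texts. Illustrative only.
from collections import Counter
import re

def tokens(text):
    """Lowercase word tokens; a deliberately simple tokeniser."""
    return re.findall(r"[a-zàèéìòù]+", text.lower())

def type_token_ratio(text):
    """Lexical variety: distinct word types divided by total tokens."""
    toks = tokens(text)
    return len(set(toks)) / len(toks) if toks else 0.0

def relative_frequencies(text):
    """Each word's share of the total token count."""
    toks = tokens(text)
    counts = Counter(toks)
    total = len(toks)
    return {w: c / total for w, c in counts.items()}

# Two invented fragments standing in for different target texts.
translation_a = "Il faro era lontano, lontano oltre il mare grigio."
translation_b = "La luce del faro brillava oltre il mare, oltre la nebbia."

print(round(type_token_ratio(translation_a), 2))
print(relative_frequencies(translation_b)["oltre"])
```

Comparing such ratios and frequency profiles across translations is one simple way differences between target texts become quantifiable.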

    Processing of Military Terminology in the UNL System of the United Nations (UN) UNDL Project and in Multilingual Applications

    The present research investigates the factors contributing to errors and complications in processing military terminology in multilingual military applications involving written or spoken texts. The analysis focuses on the Universal Networking Language (UNL) of the United Nations UNDL (Universal Networking Digital Language Foundation) project, and in particular on the Universal Words (UWs) constituting military terms. UWs, the basic unit of the UNL framework, enable access to multilingual documents and information, allowing users to communicate in their native language. Documents and information processed within the UNL framework may be subjected to multilingual Information Extraction, Information Retrieval, Data Mining and Machine Translation (MT) as used in military applications. The identification, prediction and resolution of errors and complications in processing military terminology are of special importance in these applications. Predicting and resolving errors in processing military terminology is also crucial for the ontologies and Controlled Languages used in Human-Computer Interaction applications, Dialogue Systems and any application involving Natural Language Processing. For the efficiency and usability of multilingual military applications, general multi-purpose strategies are proposed for the resolution of the specified error types, targeting correctness, precision, speed and efficiency, combined with the expert knowledge and user requirements of military personnel. The proposed strategies include the definition of a sublanguage for modern military terminology and are directed towards the development of a general model for interfaces, editors and language models for multilingual military applications.

    Exploring Intersemiotic Translation Models -- A Case Study of Ang Lee’s Films

    Roman Jakobson’s notion of intersemiotic translation provides an opportunity for translation studies scholars to respond to the broad move from the dominance of writing to the dominance of the medium of the image. Due to the linguistic bias of translation studies, however, intersemiotic translation has yet to receive systematic attention. The present research is thus designed to respond to this under-discussed yet growing phenomenon in the age of digitalization, and aims to contribute an understanding of intersemiotic translation by focusing on film as one of its most notable instances. Though intersemiotic translation enables film to be discussed through the prism of translation studies, past research in this area, which perceives film as a transmission from verbal signs to non-verbal signs, oversimplifies the mechanism of film-making. This comes at a price: researchers neglect the fact that other parameters of film language, such as cinematography, performance, setting and sound, are governed by audio-visual patterns included in film’s other prior materials. To remedy this deficiency, a rigorous investigation of these audio-visual patterns has been carried out, providing answers to the research question: how do intersemiotic translators translate? In this dissertation, these quality-determining audio-visual patterns are considered the film-maker’s intersemiotic translation models, which provide translation solutions for verbal text segments in the screenplay. Using elements from Even-Zohar’s polysystem theory and Rey Chow’s theory of cultural translation, a multi-levelled system of intersemiotic translation is proposed, comprising two levels: cultural and semiotic.
    In this system, each intersemiotic translation model is considered the result of a cross-level combination relating a specific type of semiotic system to a specific cultural system, employed in one or several parameters of film ‘language’. These intersemiotic translation models and their functions are explored through case studies of three of Ang Lee’s films: Crouching Tiger, Hidden Dragon; Lust, Caution; and Life of Pi.

    Semantic discovery and reuse of business process patterns

    Patterns currently play an important role in modern information systems (IS) development, though their use has mainly been restricted to the design and implementation phases of the development lifecycle. Given the increasing significance of business modelling in IS development, patterns have the potential to provide a viable solution for promoting the reusability of recurrent generalized models in the very early stages of development. As a statement of research in progress, this paper focuses on business process patterns and proposes an initial methodological framework for the discovery and reuse of business process patterns within the IS development lifecycle. The framework borrows ideas from the domain engineering literature and proposes the use of semantics to drive both the discovery of patterns and their reuse.

    Specialised Languages and Multimedia. Linguistic and Cross-cultural Issues

    This book collects academic works focusing on scientific and technical discourse and on the ways in which this type of discourse appears in, or is shaped by, multimedia products. The originality of this book lies in the variety of approaches used and of the specialised languages investigated in relation to multimodal and multimedia genres. Contributions focus in particular on: new multimodal or multimedia forms of specialised discourse (in institutional, academic, technical, scientific, social or popular settings); linguistic features of specialised discourse in multimodal or multimedia genres; the popularisation of specialised knowledge in multimodal or multimedia genres; the impact of multimodality and multimediality on the construction of scientific and technical discourse; the impact of multimodality/multimediality on the practice and teaching of language; the impact of multimodality/multimediality on the practice and teaching of translation; new multimedia modes of knowledge dissemination; and the translation/adaptation of scientific discourse in multimedia products. This volume contributes to the theory and practice of multimodal studies and translation, with a specific focus on specialized discourse. (Manca, E.; Bianchi, F. Rivista di Classe A - Volume speciale.)

    Grammalepsy

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. Collecting and recontextualizing writings from the last twenty years of John Cayley's research-based practice of electronic literature, Grammalepsy introduces a theory of aesthetic linguistic practice developed specifically for the making and critical appreciation of language art in digital media. As he examines the cultural shift away from traditional print literature and the changes in our culture of reading, Cayley coins the term “grammalepsy” to inform those processes by which we make, understand, and appreciate language. Framing his previous writings within the overall context of this theory, Cayley eschews the tendency of literary critics and writers to reduce aesthetic linguistic making, even when it has multimedia affordances, to “writing.” Instead, Cayley argues that electronic literature and digital language art allow aesthetic language makers to embrace a compositional practice inextricably involved with digital media, which cannot be reduced to print-dependent textuality.

    Advancing Fine-Grained Emotion Recognition in Short Text

    Advanced emotion recognition in text is essential for developing intelligent affective applications, which can recognize, react upon, and analyze users' emotions. Our particular motivation for solving this problem lies in large-scale analysis of social media data, such as those generated by Twitter users. Summarizing users' emotions can enable a better understanding of their reactions, interests, and motivations. We thus narrow the problem to emotion recognition in short text, particularly tweets. Another driving factor of our work is to enable discovering emotional experiences at a detailed, fine-grained level. While many researchers focus on recognizing a small number of basic emotion categories, humans experience a larger variety of distinct emotions. We aim to recognize as many as 20 emotion categories from the Geneva Emotion Wheel. Our goal is to study how to build such fine-grained emotion recognition systems. We start by surveying prior approaches to building emotion classifiers. The main body of this thesis studies two of them in detail: crowdsourcing and distant supervision. Based on them, we design fine-grained domain-specific systems to recognize users' reactions to sporting events captured on Twitter and address multiple challenges that arise in that process. Crowdsourcing allows extracting affective commonsense knowledge by asking hundreds of workers for manual annotation. The challenge is in collecting informative and truthful annotations. To address it, we design a human computation task that elicits both emotion category labels and emotion indicators (i.e. words or phrases indicative of labeled emotions). We also develop a methodology to build an emotion lexicon using such data. Our experiments show that the proposed crowdsourcing method can successfully generate a domain-specific emotion lexicon. Additionally, we suggest how to teach and motivate non-expert annotators.
    We show that including a tutorial and using carefully formulated reward descriptions can effectively improve annotation quality. Distant supervision consists of building emotion classifiers from data that are automatically labeled using some heuristics. This thesis studies heuristics that apply emotion lexicons of limited quality, for example due to missing or erroneous term-emotion associations. We show the viability of such an approach to obtain domain-specific classifiers with substantially better recognition quality than the initial lexicon-based ones. Our experiments reveal that treating the emotion imbalance in training data and incorporating pseudo-neutral documents is crucial for such improvement. This method can be applied to building emotion classifiers across different domains using limited input resources, and thus requires minimal effort. Another challenge for lexicon-based emotion recognition is to reduce the error introduced by linguistic modifiers such as negation and modality. We design a data analysis method that allows modeling the specific effects of the studied modifiers, both in terms of shifting emotion categories and changing confidence in emotion presence. We show that the effects of modifiers vary across the emotion categories, which indicates the importance of treating such effects at a more fine-grained level to improve classification quality. Finally, the thesis concludes with our recommendations on how to address the examined general challenges of building a fine-grained textual emotion recognition system.
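The lexicon-based recognition and modifier effects described above can be illustrated with a minimal sketch. The tiny lexicon, the single negation cue, and the "shift to an opposite category at reduced confidence" rule are illustrative assumptions; the thesis models modifier effects per emotion category from data rather than with a fixed rule.

```python
# Minimal sketch of lexicon-based emotion scoring with a negation modifier.
# Lexicon entries, the opposite-category table, and weights are invented.
EMOTION_LEXICON = {
    "thrilled": ("joy", 1.0),
    "happy": ("joy", 0.8),
    "furious": ("anger", 1.0),
    "sad": ("sadness", 0.9),
}
OPPOSITE = {"joy": "sadness", "sadness": "joy", "anger": "joy"}
NEGATIONS = {"not", "never"}

def score_tweet(text):
    """Sum per-category evidence; a negation directly before an emotion
    word shifts it to the opposite category at reduced confidence."""
    scores = {}
    words = text.lower().split()
    for i, word in enumerate(words):
        if word in EMOTION_LEXICON:
            category, weight = EMOTION_LEXICON[word]
            if i > 0 and words[i - 1] in NEGATIONS:
                category, weight = OPPOSITE[category], weight * 0.5
            scores[category] = scores.get(category, 0.0) + weight
    return scores

print(score_tweet("so thrilled about the match"))
print(score_tweet("not happy with that penalty"))
```

A distant-supervision setup would use such noisy scores only as initial labels for training a classifier, rather than as final predictions.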

    Text in Visualization: Extending the Visualization Design Space

    This thesis is a systematic exploration and expansion of the design space of data visualization specifically with regard to text. A critical analysis of text in data visualizations reveals gaps in existing frameworks and in the use of text in practice. A cross-disciplinary review across fields such as typography, cartography and technical applications yields typographic techniques for encoding data into text and provides the scope for the expanded design space. Mapping new attributes, techniques and considerations back to well-understood visualization principles organizes the design space of text in visualization. This design space includes: 1) text as a primary data type literally encoded into alphanumeric glyphs; 2) typographic attributes, such as bold and italic, capable of encoding additional data onto literal text; 3) scope of mark, ranging from individual glyphs, syllables and words to sentences, paragraphs and documents; and 4) layout of these text elements, applicable to most known visualization techniques as well as text-specific techniques such as tables. This is the primary contribution of this thesis (Parts A and B). This design space is then used to facilitate the design, implementation and evaluation of new types of visualization techniques, ranging from enhancements of existing techniques, such as extending scatterplots and graphs with literal marks, stem & leaf plots with multivariate glyphs and broader scope, and microtext line charts, to new visualization techniques, such as multivariate typographic thematic maps, text formatted to facilitate skimming, and proportional encoding of quantitative values in running text, all of which are new contributions to the field (Part C). Finally, a broad evaluation across the framework and the sample visualizations, with cross-discipline expert critiques and a metrics-based approach, reveals some concerns and many opportunities pointing towards a breadth of future research now possible with this new framework (Parts D and E).
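One idea from this design space, encoding additional data onto literal text via a typographic attribute, can be sketched minimally. Mapping values linearly onto CSS font weights 100-900 is an illustrative assumption, not the thesis's specific technique.

```python
# Minimal sketch: encode a quantitative value onto each word as font weight.
# Emits HTML spans; assumes at least two distinct values (avoids zero range).
def weight_for(value, lo, hi):
    """Linearly map a value in [lo, hi] to a CSS font weight in {100..900}."""
    fraction = (value - lo) / (hi - lo)
    return 100 + round(fraction * 8) * 100

def encode(words_with_values):
    """Render (word, value) pairs as spans whose weight encodes the value."""
    lo = min(v for _, v in words_with_values)
    hi = max(v for _, v in words_with_values)
    return " ".join(
        f'<span style="font-weight:{weight_for(v, lo, hi)}">{w}</span>'
        for w, v in words_with_values
    )

print(encode([("wheat", 12.0), ("barley", 30.0), ("oats", 48.0)]))
```

The same mapping pattern extends to other typographic attributes (italic angle, letter spacing, underlining) over marks of any scope, from glyphs to paragraphs.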