
    The Italian Retranslations of Virginia Woolf's To the Lighthouse: A Corpus-based Literary Analysis

    The research goal is to clarify how and to what degree the modernist style and features of Virginia Woolf's To the Lighthouse are rendered in the eleven Italian retranslations of the novel, and whether these can be characterised as modernist novels themselves. A suitable methodology has been developed, drawing on existing corpus methods for descriptive translation studies. Empirical evidence of differences between the target texts has been found, which in many cases can be interpreted as the translators' voice or thumb-prints. The present research carries out a systematic literary comparison of the retranslations, adopting a mixed-method, bottom-up (inductive) approach built on an empirical corpus. This corpus is specifically tailored to identify and study both linguistic and non-linguistic modernist features throughout the texts, such as stream of consciousness (indirect interior monologue) and free indirect speech. All occurrences are analysed in this thesis through inferential and comparative statistics such as lexical variety and lexical frequency. The target texts were digitised, and the resulting text files were analysed with a bespoke computer program capable of functions not provided by commercially available software such as WordSmith Tools and WMatrix. Not only did this methodology enable in-depth exploration of micro- and macro-textual features, but it also allowed a mixed-method approach combining close-reading qualitative analysis with systematic quantitative comparisons. The empirical results identify a progressive source-text orientation in the rendering of Woolf's style in a few aspects of a few target texts. The translators' presence affected the register and style of all eleven target texts, under the influence of the Italian translation norms usually attributed to the translation of literary classics.
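
    As a rough illustration of the kind of corpus statistics mentioned above (the bespoke program itself is not described here), the following Python sketch computes lexical variety (type-token ratio) and lexical frequency for digitised target texts; the file names and tokenisation are illustrative assumptions, not the thesis's actual setup.

        # Minimal sketch: lexical variety (type-token ratio) and top lexical
        # frequencies for digitised target texts. File names are hypothetical.
        import re
        from collections import Counter

        def tokenize(text):
            # Lowercase word tokens; Unicode-aware so accented Italian letters survive.
            return re.findall(r"\w+", text.lower(), flags=re.UNICODE)

        def lexical_stats(path):
            with open(path, encoding="utf-8") as f:
                tokens = tokenize(f.read())
            freq = Counter(tokens)
            ttr = len(freq) / len(tokens)   # types / tokens = lexical variety
            return ttr, freq

        for tt in ["gita_al_faro_1934.txt", "al_faro_1992.txt"]:   # hypothetical target texts
            ttr, freq = lexical_stats(tt)
            print(tt, f"TTR={ttr:.3f}", freq.most_common(5))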

    TTS pre-processing issues for mixed language support

    The design of an open-domain Text-to-Speech (TTS) system mandates a pre-processing module responsible for preparing the input text so that it can be handled in a standard fashion by other TTS components. Several pre-processing issues are common across input domains, such as number, date and acronym handling; others are specific to the type of input under examination. In designing a TTS system supporting Maltese for SMS messages in the local context, further specific issues are encountered. In terms of language, the practical use of Maltese is heavily characterised by code-switching into other languages, most commonly English. While the resulting language may not be considered 'Maltese' in the strictest sense of the language definition, it creates a state of affairs that cannot simply be ignored by a general-purpose system. In the SMS domain, various shorthand notations and a lack of phrase structure are encountered. This paper describes these specific issues in further detail and discusses techniques with which they may be addressed.
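
    To make the pre-processing steps concrete, here is a minimal Python sketch of the kind of pipeline the paper discusses: expanding SMS shorthand, routing digits to a number-expansion step, and tagging code-switched English tokens for the phonetiser. The shorthand table and English word list are toy placeholders, not the system's actual resources.

        # Illustrative SMS pre-processing for a mixed Maltese/English TTS front end.
        # All lexicons below are assumed toy examples.
        import re

        SMS_SHORTHAND = {"gr8": "great", "tks": "thanks", "2moro": "tomorrow"}
        ENGLISH_WORDS = {"great", "thanks", "tomorrow", "meeting", "at"}

        def expand_token(tok):
            tok = SMS_SHORTHAND.get(tok.lower(), tok)
            if tok.isdigit():
                return f"<number value='{tok}'/>"      # handled later by number expansion
            lang = "en" if tok.lower() in ENGLISH_WORDS else "mt"
            return f"<w lang='{lang}'>{tok}</w>"       # language tag for the phonetiser

        def preprocess(sms):
            return " ".join(expand_token(t) for t in re.findall(r"\w+", sms))

        print(preprocess("Tks, meeting ghada at 3 gr8"))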

    Universal Design for Learning in K-12 Educational Settings: A Review of Group Comparison and Single-subject Intervention Studies

    This literature review on Universal Design for Learning (UDL) included articles from January 1984 through June 2014. We (a) investigated the UDL educational framework without the inclusion of other major K-12 educational frameworks in learning environments, (b) reported researchers' scope and depth of use of the UDL principles, and (c) focused our investigation on two research methods: group comparison and single-subject. We used the quality indicators for evidence-based practices (EBPs) in special education to review, not rate, the final pool of five peer-reviewed articles. Results included analyses of the incorporation of UDL principles in all identified studies, highlighting the need for caution in promoting conceptual frameworks until sufficient empirical evidence is available to validate their pedagogical utility in educational environments. We conclude that the UDL framework has merit, but researchers must conduct group comparison and single-subject studies that independently test the UDL principles, guidelines, and checkpoints to increase the likelihood of establishing causation in treatment outcomes.

    Speech Synthesis Based on Hidden Markov Models


    Comparing timing models of two Swiss German dialects

    Research on dialectal varieties has long concentrated on phonetic aspects of language. While much work has been done on segmental aspects, suprasegmentals remained largely unexplored until recent years, despite the fact that prosody has been noted as a salient aspect of dialectal variation by linguists and naive speakers alike. Current research on dialectal prosody in the German-speaking area often applies discourse-analytic methods, correlating intonation curves with communicative functions (P. Auer et al. 2000, P. Gilles & R. Schrambke 2000, R. Kehrein & S. Rabanus 2001). The project presented here has another focus: it looks at general prosodic aspects, abstracted from actual situations. These global structures are modelled and integrated into a speech synthesis system. Today, intonation is the aspect most often investigated; rhythm, the temporal organisation of speech, is not at the core of current research on prosody. Yet there is evidence that temporal organisation is one of the main structuring elements of speech (B. Zellner 1998, B. Zellner Keller 2002). Following this approach, developed for speech synthesis, I present the modelling of the timing of two Swiss German dialects (Bernese and Zurich dialect) that are considered quite different at the prosodic level. These models are part of the project on the "development of basic knowledge for research on Swiss German prosody by means of speech synthesis modelling" funded by the Swiss National Science Foundation.
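
    As a deliberately simplified illustration of what a dialect-specific timing model can look like, the Python sketch below assigns each segment a dialect-dependent base duration and stretches it with a few contextual factors. All numbers are invented placeholders, not the project's parameters.

        # Toy timing model: base duration (mean, sd in ms) per phone and dialect,
        # stretched by simple contextual factors. Figures are invented.
        BASE_MS = {
            "bernese": {"a": (95, 20), "n": (70, 15)},
            "zurich":  {"a": (85, 18), "n": (65, 14)},
        }

        def segment_duration(dialect, phone, stressed=False, phrase_final=False, z=0.0):
            mean, sd = BASE_MS[dialect][phone]
            dur = mean + z * sd          # z-score deviation from the dialect mean
            if stressed:
                dur *= 1.2               # assumed lengthening under stress
            if phrase_final:
                dur *= 1.4               # assumed phrase-final lengthening
            return dur

        print(segment_duration("bernese", "a", stressed=True, phrase_final=True))
        print(segment_duration("zurich", "a", stressed=True, phrase_final=True))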

    Humanising Text-to-Speech Through Emotional Expression in Online Courses

    This paper outlines an innovative approach to evaluating the emotional content of three online courses, using the affective-computing approach of prosody detection on two different text-to-speech (TTS) voices in conjunction with human raters judging the emotional content of the text. The work aims to establish the potential variation in the emotional delivery of online educational resources through the use of synthetic voice, which automatically articulates text into audio. Preliminary results from this pilot research suggest that about one out of every three sentences (35%) in a MOOC contained emotional text, and that two existing assistive-technology voices had poor emotional alignment when reading this text. The synthetic voices were more likely to be overly negative in their expression compared with the emotional content of the text they were reading, which was most frequently neutral. We also analyzed a synthetic voice whose emotional expression we configured to align with the course text, which showed promising improvements.
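
    The comparison described above can be pictured with a small Python sketch: given sentiment labels for the course text and emotion labels assigned to a TTS rendering, it reports the share of emotional sentences, the text/voice agreement rate, and how often the voice sounds more negative than the text. The labels are toy data, not the study's ratings.

        # Toy comparison of text sentiment vs. perceived TTS emotion.
        text_labels  = ["neutral", "positive", "neutral", "negative", "positive", "neutral"]
        voice_labels = ["negative", "positive", "negative", "negative", "neutral", "neutral"]

        n = len(text_labels)
        share_emotional = sum(t != "neutral" for t in text_labels) / n
        agreement = sum(t == v for t, v in zip(text_labels, voice_labels)) / n
        overly_negative = sum(v == "negative" and t != "negative"
                              for t, v in zip(text_labels, voice_labels)) / n

        print(f"emotional sentences: {share_emotional:.0%}")
        print(f"text/voice agreement: {agreement:.0%}")
        print(f"voice more negative than text: {overly_negative:.0%}")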

    CULTURAL TRANSFER IN THE TRANSLATIONS OF MEDIA ORGANIZATION WEBSITES: A DESCRIPTIVE ANALYSIS OF ARTICLES AND THEIR TURKISH TRANSLATIONS ON THE BBC WEBSITE

    The websites of media organizations address readers from many different languages and cultures. Each culture has its own specific values, habits and norms. Translators employ translation strategies in order to transfer these culture-specific items (hereinafter CSIs) from a source text (hereinafter ST) to a target text (hereinafter TT), and they are expected to produce translations that are fully comprehensible to the target readers. In this study, articles of the British Broadcasting Corporation (hereinafter the BBC) that were translated by a BBC Turkish Service translator and published under the link 'Dergi' are analysed on the basis of Toury's translational norms and Aixela's classification of CSIs. The study is designed with a qualitative method and supported with an interview to triangulate the data. The findings show that the translations are generally 'acceptable'; that is, the translator tends towards the target culture according to Toury's translational norms. She mostly employs Aixela's conservation strategies to transfer CSIs, which indicates a general tendency of the translations to 'be a representation of a source text'. However, the translator specifies the target readers of the link as an 'educated young population', so this does not complicate the comprehensibility of the CSIs for the target readers.
    Keywords: Translation Studies, Translational Norms of Toury, Cultural Transfer, Culture Specific Items, Aixela, BBC Turkish Service

    An Interview with Miriam Schcolnik: Reading, E-Reading and Writing and Their Assessment

    Dr. Miriam Schcolnik (emerita) is the former Director of the Language Learning Center of the Division of Foreign Languages at Tel Aviv University. For three decades she coordinated and taught EAP (English for Academic Purposes) courses as well as a course in Technology in Language Teaching. She has developed many online learning environments, multimedia courseware packages, EFL textbooks, and teachers' resource books. Her research interests are e-reading and writing, and the use of digital tools to facilitate language learning and communication.

    Investigating the translation of Islamic terms into English in an Indonesian context

    This thesis investigates key translation issues arising from the translation of Islamic terms in the academic abstract of an Islamic text from Indonesian into English. Using the frameworks of translation as intercultural communication across languages and cultures, as well as systemic functional linguistics, the project focuses on four topics: translation quality; translation strategies and techniques; linguistic considerations; and cultural considerations. A mixed-methods research design was used, and 90 respondents participated. Quantitative data analysis showed that the translation quality was determined by the experiential meaning of the Islamic terms and that the quality of Islamic term translations did not differ significantly among the three translator groups. The study found the translation quality of the Islamic term groups to be moderate. This indicates that the lexical choices for Islamic phrases mostly reflect their proper experiential meanings, even though certain words are difficult to understand. It also suggests that, despite a few ungrammatical structural patterns, Islamic word groupings were expressed in appropriate experiential structures. Furthermore, lexical choices in transliterations, which dominated in the STs, may result in a denser and more complicated text. The inclusion by STs and TTs of lengthier explanations of terms may also impair the abstract layout and thus distract readers. In terms of structural patterns, CTs were more likely to retain the original text structure than STs and TTs. Qualitative data showed that foreignisation was the most preferred translation strategy, while pure borrowing and correspondence were the most common techniques used in translating Islamic terms. The reasons why a particular strategy or technique was used related to general practice, reader orientation, text categories, and personal reasons. Thing to Deictic Thing was the most common experiential construction of Islamic terms used in the target texts. In addition, of the 80 target experiential constructions identified in this study, the STs dominated the initial 20 suggested constructions, while the new 60 versions were regulated by the TTs. While most translators shifted the experiential structures appropriately, a few functional roles were found to have shifted improperly, which changed their experiential meanings. The translated Islamic terms were specific to Islamic religious culture but carried an Indonesian transliteration style; as a result, the translations rarely found cultural equivalents, and a few irrelevant cultural replacements were occasionally identified. Awareness of the importance of culture helped Indonesian translators recognise these terms from socio-cultural information.
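
    For the group comparison reported above ("did not differ significantly among the three translator groups"), a minimal Python sketch of such a test is shown below; the quality ratings and group names are invented placeholders, and the non-parametric Kruskal-Wallis test is one reasonable choice, not necessarily the thesis's own procedure.

        # Sketch: compare translation-quality ratings across three translator groups.
        # Ratings are invented placeholders, not the study's data.
        from statistics import mean
        from scipy.stats import kruskal   # non-parametric test for 3+ independent groups

        ratings = {
            "group_A": [3, 2, 3, 3, 2, 3],
            "group_B": [2, 3, 3, 2, 3, 2],
            "group_C": [3, 3, 2, 3, 2, 3],
        }

        for name, scores in ratings.items():
            print(name, "mean quality:", round(mean(scores), 2))

        h, p = kruskal(*ratings.values())
        print(f"Kruskal-Wallis H={h:.2f}, p={p:.3f}")   # p > 0.05 -> no significant difference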

    Introducing nativization to Spanish TTS systems

    In the modern world, speech technologies must be flexible and adaptable to any framework. Mass media globalization introduces multilingualism as a challenge for the most popular speech applications, such as text-to-speech synthesis and automatic speech recognition. Mixed-language texts vary in their nature, and some essential characteristics must be considered when they are processed. In Spain and other Spanish-speaking countries, the use of Anglicisms and other words of foreign origin is constantly growing. A particularity of peninsular Spanish is the tendency to nativize the pronunciation of non-Spanish words so that they fit properly into Spanish phonetic patterns. In our previous work, we proposed hand-crafted nativization tables that correctly nativized 24% of the words in the test data. In this work, our goal was to approach the nativization challenge with data-driven methods, because they are transferable to other languages and do not drop in performance in comparison with explicit rules manually written by experts. The training and test corpora for nativization consisted of 1000 and 100 words respectively and were crafted manually. Different specifications of nativization by analogy and learning from errors focused on finding the best nativized pronunciation of foreign words. The best objective nativization results showed an improvement from 24% to 64% in word accuracy in comparison with our previous work. Furthermore, a subjective evaluation of the synthesized speech allowed the conclusion that nativization by analogy is clearly the preferred method among listeners of different backgrounds when compared with previously proposed methods. These results were quite encouraging and proved that even a small training corpus is sufficient for achieving significant improvements in naturalness for English inclusions of variable length in Spanish utterances.
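
    As a hedged illustration of nativization by analogy (the actual system is described in the paper, not here), the Python sketch below looks up the training word most similar to an unseen foreign word and reuses its nativized pronunciation; the lexicon and the similarity measure are toy assumptions.

        # Toy nativization by analogy: reuse the pronunciation of the closest
        # training word. The lexicon below is an invented placeholder.
        import difflib

        NATIVIZATION_LEXICON = {   # foreign word -> nativized (Spanish-like) phones
            "football": "f u t b o l",
            "marketing": "m a r k e t i n",
            "sandwich": "s a n g u i ch",
        }

        def nativize_by_analogy(word):
            base = difflib.get_close_matches(word, list(NATIVIZATION_LEXICON), n=1, cutoff=0.0)[0]
            # A full system would align graphemes and patch only the differing
            # substrings; here we simply return the closest entry's pronunciation.
            return NATIVIZATION_LEXICON[base], base

        print(nativize_by_analogy("footing"))   # -> ('f u t b o l', 'football')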