
    Modulation in English Into Indonesia Translation

    This descriptive-qualitative research investigated modulation phenomena and measured their accuracy level in translated texts. It involved 40 fifth-semester students of the English education program at STAIN Jurai Siwo Metro. The data were taken from each participant's translated texts using observation and documentation. The texts were of two kinds, scientific and literary, and together contained 20 modulation phenomena. Each phenomenon was analyzed by comparing the students' translations against a reference translation and then rated for accuracy. The researcher found 11 instances of Fixed Modulation (FM) and 9 of Optional Modulation (OM). In terms of accuracy, FM was more accurate than OM, but OM showed a lower rate of inaccuracy than FM. Across all phenomena, 16% were categorized as accurate, 39% as less accurate, and 45% as inaccurate. In conclusion, the students still lack accuracy in modulation; they should free themselves from the influence of the source-language structure and express natural, equivalent translations in the target language.

    Error-tolerant Finite State Recognition with Applications to Morphological Analysis and Spelling Correction

    Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite-state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications. In the context of morphological analysis, error-tolerant recognition allows misspelled input word forms to be corrected and morphologically analyzed concurrently. We present an application of this to error-tolerant analysis of the agglutinative morphology of Turkish words. The algorithm can be applied to morphological analysis of any language whose morphology is fully captured by a single (and possibly very large) finite-state transducer, regardless of the word-formation processes and morphographemic phenomena involved. In the context of spelling correction, error-tolerant recognition can be used to enumerate correct candidate forms from a given misspelled string within a certain edit distance. Again, it can be applied to any language with a word list comprising all inflected forms, or whose morphology is fully described by a finite-state transducer. We present experimental results for spelling correction for a number of languages. These results indicate that such recognition works very efficiently for candidate generation in spelling correction for many European languages such as English, Dutch, French, German, and Italian, with very large word lists of root and inflected forms (some containing well over 200,000 forms), generating all candidate solutions within 10 to 45 milliseconds (with edit distance 1) on a SparcStation 10/41. For spelling correction in Turkish, error-tolerant …
    Comment: Replaces 9504031; gzipped, uuencoded PostScript file. To appear in Computational Linguistics, Volume 22, No. 1, 1996. Also available as ftp://ftp.cs.bilkent.edu.tr/pub/ko/clpaper9512.ps.
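The candidate-generation idea described above can be illustrated with a minimal sketch. Note that the paper itself traverses a finite-state transducer with a cut-off edit distance; the naive enumeration below (generate every string within edit distance 1 and intersect with a lexicon) is only a simplified stand-in for that process, and the toy `lexicon` is invented for illustration.

```python
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings within edit distance 1 of `word`:
    deletions, adjacent transpositions, substitutions, insertions."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    substitutes = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + transposes + substitutes + inserts)

def candidates(word, lexicon):
    """Correct forms from the lexicon within edit distance 1."""
    return edits1(word) & lexicon

lexicon = {"spell", "spells", "spelling", "shell", "smell"}
print(candidates("spel", lexicon))  # {'spell'}
```

A transducer-based recognizer achieves the same result without materializing the (alphabet-sized) edit neighborhood, which is what makes the approach feasible for word lists of 200,000+ forms.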

    Context-sensitive Spelling Correction Using Google Web 1T 5-Gram Information

    In computing, spell checking is the process of detecting, and sometimes suggesting corrections for, incorrectly spelled words in a text. Basically, a spell checker is a computer program that uses a dictionary of words to perform spell checking; the bigger the dictionary, the higher the error detection rate. Because spell checkers are based on regular dictionaries, they suffer from a data-sparseness problem: they cannot capture a large vocabulary of words including proper names, domain-specific terms, technical jargon, special acronyms, and terminology. As a result, they exhibit a low error detection rate and often fail to catch major errors in the text. This paper proposes a new context-sensitive spelling correction method for detecting and correcting non-word and real-word errors in digital text documents. The approach hinges on statistics from the Google Web 1T 5-gram data set, which consists of a large volume of n-gram word sequences extracted from the World Wide Web. Fundamentally, the proposed method comprises an error detector that detects misspellings, a candidate-spellings generator based on a character 2-gram model that generates correction suggestions, and an error corrector that performs contextual error correction. Experiments conducted on a set of text documents from different domains and containing misspellings showed an outstanding spelling-error correction rate and a drastic reduction of both non-word and real-word errors. In a further study, the proposed algorithm is to be parallelized so as to lower the computational cost of the error detection and correction processes.
    Comment: LACSC - Lebanese Association for Computational Sciences - http://www.lacsc.or
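The contextual-correction step can be sketched as follows: rank candidate spellings by how often they occur in the n-gram context of the misspelled word. This is only an illustration of the general idea, not the paper's pipeline; the `trigram_counts` table below is a tiny invented stand-in for the Google Web 1T statistics.

```python
# Toy 3-gram counts standing in for Google Web 1T frequencies.
trigram_counts = {
    ("a", "piece", "of"): 9000,
    ("a", "peace", "of"): 40,
}

def rank_by_context(left, candidates, right):
    """Order candidate spellings by the frequency of the trigram
    they form with the surrounding words (0 for unseen trigrams)."""
    return sorted(candidates,
                  key=lambda c: trigram_counts.get((left, c, right), 0),
                  reverse=True)

print(rank_by_context("a", ["peace", "piece"], "of"))  # ['piece', 'peace']
```

Ranking real-word candidates this way is what lets the method catch errors like "a peace of cake" that a dictionary-only checker would accept.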

    New Horizons in Translation Research and Education 1


    Spelling Errors in English Writing Committed by English-Major Students at BAU

    Error analysis is an essential part of linguistic analysis that sheds light on errors committed by second-language learners. This study aims at investigating spelling errors committed by English-major students at BAU. The participants in the present study were 65 students, whose essays in a "technical writing" course served as the data of the study. The data were then analyzed based on Cook's classification of spelling errors. The results of the study show four types of spelling errors: substitution errors, insertion errors, omission errors, and transposition errors. In addition, the results indicate that the difference between the English and Arabic writing systems is one of the major causes of students' errors. The results will hopefully be useful in error analysis studies and other related areas. Keywords: Writing, Omission errors, Substitution errors, Insertion errors, Transposition errors
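The four-way classification used above can be made mechanical for single-edit errors: compare the misspelling with the intended word and label the edit. The sketch below is a hypothetical illustration of that taxonomy, not the study's analysis procedure (which was done manually on the essays).

```python
def classify_error(misspelling, target):
    """Label a single-edit spelling error using the four categories:
    substitution, insertion, omission, transposition."""
    m, t = misspelling, target
    if len(m) == len(t):
        diffs = [i for i in range(len(m)) if m[i] != t[i]]
        if len(diffs) == 1:
            return "substitution"
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and m[diffs[0]] == t[diffs[1]]
                and m[diffs[1]] == t[diffs[0]]):
            return "transposition"
    elif len(m) == len(t) + 1:
        # one extra letter in the misspelling
        if any(m[:i] + m[i+1:] == t for i in range(len(m))):
            return "insertion"
    elif len(m) + 1 == len(t):
        # one letter missing from the misspelling
        if any(t[:i] + t[i+1:] == m for i in range(len(t))):
            return "omission"
    return "other"

print(classify_error("recieve", "receive"))   # transposition
print(classify_error("writting", "writing"))  # insertion
```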

    AN ANALYSIS OF LEARNER LANGUAGE IN INDONESIAN-ENGLISH TRANSLATION OF ENGLISH EDUCATIONAL STUDY PROGRAM STUDENTS OF UNIVERSITAS NAHDLATUL ULAMA LAMPUNG

    Translation is important because it is the process of replacing the source language with the target language without changing the intended meaning. Learners usually bring their previous language competence to performing in the second language. Communication, whether spoken or written, is the way to interpret other people's language, even across different cultures and languages; thus, the purpose of the source-language text should be delivered accurately. This research aimed at describing learner-language phenomena related to the five procedures of translation and at showing the percentage of learner language across translation procedures from Indonesian to English. Data were collected through interview and documentation, gathered from the students' results on the Translation 2 semester test at Universitas Nahdlatul Ulama Lampung. The research was conducted with thirty-three sixth-semester students of the English Educational Study Program of Universitas Nahdlatul Ulama Lampung. The results showed that most of the students' learner language and errors were found in translation procedures, that the highest percentage of learner language in Indonesian-English translation occurred in transposition, and that the students did not understand translation procedures.

    Contract-Based General-Purpose GPU Programming

    Using GPUs as general-purpose processors has revolutionized parallel computing by offering, for a large and growing set of algorithms, massive data-parallelization on desktop machines. An obstacle to widespread adoption, however, is the difficulty of programming them and the low-level control of the hardware required to achieve good performance. This paper suggests a programming library, SafeGPU, that aims at striking a balance between programmer productivity and performance, by making GPU data-parallel operations accessible from within a classical object-oriented programming language. The solution is integrated with the design-by-contract approach, which increases confidence in functional program correctness by embedding executable program specifications into the program text. We show that our library leads to modular and maintainable code that is accessible to GPGPU non-experts, while providing performance that is comparable with hand-written CUDA code. Furthermore, runtime contract checking turns out to be feasible, as the contracts can be executed on the GPU.
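The design-by-contract idea above, i.e. executable pre- and postconditions wrapped around a data-parallel operation, can be sketched as follows. SafeGPU itself targets Eiffel and CUDA; this Python sketch is only an analogy, with a sequential list comprehension standing in for the GPU kernel and the decorator name `contracted` invented for illustration.

```python
def contracted(pre, post):
    """Decorator sketching design-by-contract: check the precondition
    before the call and the postcondition on the result afterwards."""
    def wrap(fn):
        def inner(xs):
            assert pre(xs), "precondition violated"
            result = fn(xs)
            assert post(xs, result), "postcondition violated"
            return result
        return inner
    return wrap

@contracted(pre=lambda xs: len(xs) > 0,
            post=lambda xs, r: len(r) == len(xs))
def square_all(xs):
    # Stand-in for a GPU data-parallel map; runs sequentially here.
    return [x * x for x in xs]

print(square_all([1, 2, 3]))  # [1, 4, 9]
```

The paper's observation is that such contract checks are themselves data-parallel (e.g. a length or element-wise property), so they can run on the GPU alongside the kernel rather than forcing a costly round-trip to the host.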

    Simultaneous Interpretation of Numbers: Comparing German and English to Italian. An Experimental Study

    An experimental study was carried out to investigate whether the difficulty of delivering numbers in SI is language-independent or whether specific features – such as the different structures of the numerical systems in the SL and TL – may also be relevant and influence SI performance negatively. To this end, a German text and an English text, both dense with numbers, were interpreted simultaneously into Italian by 16 students. The first language pair (EN-IT) had a linear numerical system and the second one (DE-IT) did not, as in German the so-called inversion rule has to be applied. An initial analysis of the results suggested that the difficulty of delivering numbers in SI is language-independent. However, a more detailed analysis of the outcomes showed that a significant difference between the two language pairs was apparent in the distribution and typology of errors: transposition/position errors (including inversion errors) were evident in German but not in English.