    AN ANALYSIS OF THE TRANSLATION OF TWITTER HELPING CENTER WEBSITE WWW.SUPPORT.TWITTER.COM

    ABSTRACT Hani Iswuri. C0312032. An Analysis of the Translation of Twitter Helping Center Website www.support.twitter.com. Thesis. English Department. Faculty of Cultural Sciences. Sebelas Maret University. This research focuses on the analysis of translation techniques applied in translating the Twitter Helping Center website, and on the quality of the translation in terms of accuracy, acceptability, and readability. The purposes of this research are: (1) to identify the translation techniques applied by the translator in translating the expressions found in the Twitter Helping Center website; and (2) to find out the impact of the translation techniques used on the quality of the translation in terms of accuracy, acceptability, and readability. This is descriptive-qualitative research using a purposive sampling technique. The data are the expressions found on the Twitter Helping Center website www.support.twitter.com, which is also the data source of this research. A total of 100 data in the form of expressions (words and sentences) were found on the website; these were then analyzed per smallest unit using the techniques proposed by Molina & Albir, resulting in a total of 426 data. The analysis of the translation techniques shows that the translator applied 9 single techniques and 12 multiple techniques in translating the website. The single techniques are: Established Equivalent (290 data), Borrowing (61 data), Variation (18 data), Amplification (11 data), Modulation (8 data), Discursive Creation (5 data), Transposition (4 data), Reduction (2 data), and Description (1 data).
The multiple techniques are: Amplification + Established Equivalent (6 data), Reduction + Established Equivalent (6 data), Borrowing + Established Equivalent (3 data), Modulation + Established Equivalent (2 data), Modulation + Borrowing (2 data), Borrowing + Amplification (1 data), Borrowing + Reduction (1 data), Transposition + Borrowing (1 data), Modulation + Transposition (1 data), Reduction + Borrowing (1 data), Modulation + Borrowing + Established Equivalent (1 data), and Modulation + Established Equivalent + Discursive Creation (1 data). The research findings indicate that the techniques producing a high level of accuracy in the translation of the Twitter Helping Center website are established equivalent and borrowing, while the techniques producing a low level of accuracy are reduction and discursive creation. In terms of acceptability, the use of the established equivalent technique has a positive effect, because the translator uses equivalent Indonesian terms that are familiar and commonly used by the readers. Meanwhile, the technique with a negative effect on the acceptability of the data is borrowing. Regarding readability, the technique producing a high level of readability is established equivalent, while the techniques producing a low level of readability in this research are borrowing and discursive creation. Keywords: Twitter, localization, website translation, translation technique, translation quality, accuracy, acceptability, readability

    Data Editing for Neuro-Fuzzy Classifiers

    In this paper we investigate the potential benefits and limitations of various data editing procedures when constructing neuro-fuzzy classifiers based on hyperbox fuzzy sets. There are two major aspects of data editing that we attempt to exploit: a) removal of outliers and noisy data; and b) reduction of training data size. We show that successful training data editing can result in simpler classifiers (i.e. classifiers with fewer, larger hyperboxes) with better generalisation performance. However, we also indicate the potential dangers of overediting, which can lead to dropping whole regions of a class and to overly simple classifiers unable to capture the class boundaries with high enough accuracy. We propose an approach more flexible than existing data editing techniques, based on estimating the probabilities used to decide whether a point should be removed from the training set. An analysis and graphical interpretations are given for synthetic, non-trivial, 2-dimensional classification problems.
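    The probability-based editing rule described in the abstract can be sketched in a few lines. This is only an illustrative sketch (Wilson-style k-NN editing, where a point is dropped if the estimated probability of its own class among its neighbours falls below a threshold); the function name, the toy data, and the fixed threshold are assumptions for the example, not the paper's exact procedure:

```python
import numpy as np

def edit_training_set(X, y, k=3, threshold=0.5):
    """Drop a point if the estimated probability of its own class
    among its k nearest neighbours falls below `threshold`."""
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                      # exclude the point itself
        nn = np.argsort(dists)[:k]             # k nearest neighbours
        p_same = np.mean(y[nn] == y[i])        # estimated class probability
        if p_same < threshold:
            keep[i] = False
    return X[keep], y[keep]

# Toy example: two clean clusters plus one mislabelled (noisy) point
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [0.05, 0.05]])                   # last point sits in the class-0 cluster
y = np.array([0, 0, 0, 1, 1, 1, 1])            # ...but carries label 1 (noise)
X_ed, y_ed = edit_training_set(X, y, k=3)
print(len(X_ed))  # the noisy point is removed, leaving 6 points
```

    Raising the threshold (or iterating the rule) edits more aggressively, which is where the overediting danger mentioned above appears: entire low-density regions of a class can be removed.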

    Automated design of robust discriminant analysis classifier for foot pressure lesions using kinematic data

    In recent years, the use of motion tracking systems for the acquisition of functional biomechanical gait data has received increasing interest due to the richness and accuracy of the measured kinematic information. However, costs frequently restrict the number of subjects employed, which makes the dimensionality of the collected data far higher than the number of available samples. This paper applies discriminant analysis algorithms to the classification of patients with different types of foot lesions, in order to establish an association between foot motion and lesion formation. With primary attention to small-sample-size situations, we compare different types of Bayesian classifiers and evaluate their performance with various dimensionality reduction techniques for feature extraction, as well as search methods for the selection of raw kinematic variables. Finally, we propose a novel integrated method that fine-tunes the classifier parameters and selects the most relevant kinematic variables simultaneously. Performance comparisons are carried out using robust resampling techniques such as Bootstrap .632+ and k-fold cross-validation. Results from experiments with subjects suffering from pathological plantar hyperkeratosis show that the proposed method can lead to approximately 96% correct classification rates with less than 10% of the original features.
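    The small-sample-size setting described above (far more features than samples) is typically handled by reducing dimensionality before fitting the discriminant classifier, and by estimating performance with resampling. A minimal sketch of that pipeline, using synthetic data as a stand-in for kinematic measurements (the sample counts, feature counts, and PCA component number are illustrative assumptions, not the paper's values):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for kinematic data: few samples, many features
n_per_class, n_features = 20, 100
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))   # class 0
X1 = rng.normal(1.0, 1.0, (n_per_class, n_features))   # class 1, shifted mean
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# PCA first keeps the sample-to-dimension ratio manageable before LDA,
# avoiding the singular covariance estimates that plague p >> n settings
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
print(round(scores.mean(), 2))
```

    The paper's integrated method goes further by tuning classifier parameters and selecting raw variables jointly; the pipeline above only illustrates the baseline against which such a method is compared.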

    Supervised projection pursuit - A dimensionality reduction technique optimized for probabilistic classification

    An important step in multivariate analysis is dimensionality reduction, which allows for better classification and easier visualization of the class structures in the data. Techniques like PCA, PLS-DA and LDA are most often used to explore the patterns in the data and to reduce the dimensions, yet the data do not always reveal their structures properly when these techniques are applied. To this end, a supervised projection pursuit (SuPP) based on the Jensen-Shannon divergence is proposed in this article. The combination of this metric with a powerful Monte Carlo based optimization algorithm yields a versatile dimensionality reduction technique capable of working with highly dimensional data and missing observations. Combined with a Naïve Bayes (NB) classifier, SuPP proved to be a powerful preprocessing tool for classification. Namely, on the Iris data set, the prediction accuracy of SuPP-NB is significantly higher than the prediction accuracy of PCA-NB (p-value ≤ 4.02E-05 in a 2D latent space, p-value ≤ 3.00E-03 in a 3D latent space) and significantly higher than the prediction accuracy of PLS-DA (p-value ≤ 1.17E-05 in a 2D latent space and p-value ≤ 3.08E-03 in a 3D latent space). The significantly higher accuracy on this particular data set is strong evidence of better class separation in the latent spaces obtained with SuPP.
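    The core idea, scoring candidate projections by the Jensen-Shannon divergence between the projected class distributions and searching for the best one, can be sketched as follows. This is a simplified illustration only: it uses histogram density estimates and a plain random search as a stand-in for the paper's Monte Carlo optimizer, and all function names and data are assumptions for the example:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log2(a / b))   # KL divergence in bits
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def projection_score(w, X, y, bins=10):
    """Class separation of the 1-D projection X @ w, measured as the JS
    divergence between per-class histograms of the projected data."""
    z = X @ (w / np.linalg.norm(w))
    edges = np.histogram_bin_edges(z, bins=bins)   # shared bin edges
    h0, _ = np.histogram(z[y == 0], bins=edges)
    h1, _ = np.histogram(z[y == 1], bins=edges)
    return js_divergence(h0.astype(float), h1.astype(float))

# Random search over candidate directions (a crude stand-in for the
# Monte Carlo optimization used in the article)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 4)),
               rng.normal([3, 0, 0, 0], 1, (50, 4))])  # classes differ on axis 0
y = np.array([0] * 50 + [1] * 50)
best = max((rng.normal(size=4) for _ in range(200)),
           key=lambda w: projection_score(w, X, y))
```

    Because the score depends only on the class-conditional distributions of the projected data (not on a linear separability assumption, as in LDA), the same criterion extends naturally to structures that variance-based projections miss.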