8 research outputs found

    Language Independent Sentiment Analysis

    Social media platforms and online forums generate a rapidly increasing amount of textual data. Businesses, government agencies, and media organizations seek to perform sentiment analysis on this rich text data. The results of these analytics are used for adapting marketing strategies, customizing products, security, and various other decisions. Sentiment analysis has been studied extensively, and various methods have been developed for it with great success. These methods, however, apply to texts written in a specific language, which restricts their applicability to a particular demographic and geographic region. In this paper we propose a general approach for sentiment analysis on data containing texts from multiple languages, enabling applications to use the results of sentiment analysis in a language-oblivious or language-independent fashion.
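    To make the "language-oblivious" idea concrete, the toy sketch below routes mixed-language input through a single pipeline by picking a per-language sentiment lexicon with a crude coverage heuristic. The lexicons and the language-guessing rule are made-up stand-ins for illustration only, not necessarily the approach proposed in the paper.

    LEXICONS = {
        "en": {"good": 1, "great": 1, "bad": -1, "awful": -1},
        "es": {"bueno": 1, "genial": 1, "malo": -1, "terrible": -1},
    }

    def guess_language(tokens):
        """Pick the language whose lexicon covers the most tokens (crude heuristic)."""
        return max(LEXICONS, key=lambda lang: sum(t in LEXICONS[lang] for t in tokens))

    def sentiment(text: str) -> int:
        """Sum lexicon polarities; a positive total means positive sentiment."""
        tokens = text.lower().split()
        lexicon = LEXICONS[guess_language(tokens)]
        return sum(lexicon.get(t, 0) for t in tokens)

    print(sentiment("the food was great"), sentiment("la comida era genial"))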

    Text Classification in an Under-Resourced Language via Lexical Normalization and Feature Pooling

    Automatic classification of textual content in an under-resourced language is challenging, since lexical resources and preprocessing tools are not available for such languages. Their bag-of-words (BoW) representation is usually highly sparse and noisy, and text classification built on such a representation yields poor performance. In this paper, we explore the effectiveness of lexical normalization of terms and statistical feature pooling for improving text classification in an under-resourced language. We focus on classifying citizen feedback on government services provided through SMS texts, which are written predominantly in Roman Urdu (an informal, forward-transliterated version of the Urdu language). Our proposed methodology normalizes lexical variations of terms using phonetic and string similarity. It subsequently employs a supervised feature extraction technique to obtain category-specific, highly discriminating features. Our experiments with classifiers reveal that lexical normalization plus feature pooling achieves a significant improvement in classification performance over standard representations.
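    As a rough illustration of the normalization step described above, the sketch below merges spelling variants of a term when their (crude) phonetic keys match and their string similarity is high, so that variants collapse to one feature before classification. The key function, the 0.8 threshold, and the example variants are illustrative assumptions, not the paper's exact method.

    from difflib import SequenceMatcher

    def phonetic_key(word: str) -> str:
        """Very rough phonetic key: lowercase, drop vowels after the first letter,
        collapse repeated characters. A stand-in for a proper Roman Urdu encoder."""
        word = word.lower()
        key = word[0]
        for ch in word[1:]:
            if ch in "aeiou" or ch == key[-1]:
                continue
            key += ch
        return key

    def normalize(vocabulary, sim_threshold=0.8):
        """Map each word to a canonical representative of its variant group."""
        canonical = {}   # phonetic key -> representative word
        mapping = {}     # word -> representative word
        for word in sorted(vocabulary):
            key = phonetic_key(word)
            rep = canonical.get(key)
            if rep is not None and SequenceMatcher(None, word, rep).ratio() >= sim_threshold:
                mapping[word] = rep        # merge this spelling variant
            else:
                canonical[key] = word
                mapping[word] = word       # word becomes its own representative
        return mapping

    print(normalize({"shukriya", "shukrya", "kharab", "khraab", "khrab"}))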

    An Unsupervised Method for Discovering Lexical Variations in Roman Urdu Informal Text

    We present an unsupervised method to find lexical variations in Roman Urdu informal text. Our method includes a phonetic algorithm, UrduPhone; a feature-based similarity function; and a clustering algorithm, Lex-C. UrduPhone encodes Roman Urdu strings to their phonetic equivalent representations, producing an initial grouping of different spelling variations of a word. The similarity function incorporates word features and their context. Lex-C is a variant of the k-medoids clustering algorithm that groups lexical variations; it incorporates a similarity threshold to balance the number of clusters and their maximum similarity. We test our system on two datasets of SMS and blog texts and show an f-measure gain of up to 12% over baseline systems.
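    The sketch below gives a simplified, illustrative take on the similarity-threshold idea: each word joins the most similar existing medoid if that similarity clears the threshold, and otherwise seeds a new cluster. The real Lex-C is a k-medoids variant with a richer feature- and context-based similarity; the plain string similarity and the 0.75 threshold here are stand-in assumptions.

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Stand-in for the feature-based similarity function."""
        return SequenceMatcher(None, a, b).ratio()

    def threshold_cluster(words, threshold=0.75):
        """Greedy, threshold-based grouping of lexical variations."""
        clusters = []                                # list of (medoid, members)
        for word in words:
            best_idx, best_sim = None, 0.0
            for idx, (medoid, _) in enumerate(clusters):
                sim = similarity(word, medoid)
                if sim > best_sim:
                    best_idx, best_sim = idx, sim
            if best_idx is not None and best_sim >= threshold:
                clusters[best_idx][1].append(word)   # similar enough: join that cluster
            else:
                clusters.append((word, [word]))      # otherwise start a new cluster
        return clusters

    for medoid, members in threshold_cluster(["zindagi", "zindagee", "zndgi", "mubarak", "mubarik", "pyara"]):
        print(medoid, "->", members)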

    Learning Algorithm to Automate Fast Author Name Disambiguation

    The worldwide scientific production amounts to a massive number of records that can be accessed via numerous databases. Because of the presence of ambiguous records, a time-efficient disambiguation process is required as an essential step in extracting correct information and generating publication statistics. However, the disambiguation task is exhaustive and complex due to the large volume of the databases and the presence of missing data. Currently, no complete automatic method is able to produce satisfactory results for the disambiguation process. Previously, an efficient entity disambiguation application was developed: a supervised cascade algorithm that gives promising results on large bibliographic databases. Although the existing work produces high-quality results within a reasonable processing time, it lacks an efficient choice of metrics, and the structure of the classifiers is determined heuristically by analyzing precision and recall errors. Clearly, an automated approach that makes the application flexible and adjustable would directly enhance its usability. Such an approach would help to understand the importance of each feature classification in the disambiguation process and to select the most efficient ones. In this research, we propose a learning algorithm for automating the disambiguation process of this application; the aim is to employ the most appropriate phonetic algorithm and similarity measures and to replace the heuristic design with an automatic one. To achieve our goals, we conduct three major steps. First, we address the problem of evaluating phonetic encoding algorithms that can be used in blocking: six commonly used phonetic encoding algorithms were selected, and specific quantitative evaluation metrics were developed to assess their limitations and advantages and to select the best one. Second, we test different string similarity measures and analyze the advantages and disadvantages of each technique; in other words, our second goal is to build an efficient disambiguation method by comparing several edit- and token-based algorithms to improve the blocking method. Finally, using bootstrap aggregating (Bagging) and AdaBoost, an algorithm has been developed that employs particle swarm and set cover optimization techniques to design a learning framework that enables automatic ordering of the weak classifiers and determination of their thresholds. Performance comparisons were carried out on real data extracted from the Web of Science (WoS) and SCOPUS bibliographic databases. In summary, this work allows us to draw conclusions about the qualities and weaknesses of each phonetic algorithm and similarity measure from the perspective of our application. We have shown that the NYSIIS phonetic algorithm is a better choice for the blocking step of the disambiguation application. In addition, the Weighting Table-based algorithm outperforms some of the commonly used similarity algorithms in terms of time efficiency, while producing satisfactory results. Moreover, we propose a learning method to determine the structure of the disambiguation algorithm automatically.
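    As a minimal sketch of the blocking-plus-comparison stage described above, the code below groups author records into blocks by the NYSIIS code of the surname and scores only within-block pairs with a cheap string similarity. The jellyfish dependency, the field names, and the 0.9 match threshold are illustrative assumptions; the actual application cascades several weak classifiers whose order and thresholds are learned.

    from collections import defaultdict
    from difflib import SequenceMatcher
    import jellyfish  # third-party package providing nysiis(); pip install jellyfish

    records = [
        {"id": 1, "surname": "Mueller", "given": "A."},
        {"id": 2, "surname": "Muller", "given": "A."},
        {"id": 3, "surname": "Miller", "given": "B."},
    ]

    # Blocking: candidate pairs are restricted to records sharing a phonetic code.
    blocks = defaultdict(list)
    for record in records:
        blocks[jellyfish.nysiis(record["surname"])].append(record)

    def full_name(record):
        return record["surname"] + " " + record["given"]

    # Comparison: score pairs inside each block with a string similarity measure.
    for code, block in blocks.items():
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                score = SequenceMatcher(None, full_name(block[i]), full_name(block[j])).ratio()
                print(code, block[i]["id"], block[j]["id"], round(score, 2), score >= 0.9)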

    Robust Neural Machine Translation

    This thesis aims for general, robust Neural Machine Translation (NMT) that is agnostic to the test domain. NMT has achieved high quality on benchmarks with closed datasets such as WMT and NIST, but can fail when the translation input contains noise due to, for example, mismatched domains or spelling errors. The standard solution is to apply domain adaptation or data augmentation to build a domain-dependent system. In real life, however, the input noise varies across a wide range of domains and types that are unknown at training time. This thesis introduces five general approaches to improve NMT accuracy and robustness, three of which are invariant to models, test domains, and noise types. First, we describe Lex-Var, a novel unsupervised text normalization framework that reduces lexical variation for NMT. Then, we apply phonetic encoding as auxiliary linguistic information and obtain a very significant improvement (5 BLEU points) in translation quality and robustness. Furthermore, we introduce the random clustering encoding method, based on our hypothesis of Semantic Diversity by Phonetics, which generalizes to all languages. We also discuss two domain adaptation models for the case where the test domain is known. Finally, we provide a measurement of translation robustness based on the consistency of translation accuracy among samples and use it to evaluate our other methods. All these approaches are verified with extensive experiments across different languages and achieve significant and consistent improvements in translation quality and robustness over state-of-the-art NMT.
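    As a hedged sketch of the consistency-based robustness measurement mentioned above, the code below translates several perturbed variants of each source sentence, scores every hypothesis against the same reference, and treats a low spread of scores across variants as high robustness. The translate() stub, the similarity-based scoring stand-in, and the deviation-based definition are illustrative assumptions, not the thesis's exact metric.

    from difflib import SequenceMatcher
    from statistics import mean, pstdev

    def score(hypothesis: str, reference: str) -> float:
        """Stand-in for a sentence-level accuracy metric such as BLEU."""
        return SequenceMatcher(None, hypothesis, reference).ratio()

    def robustness(translate, variants_per_source, references):
        """Return mean accuracy and mean per-source score deviation over perturbed inputs."""
        accuracies, deviations = [], []
        for variants, reference in zip(variants_per_source, references):
            scores = [score(translate(v), reference) for v in variants]
            accuracies.append(mean(scores))
            deviations.append(pstdev(scores))  # low deviation = consistent = robust
        return mean(accuracies), mean(deviations)

    # Toy usage: an identity "translator" and one source with a clean and a noisy variant.
    variants = [["the cat sat on the mat", "teh cat sat on teh mat"]]
    references = ["the cat sat on the mat"]
    print(robustness(lambda s: s, variants, references))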