29 research outputs found

    Political Text Scaling Meets Computational Semantics

    During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured merely by leveraging information about word frequencies in the documents under study. We challenge this traditional view and present a new, semantically aware text scaling algorithm, SemScale, which combines recent developments in computational linguistics with unsupervised graph-based clustering. We conduct an extensive quantitative analysis over a collection of speeches from the European Parliament in five different languages and from two different legislative terms, and show that a scaling approach relying on semantic document representations is often better at capturing known underlying political dimensions than the established frequency-based (i.e., symbolic) scaling method. We further validate our findings through a series of experiments focused on text preprocessing and feature selection, document representation, scaling of party manifestos, and a supervised extension of our algorithm. To catalyze further research on this new branch of text scaling methods, we release a Python implementation of SemScale with all included data sets and evaluation procedures. Comment: updated version, accepted for Transactions on Data Science (TDS).
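
    To make the core idea concrete, the sketch below embeds each document as the average of its word vectors, builds a cosine-similarity graph over documents, and reads a one-dimensional position off a spectral projection of that graph. This is only a minimal illustration of semantically aware, graph-based scaling; it is not the released SemScale implementation, and the choice of spectral projection is an assumption.

```python
import numpy as np

def embed_document(tokens, word_vectors, dim=300):
    """Average the available word vectors of a document (out-of-vocabulary tokens are skipped)."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def scale_documents(doc_embeddings):
    """Place documents on a single latent axis via a spectral projection of their
    cosine-similarity graph (a stand-in for the actual SemScale procedure)."""
    X = np.asarray(doc_embeddings, dtype=float)
    X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = X @ X.T                       # pairwise cosine similarities
    np.fill_diagonal(S, 0.0)
    S = np.clip(S, 0.0, None)         # keep non-negative edge weights
    D = np.diag(S.sum(axis=1))
    L = D - S                         # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, 1]              # Fiedler vector as the 1-D scale

# Hypothetical usage with pretrained vectors loaded elsewhere, e.g. word_vectors["europe"] -> np.ndarray:
# scores = scale_documents([embed_document(doc, word_vectors) for doc in tokenized_speeches])
```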

    Unsupervised Cross-Lingual Information Retrieval Using Monolingual Data Only

    We propose a fully unsupervised framework for ad-hoc cross-lingual information retrieval (CLIR) which requires no bilingual data at all. The framework leverages shared cross-lingual word embedding spaces in which terms, queries, and documents can be represented, irrespective of their actual language. The shared embedding spaces are induced solely on the basis of monolingual corpora in two languages through an iterative process based on adversarial neural networks. Our experiments on the standard CLEF CLIR collections for three language pairs of varying degrees of language similarity (English-Dutch/Italian/Finnish) demonstrate the usefulness of the proposed fully unsupervised approach. Our CLIR models with unsupervised cross-lingual embeddings outperform baselines that utilize cross-lingual embeddings induced relying on word-level and document-level alignments. We then demonstrate that further improvements can be achieved by unsupervised ensemble CLIR models. We believe that the proposed framework is the first step towards the development of effective CLIR models for language pairs and domains where parallel data are scarce or non-existent.
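
    As a rough illustration of retrieval once a shared cross-lingual embedding space exists (the adversarial induction step described in the abstract is assumed to have already produced `shared_vectors`), queries and documents in different languages can be embedded with the same function and ranked by cosine similarity:

```python
import numpy as np

def embed_text(tokens, shared_vectors, dim=300):
    """Represent a query or document as the average of its word vectors in the
    shared cross-lingual space; the language of the tokens does not matter."""
    vecs = [shared_vectors[t] for t in tokens if t in shared_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank_documents(query_tokens, docs_tokens, shared_vectors):
    """Return document indices sorted by cosine similarity to the query."""
    q = embed_text(query_tokens, shared_vectors)
    q = q / (np.linalg.norm(q) + 1e-12)
    scores = []
    for doc in docs_tokens:
        d = embed_text(doc, shared_vectors)
        d = d / (np.linalg.norm(d) + 1e-12)
        scores.append(float(q @ d))
    return sorted(range(len(docs_tokens)), key=lambda i: scores[i], reverse=True)

# Hypothetical usage: rank_documents(english_query_tokens, dutch_document_tokens, shared_vectors)
```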

    SenZi: A Sentiment Analysis Lexicon for the Latinised Arabic (Arabizi)

    Arabizi is an informal written form of dialectal Arabic transcribed in Latin alphanumeric characters. It has proven popular on chat platforms and social media, yet it suffers from a severe lack of natural language processing (NLP) resources. As such, texts written in Arabizi are often disregarded in sentiment analysis tasks for Arabic. In this paper we describe the creation of a sentiment lexicon for Arabizi that was enriched with word embeddings. The result is a new Arabizi lexicon consisting of 11.3K positive and 13.3K negative words. We evaluated this lexicon by classifying the sentiment of Arabizi tweets, achieving an F1-score of 0.72. We provide a detailed error analysis to present the challenges that affect sentiment analysis of Arabizi.
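
    A lexicon of positive and negative entries is typically applied to tweets by counting matches; the sketch below shows that baseline form of lexicon-based classification. The whitespace tokenisation and tie-breaking rule are assumptions, not the paper's exact evaluation setup.

```python
def classify_tweet(tweet, positive_words, negative_words):
    """Label a tweet by counting Arabizi lexicon hits; ties fall back to 'neutral'."""
    tokens = tweet.lower().split()
    pos = sum(t in positive_words for t in tokens)
    neg = sum(t in negative_words for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# positive_words and negative_words would be loaded from the SenZi lexicon files, e.g. as sets of strings:
# label = classify_tweet(some_arabizi_tweet, positive_words, negative_words)
```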

    Sustainable Organic Corn Production with the Use of Flame Weeding as the Most Sustainable Economical Solution

    Flame weeding is an alternative method of weed control. Essentially, it is a supplement to other physical and mechanical processes used in organic production. Weed control costs have a large share of the total cost of crop production. This study aimed to investigate the cost-effectiveness of hand weed hoeing accompanied by inter-row cultivation and of flame weeding applied in organic maize production using two different machines, in order to determine the most economical solution. For this purpose, a prototype flame weeder and commercial flame-weeding machinery were used. Designed primarily for smaller fields, the prototype flame weeder was equipped with a cultivator and a 70 kg propane bottle. The commercial Red Dragon flame weeder, fitted with an 800 kg propane tank and featuring no cultivation implements, is designed for larger areas. The analysis has shown that hand hoeing produced a higher yield (8.3 t/ha in total), but it contributed significantly to the production costs. The costs per hectare decreased when the prototype flame weeder and the commercial Red Dragon flame weeder were used compared to hand hoeing. More beneficial economic impacts were recorded when the prototype flame weeder was used (489.39 euro/ha) than when the Red Dragon flame weeder was applied (456.47 euro/ha). In terms of weeding effect alone, the efficacy of flame weeding is somewhat limited and could be enhanced by additional hand hoeing. However, the analysis has shown that, in this case, investments in additional hand hoeing are not economically justified because the operating costs incurred (168 euro/ha) were not covered by the yield increase of 500 kg/ha, i.e., a surplus revenue of 100 euro/ha. Moreover, the economic impacts of flame weeding would be considerably more significant in larger fields.
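
    Using the figures quoted above, the break-even check for the additional hand hoeing can be reproduced directly; note that the implied corn price (100 euro per 500 kg, i.e. 0.20 euro/kg) is an inference from the abstract, not a stated parameter.

```python
# Figures taken from the abstract (euro per hectare unless noted otherwise).
extra_hoeing_cost = 168.0   # operating cost of the additional hand hoeing
yield_gain_kg = 500.0       # extra yield attributed to that hoeing, kg/ha
surplus_revenue = 100.0     # revenue from the extra yield

net_effect = surplus_revenue - extra_hoeing_cost
print(f"Net effect of extra hand hoeing: {net_effect:+.2f} euro/ha")       # -68.00 euro/ha
print(f"Implied corn price: {surplus_revenue / yield_gain_kg:.2f} euro/kg")  # 0.20 euro/kg
```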

    Findings from the Hackathon on Understanding Euroscepticism Through the Lens of Textual Data

    We present an overview and the results of a shared-task hackathon that took place as part of a research seminar bringing together a variety of experts and young researchers from the fields of political science, natural language processing and computational social science. The task looked at ways to develop novel methods for political text scaling to better quantify political party positions on European integration and Euroscepticism from transcripts of speeches from three legislative terms of the European Parliament.

    XHate-999: Analyzing and Detecting Abusive Language Across Domains and Languages

    We present XHate-999, a multi-domain and multilingual evaluation data set for abusive language detection. By aligning test instances across six typologically diverse languages, XHate-999 for the first time allows for disentanglement of the domain-transfer and language-transfer effects in abusive language detection. We conduct a series of domain- and language-transfer experiments with state-of-the-art monolingual and multilingual transformer models, setting strong baseline results and profiling XHate-999 as a comprehensive evaluation resource for abusive language detection. Finally, we show that domain and language adaptation, via intermediate masked language modeling on abusive corpora in the target language, can lead to substantially improved abusive language detection in the target language in zero-shot transfer setups.
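
    The adaptation step mentioned here (intermediate masked language modeling on abusive-domain text in the target language before zero-shot transfer) can be sketched with the Hugging Face transformers API. The model name, the `target_lang_dataset` variable, and the hyperparameters below are placeholders, not the paper's configuration.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "bert-base-multilingual-cased"   # placeholder multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# `target_lang_dataset` is assumed to be a tokenized dataset of unlabeled abusive-domain
# text in the target language (e.g. built with the `datasets` library).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-adapted", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=target_lang_dataset,
    data_collator=collator,
)
trainer.train()

# The adapted encoder would then be fine-tuned for abusive-language classification on
# source-language training data and evaluated zero-shot on the target language.
```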

    Towards Instance-Level Parser Selection for Cross-Lingual Transfer of Dependency Parsers

    Current methods of cross-lingual parser transfer focus on predicting the best parser for a low-resource target language globally, that is, “at treebank level”. In this work, we propose and argue for a novel cross-lingual transfer paradigm: instance-level parser selection (ILPS), and present a proof-of-concept study focused on instance-level selection in the framework of delexicalized parser transfer. Our work is motivated by an empirical observation that different source parsers are the best choice for different Universal POS-sequences (i.e., UPOS sentences) in the target language. We then propose to predict the best parser at the instance level. To this end, we train a supervised regression model, based on the Transformer architecture, to predict parser accuracies for individual POS-sequences. We compare ILPS against two strong single-best parser selection baselines (SBPS): (1) a model that compares POS n-gram distributions between the source and target languages (KL) and (2) a model that selects the source based on the similarity between manually created language vectors encoding syntactic properties of languages (L2V). The results from our extensive evaluation, coupling 42 source parsers and 20 diverse low-resource test languages, show that ILPS outperforms KL and L2V on 13/20 and 14/20 test languages, respectively. Further, we show that by predicting the best parser “at treebank level” (SBPS), using the aggregation of predictions from our instance-level model, we outperform the same baselines on 17/20 and 16/20 test languages.
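
    The selection logic itself reduces to an argmax over predicted accuracies, per sentence for ILPS and after aggregation for the treebank-level variant. Below is a minimal sketch of that step, assuming the accuracy-prediction regressor (and its output matrix) is given; it is not the paper's full pipeline.

```python
import numpy as np

def select_parsers(predicted_acc):
    """predicted_acc: (n_sentences, n_source_parsers) matrix of regressor outputs.
    Returns the per-sentence parser choice (ILPS) and the single treebank-level
    choice obtained by aggregating the instance-level predictions (SBPS)."""
    predicted_acc = np.asarray(predicted_acc, dtype=float)
    instance_choice = predicted_acc.argmax(axis=1)          # one source parser per sentence
    treebank_choice = predicted_acc.mean(axis=0).argmax()    # one parser for the whole treebank
    return instance_choice, treebank_choice

# Hypothetical usage with, e.g., 20 target sentences scored against 42 source parsers:
# ilps_choices, sbps_choice = select_parsers(regressor_scores)
```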