33 research outputs found

    A Sentiment Analysis Dataset for Code-Mixed Malayalam-English

    There is an increasing demand for sentiment analysis of social media text, which is mostly code-mixed. Systems trained on monolingual data fail on code-mixed data because of the complexity of mixing at different levels of the text, yet very few resources are available from which to build models specific to code-mixed data. Although much research in multilingual and cross-lingual sentiment analysis has used semi-supervised or unsupervised methods, supervised methods still perform better. Only a few datasets exist for popular language pairs such as English-Spanish, English-Hindi, and English-Chinese, and no resources are available for Malayalam-English code-mixed data. This paper presents a new gold standard corpus for sentiment analysis of code-mixed Malayalam-English text, annotated by voluntary annotators. The corpus obtains a Krippendorff's alpha above 0.8. We use this new corpus to provide a benchmark for sentiment analysis of Malayalam-English code-mixed text.
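    The abstract reports inter-annotator agreement as a Krippendorff's alpha above 0.8. For readers unfamiliar with the metric, the sketch below computes alpha for nominal labels from scratch; it illustrates the statistic only, not the authors' annotation pipeline, and the toy items, annotator ids, and labels are invented.

```python
from collections import Counter, defaultdict

def krippendorff_alpha_nominal(annotations):
    """Krippendorff's alpha for nominal labels.

    annotations: list of dicts, one per item, mapping annotator id -> label.
    Items with fewer than two labels contribute no pairable values and are skipped.
    """
    coincidences = defaultdict(float)   # (label_c, label_k) -> weighted pair count
    label_totals = Counter()            # label -> marginal count n_c
    n = 0.0                             # total number of pairable values

    for item in annotations:
        labels = list(item.values())
        m = len(labels)
        if m < 2:
            continue
        n += m
        for c in labels:
            label_totals[c] += 1
        # every ordered pair of values from different annotators gets weight 1/(m-1)
        for i, c in enumerate(labels):
            for j, k in enumerate(labels):
                if i != j:
                    coincidences[(c, k)] += 1.0 / (m - 1)

    if n < 2:
        return None

    # observed and expected disagreement under the nominal metric (1 if labels differ)
    d_o = sum(v for (c, k), v in coincidences.items() if c != k) / n
    d_e = (n * n - sum(t * t for t in label_totals.values())) / (n * (n - 1))
    return 1.0 - d_o / d_e if d_e > 0 else 1.0

# Toy usage: three annotators labelling four comments with sentiment classes.
data = [
    {"a1": "pos", "a2": "pos", "a3": "pos"},
    {"a1": "neg", "a2": "neg", "a3": "pos"},
    {"a1": "neu", "a2": "neu"},            # one annotator skipped this item
    {"a1": "pos", "a2": "pos", "a3": "pos"},
]
print(krippendorff_alpha_nominal(data))    # ~0.69 for this toy data
```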

    A Graph Auto-encoder Model of Derivational Morphology

    There has been little work on modeling the morphological well-formedness (MWF) of derivatives, a problem judged to be complex and difficult in linguistics (Bauer, 2019). We present a graph auto-encoder that learns embeddings capturing information about the compatibility of affixes and stems in derivation. The auto-encoder models MWF in English surprisingly well by combining syntactic and semantic information with associative information from the mental lexicon.
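    The paper's specific architecture and features are not reproduced here, but the general technique it builds on, a graph auto-encoder with a graph-convolutional encoder and an inner-product decoder in the Kipf & Welling style, can be sketched as below. The toy stem-affix graph, one-hot node features, and hyper-parameters are all illustrative assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

class GraphAutoEncoder(nn.Module):
    """Two-layer GCN encoder followed by an inner-product decoder that
    reconstructs edge probabilities from node embeddings."""

    def __init__(self, num_feats, hidden, latent):
        super().__init__()
        self.w1 = nn.Linear(num_feats, hidden, bias=False)
        self.w2 = nn.Linear(hidden, latent, bias=False)

    @staticmethod
    def normalize(adj):
        # symmetric normalization of A + I
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        return d_inv_sqrt @ a_hat @ d_inv_sqrt

    def encode(self, x, adj_norm):
        h = torch.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)           # node embeddings Z

    def decode(self, z):
        return torch.sigmoid(z @ z.t())        # reconstructed edge probabilities

# Toy bipartite graph: nodes 0-3 are stems, nodes 4-5 are affixes;
# an edge means the affix attaches to the stem in an attested derivative.
adj = torch.zeros(6, 6)
for stem, affix in [(0, 4), (1, 4), (2, 5), (3, 5), (0, 5)]:
    adj[stem, affix] = adj[affix, stem] = 1.0

x = torch.eye(6)                               # one-hot node features as a stand-in
model = GraphAutoEncoder(num_feats=6, hidden=16, latent=8)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
adj_norm = model.normalize(adj)

for step in range(200):
    opt.zero_grad()
    z = model.encode(x, adj_norm)
    loss = nn.functional.binary_cross_entropy(model.decode(z), adj)
    loss.backward()
    opt.step()

# Reconstructed score for an unattested stem-affix pair: under this kind of
# objective, a high score can be read as a compatibility signal.
print(model.decode(model.encode(x, adj_norm))[1, 5].item())
```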

    Tune your brown clustering, please

    Brown clustering, an unsupervised hierarchical clustering technique based on n-gram mutual information, has proven useful in many NLP applications. However, most uses of Brown clustering employ the same default configuration, and the appropriateness of this configuration has gone largely unexplored. Accordingly, we present a theoretical model of Brown clustering utility, intended to help practitioners with hyper-parameter tuning. This model is then evaluated empirically on two sequence labelling tasks over two text types. We explore the interaction between input corpus size, the chosen number of classes, and the quality of the resulting clusters, which has implications for any approach using Brown clustering. In every scenario we examine, the results reveal that the values most commonly used for the clustering are sub-optimal.
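    The quantity Brown clustering greedily maximizes is the average mutual information (AMI) of the induced class bigram distribution, and the trade-off the abstract describes between corpus size and the chosen number of classes plays out directly in this objective. Below is a minimal sketch of computing class-bigram AMI for two candidate clusterings; the toy corpus and class assignments are invented for illustration and are not the paper's evaluation.

```python
import math
from collections import Counter

def class_bigram_mutual_information(tokens, word_to_class):
    """Average mutual information of the class bigram distribution, i.e. the
    objective Brown clustering greedily maximizes when merging classes."""
    class_seq = [word_to_class[w] for w in tokens]
    bigrams = Counter(zip(class_seq, class_seq[1:]))
    total = sum(bigrams.values())
    left, right = Counter(), Counter()
    for (c1, c2), n in bigrams.items():
        left[c1] += n
        right[c2] += n
    ami = 0.0
    for (c1, c2), n in bigrams.items():
        p_joint = n / total
        ami += p_joint * math.log2(p_joint / ((left[c1] / total) * (right[c2] / total)))
    return ami

# Toy corpus and two candidate clusterings with different numbers of classes.
corpus = "the cat sat on the mat the dog sat on the rug".split()
coarse = {w: ("FUNC" if w in {"the", "on"} else "CONTENT") for w in set(corpus)}
fine = {"the": "DET", "on": "PREP", "cat": "N", "dog": "N",
        "mat": "N", "rug": "N", "sat": "V"}
print(class_bigram_mutual_information(corpus, coarse))
print(class_bigram_mutual_information(corpus, fine))
```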

    Chomskyan (R)evolutions

    It is not unusual for contemporary linguists to claim that “Modern Linguistics began in 1957” (with the publication of Noam Chomsky’s Syntactic Structures). Some of the essays in Chomskyan (R)evolutions examine the sources, the nature, and the extent of the theoretical changes Chomsky introduced in the 1950s. Other contributions explore key concepts and disciplinary alliances that have evolved considerably over the past sixty years, such as the meanings given to “Universal Grammar”, the relationship of Chomskyan linguistics to other disciplines (Cognitive Science, Psychology, Evolutionary Biology), and the interactions between mainstream Chomskyan linguistics and other linguistic theories active in the late 20th century: Functionalism, Generative Semantics, and Relational Grammar. This broad understanding of the recent history of linguistics points the way towards new directions and methods that linguistics can pursue in the future.

    Proceedings of the 42nd Australian Linguistic Society Conference - 2011

    ANU College of Arts & Social Sciences, School of Language Studies; ANU College of Asia and the Pacific, School of Culture, History and Language.