    Conflict, Claim and Contradiction in the New Indigenous State of Bolivia

    Recent conflict between indigenous people and a self-styled indigenous state in Bolivia has brought to the fore some of the paradoxes and contradictions within the concept of indigeneity itself. The contemporary politics of state-sponsored indigeneity in Bolivia has as much capacity to create new inequalities as it does to address old ones, and there is a conceptual deficit in understanding contemporary indigenous rights claims, particularly as they relate to the state. I reject Peter Geschiere's (2009) suggestion that one should distinguish between 'autochthony' and 'indigeneity', but am inspired by these arguments to suggest that a critical distinction must be made between the kinds of claims different indigenous people make against the state. Of interest here are the consequences of indigeneity being transformed from a language of resistance into a language of governance. I propose a conceptual distinction between an inclusive national indigeneity for the majority, which seeks to co-opt the state by accessing the language of governance, and a minority concept of indigeneity, which needs protection from the state and continues to use indigeneity as a language of resistance. Only by looking at the kinds of claims people make through the rhetoric of indigeneity can we make sense of the current indigenous conflict in Bolivia and elsewhere.

    ParsBERT: Transformer-based Model for Persian Language Understanding

    The surge of pre-trained language models has ushered in a new era in the field of Natural Language Processing (NLP) by allowing us to build powerful language models. Among these, Transformer-based models such as BERT have become increasingly popular due to their state-of-the-art performance. However, these models are usually focused on English, leaving other languages to multilingual models with limited resources. This paper proposes a monolingual BERT for the Persian language (ParsBERT), which demonstrates state-of-the-art performance compared to other architectures and to multilingual models. Also, since the amount of data available for NLP tasks in Persian is very restricted, a massive dataset is composed for different NLP tasks as well as for pre-training the model. ParsBERT obtains higher scores on all datasets, both existing and newly composed ones, and improves the state of the art by outperforming both multilingual BERT and other prior works in Sentiment Analysis, Text Classification and Named Entity Recognition tasks.
    Comment: 10 pages, 5 figures, 7 tables; table 7 corrected and some refs related to table
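
    A minimal usage sketch, assuming the released model is available through the Hugging Face `transformers` library; the hub identifier below is an assumption, as the abstract itself does not name one:

```python
# Minimal sketch of loading a monolingual Persian BERT with Hugging Face
# transformers. The model identifier is an assumption (the abstract does not
# specify a hub name); substitute the actual released checkpoint if it differs.
from transformers import AutoTokenizer, AutoModel

model_name = "HooshvareLab/bert-base-parsbert-uncased"  # assumed hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a Persian sentence and obtain contextual token embeddings.
inputs = tokenizer("ما مدل زبانی فارسی را آزمایش می‌کنیم", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```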

    An implementation of Apertium based Assamese morphological analyzer

    Morphological analysis is an important branch of linguistics for any natural language processing technology. Morphology studies the structure and formation of the words of a language. In the current scenario of NLP research, morphological analysis techniques are becoming ever more popular. Before any language can be processed, the morphology of its words must first be analyzed. The Assamese language has a very complex morphological structure. In our work, we have used Apertium-based finite-state transducers to develop a morphological analyzer for Assamese over a limited domain, achieving 72.7% accuracy.
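
    A minimal sketch of querying such an analyzer from Python. Apertium analyzers are compiled with lttoolbox and looked up with `lt-proc`; the compiled binary's file name below is hypothetical:

```python
# Sketch of driving a compiled Apertium (lttoolbox) morphological analyzer.
# `asm.automorf.bin` is a hypothetical name for the compiled Assamese
# analyzer; lt-proc is the standard lttoolbox lookup tool and reads stdin.
import subprocess

def analyze(surface_form: str, analyzer_bin: str = "asm.automorf.bin") -> str:
    """Return analyses in the usual lttoolbox ^surface/lemma<tags>$ stream format."""
    result = subprocess.run(
        ["lt-proc", analyzer_bin],
        input=surface_form + "\n",
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# Example; the output depends entirely on the dictionary entries compiled
# into the analyzer, e.g. '^মানুহ/মানুহ<n><sg>$' for a known noun.
print(analyze("মানুহ"))
```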

    Modular Composition of Language Features through Extensions of Semantic Language Models

    Today, programming or specification languages are often extended in order to customize them for a particular application domain or to refine the language definition. A semantic model is often at the centre of such an extension. We will present a framework for linking basic and extended models. The example we are going to use is the RSL concurrency model. The RAISE specification language RSL is a formal wide-spectrum specification language which integrates different features, such as state-basedness, concurrency and modules. The concurrency features of RSL are based on a refinement of a classical denotational model for process algebras. A modification was necessary to integrate state-based features into the basic model in order to meet requirements in the design of RSL. We will investigate this integration, formalising the relationship between the basic model and the adapted version in a rigorous way. The result will be a modular composition of the basic process model and new language features, such as state-based features or input/output. We will show general mechanisms for integrating new features into a language by extending language models in a structured, modular way. In particular, we will concentrate on the preservation of properties of the basic model in these extensions.
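
    The extension pattern described here can be illustrated with a small sketch (not RSL itself, and only a loose analogy to its denotational model): a basic model denotes processes as trace sets, the extended model threads a state through, and basic processes embed into the extension with their trace behaviour preserved:

```python
# Illustrative sketch (an assumption-laden analogy, not the RSL model): a
# basic denotational model of processes as trace sets, and a state-based
# extension into which basic processes embed without changing their traces.
from itertools import product

def seq(p, q):
    """Sequential composition in the basic model: concatenate traces."""
    return frozenset(a + b for a, b in product(p, q))

def lift(p):
    """Embed a basic process into the state-based model; the state is passed
    through untouched, so the trace behaviour of p is preserved exactly."""
    return lambda s: frozenset((t, s) for t in p)

def seq_ext(p, q):
    """Sequential composition in the extended model: thread the state through."""
    return lambda s: frozenset(
        (t1 + t2, s2) for t1, s1 in p(s) for t2, s2 in q(s1)
    )

# The embedding is compositional: lifting a composed basic process agrees
# with composing the lifted processes, one instance of property preservation.
p = frozenset({("a",)})
q = frozenset({("b",), ("c",)})
s0 = 0
assert lift(seq(p, q))(s0) == seq_ext(lift(p), lift(q))(s0)
```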

    Neural Natural Language Inference Models Enhanced with External Knowledge

    Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models, such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance. Although relatively large annotated datasets exist, can machines learn all the knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge, and how should NLI models be built to leverage it? In this paper, we enrich state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models and achieve state-of-the-art performance on the SNLI and MultiNLI datasets.
    Comment: Accepted by ACL 201
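
    The general idea can be sketched as follows, with external lexical relations (e.g. WordNet synonymy/antonymy indicators) biasing the soft-alignment scores between premise and hypothesis; this is a simplified illustration, not the authors' exact architecture:

```python
# Simplified numpy sketch of knowledge-enriched attention: token-pair
# relation features from an external resource are added to the content-based
# alignment scores. The weighting and feature encoding here are placeholders,
# not the paper's actual parameterization.
import numpy as np

def knowledge_enriched_attention(premise, hypothesis, relations, w=1.0):
    """premise: (m, d), hypothesis: (n, d) token encodings;
    relations: (m, n, k) external-knowledge features per token pair."""
    scores = premise @ hypothesis.T              # (m, n) content-based scores
    scores = scores + w * relations.sum(-1)      # bias by knowledge features
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ hypothesis                  # (m, d) aligned representation

m, n, d, k = 4, 5, 8, 3
rng = np.random.default_rng(0)
aligned = knowledge_enriched_attention(
    rng.normal(size=(m, d)), rng.normal(size=(n, d)),
    rng.random(size=(m, n, k)))
print(aligned.shape)  # (4, 8)
```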