
    Joint Learning of Pre-Trained and Random Units for Domain Adaptation in Part-of-Speech Tagging

    Fine-tuning neural networks is widely used to transfer valuable knowledge from high-resource to low-resource domains. In a standard fine-tuning scheme, source and target problems are trained using the same architecture. Although capable of adapting to new domains, pre-trained units struggle to learn uncommon target-specific patterns. In this paper, we propose to augment the target network with normalised, weighted and randomly initialised units that enable better adaptation while preserving the valuable source knowledge. Our experiments on POS tagging of social media texts (Tweets domain) demonstrate that our method achieves state-of-the-art performance on three commonly used datasets.
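    A minimal sketch of the general idea, in PyTorch: a target-network layer that keeps the pre-trained units and places normalised, weighted, randomly initialised units alongside them. The class name `AugmentedLayer`, the gating parameter `alpha`, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugmentedLayer(nn.Module):
    """Pre-trained units plus normalised, weighted, randomly initialised units (illustrative)."""

    def __init__(self, in_dim: int, pretrained_linear: nn.Linear, n_random_units: int):
        super().__init__()
        self.pretrained = pretrained_linear                 # units copied from the source model
        self.random = nn.Linear(in_dim, n_random_units)     # fresh units for target-specific patterns
        # Learnable per-unit gates; initialised low so the random units start with a modest contribution.
        self.alpha = nn.Parameter(torch.full((n_random_units,), -2.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pre = self.pretrained(x)
        # Normalise the random units' activations, then weight them, so they do not
        # overwhelm the pre-trained representation early in training.
        rnd = F.layer_norm(self.random(x), (self.random.out_features,))
        rnd = torch.sigmoid(self.alpha) * rnd
        return torch.cat([pre, rnd], dim=-1)                # both groups feed the next layer

# Usage: wrap a layer taken from the source (pre-trained) model.
source_layer = nn.Linear(256, 128)                          # stands in for a pre-trained layer
layer = AugmentedLayer(256, source_layer, n_random_units=64)
out = layer(torch.randn(8, 256))                            # -> shape (8, 192)
```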

    A comparative study of different features for efficient automatic hate speech detection

    Commonly, Hate Speech (HS) is defined as any communication that disparages a person or a group on the basis of some characteristic (race, colour, ethnicity, gender, sexual orientation, nationality, etc.) (Nockeby, 2000). Due to the massive amount of user-generated content on social networks (around 500 million tweets per day), hate speech is continuously increasing on the web. Recent initiatives, such as SemEval 2019 shared task 5, HatEval 2019 (Basile et al., 2019), contribute to the development of automatic hate speech detection (HSD) systems by making annotated hateful corpora available. We focus our research on the automatic classification of hateful tweets, which is the first sub-task of HatEval 2019. The best HatEval 2019 HSD system was FERMI (Indurthi et al., 2019), with a 65.1% macro-F1 score on the test corpus. This system used sentence embeddings from the Universal Sentence Encoder (USE) (Cer et al., 2018) as input to a Support Vector Machine classifier.

    In this article, we study the impact of different features on an HSD system. We use a deep neural network (DNN) based classifier with USE. We investigate word-level features such as a lexicon of hateful words (HFW), part of speech (POS), uppercase letters (UP), punctuation marks (PUNCT), the ratio of the number of times a word appears in hateful tweets to the total number of times that word appears (RatioHW), and emojis (EMO). We believe these features are relevant because they carry feelings: for instance, case (UP) and punctuation (PUNCT) can carry the intonation of a tweet and can be used to express hateful content. For HFW features, we tag each word of a tweet as hateful or not using the Hatebase lexicon (Hatebase.org) and associate a binary value with each word. For POS features, we use twpipe (Liu et al., 2018) to tag the words, and this information is encoded as a one-hot vector. For emojis, we generate an embedding vector using the emoji2vec tools (Eisner et al., 2016). The input of our neural network consists of the USE vector and our additional features, and we use a convolutional neural network (CNN) as a binary classifier.

    We performed experiments on the HatEval 2019 corpus to study the influence of each proposed feature. Our baseline system, without the proposed features, achieves a 65.7% macro-F1 score on the test corpus. Surprisingly, HFW degrades system performance and decreases the macro-F1 by 14 points compared to the baseline; this may be because some words are hateful only in a particular context. UP, RatioHW and PUNCT slightly degrade the baseline system. The POS features do not change the baseline result and are therefore probably not correlated with hate speech. The best result is obtained using EMO features, with a macro-F1 of 66.0%. Emojis are widely used to transmit emotions, and in our system they are modelled by a specific embedding vector. USE does not take emojis into account, so EMO features give USE additional information about the hateful content of tweets.
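    A minimal sketch, in PyTorch, of the kind of classifier the abstract describes: a Universal Sentence Encoder vector concatenated with pooled word-level features and passed to a small convolutional binary classifier. The class name `HateSpeechCNN`, the layer sizes, the pooling choice, and the way the token-level features are combined are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class HateSpeechCNN(nn.Module):
    """Illustrative sketch: USE sentence vector + word-level features -> hateful / not hateful."""

    def __init__(self, use_dim=512, token_feat_dim=60, n_filters=64, kernel_size=3):
        super().__init__()
        # 1-D convolution over the per-token features (POS one-hot, HFW bit,
        # UP/PUNCT flags, emoji2vec embedding, ...).
        self.conv = nn.Conv1d(token_feat_dim, n_filters, kernel_size, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)
        # The final layers see the USE sentence vector plus the pooled token features.
        self.classifier = nn.Sequential(
            nn.Linear(use_dim + n_filters, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, use_vec, token_feats):
        # use_vec:     (batch, 512)                      Universal Sentence Encoder embedding
        # token_feats: (batch, seq_len, token_feat_dim)  word-level features per token
        x = self.conv(token_feats.transpose(1, 2))       # (batch, n_filters, seq_len)
        x = self.pool(x).squeeze(-1)                     # (batch, n_filters)
        logits = self.classifier(torch.cat([use_vec, x], dim=-1))
        return torch.sigmoid(logits)                     # probability of a hateful tweet

# Usage with dummy tensors:
model = HateSpeechCNN()
use_vec = torch.randn(4, 512)          # USE embeddings for 4 tweets
token_feats = torch.randn(4, 30, 60)   # 30 tokens, 60-dim word-level features each
probs = model(use_vec, token_feats)    # -> shape (4, 1)
```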

    Treebanking user-generated content: A proposal for a unified representation in universal dependencies

    The paper presents a discussion of the main linguistic phenomena of user-generated texts found on the web and social media, and proposes a set of annotation guidelines for their treatment within the Universal Dependencies (UD) framework. Given, on the one hand, the increasing number of treebanks featuring user-generated content and, on the other, its somewhat inconsistent treatment across these resources, the aim of this paper is twofold: (1) to provide a short, though comprehensive, overview of such treebanks, based on the available literature, along with their main features and a comparative analysis of their annotation criteria, and (2) to propose a set of tentative UD-based annotation guidelines, to promote consistent treatment of the particular phenomena found in these types of texts. The main goal of this paper is to provide a common framework for teams interested in developing similar resources in UD, thus enabling cross-linguistic consistency, a principle that has always been in the spirit of UD.

    From general language understanding to noisy text comprehension

    Obtaining meaning-rich representations of social media inputs, such as Tweets (unstructured and noisy text), from general-purpose pre-trained language models has become challenging, as these inputs typically deviate from mainstream English usage. The proposed research establishes effective methods for improving the comprehension of noisy texts. For this, we propose a new generic methodology to derive a diverse set of sentence vectors by combining and extracting various linguistic characteristics from the latent representations of multi-layer, pre-trained language models. Further, we establish how BERT, a state-of-the-art pre-trained language model, comprehends the linguistic attributes of Tweets, in order to identify appropriate sentence representations. Five new probing tasks are developed for Tweets, which can serve as benchmark probing tasks to study noisy text comprehension. Experiments are carried out on classification accuracy by deriving sentence vectors from GloVe-based pre-trained models and Sentence-BERT, and by using different hidden layers of the BERT model. We show that the initial and middle layers of BERT capture the key linguistic characteristics of noisy texts better than its later layers. With complex predictive models, we further show that sentence vector length is less important for capturing linguistic information, and that the proposed sentence vectors for noisy texts outperform existing state-of-the-art sentence vectors.
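    A hedged sketch of how such layer-wise sentence vectors can be obtained with the Hugging Face transformers library: take the hidden states of a chosen BERT layer and mean-pool them over the tokens. The model name `bert-base-uncased`, the pooling choice, and the example layers are assumptions for illustration, not necessarily the setup used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

def sentence_vector(text: str, layer: int) -> torch.Tensor:
    """Mean-pool the token embeddings of a given hidden layer into one sentence vector."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # hidden_states[0] is the embedding layer; 1..12 are the Transformer layers of BERT-base.
    hidden = out.hidden_states[layer]               # (1, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)      # ignore padding tokens in the average
    return (hidden * mask).sum(1) / mask.sum(1)     # (1, 768)

# e.g. compare a middle layer with the last layer on a noisy tweet
tweet = "omg thiss is sooo gr8 lol"
v_mid = sentence_vector(tweet, layer=6)
v_last = sentence_vector(tweet, layer=12)
```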