505 research outputs found

    On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis

    Text preprocessing is often the first step in the pipeline of a Natural Language Processing (NLP) system, with potential impact on its final performance. Despite its importance, text preprocessing has not received much attention in the deep learning literature. In this paper we investigate the impact of simple text preprocessing decisions (particularly tokenizing, lemmatizing, lowercasing and multiword grouping) on the performance of a standard neural text classifier. We perform an extensive evaluation on standard benchmarks from text categorization and sentiment analysis. While our experiments show that a simple tokenization of input text is generally adequate, they also highlight significant degrees of variability across preprocessing techniques. This reveals the importance of paying attention to this usually overlooked step in the pipeline, particularly when comparing different models. Finally, our evaluation provides insights into the best preprocessing practices for training word embeddings. Comment: Blackbox EMNLP 2018. 7 pages
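    The four preprocessing decisions the abstract names can be illustrated with a toy pipeline. This is only a sketch: the lemma table and multiword phrase list below are invented stand-ins for what a real tool (e.g. NLTK or spaCy) would provide.

```python
import re

# Toy versions of the four preprocessing decisions studied in the paper:
# tokenizing, lowercasing, lemmatizing, and multiword grouping.
# LEMMAS and PHRASES are hypothetical stand-ins for real lexical resources.
LEMMAS = {"ran": "run", "studies": "study", "better": "good"}
PHRASES = [("new", "york"), ("machine", "learning")]

def tokenize(text):
    """Split on word characters -- the 'simple tokenization' baseline."""
    return re.findall(r"\w+", text)

def lowercase(tokens):
    return [t.lower() for t in tokens]

def lemmatize(tokens):
    return [LEMMAS.get(t, t) for t in tokens]

def group_multiwords(tokens):
    """Merge known bigrams into single tokens, e.g. 'new_york'."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in PHRASES:
            out.append(tokens[i] + "_" + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = lowercase(tokenize("She ran Machine Learning studies in New York"))
print(group_multiwords(lemmatize(tokens)))
# ['she', 'run', 'machine_learning', 'study', 'in', 'new_york']
```

    Each stage can be toggled independently, which is exactly the kind of ablation the paper's evaluation performs.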

    Sarcasm Detection on Text for Political Domain — An Explainable Approach

    In the era of social media, a large volume of data is generated by applications such as the Industrial Internet of Things (IoT), Facebook, Twitter, and individual usage. Artificial intelligence and big data tools play an important role in devising mechanisms for handling this vast volume of data and forming useful information from its unstructured form. When data is publicly available on the internet and social media, it is imperative to treat it carefully so as to respect the sentiments of individuals. In this paper, the authors attempt to solve three problems for treating the data using AI and data science tools, weighted statistical methods, and the explainability of sarcastic comments. The first objective of this research study is sarcasm detection; the next is to apply it to a domain-specific political Reddit dataset; and the last is to predict sarcastic words using counterfactual explainability. The texts are extracted from the self-annotated Reddit corpus dataset containing 533 million comments written in the English language, of which 1.3 million comments are sarcastic. The sarcasm detection model uses a weighted-average approach and deep learning models to extract information and provide the required output in terms of content classification. Identifying sarcasm in a sentence is very challenging when the sentence has content that flips the polarity of positive sentiment into negative sentiment. This cumbersome task can be achieved with artificial intelligence and machine learning algorithms that train the machine and assist in classifying the required content from the sentences to keep social media posts acceptable to society. There should be a mechanism to determine the extent to which the model's prediction can be relied upon; therefore, an explanation of the prediction is essential. We studied the methods and developed a model for detecting sarcasm and explaining the prediction.
    Therefore, the sarcasm detection model with explainability assists in identifying sarcasm from a Reddit post and its sentiment score to classify the given text correctly. An F1-score of 75.75% for sarcasm detection and 80% for the explainability model demonstrates the robustness of the proposed model.
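    The weighted-average step the abstract mentions can be sketched as follows. This is a hypothetical illustration, not the authors' exact method: the base-model probabilities, weights, and decision threshold are all invented for the example.

```python
# Hypothetical sketch of a weighted-average ensemble for sarcasm
# detection: each base model emits a sarcasm probability, and the
# final score is their weighted mean.
def weighted_sarcasm_score(probs, weights):
    """probs, weights: parallel lists; returns the weighted mean."""
    assert len(probs) == len(weights) and sum(weights) > 0
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

# e.g. outputs from three deep learning base models (illustrative values)
probs = [0.80, 0.60, 0.90]
weights = [0.2, 0.3, 0.5]
score = weighted_sarcasm_score(probs, weights)
label = "sarcastic" if score >= 0.5 else "not sarcastic"
print(round(score, 2), label)  # 0.79 sarcastic
```

    A counterfactual explainer would then perturb individual words of the input and report which removals flip this score across the threshold.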

    From text saliency to linguistic objects: learning linguistic interpretable markers with a multi-channels convolutional architecture

    A lot of effort is currently made to provide methods to analyze and understand the impressive performance of deep neural networks on tasks such as image or text classification. These methods are mainly based on visualizing the important input features that the network takes into account to build a decision. However, these techniques (e.g., LIME, SHAP, Grad-CAM, or TDS) require extra effort to interpret the visualization with respect to expert knowledge. In this paper, we propose a novel approach to inspect the hidden layers of a fitted CNN in order to extract interpretable linguistic objects from texts by exploiting the classification process. In particular, we detail a weighted extension of the Text Deconvolution Saliency (wTDS) measure which can be used to highlight the relevant features used by the CNN to perform the classification task. We empirically demonstrate the efficiency of our approach on corpora from two different languages: English and French. On all datasets, wTDS automatically encodes complex linguistic objects based on co-occurrences and possibly on grammatical and syntactic analysis. Comment: 7 pages, 22 figures
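    The general idea of a class-weighted saliency over CNN feature maps can be sketched in a few lines. This is not the authors' exact wTDS formula, only a minimal stand-in: each token position is scored by the dot product between its convolutional feature vector and the dense-layer weights of the predicted class, with random arrays standing in for a trained network.

```python
import numpy as np

# Minimal class-weighted saliency sketch (shapes and values illustrative):
# score each token by how strongly its CNN feature vector aligns with
# the dense-layer weights of the predicted class.
rng = np.random.default_rng(0)
n_tokens, n_filters, n_classes = 6, 4, 2

feature_maps = rng.random((n_tokens, n_filters))    # activations per token
class_weights = rng.random((n_filters, n_classes))  # dense-layer weights

predicted = 1  # index of the class the network chose
saliency = feature_maps @ class_weights[:, predicted]  # one score per token

top_token = int(np.argmax(saliency))  # most salient position for this class
print(saliency.shape, top_token)
```

    Highlighting the top-scoring spans in the input text is what surfaces the "linguistic objects" the paper extracts.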

    Achieving Hate Speech Detection in a Low Resource Setting

    Online social networks provide people with convenient platforms to communicate and share life moments. However, because of the anonymity these social media platforms afford, cases of online hate speech are increasing. Hate speech is defined by the Cambridge Dictionary as “public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation”. Online hate speech has caused serious negative effects for legitimate users, including mental or emotional stress, reputational damage, and fear for one’s safety. To protect legitimate online users, automatic hate speech detection techniques are deployed on various social media. However, most existing hate speech detection models require a large amount of labeled data for training. In this thesis, we focus on achieving hate speech detection without using many labeled samples. In particular, we focus on three scenarios of hate speech detection and propose three corresponding approaches. (i) When we only have limited labeled data for one social media platform, we fine-tune a pre-trained language model to conduct hate speech detection on the specific platform. (ii) When we have data from several social media platforms, each of which only has a small amount of labeled data, we develop a multitask learning model to detect hate speech on several platforms in parallel. (iii) When we aim to conduct hate speech detection on a new social media platform, where we do not have any labeled data for this platform, we propose to use domain adaptation to transfer knowledge from other related social media platforms to conduct hate speech detection on the new platform. Empirical studies show that our proposed approaches can achieve good performance on hate speech detection in a low resource setting.
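    Scenario (iii) — no labels at all on the target platform — can be illustrated with a deliberately simple stand-in. The thesis itself uses pre-trained language models and domain adaptation; the sketch below only shows the source-to-target transfer setting with a bag-of-words baseline, and the tiny corpus is invented.

```python
# Minimal stand-in for scenario (iii): no labels on the target platform,
# so a classifier trained on a labeled source platform is applied
# directly to target posts. Toy data; illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

source_texts = ["I hate you all", "you people are vermin",
                "have a lovely day", "great game last night"]
source_labels = [1, 1, 0, 0]   # 1 = hate speech, 0 = benign

target_texts = ["you are all vermin", "lovely game, great day"]  # unlabeled

vec = CountVectorizer()
X_src = vec.fit_transform(source_texts)        # vocabulary from source only
clf = LogisticRegression().fit(X_src, source_labels)

preds = clf.predict(vec.transform(target_texts))
print(list(preds))
```

    Proper domain adaptation goes further than this direct transfer, aligning the source and target feature distributions so that vocabulary differences between platforms hurt less.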

    Sentiment Analysis of News Tweets

    Sentiment Analysis is a process of extracting information from a large amount of data and classifying it into different classes called sentiments. Python is a simple yet powerful, high-level, interpreted, and dynamic programming language, which is well known for its ability to process natural language data using NLTK (Natural Language Toolkit). NLTK is a Python library which provides a base for building programs and classifying data. NLTK also provides graphical demonstrations for representing various results or trends, and it provides sample data to train and test various classifiers. Sentiment classification aims to automatically predict the sentiment polarity of data published by users. Although traditional classification algorithms can be used to train sentiment classifiers from manually labeled text data, the labeling work can be time-consuming and expensive. Meanwhile, users often use different words when they express sentiment in different domains. If we directly apply a classifier trained in one domain to other domains, the performance will be very low due to the differences between these domains. In this work, we develop a general solution to sentiment classification when we do not have any labels in the target domain but have some labeled data in a different domain, regarded as the source domain. The purpose of this study is to analyze the tweets of popular local and international news agencies and classify the tweeted news into positive, negative, or neutral categories.
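    The positive/negative/neutral classification described above can be sketched with a toy lexicon-based scorer, conceptually similar to lexicon approaches shipped with NLTK. The three-entry lexicon and example tweets are purely illustrative; real lexicons score thousands of words.

```python
# Toy lexicon-based polarity scorer: sum word scores, then map the
# total to positive / negative / neutral. Illustrative lexicon only.
POLARITY = {"good": 1, "great": 1, "win": 1,
            "bad": -1, "crisis": -1, "fails": -1}

def classify_tweet(text):
    score = sum(POLARITY.get(w, 0) for w in text.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(classify_tweet("Economy posts great numbers, a big win"))  # positive
print(classify_tweet("Markets in crisis as bank fails"))         # negative
print(classify_tweet("Election scheduled for Tuesday"))          # neutral
```

    A lexicon scorer needs no labeled training data, which is one reason such baselines remain attractive when, as here, labels exist only in a source domain.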

    Is text preprocessing still worth the time? A comparative survey on the influence of popular preprocessing methods on Transformers and traditional classifiers

    With the advent of modern pre-trained Transformers, text preprocessing has begun to be neglected and is not specifically addressed in recent NLP literature. However, from both a linguistic and a computer science point of view, we believe that even when using modern Transformers, text preprocessing can significantly impact the performance of a classification model. Through this study, we investigate and compare how preprocessing impacts the Text Classification (TC) performance of modern and traditional classification models. We report and discuss the preprocessing techniques found in the literature and their most recent variants or applications to address TC tasks in different domains. To assess how much preprocessing affects classification performance, we apply the three most-referenced preprocessing techniques (alone or in combination) to four publicly available datasets from different domains. Then, nine machine learning models – including modern Transformers – receive the preprocessed text as input. The results show that an educated choice of text preprocessing strategy should be based on the task as well as on the model considered. Outcomes of this survey show that choosing the best preprocessing technique – in place of the worst – can significantly improve classification accuracy (up to 25%, as in the case of XLNet on the IMDB dataset). In some cases, by means of a suitable preprocessing strategy, even a simple Naïve Bayes classifier proved to outperform the best-performing Transformer (by 2% in accuracy). We also found that Transformers and traditional models differ in how strongly preprocessing impacts their TC performance.
    Our main findings are: (1) even with modern pre-trained language models, preprocessing can affect performance, depending on the dataset and on the preprocessing technique or combination of techniques used; (2) in some cases, with a proper preprocessing strategy, simple models can outperform Transformers on TC tasks; (3) similar classes of models exhibit similar levels of sensitivity to text preprocessing.