8 research outputs found

    Hybrid context enriched deep learning model for fine-grained sentiment analysis in textual and visual semiotic modality social data

    Detecting sentiment in natural language is difficult even for humans, which makes its automated detection more complicated. This research proposes a hybrid deep learning model for fine-grained sentiment prediction in real-time multimodal data. It combines the strengths of deep learning networks with machine learning to handle two specific semiotic systems, namely the textual (written text) and the visual (still images), and their combination within online content, using decision-level multimodal fusion. The proposed contextual ConvNet-SVMBoVW model has four modules: discretization, text analytics, image analytics, and decision. The input to the model is multimodal content, m ∈ {text, image, infographic}. The discretization module uses Google Lens to separate the text from the image; the two are then processed as discrete entities and sent to the respective text analytics and image analytics modules. The text analytics module determines sentiment using a convolution neural network (ConvNet) enriched with the contextual semantics of SentiCircle, and an aggregation scheme is introduced to compute the hybrid polarity. A support vector machine (SVM) classifier trained on bag-of-visual-words (BoVW) features predicts the sentiment of the visual content. A Boolean decision module with a logical OR operation is appended to the architecture; it validates and categorizes the output into five fine-grained sentiment categories (truth values), namely ‘highly positive,’ ‘positive,’ ‘neutral,’ ‘negative,’ and ‘highly negative.’ The accuracy achieved by the proposed model is nearly 91%, an improvement over the accuracy obtained by the text and image modules individually.
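    As a rough illustration of the decision-level fusion idea, the sketch below maps per-modality polarity scores to the five fine-grained categories; the thresholds, the [-1, 1] score range, the averaging rule, and the function names are assumptions made for the example, not the paper's aggregation scheme or its Boolean OR logic.

```python
# Illustrative decision-level fusion of text and image polarity scores.
# Thresholds, the [-1, 1] score range, and the averaging rule are assumptions,
# not the paper's aggregation scheme.
from typing import Optional

def to_fine_grained(score: float) -> str:
    """Map a polarity score in [-1, 1] to one of five fine-grained classes."""
    if score <= -0.6:
        return "highly negative"
    if score <= -0.2:
        return "negative"
    if score < 0.2:
        return "neutral"
    if score < 0.6:
        return "positive"
    return "highly positive"

def fuse(text_score: Optional[float], image_score: Optional[float]) -> str:
    """Decision-level fusion: use whichever modality score is available
    (logical OR of the modalities) and average when both are present."""
    scores = [s for s in (text_score, image_score) if s is not None]
    if not scores:
        return "neutral"
    return to_fine_grained(sum(scores) / len(scores))

print(fuse(0.7, 0.3))    # -> "positive"
print(fuse(None, -0.8))  # -> "highly negative"
```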

    The Role of Preprocessing for Word Representation Learning in Affective Tasks

    Affective tasks, including sentiment analysis, emotion classification, and sarcasm detection, have drawn a lot of attention in recent years due to a broad range of useful applications in various domains. The main goal of affect detection tasks is to recognize states such as mood, sentiment, and emotions from textual data (e.g., news articles or product reviews). Despite the importance of preprocessing at different stages of affect detection tasks (i.e., word representation learning and building a classification model), this topic has not been studied well. To that end, we explore whether applying various preprocessing methods (stemming, lemmatization, stopword removal, punctuation removal, and so on) and their combinations at different stages of the affect detection pipeline can improve model performance. There are many preprocessing approaches that can be utilized in affect detection tasks; however, their influence on the final performance depends on the type of preprocessing and the stage at which it is applied. Moreover, the impact of preprocessing varies across affective tasks. Our analysis provides thorough insights into how preprocessing steps can be applied when building an affect detection pipeline and their respective influence on performance.
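    As a minimal sketch (assuming NLTK and illustrative step names), the snippet below shows how such preprocessing steps and their combinations could be toggled before feeding text to representation learning or to a classifier; it is not the study's exact setup.

```python
# Minimal sketch of combining preprocessing steps before word representation
# learning or classification. Uses NLTK; the step registry and pipeline helper
# are illustrative assumptions, not the study's implementation.
import string
from nltk.stem import PorterStemmer, WordNetLemmatizer  # needs nltk.download("wordnet")
from nltk.corpus import stopwords                        # needs nltk.download("stopwords")
from nltk.tokenize import word_tokenize                  # needs nltk.download("punkt")

STOPWORDS = set(stopwords.words("english"))
stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

STEPS = {
    "punctuation_removal": lambda toks: [t for t in toks if t not in string.punctuation],
    "stopword_removal":    lambda toks: [t for t in toks if t.lower() not in STOPWORDS],
    "stemming":            lambda toks: [stemmer.stem(t) for t in toks],
    "lemmatization":       lambda toks: [lemmatizer.lemmatize(t) for t in toks],
}

def preprocess(text, steps):
    """Apply a chosen combination of preprocessing steps, in order."""
    tokens = word_tokenize(text)
    for name in steps:
        tokens = STEPS[name](tokens)
    return tokens

# Different combinations can feed embedding training vs. the classifier:
print(preprocess("The movies were surprisingly moving!",
                 ["punctuation_removal", "stopword_removal", "stemming"]))
```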

    Phonetic normalization as a means to improve toxicity detection

    Over time and with advances in technology, using that technology to create and maintain online communities has become an everyday occurrence. With the increased use of these technologies, a negative trend can also be identified: a growing number of users with malicious goals create content that is illicit or harmful to these communities. To protect them, it therefore becomes necessary to moderate their communications. Although it would be possible to hire a team of moderators, that team would have to grow constantly in order to moderate all of the content. To solve this problem, many turn to automatic moderation techniques, two examples being whitelists and blacklists. Unfortunately, malicious users can easily circumvent these methods through subversive techniques. One popular technique is substitution, where a user replaces a word with a phonetic equivalent or with a visually similar combination of characters. In this thesis, we offer a new normalization technique that uses phonetics inside a text normalizer. The normalizer reconstructs the pronunciation and infers the intended word from it, the objective being to remove the signs of subversion. Once normalized, a message can then be passed on to classification systems.

    Over time, the presence of online communities and the use of electronic means of communication have become, and continue to become, more prevalent. With this increase, the number of users exploiting those means to create and spread harmful, or toxic, content has also grown. In order to protect those communities, moderation becomes a critical matter. While it would be possible to hire a team of moderators, that team would have to be ever-growing, and as such most turn to automatic detection as a step in their moderation process. Examples of such automatic means are blacklists and whitelists, but those methods can easily be subverted by harmful users. A common subversion technique is the substitution of a complete word by a phonetically similar word or by a combination of letters that resembles the intended word. This thesis offers a novel approach to moderation that specifically targets phonetic substitutions by creating a normalizer capable of identifying how a word should be read and inferring the obfuscated word, nullifying the effects of the subversion. Once normalized phonetically, the messages are then sent to existing classification systems for automatic moderation.
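    A toy sketch of the phonetic-normalization idea is given below; the character map, keying scheme, and lexicon are invented for illustration and are not the normalizer described in the thesis.

```python
# Toy illustration of phonetic-style normalization of obfuscated words.
# The substitution map, keying scheme, and lexicon are invented for the
# example; the thesis's actual normalizer is more sophisticated.
import re

LEET = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "i", "3": "e", "4": "a", "!": "i"})
LEXICON = {"stupid", "idiot", "hello", "friend"}  # placeholder vocabulary

def phonetic_key(word: str) -> str:
    """Very rough phonetic key: undo common substitutions, keep letters,
    collapse repeated characters, and drop vowels after the first character."""
    w = word.lower().translate(LEET)
    w = re.sub(r"[^a-z]", "", w)
    w = re.sub(r"(.)\1+", r"\1", w)           # "stuuupid" -> "stupid"
    return w[:1] + re.sub(r"[aeiou]", "", w[1:])

KEYED_LEXICON = {}
for term in LEXICON:
    KEYED_LEXICON.setdefault(phonetic_key(term), term)

def normalize(token: str) -> str:
    """Map an obfuscated token back to a lexicon word when the keys match."""
    return KEYED_LEXICON.get(phonetic_key(token), token)

print(normalize("$tuup1d"))  # -> "stupid"
print(normalize("friend"))   # -> "friend"
```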

    Phonetic-based microtext normalization for Twitter sentiment analysis

    The proliferation of Web 2.0 technologies and the increasing use of computer-mediated communication have resulted in a new form of written text, termed microtext. This poses new challenges to natural language processing tools, which are usually designed for well-written text. This paper proposes a phonetic-based framework for normalizing microtext to plain English and, hence, improving the classification accuracy of sentiment analysis. Results demonstrate a high (>0.8) similarity index between tweets normalized by our model and tweets normalized by human annotators in 85.31% of cases, and an accuracy increase of more than 4% in polarity detection after normalization.
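    As a hedged sketch, one plausible way to compute such a similarity index against human-normalized references is a character-level sequence ratio, as below; the paper's exact metric may differ.

```python
# Hedged sketch of scoring normalization quality against human references;
# SequenceMatcher is one plausible choice, not necessarily the paper's metric.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def high_similarity_rate(model_norms, human_norms, threshold=0.8):
    """Share of tweets whose model/human normalizations exceed the threshold."""
    scores = [similarity(m, h) for m, h in zip(model_norms, human_norms)]
    return sum(s > threshold for s in scores) / len(scores)

model = ["see you tomorrow", "great to see you", "before anyone else"]
human = ["see you tomorrow", "great to see you", "bae"]
print(high_similarity_rate(model, human))  # 2 of 3 pairs above 0.8 -> ~0.67
```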