11 research outputs found

    Explaining Sentiment Classification

    This paper presents a novel 1-D sentiment classifier trained on the benchmark IMDB dataset. The classifier is a 1-D convolutional neural network with repeated convolution and max-pooling layers. The main contribution of this work is the demonstration of a deconvolution technique for 1-D convolutional neural networks that is agnostic to the specific architecture. This deconvolution technique enables text classifications to be explained, a feature that is important for NLP-based decision support systems and that also serves as an invaluable diagnostic tool.
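    The core architecture the abstract describes, repeated 1-D convolution and max pooling over token embeddings, can be sketched in plain NumPy; the filter width, embedding size, and pooling size below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution: x is (seq_len, channels), kernel is (width, channels)."""
    w = kernel.shape[0]
    return np.array([np.sum(x[i:i + w] * kernel) for i in range(len(x) - w + 1)])

def max_pool(x, size=2):
    """Non-overlapping 1-D max pooling; drops any trailing remainder."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Toy embedded sentence: 8 tokens, 4-dimensional embeddings, one filter of width 3.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(8, 4))
kernel = rng.normal(size=(3, 4))

feature_map = conv1d(tokens, kernel)  # length 8 - 3 + 1 = 6
pooled = max_pool(feature_map)        # length 3
score = pooled.sum()                  # stand-in for a final dense sentiment head
```

    Stacking several such conv/pool pairs, as the paper does, progressively shortens the sequence while widening each output's receptive field over the input tokens.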

    Explaining Sentiment Classification with Synthetic Exemplars and Counter-Exemplars

    We present xspells, a model-agnostic local approach for explaining the decisions of a black box model for sentiment classification of short texts. The explanations provided consist of a set of exemplar sentences and a set of counter-exemplar sentences. The former are examples classified by the black box with the same label as the text to explain. The latter are examples classified with a different label (a form of counter-factuals). Both are close in meaning to the text to explain, and both are meaningful sentences, albeit synthetically generated. xspells generates neighbors of the text to explain in a latent space, using Variational Autoencoders to encode text and decode latent instances. A decision tree is learned from randomly generated neighbors and used to drive the selection of the exemplars and counter-exemplars. We report experiments on two datasets showing that xspells outperforms the well-known lime method in terms of quality of explanations, fidelity, and usefulness, and that it is comparable to it in terms of stability.
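    The neighbourhood-generation and surrogate-tree steps can be sketched as follows. The VAE encoder/decoder and the real black box are replaced here by a toy 2-D latent space and a dummy classifier, so everything except the overall pipeline shape is an assumption for illustration.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Toy stand-in for the black box: splits the latent plane along x + y = 0.
def black_box(z):
    return (z[:, 0] + z[:, 1] > 0).astype(int)

z0 = np.array([[0.5, 0.3]])           # latent code of the text to explain
neighbors = z0 + rng.normal(scale=0.8, size=(200, 2))
labels = black_box(neighbors)

# A shallow decision tree fit on the neighbourhood drives the selection.
tree = DecisionTreeClassifier(max_depth=3).fit(neighbors, labels)

target = black_box(z0)[0]
dist = np.linalg.norm(neighbors - z0, axis=1)
same, other = labels == target, labels != target
exemplars = neighbors[same][np.argsort(dist[same])[:3]]          # same label, closest
counter_exemplars = neighbors[other][np.argsort(dist[other])[:3]]  # flipped label, closest
```

    In the actual method, the selected latent points would then be decoded back into readable sentences by the VAE decoder.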

    Applying a 1D-CNN for Sentiment Analysis of Cosmetic Product Reviews from Female Daily Review

    In 2020, around 797 large- and small-scale cosmetics manufacturers were recorded in Indonesia, a 4.87% increase over the previous year. This situation has intensified competition among cosmetics companies, one of which is Emina. Various media serve as channels for the public to express sentiment and opinions, and companies can use this sentiment as feedback on their brands. The Female Daily Review website is one platform that collects opinions of all kinds about beauty products. In this study, data was collected from the website via web scraping. The 11,119 reviews obtained were analyzed for opinion, emotion, and sentiment using text mining to identify and extract topics. Sentiment analysis can help gauge user satisfaction with a cosmetics brand. The algorithm used is a 1D Convolutional Neural Network (1D-CNN). Before classification, text preprocessing was applied to make the raw dataset more structured. The sentiment classification results are divided into three categories: positive, negative, and neutral. Across 30 experiments building the 1D-CNN sentiment analysis model, the best model achieved an accuracy of 80.22%.
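    The preprocessing step the abstract mentions, turning raw reviews into fixed-length integer sequences for a 1D-CNN, might look like the minimal sketch below; the vocabulary scheme and padding length are illustrative choices, not the study's actual pipeline.

```python
import re

def preprocess(texts, max_len=6):
    """Casefold, strip punctuation, tokenize, map tokens to integer ids,
    and pad/truncate each review to a fixed length."""
    token_lists = [re.findall(r"[a-z0-9]+", t.lower()) for t in texts]
    vocab = {"<pad>": 0, "<unk>": 1}
    for tokens in token_lists:
        for tok in tokens:
            vocab.setdefault(tok, len(vocab))
    seqs = []
    for tokens in token_lists:
        ids = [vocab.get(tok, 1) for tok in tokens][:max_len]
        seqs.append(ids + [0] * (max_len - len(ids)))
    return seqs, vocab

reviews = ["Produk bagus, sangat suka!", "Kurang cocok di kulitku."]
seqs, vocab = preprocess(reviews)  # seqs[0] -> [2, 3, 4, 5, 0, 0]
```

    The resulting integer matrix is what an embedding layer followed by the 1-D convolutions would consume.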

    Benchmarking and survey of explanation methods for black box models

    The rise of sophisticated black-box machine learning models in Artificial Intelligence systems has prompted the need for explanation methods that reveal how these models work in a way that is understandable to users and decision makers. Unsurprisingly, the state of the art currently exhibits a plethora of explainers providing many different types of explanations. With the aim of providing a compass for researchers and practitioners, this paper proposes a categorization of explanation methods from the perspective of the type of explanation they return, also considering the different input data formats. The paper accounts for the most representative explainers to date, also discussing similarities and discrepancies of the returned explanations through their visual appearance. A companion website to the paper is provided as a continuously updated resource covering new explainers as they appear. Moreover, a subset of the most robust and widely adopted explainers is benchmarked with respect to a repertoire of quantitative metrics.
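    One quantitative metric commonly used in such benchmarks is fidelity: the fraction of instances on which a surrogate explainer agrees with the black box it explains. A minimal sketch, where both classifiers below are toy stand-ins introduced for illustration:

```python
import numpy as np

def fidelity(black_box, surrogate, X):
    """Fraction of instances on which the surrogate reproduces
    the black-box prediction."""
    return float(np.mean(black_box(X) == surrogate(X)))

# Toy 1-D models: they disagree only on the single instance x = 5.
X = np.arange(10).reshape(-1, 1)
bb = lambda X: (X[:, 0] >= 5).astype(int)
sg = lambda X: (X[:, 0] >= 6).astype(int)
fidelity(bb, sg, X)  # -> 0.9
```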

    A Survey on Graph Counterfactual Explanations: Definitions, Methods, Evaluation

    In recent years, Graph Neural Networks have reported outstanding performance in tasks like community detection, molecule classification and link prediction. However, the black-box nature of these models prevents their application in domains like health and finance, where understanding the models' decisions is essential. Counterfactual Explanations (CE) provide this understanding through examples. Moreover, the literature on CE is flourishing with novel explanation methods tailored to graph learning. In this survey, we analyse the existing Graph Counterfactual Explanation methods, providing the reader with an organisation of the literature according to a uniform formal notation for definitions, datasets, and metrics, thus simplifying potential comparisons with respect to each method's advantages and disadvantages. We discuss seven methods and sixteen synthetic and real datasets, providing details on the possible generation strategies. We highlight the most common evaluation strategies and formalise nine of the metrics used in the literature. We also introduce the evaluation framework GRETEL and show how it can be extended and used, providing a further dimension of comparison encompassing reproducibility aspects. Finally, we discuss how counterfactual explanation interplays with privacy and fairness, before delving into open challenges and future work.
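    The idea of a graph counterfactual can be illustrated by the simplest possible search: flip single edges until the classifier's prediction changes. The `predict` function below is a toy stand-in for a GNN, not a method from the survey.

```python
import numpy as np
from itertools import combinations

def predict(adj):
    """Toy graph classifier: label 1 iff the undirected graph has > 3 edges."""
    return int(adj.sum() // 2 > 3)

def one_edge_counterfactual(adj, model):
    """Try every single-edge flip; return the first perturbed graph
    (and the flipped edge) whose prediction differs from the original."""
    original = model(adj)
    n = adj.shape[0]
    for i, j in combinations(range(n), 2):
        cand = adj.copy()
        cand[i, j] = cand[j, i] = 1 - cand[i, j]
        if model(cand) != original:
            return cand, (i, j)
    return None, None

# 4-node cycle: 4 edges, so predict() labels it 1.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]])
cf, flipped = one_edge_counterfactual(adj, predict)  # deleting one edge flips the label
```

    Real methods replace this brute-force loop with learned generators or search heuristics, and score candidates by how little they change the input graph.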

    Interpreting natural language processing (NLP) models and lifting their limitations

    There have been many advances in the artificial intelligence field due to the emergence of deep learning and big data. In almost all sub-fields, artificial neural networks have reached or exceeded human-level performance. However, most of the models are not interpretable and perform like a black box. As a result, it is hard to trust their decisions, especially in life-and-death scenarios. In recent years, there has been a movement toward creating explainable artificial intelligence, but most work to date has concentrated on image processing models, as it is easier for humans to perceive visual patterns; there has been little work in other fields like natural language processing. By making our machine learning models more explainable and interpretable, we can learn about their logic, optimize them by removing bias, overcome their limitations, and make them resistant to adversarial attacks. This research dissertation concentrates on making deep learning models that handle textual data more understandable, and on using these insights to boost their performance by overcoming some of the common limitations. In addition, we use this knowledge to target words when designing efficient and effective textual adversarial attacks.

    An emoji feature-incorporated multi-view deep learning for explainable sentiment classification of social media reviews

    Sentiment analysis has demonstrated its value in a range of high-stakes domains. From financial markets to supply chain management, logistics, and technology legitimacy assessment, sentiment analysis offers insights into public sentiment, actionable data, and improved decision forecasting. This study contributes to this growing body of research by offering a novel multi-view deep learning approach to sentiment analysis that incorporates non-textual features like emojis. The proposed approach treats the textual and emoji views as distinct views of emotional information for the sentiment classification model, and the results acknowledge their individual and combined contributions to sentiment analysis. Comparative analysis with baseline classifiers reveals that incorporating emoji features significantly enriches sentiment analysis, improving the accuracy, F1-score, and execution time of the proposed model. Additionally, this study employs LIME for explainable sentiment analysis to provide insights into the model's decision-making process, enabling high-stakes businesses to understand the factors driving customer sentiment. The present study contributes to the literature on multi-view text classification in the context of social media and provides an innovative analytics method for businesses to extract valuable emotional information from electronic word of mouth (eWOM), which can help them stay ahead of the competition in a rapidly evolving digital landscape. The findings also have important implications for policy development in digital communication and social media monitoring: recognizing the importance of emojis in sentiment expression can inform policymaking by helping policymakers better understand public sentiment and tailor solutions that better address public concerns.
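    The LIME step the study applies can be sketched as follows: sample binary masks over the words of a text, score each perturbed text with the black box, and fit a local linear surrogate whose coefficients weight each word's contribution. The black box and word lists below are toy assumptions, not the study's model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy black box: sentiment score is +1 per positive word, -1 per negative word.
POS, NEG = {"great"}, {"awful"}
def black_box(text):
    words = text.split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

text = "great phone but awful battery"
words = text.split()

# Perturb the text by randomly dropping words, then fit a local linear surrogate.
masks = rng.integers(0, 2, size=(200, len(words)))
scores = [black_box(" ".join(w for w, m in zip(words, row) if m)) for row in masks]
surrogate = Ridge(alpha=1.0).fit(masks, scores)
weights = dict(zip(words, surrogate.coef_))  # per-word explanation weights
```

    The fitted weights recover the black box's local behaviour: "great" gets a strongly positive coefficient, "awful" a strongly negative one, and neutral words stay near zero.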