
    Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples

    Machine Learning has been a big success story of the AI resurgence. One particular standout success relates to learning from massive amounts of data. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we discuss the indispensable role of knowledge for deeper understanding of content where (i) large amounts of training data are unavailable, (ii) the objects to be recognized are complex (e.g., implicit entities and highly subjective content), and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create relevant and reliable knowledge and (b) carefully exploit that knowledge to enhance ML/NLP techniques. Using diverse examples, we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data and continued incorporation of knowledge in learning techniques.

    Comment: Pre-print of the paper accepted at 2017 IEEE/WIC/ACM International Conference on Web Intelligence (WI). arXiv admin note: substantial text overlap with arXiv:1610.0770

    Multimodal Emotion Classification

    Most NLP and Computer Vision tasks are limited by the scarcity of labelled data. In social media emotion classification and other related tasks, hashtags have been used as indicators to label data. With the rapid increase in emoji usage on social media, emojis are used as an additional feature for major social NLP tasks. However, this is less explored in the case of multimedia posts on social media, where posts are composed of both image and text. At the same time, we have seen a surge of interest in incorporating domain knowledge to improve machine understanding of text. In this paper, we investigate whether domain knowledge for emoji can improve the accuracy of the emotion classification task. We exploit the importance of different modalities of a social media post for the emotion classification task using state-of-the-art deep learning architectures. Our experiments demonstrate that the three modalities (text, emoji, and images) encode different information to express emotion and can therefore complement each other. Our results also demonstrate that emoji sense depends on the textual context, and that emoji combined with text encodes better information than either considered separately. The highest accuracy of 71.98% is achieved with a training set of 550k posts.

    Comment: Accepted at the 2nd Emoji Workshop co-located with The Web Conference 201
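    The abstract does not specify how the three modalities are combined; one common way to let per-modality signals complement each other is weighted late fusion of class-probability vectors. The sketch below is illustrative only (function and variable names are assumptions, not the paper's architecture):

```python
def late_fusion_predict(text_probs, emoji_probs, image_probs,
                        weights=(1.0, 1.0, 1.0)):
    """Fuse per-modality class probabilities by weighted averaging.

    Each argument is a list of class probabilities from one modality's
    classifier. Returns the predicted class index and the fused vector.
    This is a generic late-fusion sketch, not the paper's exact method.
    """
    modalities = (text_probs, emoji_probs, image_probs)
    total_w = sum(weights)
    n_classes = len(text_probs)
    fused = [
        sum(w * probs[i] for w, probs in zip(weights, modalities)) / total_w
        for i in range(n_classes)
    ]
    label = max(range(n_classes), key=fused.__getitem__)
    return label, fused

# Example: three modality classifiers over four emotion classes.
text_p  = [0.10, 0.60, 0.20, 0.10]
emoji_p = [0.05, 0.55, 0.30, 0.10]
image_p = [0.20, 0.30, 0.40, 0.10]
label, fused = late_fusion_predict(text_p, emoji_p, image_p)  # label == 1
```

    Here the image classifier alone would favor class 2, but text and emoji agree on class 1, so fusion settles on class 1, matching the paper's observation that the modalities complement each other.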

    Emoji’s sentiment score estimation using convolutional neural network with multi-scale emoji images

    Emojis are small images, symbols, or icons used on social media. Several well-known emojis have been ranked and assigned sentiment scores. These ranked emojis can be used for sentiment analysis; however, many newly released emojis have not been ranked and have no sentiment score yet. This paper proposes a new method to estimate the sentiment score of any unranked emotion emoji from its image by classifying it into the class of the most similar ranked emoji and then estimating the sentiment score using the score of that most similar emoji. The accuracy of sentiment score estimation is improved by using multi-scale images. The ranked emoji image data set consisted of 613 classes, with 161 emoji images from three different platforms in each class. The images were cropped to produce multi-scale images. The classification and estimation were performed using a convolutional neural network (CNN) with multi-scale emoji images and the proposed voting algorithm, called majority voting with probability (MVP). The proposed method was evaluated on two datasets: ranked emoji images and unranked emoji images. The accuracies of sentiment score estimation for the ranked and unranked emoji test images are 98% and 51%, respectively.
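    The voting step can be sketched as follows. One plausible reading of "majority voting with probability" is: each image scale yields a (class, probability) prediction from the CNN; the class with the most votes wins, with ties broken by the higher summed probability; the unranked emoji then inherits the sentiment score of the winning ranked class. The details below are assumptions, not the paper's published algorithm:

```python
from collections import Counter

def majority_voting_with_probability(scale_predictions):
    """Pick a class from per-scale (class_label, probability) pairs.

    Majority vote across scales; ties are broken by the highest summed
    probability. This is an assumed reading of the paper's MVP scheme.
    """
    votes = Counter(label for label, _ in scale_predictions)
    prob_sum = {}
    for label, p in scale_predictions:
        prob_sum[label] = prob_sum.get(label, 0.0) + p
    return max(votes, key=lambda c: (votes[c], prob_sum[c]))

def estimate_sentiment(scale_predictions, sentiment_scores):
    """Assign the unranked emoji the score of its most similar ranked class."""
    return sentiment_scores[majority_voting_with_probability(scale_predictions)]

# Three scales of the same unranked emoji: two scales vote "smile",
# one votes "neutral", so it inherits the "smile" sentiment score.
preds = [("smile", 0.8), ("smile", 0.6), ("neutral", 0.9)]
score = estimate_sentiment(preds, {"smile": 0.45, "neutral": 0.0})  # 0.45
```

    With a single scale the method reduces to nearest-class lookup; the multi-scale vote is what buys the robustness the abstract credits for the improved estimation accuracy.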