
    Emotion Detection for Afaan Oromo Using Deep Learning

    Emotion detection in text has become increasingly popular due to its many useful applications in different areas, such as tracking product perception, detecting public opinion about political tendencies, stock market monitoring, text summarization, information extraction, recommender systems, and question answering. However, manually identifying the emotions of millions of people and aggregating them into a rapid and efficient decision is a challenging task, given the rapid growth in the number of social media users. This study aimed to develop an Afaan Oromo emotion detection model to tackle this challenge. The study adopts an artificial neural network approach, implemented in Python with the Keras library. We conducted our experiments on five emotion classes (anger (arii), love (jaalala), joy (gamachu), disgust (jibba), and sadness (gadda)) by collecting a total of 1005 manually annotated emotional sentences in Afaan Oromo. The sentences were scraped from official Facebook pages, such as the Oromia Broadcasting Network (OBN) pages, the Fana Broadcasting Corporation (FBC) Afaan Oromo page, and the British Broadcasting Corporation (BBC) Afaan Oromo pages, using the Facepager tool with a Facebook API id. After collecting the data, preprocessing steps such as tokenization, stop-word removal, and normalization were applied. We used word embeddings for feature extraction on the preprocessed data. Subsequently, we applied three artificial neural network algorithms, a feed-forward neural network, long short-term memory (LSTM), and bidirectional long short-term memory (BiLSTM), to classify the vectorized sentences into their emotion classes. Comparing the three algorithms, we found that the bidirectional LSTM achieved the best performance: we obtained average accuracies of 66%, 78%, and 83% using the feed-forward neural network, LSTM, and BiLSTM respectively. Based on the experimental results, the researchers concluded that increasing the amount of data, properly tuning hyperparameters, and trying different algorithms can, in some cases, improve the performance of the model.
    Keywords: Emotion Identification, Afaan Oromo, Artificial Neural Network, Social Media
    DOI: 10.7176/NMMC/92-01
    Publication date: August 31st 202
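
    Below is a minimal sketch of the kind of Keras BiLSTM classifier described above. The vocabulary size, embedding dimension, layer sizes, and training call are illustrative assumptions, not the paper's reported configuration.

    # Minimal BiLSTM emotion classifier sketch (Keras); hyperparameters are assumed.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

    NUM_CLASSES = 5      # anger, love, joy, disgust, sadness
    VOCAB_SIZE = 10000   # assumed vocabulary size
    EMBED_DIM = 100      # assumed word-embedding dimension

    model = Sequential([
        Embedding(VOCAB_SIZE, EMBED_DIM),
        Bidirectional(LSTM(64)),                  # reads each sentence in both directions
        Dense(NUM_CLASSES, activation="softmax"), # one probability per emotion class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_sequences, train_labels, validation_split=0.1, epochs=10)
    # where train_sequences are integer-encoded, padded sentences.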

    Graph Convolutional Neural Networks for Web-Scale Recommender Systems

    Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes, representing pins and boards, and 18 billion edges. According to offline metrics, user studies, and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
    Comment: KDD 201
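
    As a rough illustration of the random-walk neighborhood construction described above, the sketch below selects a node's most-visited neighbors via short random walks and aggregates their features using the normalized visit counts as weights. The adjacency-list graph layout, walk parameters, and feature array are assumptions for illustration, not the deployed PinSage implementation.

    # Importance-pooling sketch: random walks pick a node's top-T neighbors,
    # and the normalized visit counts weight the feature aggregation.
    import random
    from collections import Counter
    import numpy as np

    def importance_neighbors(graph, node, num_walks=200, walk_len=3, top_t=10):
        """graph: dict mapping node id -> list of neighbor ids (assumed layout)."""
        visits = Counter()
        for _ in range(num_walks):
            current = node
            for _ in range(walk_len):
                neighbors = graph.get(current, [])
                if not neighbors:
                    break
                current = random.choice(neighbors)
                if current != node:
                    visits[current] += 1
        if not visits:
            return [], np.array([])
        top = visits.most_common(top_t)
        ids = [n for n, _ in top]
        weights = np.array([c for _, c in top], dtype=float)
        return ids, weights / weights.sum()

    def aggregate_neighbors(features, ids, weights):
        """features: array whose row i is the feature vector of node i (assumed).
        Returns the weighted average of the neighbor rows (one convolution step)."""
        return np.average(features[ids], axis=0, weights=weights)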

    Multimodal Content Analysis for Effective Advertisements on YouTube

    The rapid advances in e-commerce and Web 2.0 technologies have greatly increased the impact of commercial advertisements on the general public. As a key enabling technology, a multitude of recommender systems exist that analyze user features and browsing patterns to recommend appealing advertisements to users. In this work, we study the attributes that characterize an effective advertisement and recommend a useful set of features to aid the design and production of commercial advertisements. We analyze the temporal patterns in the multimedia content of advertisement videos, including their auditory, visual, and textual components, and study their individual roles and synergies in the success of an advertisement. The objective of this work is thus to measure the effectiveness of an advertisement and to recommend a useful set of features that help advertisement designers make it more successful and approachable to users. Our proposed framework employs the signal-processing technique of cross-modality feature learning, where data streams from the different components are used to train separate neural network models and are then fused to learn a shared representation. Subsequently, a neural network model trained on this joint feature embedding is used as a classifier to predict advertisement effectiveness. We validate our approach using subjective ratings from a dedicated user study, the sentiment strength of online viewer comments, and a viewer opinion metric based on the ratio of Likes to Views received by each advertisement on an online platform.
    Comment: 11 pages, 5 figures, ICDM 201
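
    The cross-modal fusion step described above can be sketched as follows: separate encoders for audio, visual, and text feature vectors are concatenated into a joint embedding that feeds an effectiveness classifier. The input dimensions and layer sizes are illustrative assumptions, not the paper's architecture.

    # Late-fusion sketch: per-modality encoders feed a shared representation.
    from tensorflow.keras import layers, Model, Input

    audio_in = Input(shape=(128,), name="audio_features")    # assumed feature sizes
    visual_in = Input(shape=(512,), name="visual_features")
    text_in = Input(shape=(300,), name="text_features")

    audio_h = layers.Dense(64, activation="relu")(audio_in)
    visual_h = layers.Dense(64, activation="relu")(visual_in)
    text_h = layers.Dense(64, activation="relu")(text_in)

    joint = layers.Concatenate()([audio_h, visual_h, text_h])  # shared representation
    joint = layers.Dense(64, activation="relu")(joint)
    effective = layers.Dense(1, activation="sigmoid")(joint)   # effective vs. not

    model = Model([audio_in, visual_in, text_in], effective)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])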

    Attentive Aspect Modeling for Review-aware Recommendation

    In recent years, many studies extract aspects from user reviews and integrate them with ratings to improve recommendation performance. The common aspects mentioned in a user's reviews and a product's reviews indicate indirect connections between the user and the product. However, these aspect-based methods suffer from two problems. First, the common aspects are usually very sparse, owing to the sparsity of user-product interactions and the diversity of individual users' vocabularies. Second, a user's interest in aspects can differ across products, whereas existing methods usually assume it to be static. In this paper, we propose an Attentive Aspect-based Recommendation Model (AARM) to tackle these challenges. For the first problem, to enrich the aspect connections between user and product, AARM models the interactions between synonymous and similar aspects in addition to common aspects. For the second problem, a neural attention network that simultaneously considers user, product, and aspect information is constructed to capture a user's attention towards aspects when examining different products. Extensive quantitative and qualitative experiments show that AARM can effectively alleviate the two aforementioned problems and significantly outperforms several state-of-the-art recommendation methods on the top-N recommendation task.
    Comment: Camera-ready manuscript for TOI
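
    A bare-bones illustration of the attention idea: the user and product embeddings condition an attention over the aspect embeddings, producing a weighted aspect summary. The additive scoring function and all shapes below are assumptions for illustration, not AARM's exact formulation.

    # User/product-conditioned additive attention over aspect embeddings (sketch).
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend_aspects(user_vec, product_vec, aspect_mat, W, v):
        """user_vec, product_vec: (d,); aspect_mat: (num_aspects, d);
        W: (d, 3*d) projection; v: (d,) scoring vector. Shapes are assumed."""
        context = np.concatenate([user_vec, product_vec])         # (2d,)
        scores = np.array([
            v @ np.tanh(W @ np.concatenate([aspect, context]))    # score each aspect
            for aspect in aspect_mat
        ])
        weights = softmax(scores)                                 # attention weights
        return weights @ aspect_mat                               # weighted aspect summary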