478 research outputs found

    Opinion mining and sentiment analysis in marketing communications: a science mapping analysis in Web of Science (1998–2018)

    Opinion mining and sentiment analysis have become ubiquitous in our society, with applications in online search, computer vision, image understanding, artificial intelligence and marketing communications (MarCom). Within this context, opinion mining and sentiment analysis in marketing communications (OMSAMC) plays a strong role in the development of the field by allowing us to understand whether people are satisfied or dissatisfied with a service or product, and subsequently to analyze the strengths and weaknesses of those consumer experiences. To the best of our knowledge, there is no science mapping analysis covering the research on opinion mining and sentiment analysis in the MarCom ecosystem. In this study, we perform a science mapping analysis of OMSAMC research in order to provide an overview of the scientific work in this interdisciplinary area during the last two decades and to show trends that could be the basis for future developments in the field. The study was carried out using VOSviewer, CitNetExplorer and InCites, based on results from Web of Science (WoS). The results of this analysis show the evolution of the field, highlighting the most notable authors, institutions, keywords, publications, countries, categories and journals.

    The research was funded by Programa Operativo FEDER Andalucía 2014-2020, grant number "La reputación de las organizaciones en una sociedad digital. Elaboración de una Plataforma Inteligente para la Localización, Identificación y Clasificación de Influenciadores en los Medios Sociales Digitales (UMA18-FEDERJA-148)"; the APC was funded by the same research grant.
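    As a rough illustration of the kind of keyword co-occurrence mapping that tools such as VOSviewer automate, the following minimal Python sketch builds a co-occurrence network from a WoS-style export. The file name wos_export.csv and the "Author Keywords" column are assumptions for illustration, not artifacts of the study itself.

    # Minimal sketch of a keyword co-occurrence network, assuming a WoS-style
    # CSV export with an "Author Keywords" column (semicolon-separated).
    from itertools import combinations
    import pandas as pd
    import networkx as nx

    records = pd.read_csv("wos_export.csv")            # hypothetical export file
    graph = nx.Graph()

    for cell in records["Author Keywords"].dropna():    # assumed column name
        keywords = sorted({k.strip().lower() for k in cell.split(";") if k.strip()})
        for a, b in combinations(keywords, 2):
            weight = graph.get_edge_data(a, b, {}).get("weight", 0)
            graph.add_edge(a, b, weight=weight + 1)

    # The strongest co-occurrence links approximate the clusters VOSviewer draws.
    top = sorted(graph.edges(data=True), key=lambda e: -e[2]["weight"])[:10]
    for a, b, data in top:
        print(f"{a} -- {b}: {data['weight']}")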

    On sample efficiency and systematic generalization of grounded language understanding with deep learning

    By using the methodology of deep learning, which advocates relying more on data and flexible neural models rather than on the expert's knowledge of the domain, the research community has recently achieved remarkable progress in natural language understanding and generation. Nevertheless, it remains unclear whether simply scaling up existing deep learning methods will be sufficient to achieve the goal of using natural language for human-computer interaction. We focus on two related aspects in which current methods appear to require major improvements. The first is the data inefficiency of deep learning systems: they are known to require extreme amounts of data to perform well. The second is their limited ability to generalize systematically, namely to understand language in situations where the data distribution changes yet the principles of syntax and semantics remain the same. In this thesis, we present four case studies in which we seek to provide more clarity regarding these data efficiency and systematic generalization aspects of deep learning approaches to language understanding, as well as to facilitate further work on these topics. In order to separate the problem of representing open-ended real-world knowledge from the problem of core language learning, we conduct all of these studies using synthetic languages that are grounded in simple visual environments.

    In the first article, we study how to train agents to follow compositional instructions in environments with a restricted form of supervision: for every instruction and initial environment configuration, we provide only a goal state instead of a complete trajectory with actions at all steps. We adapt adversarial imitation learning methods to this setting and demonstrate that such a restricted form of data is sufficient to learn the compositional meanings of the instructions.

    Our second article also focuses on instruction following. We develop the BabyAI platform to facilitate further, more extensive and rigorous studies of this setup. The platform features a compositional Baby language with 10^19 instructions, whose semantics is precisely defined in a partially observable gridworld environment. We report baseline results on how much supervision is required to teach the agent certain subsets of the Baby language with different training methods, such as reinforcement learning and imitation learning.

    In the third article, we study the systematic generalization of visual question answering (VQA) models. In the VQA setting, the system must answer compositional questions about images. We construct a dataset of spatial questions about object pairs and evaluate how well different models perform on questions about pairs of objects that never occurred in the same question in the training distribution. We show that models in which word meanings are represented by separate modules that perform independent computation generalize much better than models whose design is not explicitly modular. The modular models, however, generalize well only when the modules are connected in an appropriate layout, and our experiments highlight the challenges of learning the layout by end-to-end training on the training distribution.

    In our fourth and final article, we also study the generalization of VQA models to questions outside of the training distribution, but this time using the popular CLEVR dataset of complex questions about 3D-rendered scenes as the platform. We generate novel CLEVR-like questions by using similarity-based references (e.g., "the ball that has the same color as ...") in contexts that occur in CLEVR questions but only with location-based references (e.g., "the ball that is to the left of ..."). We analyze zero- and few-shot generalization to this CLOSURE benchmark after training on CLEVR for a number of existing models as well as a novel one.
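    To illustrate the contrast the abstract draws between modular and monolithic designs, here is a minimal, hypothetical PyTorch sketch of a neural module network layout for a spatial question such as "is there a red ball left of the blue box". The module names, shapes and wiring are illustrative assumptions, not the models evaluated in the articles.

    import torch
    import torch.nn as nn

    # Illustrative sketch only: each "word" gets its own module, and answering a
    # question means wiring modules into a layout (find -> relate -> compare).
    class Find(nn.Module):                       # attends to one object category
        def __init__(self, dim):
            super().__init__()
            self.score = nn.Conv2d(dim, 1, kernel_size=1)
        def forward(self, feats):                # feats: (B, dim, H, W)
            return torch.sigmoid(self.score(feats))   # attention map (B, 1, H, W)

    class RelateLeftOf(nn.Module):               # shifts attention one cell left
        def forward(self, attn):
            shifted = torch.zeros_like(attn)
            shifted[..., :, :-1] = attn[..., :, 1:]
            return shifted

    class Exists(nn.Module):                     # yes/no from two attention maps
        def __init__(self):
            super().__init__()
            self.out = nn.Linear(1, 2)
        def forward(self, attn_a, attn_b):
            overlap = (attn_a * attn_b).sum(dim=(1, 2, 3))
            return self.out(overlap.unsqueeze(-1))    # logits over {no, yes}

    # A fixed, appropriate layout: find(blue box) -> left-of -> overlap with find(red ball).
    feats = torch.randn(4, 64, 7, 7)             # fake image features, batch of 4
    find_ball, find_box = Find(64), Find(64)
    logits = Exists()(RelateLeftOf()(find_box(feats)), find_ball(feats))
    print(logits.shape)                          # torch.Size([4, 2])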

    Multimodal representation and learning

    Recent years have seen an explosion in multimodal data on the web, so it has become important to perform multimodal learning in order to understand the web. However, it is challenging to join various modalities because each modality has a different representation and correlational structure. In addition, different modalities generally carry different kinds of information that may enrich understanding; for example, the visual appearance of a flower may evoke happiness, while its scent might not be pleasant. Multimodal information can therefore be useful for making informed decisions. We focus on improving representations from individual modalities to enhance multimodal representation and learning. In this doctoral thesis, we present techniques to enhance representations from individual and multiple modalities for multimodal applications, including classification, cross-modal retrieval, matching and verification, on various benchmark datasets.
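    As a minimal sketch of joint multimodal representation learning in general (not the specific techniques of this thesis), the following assumes pre-extracted image and text features and learns a shared embedding space with a simple contrastive objective; all dimensions and names are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Project each modality into a shared space and pull matched pairs together.
    class JointEmbedding(nn.Module):
        def __init__(self, img_dim=2048, txt_dim=768, shared_dim=256):
            super().__init__()
            self.img_proj = nn.Linear(img_dim, shared_dim)
            self.txt_proj = nn.Linear(txt_dim, shared_dim)
        def forward(self, img_feats, txt_feats):
            img = F.normalize(self.img_proj(img_feats), dim=-1)
            txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
            return img, txt

    def contrastive_loss(img, txt, temperature=0.07):
        # Matched image/text pairs sit on the diagonal of the similarity matrix.
        logits = img @ txt.t() / temperature
        targets = torch.arange(img.size(0))
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

    model = JointEmbedding()
    img_feats = torch.randn(8, 2048)   # e.g. CNN image features (illustrative)
    txt_feats = torch.randn(8, 768)    # e.g. sentence-encoder features (illustrative)
    loss = contrastive_loss(*model(img_feats, txt_feats))
    print(loss.item())

    Embeddings learned this way support both classification heads and cross-modal retrieval by nearest-neighbour search in the shared space.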

    The Compass, Issue 7


    Investigating Semantic Alignment in Character Learning of Chinese as a Foreign Language: The Use and Effect of the Imagery Based Encoding Strategy

    For learners of Chinese as a foreign language (CFL), character learning is frustrating. This research postulated that the difficulty may come mainly from a lack of semantic understanding of character-denoted meanings. Language theories suggest that as a learner's semantic understanding increases, command of the orthographic structures that represent the underlying meanings also improves. This study aimed to reveal CFL learners' cognitive abilities and processes in the visual-semantic learning of Chinese characters. In particular, it investigated the process by which English-speaking adolescent CFL learners, at the beginning to intermediate level, made mental images of character-denoted meanings to visually encode and retrieve character forms. Quantitative and qualitative data were gathered from image-making questionnaires and from writing and reading tests, after characters were learned through three commonly used teaching methods (i.e., English, pictorial, and verbal). The data were analyzed based on a triangulation of the literature from Neuro-Semantic Language Learning Theory, scientific findings in cognitive psychology, and neuroscience. The study found that participants' semantic abilities to understand character-denoted meanings emerged, but were still restricted to familiar orthographic forms. The use of the imagery strategy as a semantic ability predicted better performance, most evidently in writing; however, the ability to use the imagery strategy to learn characters was still underdeveloped and needed to be supported with sufficient contextual information. Implications and directions for further research in visual-semantic learning and the teaching of characters are suggested.

    Multi-behavior Recommendation with SVD Graph Neural Networks

    Graph Neural Networks (GNNs) have been extensively employed in the field of recommender systems, offering users personalized recommendations and yielding remarkable outcomes. Recently, GNNs incorporating contrastive learning have demonstrated promising performance in handling the sparse-data problem of recommender systems. However, existing contrastive learning methods still have limitations in addressing the cold-start problem and resisting noise interference, especially for multi-behavior recommendation. To mitigate these issues, the present research proposes a GNN-based multi-behavior recommendation model, MB-SVD, that utilizes Singular Value Decomposition (SVD) graphs to enhance model performance. In particular, MB-SVD considers user preferences under different behaviors, improving recommendation effectiveness while better addressing the cold-start problem. Our model introduces a multi-behavior contrastive learning paradigm to discern the intricate interconnections among heterogeneous manifestations of user behavior, and generates SVD graphs to automatically distill crucial multi-behavior self-supervised signals for robust graph augmentation. Furthermore, the SVD-based framework reduces embedding dimensions and computational load. Extensive experiments demonstrate the strong performance of the proposed MB-SVD approach on multi-behavior recommendation tasks across diverse real-world datasets.
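    The following is a minimal sketch of the general idea of SVD-based graph augmentation for contrastive learning on a user-item interaction matrix. It is not the MB-SVD implementation; the toy data, ranks and dimensions are assumptions made purely for illustration.

    import torch
    import torch.nn.functional as F

    # A low-rank SVD reconstruction of the interaction matrix gives a denoised
    # "view" of the graph that can be contrasted with the original view.
    n_users, n_items, rank, dim = 100, 200, 8, 32
    interactions = (torch.rand(n_users, n_items) < 0.05).float()   # toy implicit feedback

    U, S, V = torch.svd_lowrank(interactions, q=rank)
    augmented = U @ torch.diag(S) @ V.t()          # low-rank, denoised interaction graph

    # One propagation step of a very simple embedding model on each view.
    item_emb = torch.randn(n_items, dim, requires_grad=True)
    view_original = F.normalize(interactions @ item_emb, dim=-1)   # user embeddings, view 1
    view_augmented = F.normalize(augmented @ item_emb, dim=-1)     # user embeddings, view 2

    # InfoNCE-style contrastive loss: the same user across the two views is a positive pair.
    logits = view_original @ view_augmented.t() / 0.2
    loss = F.cross_entropy(logits, torch.arange(n_users))
    loss.backward()
    print(float(loss))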

    Semantic Text Analysis on Social Networks and Data Processing: Review and Future Directions

    Social network usage has grown exponentially over the last decade; social networks become more popular every day, and many users are continuously active on them. Using Twitter, LinkedIn, Facebook, and other social media sites has become one of the most convenient ways for people to communicate and share information. Users of social networks produce an enormous quantity of data, and analyzing it is instrumental for many social network analysis applications. Because people use these sites actively and in diverse ways, social media platforms handle an immense amount of information and must address three computational problems: noise, dynamism, and scale. Semantic comprehension of the documents, images, and videos exchanged in a social network is also an essential topic in network analysis. Processing such vast datasets yields averages, rules, and patterns from which practical knowledge can be discovered. In social media data analysis, the main processes are machine learning, information extraction, statistical modelling, data preprocessing, and data interpretation. This research aims to deliver a comprehensive overview of social network research and applications and to analyze state-of-the-art social media data analysis methods by reviewing the basic concepts, social networks, and elements to which social network research is linked. Semantic ways of manipulating text in social networks are then clarified, and earlier studies on these themes are discussed. Next, the evolving methods in research on social network analysis are discussed, especially for analyzing semantic text on social networks. Finally, topics and opportunities for future research are explained.
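    As a small, illustrative example of the kind of semantic text analysis this review surveys (not a method proposed in it), the following sketch scores the sentiment of short social media posts with NLTK's VADER analyzer; the example posts are invented.

    # Illustrative sentiment scoring of short social posts with NLTK's VADER.
    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    nltk.download("vader_lexicon", quiet=True)    # one-time lexicon download
    analyzer = SentimentIntensityAnalyzer()

    posts = [                                     # invented example posts
        "Loving the new update, everything feels faster!",
        "Worst customer service I have ever experienced.",
    ]
    for post in posts:
        scores = analyzer.polarity_scores(post)   # neg/neu/pos and compound in [-1, 1]
        print(f"{scores['compound']:+.2f}  {post}")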