
    Cross-domain recommendation with consistent knowledge transfer by subspace alignment

    © Springer Nature Switzerland AG 2018. Recommender systems have drawn great attention from both academia and practical websites. One common and challenging problem in many recommendation methods is data sparsity, due to the limited number of observed user interactions with products and services. Cross-domain recommender systems tackle this problem by transferring knowledge from a source domain with relatively abundant data to a target domain with scarce data. Existing cross-domain recommendation methods assume that similar user groups have similar tastes for similar item groups, but they ignore the divergence between the source and target domains, resulting in a loss of accuracy. In this paper, we propose a cross-domain recommendation method that transfers consistent group-level knowledge by aligning the source subspace with the target one. Subspace alignment reduces the discrepancy caused by domain shift and ensures that the group-level knowledge shared between the two domains, extracted via refined item-user bi-clustering, remains consistent for local top-n recommendation. Experiments are conducted on five real-world datasets in three categories: movies, books and music. The results for nine cross-domain recommendation tasks show that our proposed method improves accuracy compared with five benchmarks.
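    The subspace-alignment idea at the core of this abstract can be illustrated with a minimal numpy sketch. This is a generic Fernando-et-al-style alignment, not the paper's exact method; the data, dimensions, and the linear-alignment form are assumptions for illustration.

```python
import numpy as np

def subspace_alignment(source, target, d=10):
    """Align the source PCA subspace with the target one, then project
    both domains into the shared coordinates (a generic sketch)."""
    # Top-d principal directions of each centered domain (columns = basis).
    _, _, vs = np.linalg.svd(source - source.mean(0), full_matrices=False)
    _, _, vt = np.linalg.svd(target - target.mean(0), full_matrices=False)
    xs, xt = vs[:d].T, vt[:d].T              # D x d orthonormal bases
    m = xs.T @ xt                            # alignment matrix M = Xs^T Xt
    src_aligned = (source - source.mean(0)) @ xs @ m  # source in target-aligned coords
    tgt_proj = (target - target.mean(0)) @ xt
    return src_aligned, tgt_proj

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 20))
tgt = rng.normal(size=(80, 20)) + 1.0        # shifted target domain
a, b = subspace_alignment(src, tgt, d=5)
print(a.shape, b.shape)                      # → (100, 5) (80, 5)
```

After alignment, group-level patterns learned on the source rows can be compared with the target rows in a common low-dimensional space, which is where the domain-shift discrepancy is reduced.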

    A Cross-Domain Recommender System with Kernel-Induced Knowledge Transfer for Overlapping Entities

    © 2012 IEEE. The aim of recommender systems is to automatically identify user preferences within collected data, then use those preferences to make recommendations that help with decisions. However, recommender systems suffer from the data sparsity problem, which is particularly prevalent in newly launched systems that have not yet had enough time to amass sufficient data. As a solution, cross-domain recommender systems transfer knowledge from a source domain with relatively rich data to assist recommendations in the target domain. These systems usually assume that the entities either fully overlap or do not overlap at all. In practice, it is more common for the entities in the two domains to partially overlap. Moreover, overlapping entities may have different representations in each domain. Neglecting these two issues reduces the prediction accuracy of cross-domain recommender systems in the target domain. To fully exploit partially overlapping entities and improve prediction accuracy, this paper presents a cross-domain recommender system based on kernel-induced knowledge transfer, called KerKT. Domain adaptation is used to adjust the feature spaces of overlapping entities, while diffusion kernel completion is used to correlate the non-overlapping entities between the two domains. With this approach, knowledge is effectively transferred through the overlapping entities, thus alleviating data sparsity issues. Experiments conducted on four datasets, each with three sparsity ratios, show that KerKT achieves 1.13%-20% better prediction accuracy than six benchmarks. In addition, the results indicate that transferring knowledge from the source domain to the target domain is both possible and beneficial even with small overlaps.
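    The diffusion-kernel step can be sketched generically. The code below is the standard Kondor-Lafferty-style heat kernel on a graph, used here only to illustrate how non-overlapping entities become correlated through paths in an entity graph; it is not KerKT's exact formulation, and the tiny graph is made up.

```python
import numpy as np

def diffusion_kernel(adj, beta=0.5):
    """Heat/diffusion kernel K = exp(-beta * L) on an entity graph,
    where L is the graph Laplacian (a generic sketch)."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                          # graph Laplacian L = D - A
    w, u = np.linalg.eigh(lap)               # L is symmetric: eigendecompose
    return u @ np.diag(np.exp(-beta * w)) @ u.T

# Tiny graph: entities 0 and 1 overlap across domains; entity 2 is only
# linked to entity 1, i.e. it has no direct edge to entity 0.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
k = diffusion_kernel(adj)
print(k.shape)  # the kernel gives entity 2 nonzero affinity with entity 0
```

Even though `adj[0, 2]` is zero, `k[0, 2]` is positive: diffusion propagates similarity along the path through the shared entity, which is the mechanism for correlating non-overlapping entities.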

    Socializing the Semantic Gap: A Comparative Survey on Image Tag Assignment, Refinement and Retrieval

    Where previous reviews on content-based image retrieval emphasize what can be seen in an image to bridge the semantic gap, this survey considers what people tag about an image. It presents a comprehensive treatment of three closely linked problems: image tag assignment, refinement, and tag-based image retrieval. While existing works vary in their targeted tasks and methodology, they all rely on the key functionality of tag relevance, i.e., estimating the relevance of a specific tag with respect to the visual content of a given image and its social context. By analyzing what information a specific method exploits to construct its tag relevance function, and how that information is exploited, this paper introduces a taxonomy to structure the growing literature, understand the ingredients of the main works, clarify their connections and differences, and recognize their merits and limitations. For a head-to-head comparison among the state of the art, a new experimental protocol is presented, with training sets containing 10k, 100k and 1M images and an evaluation on three test sets contributed by various research groups. Eleven representative works are implemented and evaluated. Putting all this together, the survey aims to provide an overview of the past and foster progress for the near future. Comment: to appear in ACM Computing Surveys.
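    The tag relevance functionality the survey centers on can be sketched with a simple neighbor-voting scheme, one representative family of approaches from this literature: a tag is relevant to an image if it appears among the image's visual neighbors more often than its overall prior predicts. The features and tags below are toy assumptions.

```python
import numpy as np

def tag_relevance(query_feat, feats, tags_per_image, tag, k=3):
    """Neighbor-voting tag relevance: count how often `tag` appears among
    the k visually nearest neighbors, minus the tag's prior frequency."""
    dists = np.linalg.norm(feats - query_feat, axis=1)
    neighbors = np.argsort(dists)[:k]
    votes = sum(tag in tags_per_image[i] for i in neighbors)
    prior = k * sum(tag in t for t in tags_per_image) / len(tags_per_image)
    return votes - prior

# Toy 2-D visual features and social tags for four images.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
tags = [{"beach"}, {"beach", "sky"}, {"city"}, {"city", "night"}]
score = tag_relevance(np.array([0.05, 0.0]), feats, tags, "beach", k=2)
print(score)  # → 1.0: "beach" is voted well above its prior
```

The same score can drive all three tasks the survey links: assignment (rank tags for an image), refinement (demote low-scoring existing tags), and retrieval (rank images for a tag).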

    Prediction, Recommendation and Group Analytics Models in the domain of Mashup Services and Cyber-Argumentation Platform

    Mashup application development is becoming a widespread software development practice due to its appeal of a shorter application development period. Application developers usually combine web APIs from different sources to create a new streamlined service and provide various features to end users. This practice saves time and helps ensure reliability, accuracy, and security in the developed applications. Mashup developers integrate these available APIs into their applications, but they still have to sift through thousands of available web APIs and choose only a few appropriate ones for their application. Recommending relevant web APIs can help application developers in this situation. However, the very low rate of API invocation from mashup applications creates a sparse mashup-web API dataset from which recommendation models must learn mashups and their API invocation patterns. One aim of this research is to analyze these mashup-specific issues, look for supplemental information in the mashup domain, and develop web API recommendation models for mashup applications. The developed recommendation model generates useful and accurate web API recommendations that reduce the impact of low API invocation counts in mashup application development. Cyber-argumentation platforms face a similarly challenging issue. In large-scale cyber-argumentation platforms, participants express their opinions, engage with one another, and respond to feedback and criticism while discussing important issues online. Argumentation analysis tools capture the collective intelligence of the participants and reveal hidden insights from the underlying discussions. However, such analysis requires that the issues have been thoroughly discussed and that participants' opinions are clearly expressed and understood. Participants typically focus on only a few ideas and leave others unacknowledged and under-discussed.
This produces a limited dataset to work with, resulting in an incomplete analysis of the issues under discussion. One solution to this problem is to develop an opinion prediction model for cyber-argumentation that predicts participants' opinions on ideas they have not explicitly engaged with. In cyber-argumentation, individuals interact with each other without any group coordination, yet this implicit group interaction can affect participating users' opinions, attitudes, and discussion outcomes. One objective of this research is therefore to analyze group analytics in the cyber-argumentation environment. We design an experiment to inspect whether the key concepts of the Social Identity Model of Deindividuation Effects (SIDE) hold on our argumentation platform; this experiment helps us understand whether anonymity and a sense of group membership affect user behavior on the platform. Another part of the work develops group interaction models to illuminate different aspects of group interaction on the cyber-argumentation platform. Together, these efforts yield web API recommendation models tailored to the mashup domain and opinion prediction models for the cyber-argumentation area. These models primarily utilize domain-specific knowledge and integrate it with traditional prediction and recommendation approaches. Our work on group analytics can be seen as an initial step toward understanding these group interactions.
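    A common baseline for learning invocation patterns from a sparse mashup-API matrix is plain matrix factorization, which both the recommendation and opinion-prediction problems above generalize. The sketch below is a generic illustration with made-up data, not the dissertation's model.

```python
import numpy as np

def factorize(interactions, n_factors=4, lr=0.05, reg=0.01, epochs=200, seed=0):
    """Matrix factorization via SGD on observed (mashup, api, value)
    triples; unobserved cells are predicted from the learned factors."""
    rng = np.random.default_rng(seed)
    n_m = 1 + max(m for m, _, _ in interactions)
    n_a = 1 + max(a for _, a, _ in interactions)
    p = rng.normal(scale=0.1, size=(n_m, n_factors))   # mashup factors
    q = rng.normal(scale=0.1, size=(n_a, n_factors))   # API factors
    for _ in range(epochs):
        for m, a, v in interactions:
            err = v - p[m] @ q[a]
            p[m] += lr * (err * q[a] - reg * p[m])     # gradient steps with
            q[a] += lr * (err * p[m] - reg * q[a])     # L2 regularization
    return p, q

# Toy observed invocations: 3 mashups x 3 APIs, mostly empty.
obs = [(0, 0, 1.0), (0, 1, 1.0), (1, 1, 1.0), (2, 2, 1.0)]
p, q = factorize(obs)
scores = p @ q.T       # predicted affinity of every mashup for every API
print(scores.shape)    # rank unseen APIs per mashup from these scores
```

With only four observed cells the model still scores every mashup-API pair, which is exactly where sparsity hurts: the fewer observations per mashup, the less reliable those filled-in scores become, motivating the supplemental domain information discussed above.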

    “WARES”, a Web Analytics Recommender System

    It is hard to imagine modern business without analytics; it is a trend in modern business, and even small companies and individual entrepreneurs are starting to use analytics tools, in one way or another, for their business. Not surprisingly, many different tools exist for different domains. They vary in purpose from simple friend and visit statistics for a Facebook page to large, sophisticated systems designed for big corporations, and they may be free or paid. Some tools require special training, certification, or even a degree to use; others offer a simple dashboard-based user interface that anyone seeing them for the first time can understand. In any case, everyone who is thinking about using analytics for their own needs faces a question: what tool should I use, which one suits my needs, and how can I pay less and gain the most? In this work, I try to answer this question by proposing a recommender tool that helps the user with this "simple" task. This paper is devoted to the creation of WARES, short for Web Analytics REcommender System. The proposed recommender system uses a hybrid approach but mostly relies on content-based techniques to make suggestions, using some initial user ratings as input to address the "cold start" problem. The system produces recommendations that fit the user's needs and allows quick adjustments to the selection, without expensive consultations with experts or hours of Internet searches trying to find the right tool.
The system itself performs an online search using data pre-cached in an offline database, represented as an ontology of existing web analytics tools extracted during previous online searches.
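    The content-based core of a WARES-style recommender can be sketched as similarity matching between a user's stated needs and feature vectors describing each tool. The tool names and feature axes below are hypothetical; the real system works over an ontology of web analytics tools rather than flat vectors.

```python
import numpy as np

def recommend(user_needs, tools, features):
    """Rank tools by cosine similarity between the user's need vector
    and each tool's feature vector (content-based sketch)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    scores = {tool: cos(user_needs, vec) for tool, vec in zip(tools, features)}
    return sorted(scores, key=scores.get, reverse=True)

# Assumed feature axes: [free, dashboards, needs_training, enterprise]
tools = ["ToolA", "ToolB", "ToolC"]
features = np.array([[1., 1., 0., 0.],
                     [0., 1., 1., 1.],
                     [1., 0., 0., 0.]])
needs = np.array([1., 1., 0., 0.])        # wants a free tool with dashboards
ranked = recommend(needs, tools, features)
print(ranked)  # → ['ToolA', 'ToolC', 'ToolB']
```

The initial user ratings mentioned in the abstract would simply reweight the need vector before matching, which is one common way a content-based system sidesteps the cold-start problem.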

    Matching algorithms: fundamentals, applications and challenges

    Matching plays a vital role in the rational allocation of resources in many areas, ranging from market operation to people's daily lives. In economics, the term matching theory was coined for pairing two agents in a specific market to reach a stable or optimal state. In computer science, various branches of matching problems have emerged, such as question-answer matching in information retrieval, user-item matching in recommender systems, and entity-relation matching in knowledge graphs. The preference list is the core element of a matching process; it can either be obtained directly from the agents or generated indirectly by prediction. Based on how the preference list is obtained, matching problems divide into two categories: explicit matching and implicit matching. In this paper, we first introduce matching theory's basic models and algorithms for explicit matching. We then review existing methods for various implicit matching problems, such as retrieval matching, user-item matching, entity-relation matching, and image matching. Furthermore, we look into representative applications in these areas, including marriage and labor markets in explicit matching and several similarity-based matching problems in implicit matching. Finally, this survey concludes with a discussion of open issues and promising future directions in the field of matching. © 2017 IEEE. Please note that this article has multiple authors; only the names of the first five, including Federation University Australia affiliates "Jing Ren, Xia Feng, Nargiz Sultanova", are provided in this record.
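    The canonical explicit-matching algorithm behind the marriage-market applications above is Gale-Shapley deferred acceptance, which pairs two sides of a market into a stable state from their preference lists. The preference lists below are toy data.

```python
def gale_shapley(proposer_prefs, acceptor_prefs):
    """Gale-Shapley deferred acceptance for the stable marriage problem.
    Proposers propose in preference order; acceptors tentatively hold
    their best offer so far. The result is a stable matching."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = list(proposer_prefs)              # proposers not yet matched
    next_idx = {p: 0 for p in proposer_prefs}
    match = {}                               # acceptor -> proposer
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_idx[p]]   # best acceptor not yet tried
        next_idx[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:  # acceptor prefers newcomer
            free.append(match[a])
            match[a] = p
        else:
            free.append(p)                   # rejected; try next choice
    return match

prefs_p = {"x": ["a", "b"], "y": ["a", "b"]}
prefs_a = {"a": ["y", "x"], "b": ["x", "y"]}
matches = gale_shapley(prefs_p, prefs_a)
print(matches)  # → {'a': 'y', 'b': 'x'}
```

The output is stable: no proposer-acceptor pair would both prefer each other over their assigned partners, which is exactly the solution concept explicit matching targets.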

    Neural recommender models for sparse and skewed behavioral data

    Modern online platforms offer recommendations and personalized search and services to a large and diverse user base while still aiming to acquaint users with the broader community on the platform. Prior work backed by large volumes of user data has shown that user retention is reliant on catering to their specific eccentric tastes, in addition to providing them popular services or content on the platform. Long-tailed distributions are a fundamental characteristic of human activity, owing to the bursty nature of human attention. As a result, we often observe skew in data facets that involve human interaction. While there are superficial similarities to Zipf's law in textual data and other domains, the challenges with user data extend further. Individual words may have skewed frequencies in the corpus, but the long-tail words by themselves do not significantly impact downstream text-mining tasks. On the contrary, while sparse users (a majority on most online platforms) contribute little to the training data, they are equally crucial at inference time. Perhaps more so, since they are likely to churn. In this thesis, we study platforms and applications that elicit user participation in rich social settings incorporating user-generated content, user-user interaction, and other modalities of user participation and data generation. For instance, users on the Yelp review platform participate in a follower-followee network and also create and interact with review text (two modalities of user data). Similarly, community question-answer (CQA) platforms incorporate user interaction and collaboratively authored content over diverse domains and discussion threads. Since user participation is multimodal, we develop generalizable abstractions beyond any single data modality. 
Specifically, we aim to address the distributional mismatch that occurs with user data independent of dataset specifics. While a minority of the users generates most training samples, it is insufficient only to learn the preferences of this subset of users. As a result, the data's overall skew and individual users' sparsity are closely interlinked: sparse users with uncommon preferences are under-represented. Thus, we propose to treat these problems jointly with a skew-aware grouping mechanism that iteratively sharpens the identification of preference groups within the user population. As a result, we improve user characterization, content recommendation, and activity prediction (+6-22% AUC, +6-43% AUC, +12-25% RMSE over state-of-the-art baselines), primarily for users with sparse activity. The size of the item or content inventory compounds the skew problem: recommendation models can achieve very high aggregate performance while recommending only a tiny proportion of the inventory (as little as 5%) to users. We propose a data-driven solution guided by the aggregate co-occurrence information across items in the dataset. We specifically note that different co-occurrences are not equally significant; for example, some co-occurring items are easily substituted while others are not. We develop a self-supervised learning framework in which the aggregate co-occurrences guide the recommendation problem while leaving room to learn these variations among item associations. As a result, we improve coverage to ~100% (up from 5%) of the inventory and increase long-tail item recall by up to 25%. We also note that the skew and sparsity problems repeat across data modalities. For instance, social interactions and review content both exhibit aggregate skew, although individual users who actively generate reviews may not participate socially and vice versa.
In such cases, it is necessary to differentially weight and merge the different data sources for each user for inference tasks. We show that the problem is inherently adversarial, since the user participation modalities compete to describe a user accurately. We develop a framework to unify these representations while algorithmically tackling mode collapse, a well-known pitfall of adversarial models. A more challenging but important instantiation of sparsity is the few-shot or cross-domain setting, where we may have only one or a few interactions for users or items in the sparse domains or partitions. We show that contextualizing user-item interactions helps us infer behavioral invariants in the dense domain, allowing us to correlate sparse participants with their active counterparts (resulting in 3x faster training and ~19% recall gains in multi-domain settings). Finally, we consider the multi-task setting, where the platform incorporates multiple distinct recommendation and prediction tasks for each user. A single user representation is insufficient for users who exhibit different preferences along each dimension, yet it is counter-productive to handle correlated prediction or inference tasks in isolation. We develop a multi-faceted representation approach grounded in residual learning with heterogeneous knowledge graph representations, which provides an expressive data representation for specialized domains and applications with multimodal user data. We achieve knowledge sharing by unifying task-independent and task-specific representations of each entity within a unified knowledge graph framework. In each chapter, we also discuss and demonstrate how the proposed frameworks directly incorporate a wide range of gradient-optimizable recommendation and behavior models, maximizing their applicability and pertinence to user-centered inference tasks and platforms.
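    The idea of guiding recommendation with aggregate co-occurrence information, while noting that not all co-occurrences are equally significant, can be illustrated with PMI-style weighting of item pairs. This is a generic sketch over toy interaction "baskets", not the thesis's self-supervised framework.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def cooccurrence_pmi(baskets, n_items):
    """Weight item pairs by pointwise mutual information: high PMI marks
    pairs that co-occur more than their popularity alone predicts."""
    item_cnt = Counter(i for b in baskets for i in b)
    pair_cnt = Counter(tuple(sorted(p)) for b in baskets
                       for p in combinations(set(b), 2))
    n = len(baskets)
    pmi = np.zeros((n_items, n_items))
    for (i, j), c in pair_cnt.items():
        # log( P(i,j) / (P(i) * P(j)) )
        val = np.log((c / n) / ((item_cnt[i] / n) * (item_cnt[j] / n)))
        pmi[i, j] = pmi[j, i] = val
    return pmi

# Toy interaction baskets over 3 items.
baskets = [[0, 1], [0, 1], [0, 2], [1, 2], [0, 1, 2]]
pmi = cooccurrence_pmi(baskets, 3)
print(pmi[0, 1] > pmi[0, 2])   # → True: 0 and 1 co-occur beyond popularity
```

A popularity-corrected signal like this is one way aggregate statistics can distinguish genuinely associated items from pairs that merely share high frequency, which helps push recommendations beyond the head of the inventory.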

    Sensing Human Sentiment via Social Media Images: Methodologies and Applications

    Social media refers to computer-based technology that allows sharing information and building virtual networks and communities. With the development of Internet-based services and applications, users can engage with social media via computers and smart mobile devices. In recent years, social media has taken the form of different activities such as social networking, business networking, text sharing, photo sharing, and blogging. With its increasing popularity, social media has accumulated a large amount of data that makes understanding human behavior possible. Compared with traditional survey-based methods, the analysis of social media gives us a golden opportunity to understand individuals at scale, which in turn allows us to design better services tailored to individuals' needs. From this perspective, we can view social media as sensors that provide online signals, from a virtual world without geographical boundaries, about real-world individuals' activity. One key feature of social media is that it is social: users actively interact with each other by generating content and expressing opinions, such as posts and comments on Facebook. As a result, sentiment analysis, which refers to computational models that identify, extract, or characterize subjective information expressed in a given piece of text, has successfully employed these user signals and enabled many real-world applications in domains such as e-commerce, politics, and marketing. The goal of sentiment analysis is to classify a user's attitude towards various topics into positive, negative, or neutral categories based on textual data in social media. Recently, however, an increasing number of people have started to use photos to express their daily life on social media platforms like Flickr and Instagram. Therefore, analyzing sentiment from visual data is poised to greatly improve user understanding.
In this dissertation, I study the problem of understanding human sentiment from large-scale collections of social images based on both image features and contextual social network features. We show that neither visual features nor textual features are by themselves sufficient for accurate sentiment prediction. Therefore, we provide a way to use both, and formulate the sentiment prediction problem in two scenarios: supervised and unsupervised. We first show that the proposed framework has the flexibility to incorporate multiple modalities of information and the capability to learn from heterogeneous features jointly given sufficient training data. Secondly, we observe that negative sentiment may be related to mental health issues. Based on this observation, we aim to understand negative social media posts, especially posts related to depression, e.g., self-harm content. Our analysis, the first of its kind, reveals a number of important findings. Thirdly, we extend the proposed sentiment prediction task to a general multi-label visual recognition task to demonstrate the flexibility of the methodology behind our sentiment analysis model. Doctoral Dissertation, Computer Science, 201

    Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions

    Generative Adversarial Networks (GANs) are a novel class of deep generative models that has recently gained significant attention. GANs implicitly learn complex, high-dimensional distributions over data such as images and audio. However, there are major challenges in training GANs, namely mode collapse, non-convergence, and instability, caused by inappropriate network architecture design, choice of objective function, and selection of optimization algorithm. Recently, to address these challenges, several solutions for better design and optimization of GANs have been investigated, based on re-engineered network architectures, new objective functions, and alternative optimization algorithms. To the best of our knowledge, no existing survey has focused particularly on the broad and systematic development of these solutions. In this study, we perform a comprehensive survey of the advancements in GAN design and optimization solutions proposed to handle GAN challenges. We first identify key research issues within each design and optimization technique, and then propose a new taxonomy to structure solutions by key research issue. In accordance with the taxonomy, we provide a detailed discussion of the different GAN variants proposed within each solution and their relationships. Finally, based on the insights gained, we present promising research directions in this rapidly growing field. Comment: 42 pages, Figure 13, Table
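    The non-convergence problem the survey discusses can be seen in miniature on a bilinear min-max game, a standard toy analogue of the generator-discriminator objective. This is an illustration of the failure mode under simultaneous gradient updates, not a GAN implementation.

```python
import numpy as np

def simultaneous_gda(steps=50, lr=0.2):
    """Simultaneous gradient descent-ascent on min_x max_y f(x, y) = x*y.
    The unique saddle point is (0, 0), but the iterates spiral outward
    instead of converging, mirroring GAN training instability."""
    x, y = 1.0, 1.0
    radii = []
    for _ in range(steps):
        gx, gy = y, x                        # grad_x(xy) = y, grad_y(xy) = x
        x, y = x - lr * gx, y + lr * gy      # descend in x, ascend in y
        radii.append(np.hypot(x, y))         # distance from the saddle
    return radii

radii = simultaneous_gda()
print(radii[-1] > radii[0])   # → True: the distance from (0, 0) grows
```

Each step multiplies the squared distance from the saddle by (1 + lr²), so the divergence is exact rather than numerical noise. Techniques covered by such surveys (alternative objectives, optimizers like extragradient, and architectural choices) are aimed at taming precisely this kind of rotational dynamics.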