15 research outputs found

    Modelling and analysis of temporal preference drifts using a component-based factorised latent approach

    In recommender systems, human preferences are characterised by a number of individual components with complicated interactions and properties. Recently, the dynamicity of preferences has been the focus of several studies. Changes in user preferences can originate from lasting causes, such as a personality shift, or from transient and circumstantial ones, such as seasonal changes in item popularity. Disregarding these temporal drifts when modelling user preferences can result in unhelpful recommendations. Moreover, different temporal patterns can be associated with different preference domains, with individual preference components, and with their combinations. These components comprise preferences over features, preferences over feature values, conditional dependencies between features, socially influenced preferences, and bias. For example, in the movies domain, a user can change their rating behaviour (bias shift), their preference for genre over language (feature preference shift), or start favouring drama over comedy (feature value preference shift). In this paper, we first propose a novel latent factor model to capture the domain-dependent, component-specific temporal patterns in preferences. The component-based approach followed in modelling the aspects of preferences and their temporal effects enables us to switch components on and off arbitrarily. We evaluate the proposed method on three popular recommendation datasets and show that it significantly outperforms the most accurate state-of-the-art static models. The experiments also demonstrate the greater robustness and stability of the proposed dynamic model in comparison with the most successful models to date. We also analyse the temporal behaviour of different preference components and their combinations, and show that the dynamic behaviour of preference components is highly dependent on the preference dataset and domain. The results therefore not only highlight the importance of modelling temporal effects but also underline the advantages of a component-based architecture that is better suited to capturing domain-specific balances in the contributions of these aspects.
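
    To make the component-based idea concrete, the sketch below is a minimal time-aware latent factor model in plain Python/NumPy: the prediction is factorised into a global mean, a time-binned user bias, an item bias and time-binned latent factors, and each additive component can be switched on or off independently. The class name, the equal-width time-binning scheme and all parameters are assumptions made for illustration; this is a sketch of the general approach, not the model proposed in the paper.

    # Minimal, illustrative sketch of a time-aware, component-based latent factor model.
    # All names (TimeAwareMF, n_bins, the specific component set) are illustrative assumptions.
    import numpy as np

    class TimeAwareMF:
        """Predicts r(u, i, t) = mu + b_u[bin(t)] + b_i + p_u[bin(t)] . q_i,
        with each additive component individually switchable on or off."""

        def __init__(self, n_users, n_items, n_factors=8, n_bins=4,
                     use_user_bias=True, use_item_bias=True, lr=0.01, reg=0.05):
            rng = np.random.default_rng(0)
            self.mu = 0.0
            self.b_u = np.zeros((n_users, n_bins))            # time-binned user bias
            self.b_i = np.zeros(n_items)                      # static item bias
            self.P = 0.1 * rng.standard_normal((n_users, n_bins, n_factors))
            self.Q = 0.1 * rng.standard_normal((n_items, n_factors))
            self.n_bins, self.lr, self.reg = n_bins, lr, reg
            self.use_user_bias, self.use_item_bias = use_user_bias, use_item_bias

        def _bin(self, t, t_min, t_max):
            # Map a timestamp to one of n_bins equal-width time bins.
            return min(int((t - t_min) / (t_max - t_min + 1e-9) * self.n_bins),
                       self.n_bins - 1)

        def predict(self, u, i, b):
            r = self.mu + self.P[u, b] @ self.Q[i]
            if self.use_user_bias:
                r += self.b_u[u, b]
            if self.use_item_bias:
                r += self.b_i[i]
            return r

        def fit(self, ratings, n_epochs=20):
            # ratings: list of (user, item, timestamp, rating)
            ts = [t for _, _, t, _ in ratings]
            t_min, t_max = min(ts), max(ts)
            self.mu = np.mean([r for _, _, _, r in ratings])
            for _ in range(n_epochs):
                for u, i, t, r in ratings:
                    b = self._bin(t, t_min, t_max)
                    err = r - self.predict(u, i, b)
                    # SGD updates with L2 regularisation.
                    if self.use_user_bias:
                        self.b_u[u, b] += self.lr * (err - self.reg * self.b_u[u, b])
                    if self.use_item_bias:
                        self.b_i[i] += self.lr * (err - self.reg * self.b_i[i])
                    p, q = self.P[u, b].copy(), self.Q[i].copy()
                    self.P[u, b] += self.lr * (err * q - self.reg * p)
                    self.Q[i] += self.lr * (err * p - self.reg * q)

    # Toy usage: one user whose item preference flips between two time bins.
    data = [(0, 0, 0.0, 5), (0, 1, 0.0, 1), (0, 0, 10.0, 1), (0, 1, 10.0, 5),
            (1, 0, 0.0, 4), (1, 1, 10.0, 4)]
    model = TimeAwareMF(n_users=2, n_items=2, n_bins=2)
    model.fit(data)
    print(round(model.predict(0, 0, 0), 2), round(model.predict(0, 0, 1), 2))

    Switching a component off (for example use_user_bias=False) isolates its contribution, which is the kind of per-component analysis the component-based design is meant to support.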

    Song Recommendation for Automatic Playlist Continuation

    The goal of this project is to develop a recommender system that derives song recommendations from an implicit-feedback music dataset provided by the streaming service Spotify. We implemented current baseline systems and then two improvements over the baselines: Feature Enhanced Matrix Factorization and Non-Linear Matrix Factorization. To compare these systems, we took the songs each system predicted for a given playlist and calculated an NDCG score based on the accuracy of those predictions. We then compared the NDCG scores to determine which system performed best on the given Spotify dataset. Based on these results, we drew conclusions regarding the design process for an effective recommender system for music data.
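
    For concreteness, the sketch below shows one way such an NDCG-based comparison can be computed, assuming binary relevance (a recommended track counts as relevant if it appears in the playlist's actual continuation). The function names, the cut-off and the evaluation protocol details are illustrative assumptions, not the project's code.

    # Minimal sketch of an NDCG@k comparison for playlist continuation (binary relevance assumed).
    import math

    def ndcg_at_k(recommended, held_out, k=10):
        """NDCG@k for one playlist: recommended is a ranked list of track ids,
        held_out is the set of tracks that actually continue the playlist."""
        gains = [1.0 if track in held_out else 0.0 for track in recommended[:k]]
        dcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))
        ideal_hits = min(len(held_out), k)
        idcg = sum(1.0 / math.log2(rank + 2) for rank in range(ideal_hits))
        return dcg / idcg if idcg > 0 else 0.0

    # Toy comparison of two systems on one playlist.
    held_out = {"t1", "t4", "t7"}
    system_a = ["t1", "t9", "t4", "t2", "t7"]   # hits at ranks 1, 3, 5
    system_b = ["t8", "t1", "t2", "t3", "t4"]   # hits at ranks 2, 5
    print(ndcg_at_k(system_a, held_out, k=5), ndcg_at_k(system_b, held_out, k=5))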

    Recommender system performance evaluation and prediction: information retrieval perspective

    Unpublished doctoral thesis. Universidad Autónoma de Madrid, Escuela Politécnica Superior, October 201

    Optimal Real-Time Bidding for Display Advertising

    Real-Time Bidding (RTB) is revolutionising display advertising by facilitating a real-time auction for each ad impression. Because they can use impression-level data, such as user cookies and context information, advertisers can bid adaptively for each ad impression. It is therefore important for an advertiser to design an effective bidding strategy, which can be abstracted as a function mapping the information of a specific ad impression to a bid price. Exactly how this bidding function should be designed is a non-trivial problem involving multiple factors, such as the campaign-specific key performance indicator (KPI), the campaign's lifetime auction volume, and the budget. This thesis focuses on the design of automatic solutions for creating optimised bidding strategies for RTB auctions: strategies that are optimal from the perspective of the advertiser agent, that is, strategies that maximise the campaign's KPI subject to the constraints on auction volume and budget. The problem is mathematically formulated as a functional optimisation framework in which the optimal bidding function can be derived without any restriction on its functional form. Beyond single-campaign bid optimisation, the proposed framework is extended to multi-campaign cases, where a portfolio-optimisation solution reallocates auction volume to maximise the overall profit with controlled risk. On the model-learning side, an unbiased learning scheme is proposed to address the data bias introduced by ad auction selection, for which we derive a "bid-aware" gradient descent algorithm to train unbiased models. Moreover, the problem of reliably achieving the expected KPIs in a dynamic RTB market is addressed with a feedback control mechanism for bid adjustment. To support the theoretical derivations, extensive experiments are carried out on large-scale real-world data. The proposed solutions have been deployed in three commercial RTB systems in China and the United States, where online A/B tests have demonstrated substantial improvements over strong baselines.
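
    As a deliberately simplified illustration of the bid-function and feedback-control ideas mentioned above, the sketch below pairs a linear-in-pCTR bid with a proportional pacing controller in a toy second-price auction simulation. The bid form, the controller and every parameter are assumptions chosen for illustration; they are not the optimal bidding function or the control mechanism derived in the thesis.

    # Illustrative sketch: a simple bid function plus proportional feedback control for budget pacing.
    import random

    def bid_price(pctr, base_bid=2.0, avg_ctr=0.001, adjustment=1.0):
        """Scale a reference bid by how much better this impression's predicted
        CTR is than average, then apply the pacing adjustment."""
        return base_bid * (pctr / avg_ctr) * adjustment

    def update_adjustment(adjustment, spend_so_far, target_spend, gain=0.5):
        """Proportional feedback: bid lower when ahead of the budget pacing
        target, higher when behind."""
        error = (target_spend - spend_so_far) / max(target_spend, 1e-9)
        return max(0.1, adjustment * (1.0 + gain * error))

    # Toy simulation: 1000 impressions, a fixed budget, random pCTRs and win prices.
    random.seed(0)
    budget, spend, adjustment, wins = 100.0, 0.0, 1.0, 0
    for i in range(1000):
        pctr = random.uniform(0.0005, 0.002)
        market_price = random.uniform(0.5, 4.0)           # price paid on winning (second-price)
        bid = bid_price(pctr, adjustment=adjustment)
        if bid >= market_price and spend + market_price <= budget:
            spend += market_price
            wins += 1
        if (i + 1) % 100 == 0:                            # adjust pacing every 100 auctions
            target_spend = budget * (i + 1) / 1000
            adjustment = update_adjustment(adjustment, spend, target_spend)
    print(f"won {wins} impressions, spent {spend:.2f} of {budget}")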

    BNAIC 2008: Proceedings of BNAIC 2008, the twentieth Belgian-Dutch Artificial Intelligence Conference


    xxAI - Beyond Explainable AI

    This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNNs), have achieved better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable to humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clearly interdisciplinary approach to problem solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
