
    Service-Aware Personalized Item Recommendation


    Justification of Recommender Systems Results: A Service-based Approach

    With the increasing demand for predictable and accountable Artificial Intelligence, the ability to explain or justify recommender systems results by specifying how items are suggested, or why they are relevant, has become a primary goal. However, current models do not explicitly represent the services and actors that the user might encounter during the overall interaction with an item, from its selection to its usage, and therefore cannot assess their impact on the user's experience. To address this issue, we propose a novel justification approach that uses service models to (i) extract experience data from reviews concerning all the stages of interaction with items, at different granularity levels, and (ii) organize the justification of recommendations around those stages. In a user study, we compared our approach with baselines reflecting the state of the art in the justification of recommender systems results. Participants rated the Perceived User Awareness Support provided by our service-based justification models higher than that offered by the baselines. Moreover, our models received higher Interface Adequacy and Satisfaction ratings from users with different levels of Curiosity or low Need for Cognition (NfC). In contrast, high-NfC participants preferred a direct inspection of item reviews. These findings encourage the adoption of service models to justify recommender systems results, but suggest investigating personalization strategies to suit diverse interaction needs.
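
    To make the stage-organized justification idea concrete, here is a minimal Python sketch, assuming hypothetical stage names and keyword cues rather than the paper's actual service models: review sentences are bucketed by the interaction stage they appear to describe and then assembled into a stage-by-stage justification.

        # Minimal sketch of stage-organized justification (illustrative only).
        # The stage names and keyword cues below are assumptions, not the
        # service models used in the paper.
        STAGE_CUES = {
            "selection": ["order", "website", "choose", "booking"],
            "delivery": ["shipping", "delivery", "arrived", "packaging"],
            "usage": ["use", "quality", "works", "performance"],
            "support": ["support", "return", "refund", "warranty"],
        }

        def justify(review_sentences):
            by_stage = {stage: [] for stage in STAGE_CUES}
            for sentence in review_sentences:
                lowered = sentence.lower()
                for stage, cues in STAGE_CUES.items():
                    if any(cue in lowered for cue in cues):
                        by_stage[stage].append(sentence)
            # One line of justification per stage that has supporting evidence
            return [stage.capitalize() + ": " + "; ".join(sents)
                    for stage, sents in by_stage.items() if sents]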

    Leveraging Large Language Models in Conversational Recommender Systems

    A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an unprecedented ability to converse naturally and incorporate world knowledge and common-sense reasoning into language understanding, unlocking the potential of this paradigm. However, effectively leveraging LLMs within a CRS introduces new technical challenges, including properly understanding and controlling a complex conversation and retrieving information from external sources. These issues are exacerbated by a large, evolving item corpus and a lack of conversational data for training. In this paper, we provide a roadmap for building an end-to-end large-scale CRS using LLMs. In particular, we propose new implementations for user preference understanding, flexible dialogue management, and explainable recommendations as part of an integrated architecture powered by LLMs. For improved personalization, we describe how an LLM can consume interpretable natural language user profiles and use them to modulate session-level context. To overcome conversational data limitations in the absence of an existing production CRS, we propose techniques for building a controllable LLM-based user simulator to generate synthetic conversations. As a proof of concept, we introduce RecLLM, a large-scale CRS for YouTube videos built on LaMDA, and demonstrate its fluency and diverse functionality through illustrative example conversations.
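
    To illustrate the user-simulator idea, here is a minimal Python sketch, assuming a generic llm_complete text-completion call as a stand-in for whichever LLM is used: a simulated user, conditioned on a natural language profile, alternates turns with the recommender so that synthetic conversations can be generated in the absence of production data. This is an illustration of the general pattern, not RecLLM's architecture.

        # Minimal sketch (not the RecLLM implementation) of an LLM-driven user
        # simulator that turns a natural language user profile into synthetic
        # conversations. llm_complete is a stand-in for any text-completion API.
        def llm_complete(prompt: str) -> str:
            # Plug in a real LLM client here; raising keeps the sketch honest.
            raise NotImplementedError("connect an LLM client")

        def simulate_conversation(user_profile: str, num_turns: int = 4) -> list[str]:
            transcript: list[str] = []
            for _ in range(num_turns):
                history = "\n".join(transcript)
                # Simulated user turn, conditioned on the profile and session so far
                user_msg = llm_complete(
                    "You are a user with this profile:\n" + user_profile
                    + "\nConversation so far:\n" + history
                    + "\nWrite the user's next message asking for a video recommendation:"
                )
                transcript.append("User: " + user_msg)
                history = "\n".join(transcript)
                # Recommender turn, grounded in the same session-level context
                system_msg = llm_complete(
                    "Conversation so far:\n" + history
                    + "\nWrite the recommender's reply with one suggestion and a short reason:"
                )
                transcript.append("System: " + system_msg)
            return transcript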

    Context-Aware Personalized Point-of-Interest Recommendation System

    The increasing volume of information has created overwhelming challenges for extracting relevant items manually. Fortunately, online systems such as e-commerce platforms (e.g., Amazon) and location-based social networks (LBSNs) (e.g., Facebook), among many others, can track end users' browsing and consumption experiences. Such explicit experiences (e.g., ratings) and many implicit contexts (e.g., social, spatial, temporal, and categorical) are useful in preference elicitation and recommendation. As an emerging branch of information filtering, recommendation systems are already popular in many domains, such as movies (e.g., YouTube), music (e.g., Pandora), and Point-of-Interest (POI) (e.g., Yelp). The POI domain has many contextual challenges (e.g., spatial (preference for nearby places), social (e.g., friends' influence), temporal (e.g., popularity at certain times), categorical (similar preferences for places in the same category), locality of POI, etc.) that can be crucial for an efficient recommendation. The user reviews shared across different social networks provide granularity in users' consumption experience. From the data mining and machine learning perspective, the following three research directions are identified as relevant to efficient context-aware POI recommendation: (1) incorporation of the major contexts into a single model and a detailed analysis of the impact of those contexts, (2) exploitation of user activity and location influence to model hierarchical preferences, and (3) exploitation of user reviews to formulate the aspect-opinion relation and to generate explanations for recommendations. This dissertation presents different machine learning and data mining-based solutions to address the above-mentioned research problems, including (1) recommendation models inspired by contextualized ranking and matrix factorization that incorporate the major contexts and help analyze their importance, (2) hierarchical and matrix-factorization models that formulate users' activity and POI influences across different localities to model hierarchical preferences and generate individual and sequence recommendations, and (3) graphical models inspired by natural language processing and neural networks to generate recommendations augmented with aspect-based explanations.
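
    As a rough illustration of the factorization-with-context idea, the following Python sketch adds a single per-context bias term (e.g., a time slot) to a plain matrix factorization model trained with stochastic gradient descent; the function name, the additive context term, and the toy data are assumptions for demonstration only, not the dissertation's models.

        # Illustrative context-aware matrix factorization (not the dissertation's model).
        import numpy as np

        def train_context_mf(interactions, n_users, n_pois, n_contexts,
                             k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
            rng = np.random.default_rng(seed)
            P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
            Q = rng.normal(scale=0.1, size=(n_pois, k))    # POI latent factors
            c = np.zeros(n_contexts)                       # per-context bias (e.g., time slot)
            for _ in range(epochs):
                for u, i, ctx, r in interactions:          # (user, poi, context id, rating)
                    pu, qi = P[u].copy(), Q[i].copy()
                    err = r - (pu @ qi + c[ctx])
                    P[u] += lr * (err * qi - reg * pu)
                    Q[i] += lr * (err * pu - reg * qi)
                    c[ctx] += lr * (err - reg * c[ctx])
            return P, Q, c

        # Toy usage: one user, two POIs, two time-slot contexts
        P, Q, c = train_context_mf([(0, 0, 0, 5.0), (0, 1, 1, 2.0)], 1, 2, 2)
        print(P @ Q.T + c[0])  # predicted scores for context 0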

    Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

    Ensuring alignment, which refers to making models behave in accordance with human intentions [1,2], has become a critical task before deploying large language models (LLMs) in real-world applications. For instance, OpenAI devoted six months to iteratively aligning GPT-4 before its release [3]. However, a major challenge faced by practitioners is the lack of clear guidance on evaluating whether LLM outputs align with social norms, values, and regulations. This obstacle hinders systematic iteration and deployment of LLMs. To address this issue, this paper presents a comprehensive survey of key dimensions that are crucial to consider when assessing LLM trustworthiness. The survey covers seven major categories of LLM trustworthiness: reliability, safety, fairness, resistance to misuse, explainability and reasoning, adherence to social norms, and robustness. Each major category is further divided into several sub-categories, resulting in 29 sub-categories in total. Additionally, a subset of 8 sub-categories is selected for further investigation, and corresponding measurement studies are designed and conducted on several widely-used LLMs. The measurement results indicate that, in general, more aligned models tend to perform better in terms of overall trustworthiness. However, the effectiveness of alignment varies across the different trustworthiness categories considered, which highlights the importance of more fine-grained analysis, testing, and continuous improvement of LLM alignment. By shedding light on these key dimensions of LLM trustworthiness, this paper aims to provide valuable insights and guidance to practitioners in the field. Understanding and addressing these concerns will be crucial for achieving reliable and ethically sound deployment of LLMs in various applications.
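
    As a small, purely illustrative sketch of how such measurement results can be rolled up from sub-category to category level, the following Python snippet averages per-sub-category scores into category scores; the category and sub-category names here are examples and do not reproduce the paper's full taxonomy.

        # Hypothetical aggregation of trustworthiness scores (illustrative names only).
        from statistics import mean

        CATEGORIES = {
            "reliability": ["factuality", "calibration"],
            "safety": ["toxicity", "violence"],
            "fairness": ["stereotype_bias"],
        }

        def aggregate(sub_scores):
            # Roll per-sub-category scores (e.g., in [0, 1]) up to category averages
            cat_scores = {}
            for cat, subs in CATEGORIES.items():
                present = [sub_scores[s] for s in subs if s in sub_scores]
                if present:
                    cat_scores[cat] = mean(present)
            return cat_scores

        print(aggregate({"factuality": 0.8, "toxicity": 0.95, "stereotype_bias": 0.7}))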

    Aspect-based sentiment analysis for social recommender systems.

    Social recommender systems harness knowledge from social content, experiences and interactions to provide recommendations to users. The retrieval and ranking of products, using similarity knowledge, is central to the recommendation architecture. To enhance recommendation performance, an effective representation of products is essential. Social content such as product reviews contains experiential knowledge in the form of user opinions centred on product aspects. Making sense of these for recommender systems requires the capability to reason with text. However, Natural Language Processing (NLP) toolkits trained on formal text documents encounter challenges when analysing product reviews, due to their informal nature. This calls for novel methods and algorithms to capitalise on the textual content of product reviews together with other knowledge resources. In this thesis, methods to utilise user purchase preference knowledge - inferred from viewed and purchased product behaviour - are proposed to overcome the challenges encountered in analysing textual content. This thesis introduces three major methods to improve the performance of social recommender systems. First, an effective aspect extraction method that combines the strengths of both dependency relations and frequent noun analysis is proposed. Thereafter, this thesis presents how extracted aspects can be used to structure opinionated content, enabling sentiment knowledge to enrich product representations. Second, a novel method to integrate aspect-level sentiment analysis with implicit knowledge extracted from users' product purchase preferences is presented. The role of sentiment distribution and threshold analysis in the proposed integration method is also explored. Third, this thesis explores the utility of feature selection techniques to rank and select relevant aspects for product representation. For this purpose, this thesis presents how established dimensionality reduction approaches from text classification can be employed to select a subset of aspects for recommendation purposes. Finally, a comprehensive evaluation of all the proposed methods in this thesis is presented using a computational measure of 'better' and Mean Average Precision (MAP) on seven real-world datasets.
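
    As a sketch of how dependency relations and frequent noun analysis can be combined for aspect extraction, the following Python snippet counts frequent nouns and also keeps nouns modified by adjectives via the amod dependency; it assumes spaCy with an English model installed, and it illustrates the general technique rather than the thesis implementation.

        # Illustrative aspect extraction combining frequent nouns with dependency cues.
        # Assumes spaCy and the "en_core_web_sm" model are installed.
        from collections import Counter
        import spacy

        nlp = spacy.load("en_core_web_sm")

        def extract_aspects(reviews, min_freq=2):
            noun_counts = Counter()
            dep_aspects = set()
            for review in reviews:
                doc = nlp(review)
                for token in doc:
                    if token.pos_ in ("NOUN", "PROPN"):
                        noun_counts[token.lemma_.lower()] += 1
                    # Dependency cue: adjective modifying a noun ("great battery")
                    if token.dep_ == "amod" and token.head.pos_ == "NOUN":
                        dep_aspects.add(token.head.lemma_.lower())
            frequent = {n for n, cnt in noun_counts.items() if cnt >= min_freq}
            return sorted(frequent | dep_aspects)

        print(extract_aspects([
            "The phone has a great battery and a sharp screen.",
            "Battery life is solid, and the screen is bright.",
            "Excellent battery, decent camera, average screen.",
        ]))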