13,774 research outputs found

    A Personalized System for Conversational Recommendations

    Full text link
    Searching for and making decisions about information is becoming increasingly difficult as the amount of information and the number of choices increase. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, as compared to a control group of users interacting with a non-adaptive version of the system.
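    The abstract describes an attribute-narrowing dialogue loop: the system asks about item attributes, the user answers, and the candidate set shrinks until a recommendation can be made. The following is a minimal, hypothetical sketch of that loop; the item data, attribute order, and matching logic are invented for illustration and are not the Adaptive Place Advisor's actual user model.

```python
# Toy sketch of an attribute-narrowing recommendation dialogue (illustrative only).
items = [
    {"name": "Cafe Sol",   "cuisine": "mexican", "price": "low"},
    {"name": "Trattoria",  "cuisine": "italian", "price": "mid"},
    {"name": "Taqueria X", "cuisine": "mexican", "price": "mid"},
]
attributes = ["cuisine", "price"]   # a learned user model would reorder or skip these

candidates = items
for attr in attributes:
    if len(candidates) <= 1:
        break                                            # few enough items: recommend
    answer = input(f"What {attr} would you like? ")      # user responds in dialogue
    candidates = [i for i in candidates if i[attr] == answer.strip().lower()]

print("Recommendation:", candidates[0]["name"] if candidates else "no match found")
```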

    The benefits of opening recommendation to human interaction

    Get PDF
    This paper describes work in progress that uses an interactive recommendation process to construct new objects which are tailored to user preferences. The novelty in our work is moving from the recommendation of static objects like consumer goods, movies or books, towards dynamically-constructed recommendations which are built as part of the recommendation process. As a proof of concept, we build running or jogging routes for visitors to a city, recommending routes to users according to their preferences, and we present details of this system.

    A Conversation is Worth A Thousand Recommendations: A Survey of Holistic Conversational Recommender Systems

    Full text link
    Conversational recommender systems (CRS) generate recommendations through an interactive process. However, not all CRS approaches use human conversations as their source of interaction data; the majority of prior CRS work simulates interactions by exchanging entity-level information. As a result, claims of prior CRS work do not generalise to real-world settings where conversations take unexpected turns, or where conversational and intent understanding is not perfect. To tackle this challenge, the research community has started to examine holistic CRS, which are trained using conversational data collected from real-world scenarios. Despite their emergence, such holistic approaches are under-explored. We present a comprehensive survey of holistic CRS methods by summarising the literature in a structured manner. Our survey recognises holistic CRS approaches as having three components: 1) a backbone language model, and the optional use of 2) external knowledge and/or 3) external guidance. We also give a detailed analysis of CRS datasets and evaluation methods in real application scenarios. We offer our insight into the current challenges of holistic CRS and possible future trends. Comment: Accepted by 5th KaRS Workshop @ ACM RecSys 2023, 8 pages.

    Personalized Memory Transfer for Conversational Recommendation Systems

    Get PDF
    Dialogue systems are becoming an increasingly common part of many users' daily routines. Natural language serves as a convenient interface to express our preferences with the underlying systems. In this work, we implement a full-fledged Conversational Recommendation System, mainly focusing on learning user preferences through online conversations. Compared to the traditional collaborative filtering setting where feedback is provided quantitatively, conversational users may only indicate their preferences at a high level with inexact item mentions in the form of natural language chit-chat. This makes it harder for the system to correctly interpret user intent and in turn provide useful recommendations to the user. To tackle the ambiguities in natural language conversations, we propose Personalized Memory Transfer (PMT), which learns a personalized model in an online manner by leveraging a key-value memory structure to distill user feedback directly from conversations. This memory structure enables the integration of prior knowledge to transfer existing item representations/preferences and natural language representations. We also implement a retrieval-based response generation module, where the system, in addition to recommending items to the user, also responds to the user, either to elicit more information regarding the user intent or just for casual chit-chat. The experiments were conducted on two public datasets, and the results demonstrate the effectiveness of the proposed approach.
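    A key-value memory read of the kind the abstract refers to can be summarised in a few lines. The sketch below is a generic illustration with assumed shapes and random toy data, not the authors' PMT implementation: the keys stand in for embeddings of past feedback utterances, the values for transferred item/preference representations, and the read is a softmax-weighted sum.

```python
# Hypothetical sketch of a key-value memory read (not the authors' PMT code).
import numpy as np

def memory_read(query, keys, values):
    """Attend over memory keys with a query embedding and return a
    weighted combination of the stored value (preference) vectors."""
    scores = keys @ query                     # similarity of the query to each key
    weights = np.exp(scores - scores.max())   # softmax over memory slots
    weights /= weights.sum()
    return weights @ values                   # distilled preference vector

# Toy example: 4 memory slots, 8-dimensional key/value embeddings.
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))      # e.g. embeddings of past feedback utterances
values = rng.normal(size=(4, 8))    # e.g. transferred item/preference representations
query = rng.normal(size=8)          # embedding of the current user utterance
print(memory_read(query, keys, values).shape)  # -> (8,)
```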

    Chain-of-Choice Hierarchical Policy Learning for Conversational Recommendation

    Full text link
    Conversational Recommender Systems (CRS) illuminate user preferences via multi-round interactive dialogues, ultimately navigating towards precise and satisfactory recommendations. However, contemporary CRS are limited to inquiring binary or multi-choice questions based on a single attribute type (e.g., color) per round, which causes excessive rounds of interaction and diminishes the user's experience. To address this, we propose a more realistic and efficient conversational recommendation problem setting, called Multi-Type-Attribute Multi-round Conversational Recommendation (MTAMCR), which enables CRS to inquire about multi-choice questions covering multiple types of attributes in each round, thereby improving interactive efficiency. Moreover, by formulating MTAMCR as a hierarchical reinforcement learning task, we propose a Chain-of-Choice Hierarchical Policy Learning (CoCHPL) framework to enhance both the questioning efficiency and recommendation effectiveness in MTAMCR. Specifically, a long-term policy over options (i.e., ask or recommend) determines the action type, while two short-term intra-option policies sequentially generate the chain of attributes or items through multi-step reasoning and selection, optimizing the diversity and interdependence of questioning attributes. Finally, extensive experiments on four benchmarks demonstrate the superior performance of CoCHPL over prevailing state-of-the-art methods. Comment: Release with source code.
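    The option structure the abstract outlines, a high-level choice between asking and recommending followed by a low-level policy that builds a chain of choices, can be pictured as a simple control loop. The sketch below uses random placeholder policies and invented attribute/item names purely to show that flow; it is not the CoCHPL framework itself.

```python
# Minimal, hypothetical sketch of an option-style decision loop (not CoCHPL).
import random

def long_term_policy(state):
    # Real method: a learned policy over options; here a random stand-in.
    return random.choice(["ask", "recommend"])

def intra_option_policy(state, candidates, chain_len=3):
    # Real method: multi-step reasoning/selection over attributes or items;
    # here we just draw without replacement to mimic a diverse chain of choices.
    chain, pool = [], list(candidates)
    for _ in range(min(chain_len, len(pool))):
        choice = random.choice(pool)
        chain.append(choice)
        pool.remove(choice)
    return chain

state = {"turn": 1}
attributes = ["color:red", "brand:acme", "size:large", "material:cotton"]
items = ["item_17", "item_42", "item_99"]
option = long_term_policy(state)
chain = intra_option_policy(state, attributes if option == "ask" else items)
print(option, chain)
```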

    Foundation Metrics: Quantifying Effectiveness of Healthcare Conversations powered by Generative AI

    Full text link
    Generative Artificial Intelligence is set to revolutionize healthcare delivery by transforming traditional patient care into a more personalized, efficient, and proactive process. Chatbots, serving as interactive conversational models, will probably drive this patient-centered transformation in healthcare. Through the provision of various services, including diagnosis, personalized lifestyle recommendations, and mental health support, the objective is to substantially augment patient health outcomes, all the while mitigating the workload burden on healthcare providers. The life-critical nature of healthcare applications necessitates establishing a unified and comprehensive set of evaluation metrics for conversational models. Existing evaluation metrics proposed for various generic large language models (LLMs) demonstrate a lack of comprehension regarding medical and health concepts and their significance in promoting patients' well-being. Moreover, these metrics neglect pivotal user-centered aspects, including trust-building, ethics, personalization, empathy, user comprehension, and emotional support. The purpose of this paper is to explore state-of-the-art LLM-based evaluation metrics that are specifically applicable to the assessment of interactive conversational models in healthcare. Subsequently, we present a comprehensive set of evaluation metrics designed to thoroughly assess the performance of healthcare chatbots from an end-user perspective. These metrics encompass an evaluation of language processing abilities, impact on real-world clinical tasks, and effectiveness in user-interactive conversations. Finally, we engage in a discussion concerning the challenges associated with defining and implementing these metrics, with particular emphasis on confounding factors such as the target audience, evaluation methods, and prompt techniques involved in the evaluation process. Comment: 13 pages, 4 figures, 2 tables, journal paper.

    Utilizing Teacher Response to Help Students Meet and Transfer First-Year Composition Course Objectives

    Full text link
    For decades, considerable scholarship has explored how teachers can respond more effectively to student writing. There has also been significant research on how first-year-composition concepts can be transferred by students to other arenas of discourse outside of this required course. This thesis begins with a brief discussion on the meaning of transfer. Then, with the Council of Writing Program Administrators' Outcomes (knowledge of conventions, rhetorical knowledge, critical thinking, processes) as a starting point, I redefine and pare down the seven response modes described by Elaine O. Lees to five types of response (calling for correction, reminding, explaining, suggesting, and assigning) designed to create a framework for understanding how teachers can respond to student writing more effectively. Additionally, four recommendations are presented for maximizing the effectiveness of teacher response, while providing students a voice in the conversation on the page. The first recommendation is for teachers to underline content in the draft, calling the student's attention to issues in the text they must revise or to a suggestion the teacher has made. The second recommendation is to use peer response as an extension of teacher response by having peer groups work together to address each comment provided by the teacher on their drafts. The third recommendation calls on teachers to take an individualized approach to response based on the disciplines students plan on joining. The final recommendation is the inclusion of critical thinking challenges that inquire about the student's source vetting and test their logic and reasoning skills through additional questioning and assigning within the teacher response. The purpose of this thesis is to theorize how the use of these recommendations and response types can serve as a catalyst for objectives to be met and for transfer to occur for FYC students.

    Contextual Understanding in Neural Dialog Systems: the Integration of External Knowledge Graphs for Generating Coherent and Knowledge-rich Conversations

    Get PDF
    The integration of external knowledge graphs has emerged as a powerful approach to enrich conversational AI systems with coherent and knowledge-rich conversations. This paper provides an overview of the integration process and highlights its benefits. Knowledge graphs serve as structured representations of information, capturing the relationships between entities through nodes and edges. They offer an organized and efficient means of representing factual knowledge. External knowledge graphs, such as DBpedia, Wikidata, Freebase, and Google's Knowledge Graph, are pre-existing repositories that encompass a wide range of information across various domains. These knowledge graphs are compiled by aggregating data from diverse sources, including online encyclopedias, databases, and structured repositories. To integrate an external knowledge graph into a conversational AI system, a connection needs to be established between the system and the knowledge graph. This can be achieved through APIs or by importing a copy of the knowledge graph into the AI system's internal storage. Once integrated, the conversational AI system can query the knowledge graph to retrieve relevant information when a user poses a question or makes a statement. When analyzing user inputs, the conversational AI system identifies entities or concepts that require additional knowledge. It then formulates queries to retrieve relevant information from the integrated knowledge graph. These queries may involve searching for specific entities, retrieving related entities, or accessing properties and attributes associated with the entities. The obtained information is used to generate coherent and knowledge-rich responses. By integrating external knowledge graphs, conversational AI systems can augment their internal knowledge base and provide more accurate and up-to-date responses. The retrieved information allows the system to extract relevant facts, provide detailed explanations, or offer additional context. This integration empowers AI systems to deliver comprehensive and insightful responses that enhance user experience. As external knowledge graphs are regularly updated with new information and improvements, conversational AI systems should ensure their integrated knowledge graphs remain current. This can be achieved through periodic updates, either by synchronizing the system's internal representation with the external knowledge graph or by querying the external knowledge graph in real time.
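    The query step described above (identify an entity in the user's input, retrieve related facts from an external knowledge graph, and fold them into the response) can be illustrated against Wikidata's public SPARQL endpoint. The sketch below assumes an upstream entity linker has already mapped the mention to a Wikidata QID; the endpoint URL and query pattern are standard Wikidata usage, while the function names and the grounding step around them are invented for this example.

```python
# Hedged sketch: pull a few facts about a linked entity from Wikidata via SPARQL.
import requests

WIKIDATA_SPARQL = "https://query.wikidata.org/sparql"

def fetch_facts(qid, limit=5):
    """Return (property label, value label) pairs for a Wikidata entity."""
    query = f"""
    SELECT ?propLabel ?valueLabel WHERE {{
      wd:{qid} ?p ?value .
      ?prop wikibase:directClaim ?p .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }} LIMIT {limit}
    """
    resp = requests.get(
        WIKIDATA_SPARQL,
        params={"query": query, "format": "json"},
        headers={"User-Agent": "kg-dialog-demo/0.1"},  # polite identification
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    return [(r["propLabel"]["value"], r["valueLabel"]["value"]) for r in rows]

# e.g. Q937 is Albert Einstein; the retrieved facts can then be woven into the reply.
for prop, value in fetch_facts("Q937"):
    print(f"{prop}: {value}")
```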

    Leveraging Large Language Models in Conversational Recommender Systems

    Full text link
    A Conversational Recommender System (CRS) offers increased transparency and control to users by enabling them to engage with the system through a real-time multi-turn dialogue. Recently, Large Language Models (LLMs) have exhibited an unprecedented ability to converse naturally and incorporate world knowledge and common-sense reasoning into language understanding, unlocking the potential of this paradigm. However, effectively leveraging LLMs within a CRS introduces new technical challenges, including properly understanding and controlling a complex conversation and retrieving from external sources of information. These issues are exacerbated by a large, evolving item corpus and a lack of conversational data for training. In this paper, we provide a roadmap for building an end-to-end large-scale CRS using LLMs. In particular, we propose new implementations for user preference understanding, flexible dialogue management and explainable recommendations as part of an integrated architecture powered by LLMs. For improved personalization, we describe how an LLM can consume interpretable natural language user profiles and use them to modulate session-level context. To overcome conversational data limitations in the absence of an existing production CRS, we propose techniques for building a controllable LLM-based user simulator to generate synthetic conversations. As a proof of concept, we introduce RecLLM, a large-scale CRS for YouTube videos built on LaMDA, and demonstrate its fluency and diverse functionality through some illustrative example conversations.
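    One concrete idea in the abstract, conditioning the session on an interpretable natural-language user profile, can be sketched as plain prompt assembly. The snippet below is only an illustration of that idea, not RecLLM's architecture: call_llm is a placeholder for whatever model endpoint is available, and the profile text and dialogue are invented.

```python
# Illustrative sketch: prepend a natural-language user profile to the session
# context so an LLM can condition its next recommendation on it (assumed setup).

def build_prompt(user_profile: str, dialogue: list[tuple[str, str]], task: str) -> str:
    """Assemble profile, dialogue history, and instruction into one prompt."""
    history = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in dialogue)
    return (
        f"User profile (long-term preferences):\n{user_profile}\n\n"
        f"Conversation so far:\n{history}\n\n"
        f"Instruction: {task}\nAssistant:"
    )

def call_llm(prompt: str) -> str:
    # Placeholder; swap in a real model call in an actual system.
    return "(model response here)"

profile = "Enjoys long-form science documentaries; dislikes reaction videos."
dialogue = [("User", "Anything good to watch tonight?")]
prompt = build_prompt(profile, dialogue,
                      "Recommend one video and briefly explain the recommendation.")
print(call_llm(prompt))
```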