170 research outputs found

    Data-driven decision making in Critique-based recommenders: from a critique to social media data

    Full text link
    In the last decade there have been a large number of proposals in the field of critique-based recommenders. Critique-based recommenders are data-driven in nature, since they use a conversational, cyclical recommendation process to elicit user feedback. In the literature, the proposals made differ mainly in two aspects: the source of data, and how this data is analyzed to extract knowledge for providing users with recommendations. In this paper, we propose new algorithms that address these two aspects. Firstly, we propose a new algorithm, called HOR, which integrates several data sources, such as current user preferences (i.e., a critique), product descriptions, previous critiquing sessions by other users, and users' opinions expressed as ratings on social media web sites. Secondly, we propose adding compatibility and weighting scores, which turn user behavior into knowledge, to HOR and to a previous state-of-the-art approach named HGR, to help both algorithms make smarter recommendations. We have evaluated our proposals in two ways: with a simulator and with real users. A comparison of our proposals with state-of-the-art approaches shows that the new recommendation algorithms significantly outperform previous ones.

    Evaluating Conversational Recommender Systems: A Landscape of Research

    Full text link
    Conversational recommender systems aim to interactively support online users in their information search and decision-making processes in an intuitive way. With the latest advances in voice-controlled devices, natural language processing, and AI in general, such systems have received increased attention in recent years. Technically, conversational recommenders are usually complex multi-component applications and often consist of multiple machine learning models and a natural language user interface. Evaluating such a complex system in a holistic way can therefore be challenging, as it requires (i) the assessment of the quality of the different learning components, and (ii) the quality perception of the system as a whole by users. Thus, a mixed methods approach is often required, which may combine objective (computational) and subjective (perception-oriented) evaluation techniques. In this paper, we review common evaluation approaches for conversational recommender systems, identify possible limitations, and outline future directions towards more holistic evaluation practices.

    VR Technologies in Cultural Heritage

    Get PDF
    This open access book constitutes the refereed proceedings of the First International Conference on VR Technologies in Cultural Heritage, VRTCH 2018, held in Brasov, Romania in May 2018. The 13 revised full papers along with the 5 short papers presented were carefully reviewed and selected from 21 submissions. The papers of this volume are organized in topical sections on data acquisition and modelling, visualization methods / audio, sensors and actuators, data management, restoration and digitization, and cultural tourism.

    Critiquing-based Modeling of Subjective Preferences

    Get PDF
    Funding Information: This work has been supported by Helsinki Institute for Information Technology HIIT. Publisher Copyright: © 2022 ACM. Applications designed for entertainment and other non-instrumental purposes are challenging to optimize because the relationships between system parameters and user experience can be unclear. Ideally, we would crowdsource these design questions, but existing approaches are geared towards evaluating or ranking discrete choices, not towards optimizing over continuous parameter spaces. In addition, users are accustomed to informally expressing opinions about experiences as critiques (e.g., it's too cold, too spicy, too big), rather than giving the precise feedback an optimization algorithm would require. Unfortunately, it can be difficult to analyze qualitative feedback, especially in the context of quantitative modeling. In this article, we present collective criticism, a critiquing-based approach for modeling relationships between system parameters and subjective preferences. We transform critiques, such as "it was too easy/too challenging", into censored intervals and analyze them using interval regression. Collective criticism has several advantages over other approaches: "too much/too little"-style feedback is intuitive for users and allows us to build predictive models for the optimal parameterization of the variables being critiqued. We present two studies that demonstrate the flexibility of our approach and show that it produces robust results that are straightforward to interpret and in line with users' stated preferences. Peer reviewed
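    The censored-interval idea in the abstract above can be illustrated with a small sketch. This is not the authors' implementation: the critique labels, the "just right" interval width, and the Gaussian model of the user optimum are all assumptions made here for illustration. Each tried setting plus its critique becomes an interval on the user's unknown optimum, and a simple maximum-likelihood fit recovers the optimum's location.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical critique data: (tried setting, critique) for a difficulty knob in [0, 1].
    data = [(0.2, "too easy"), (0.3, "too easy"), (0.8, "too hard"),
            (0.9, "too hard"), (0.5, "just right"), (0.55, "just right")]

    EPS = 0.05  # assumed half-width of the "just right" interval

    def to_interval(setting, critique):
        """Map a critique to a censored interval on the user's optimum."""
        if critique == "too easy":      # optimum lies above the tried setting
            return (setting, np.inf)
        if critique == "too hard":      # optimum lies below the tried setting
            return (-np.inf, setting)
        return (setting - EPS, setting + EPS)  # "just right": narrow interval

    intervals = [to_interval(s, c) for s, c in data]

    def neg_log_lik(params):
        """Interval-censored Gaussian likelihood: optimum ~ N(mu, sigma)."""
        mu, log_sigma = params
        sigma = np.exp(log_sigma)  # parameterized in log space to keep sigma > 0
        ll = 0.0
        for lo, hi in intervals:
            p = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
            ll += np.log(max(p, 1e-12))  # guard against log(0)
        return -ll

    res = minimize(neg_log_lik, x0=[0.5, np.log(0.2)])
    mu_hat = res.x[0]
    print(f"estimated optimal setting: {mu_hat:.2f}")
    ```

    With critiques bracketing the optimum from both sides, the estimate lands between the "too easy" and "too hard" settings, near the "just right" ones, which is what makes this kind of feedback usable for optimization despite being qualitative.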

    ACII 2009: Affective Computing and Intelligent Interaction. Proceedings of the Doctoral Consortium 2009

    Get PDF