Interaction design guidelines on critiquing-based recommender systems
A critiquing-based recommender system acts like an artificial salesperson. It engages users in a conversational dialog in which users can provide feedback, in the form of critiques, on the sample items shown to them. The feedback, in turn, enables the system to refine its understanding of the user's preferences and its prediction of what the user truly wants. The system can then recommend products that may better stimulate the user's interest in the next interaction cycle. In this paper, we report an extensive investigation comparing various approaches to devising the critiquing opportunities designed into these recommender systems. More specifically, we investigated two major design elements that are necessary for a critiquing-based recommender system: critiquing coverage (one vs. multiple items returned to be critiqued during each recommendation cycle) and critiquing aid (system-suggested critiques, i.e., a set of critique suggestions for users to select, vs. a user-initiated critiquing facility, i.e., support for users to create critiques on their own). Through a series of three user trials, we measured how real users reacted to systems with varied setups of the two elements. In particular, we found that giving users the choice of critiquing one of multiple items (as opposed to just one) has significantly positive impacts on increasing users' decision accuracy (particularly in the first recommendation cycle) and on saving their objective effort (in the later critiquing cycles). As for critiquing aids, the hybrid design combining system-suggested critiques and user-initiated critiquing support performed best at inspiring users' decision confidence and increasing their intention to return, in comparison with the exclusive, uncombined approaches.
Therefore, the results of our studies shed light on design guidelines for finding the sweet spot balancing user initiative and system support in the development of an effective and user-centric critiquing-based recommender system.
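The critique-refine loop the abstract describes can be illustrated with a minimal sketch. The toy catalog, the `Item` type, and `apply_critique` are illustrative assumptions, not code from the paper:

```python
# Minimal sketch of two critiquing cycles over a toy laptop catalog.
# All names here are hypothetical; the paper describes the interaction
# pattern, not this implementation.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    price: int
    screen: float  # inches

CATALOG = [
    Item("A", 900, 13.3),
    Item("B", 1200, 15.6),
    Item("C", 700, 14.0),
    Item("D", 1500, 17.0),
]

def apply_critique(items, attr, op, value):
    """Filter the candidate set by a unit critique such as 'price < 1000'."""
    ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}
    return [it for it in items if ops[op](getattr(it, attr), value)]

# Cycle 1: the system shows an item; the user critiques "price < 1000".
candidates = apply_critique(CATALOG, "price", "<", 1000)
# Cycle 2: among the survivors, the user critiques "screen > 13.5".
candidates = apply_critique(candidates, "screen", ">", 13.5)
print([it.name for it in candidates])  # → ['C']
```

Each critique shrinks the candidate set, which is how the system's picture of the user's preferences sharpens from cycle to cycle.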
Hybrid Critiquing Recommender System Using Incremental Critiquing and Example Critiquing
ABSTRACT: In e-commerce applications, users often have limited knowledge of the products they want to buy. Such users need virtual sales assistants that can help them find products matching their wishes. E-commerce applications regard recommender systems, especially conversational recommender systems, as an important step toward building more proactive and reliable virtual sales assistants. Conversational recommender systems help users find the desired products while gathering feedback from them; one form of feedback often used in recommender systems is critiquing. In this Final Project, a recommender system is built using Example Critiquing and Incremental Critiquing. In Example Critiquing, critiques are created purely by the user. In Incremental Critiquing, critiques are built by the system (compound critiques), and the system stores previous critiques (a critique history) to be reused in later cycles. The system is evaluated on recommendation effort and recommendation accuracy through a user study, in which a group of participants tested the system. The recommendation accuracy obtained with the Hybrid method is 86.41%, while the EC (Example Critiquing) method obtained 61.54%. The average number of interactions required to find the desired product (recommendation effort) in this study was 2.75 cycles (8.9469 minutes) for the Hybrid Critiquing method and 2.11 cycles (7.3069 minutes) for the EC method. Keywords: recommender system, example critiquing, incremental critiquing, unit critique, compound critique, user study
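The incremental-critiquing idea summarized above, in which the system keeps a critique history and re-applies past critiques in later cycles, can be sketched as follows. The function names and the toy catalog are illustrative assumptions, not the thesis's code:

```python
# Sketch of incremental critiquing: every critique is appended to a
# history, and each new recommendation cycle enforces the whole history.
def satisfies(item, critique):
    attr, op, value = critique
    return item[attr] < value if op == "<" else item[attr] > value

def recommend(catalog, history):
    """Return items consistent with every critique stored so far."""
    return [it for it in catalog if all(satisfies(it, c) for c in history)]

catalog = [
    {"id": 1, "price": 800, "ram": 8},
    {"id": 2, "price": 1100, "ram": 16},
    {"id": 3, "price": 950, "ram": 16},
]

history = []                        # the critique history, grown each cycle
history.append(("price", "<", 1000))
cycle1 = recommend(catalog, history)
history.append(("ram", ">", 8))     # new critique; the old one still holds
cycle2 = recommend(catalog, history)
print([it["id"] for it in cycle1], [it["id"] for it in cycle2])  # → [1, 3] [3]
```

In pure Example Critiquing, by contrast, each cycle would apply only the user's latest critique rather than the accumulated history.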
Data-driven decision making in Critique-based recommenders: from a critique to social media data
In the last decade there have been a large number of proposals in the field of critique-based recommenders. Critique-based recommenders are data-driven in their nature, since they use a conversational, cyclical recommendation process to elicit user feedback. In the literature, the proposals made differ mainly in two aspects: the source of data and how this data is analyzed to extract knowledge for providing users with recommendations. In this paper, we propose new algorithms that address these two aspects. Firstly, we propose a new algorithm, called HOR, which integrates several data sources, such as current user preferences (i.e., a critique), product descriptions, previous critiquing sessions by other users, and users' opinions expressed as ratings on social media web sites. Secondly, we propose adding compatibility and weighting scores, which turn user behavior into knowledge, to HOR and to a previous state-of-the-art approach named HGR, to help both algorithms make smarter recommendations. We have evaluated our proposals in two ways: with a simulator and with real users. A comparison of our proposals with state-of-the-art approaches shows that the new recommendation algorithms significantly outperform previous ones.
A Cognitively Inspired Clustering Approach for Critique-Based Recommenders
The purpose of recommender systems is to support humans in the purchasing decision-making process. Decision-making is a human activity based on cognitive information. In the field of recommender systems, critiquing has been widely applied as an effective approach for obtaining users' feedback on recommended products. In the last decade, there have been a large number of proposals in the field of critique-based recommenders. These proposals mainly differ in two aspects: in the source of data and in how it is mined to provide the user with recommendations. To date, no approach has mined data using an adaptive clustering algorithm to increase the recommender's performance. In this paper, we describe how we added a clustering process to a critique-based recommender, thereby adapting the recommendation process, and how we defined a cognitive user preference model based on the preferences (i.e., defined by critiques) received from the user. We have developed several proposals based on clustering, whose acronyms are MCP, CUM, CUM-I, and HGR-CUM-I. We compare our proposals with two well-known state-of-the-art approaches: incremental critiquing (IC) and history-guided recommendation (HGR). The results of our experiments showed that using clustering in a critique-based recommender leads to an improvement in recommendation efficiency, since all the proposals outperform the baseline IC algorithm. Moreover, the performance of the best proposal, HGR-CUM-I, is significantly superior to both the IC and HGR algorithms. Our results indicate that introducing clustering into a critique-based recommender is an appealing option, since it enhances overall efficiency, especially with a large data set.
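As a rough illustration of the general idea (not the paper's MCP/CUM/HGR-CUM-I algorithms), clustering items by an attribute lets a critique select a whole cluster instead of rescanning the full catalog. The one-dimensional k-means below, its naive initialization, and the toy price data are all assumptions for the sketch:

```python
# Toy 1-D k-means over item prices, pure stdlib; a "cheaper" critique
# then picks the low-price cluster wholesale.
def kmeans_1d(values, k=2, iters=10):
    centroids = sorted(values)[:k]  # naive init: the k smallest values
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

prices = [700, 750, 800, 1400, 1500, 1550]
centroids, clusters = kmeans_1d(prices)
# A "cheaper" critique selects the cluster with the lower mean price.
cheap = min(clusters, key=lambda c: sum(c) / len(c))
print(sorted(cheap))  # → [700, 750, 800]
```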
Evaluating the effectiveness of explanations for recommender systems: Methodological issues and empirical studies on the impact of personalization
Evaluating Conversational Recommender Systems: A Landscape of Research
Conversational recommender systems aim to interactively support online users in their information search and decision-making processes in an intuitive way. With the latest advances in voice-controlled devices, natural language processing, and AI in general, such systems have received increased attention in recent years. Technically, conversational recommenders are usually complex multi-component applications and often consist of multiple machine learning models and a natural language user interface. Evaluating such a complex system in a holistic way can therefore be challenging, as it requires (i) the assessment of the quality of the different learning components, and (ii) the quality perception of the system as a whole by users. Thus, a mixed-methods approach is often required, which may combine objective (computational) and subjective (perception-oriented) evaluation techniques. In this paper, we review common evaluation approaches for conversational recommender systems, identify possible limitations, and outline future directions towards more holistic evaluation practices.
Evaluating product search and recommender systems for E-commerce environments
Online systems that help users select the most preferential item from a large electronic catalog are known as product search and recommender systems. Evaluation of various proposed technologies is essential for further development in this area. This paper describes the design and implementation of two user studies in which a particular product search tool, known as example critiquing, was evaluated against a chosen baseline model. The results confirm that example critiquing significantly reduces users' task time and error rate while increasing decision accuracy. Additionally, the results of the second user study show that a particular implementation of example critiquing also made users more confident about their choices. The main contribution is that through these two user studies, an evaluation framework of three criteria was successfully identified, which can be used for evaluating general product search and recommender systems in E-commerce environments. These two experiments and the actual procedures also shed light on some of the most important issues that need to be considered when evaluating such tools, such as the preparation of materials for evaluation, user task design, the context of evaluation, the criteria, the measures, and the methodology of result analysis.
Controllable Recommenders using Deep Generative Models and Disentanglement
In this paper, we consider controllability as a means to satisfy dynamic preferences of users, enabling them to control recommendations such that their current preference is met. While deep models have shown improved performance for collaborative filtering, they are generally not amenable to fine-grained control by a user, leading to the development of methods like deep language critiquing. We propose an alternate view, where instead of keyphrase-based critiques, a user is provided 'knobs' in a disentangled latent space, with each knob corresponding to an item aspect. Disentanglement here refers to a latent space where generative factors (here, a preference towards an item category like genre) are captured independently in their respective dimensions, thereby enabling predictable manipulations, otherwise not possible in an entangled space. We propose using a (semi-)supervised disentanglement objective for this purpose, as well as multiple metrics to evaluate the controllability and the degree of personalization of controlled recommendations. We show that by updating the disentangled latent space based on user feedback, and by exploiting the generative nature of the recommender, controlled and personalized recommendations can be produced. Through experiments on two widely used collaborative filtering datasets, we demonstrate that a controllable recommender can be trained with a slight reduction in recommender performance, provided enough supervision is provided.
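The "knob" behavior that disentanglement enables can be shown with a deliberately tiny toy model (a diagonal linear decoder, not the paper's deep architecture; all names and numbers are illustrative): because each latent dimension maps to exactly one aspect, turning one knob changes only that aspect's score.

```python
# Toy disentangled decoder: a diagonal map, so latent dimension i
# affects only aspect i of the decoded preference scores.
WEIGHTS = [1.0, 2.0, 0.5]          # hypothetical aspects: action, comedy, drama
ASPECTS = ["action", "comedy", "drama"]

def decode(z):
    """Generate per-aspect preference scores from latent code z."""
    return [w * zi for w, zi in zip(WEIGHTS, z)]

z = [0.2, 0.4, 0.9]                # the user's current latent code
before = decode(z)

z[1] += 1.0                        # turn the "comedy" knob up

after = decode(z)

# Only the comedy score moved; the other aspects are untouched.
changed = [a for a, b, c in zip(ASPECTS, before, after) if b != c]
print(changed)  # → ['comedy']
```

In an entangled space, the decoder would mix latent dimensions, so the same knob turn would perturb several aspects at once, which is exactly the unpredictability disentanglement is meant to remove.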