Understanding the Role of Interactivity and Explanation in Adaptive Experiences

Abstract

Adaptive experiences have been an active area of research over the past few decades, accompanied by advances in technologies such as machine learning and artificial intelligence. Whether current research on adaptive experiences focuses on personalization algorithms, explainability, user engagement, or privacy and security, there is growing interest in, and growing resources devoted to, developing and improving these research directions. Yet even though research on adaptive experiences is dynamic and rapidly evolving, achieving a high level of user engagement in adaptive experiences remains a challenge. This dissertation aims to uncover ways to engage users in adaptive experiences by incorporating interactivity and explanation, through four studies. Study I takes a first step toward linking explanation and interactivity in machine learning systems to facilitate users' engagement with the underlying machine learning model, using the Tic-Tac-Toe game as a use case. The results show that explainable machine learning (XML) systems (and arguably XAI systems in general) indeed benefit from mechanisms that allow users to interact with the system's internal decision rules. Studies II, III, and IV focus more specifically on adaptive experiences in recommender systems, exploring the role of interactivity and explanation in keeping the user "in the loop", mitigating the "filter bubble" problem, and supporting users' self-actualization by helping them explore and understand their unique tastes. Study II investigates the effect of recommendation source (a human expert vs. an AI algorithm) and justification method (needs-based vs. interest-based justification) on professional development recommendations in a scenario-based study setting. The results show an interaction effect between these two system aspects: users who are told that the recommendations are based on their interests have a better experience when the recommendations are presented as originating from an AI algorithm, while users who are told that the recommendations are based on their needs have a better experience when the recommendations are presented as originating from a human expert. This finding implies that, for the novel movie recommender system proposed in Study IV, presenting the movie recommendations as originating from an algorithm rather than from a human expert should yield a better user experience, given that movie preferences (visualized in that system through the movies' emotion features) are usually based on users' interests. Study III explores the effects of four novel alternative recommendation lists on participants' perceptions of the recommendations and their satisfaction with the system. These four alternative lists (RSSA features), which have the potential to go beyond the traditional top-N recommendations, provide transparency at a different level: they reveal how much else the system learns about users beyond the traditional top-N recommendations, and they enable users to interact with these alternative lists by rating the initial recommendations so as to correct or confirm the system's estimates of the alternative recommendations.
The subjective evaluation and behavioral analysis demonstrate that the proposed RSSA features had a significant effect on the user experience. Surprisingly, two of the four RSSA features (the controversial and hate features) performed worse than the traditional top-N recommendations on the measured subjective dependent variables, while the other two (the hipster and no-clue items) performed equally well and even slightly better than the traditional top-N, although this effect was not statistically significant. Moreover, the results indicate that individual differences, such as the need for novelty and domain knowledge, play a significant role in users' perception of and interaction with the system. Study IV further combines diversification, visualization, and interactivity, aiming to encourage users to be more engaged with the system. The results show that introducing emotion as an item feature into recommender systems does help with personalization and individual taste exploration; these benefits are greatly amplified by mechanisms that diversify recommendations by emotional signature, visualize recommendations on that signature, and allow users to directly interact with the system by tweaking their tastes, which further contributes to both user experience and self-actualization. This work has practical implications for designing adaptive experiences. Explanation solutions in adaptive experiences might not always lead to a positive user experience; the outcome depends heavily on the application domain and the context (as studied in all four studies), so it is essential to carefully investigate a specific explanation solution in combination with other design elements in different fields. Introducing control by allowing direct interactivity (vs. indirect interactivity) in adaptive systems, and providing feedback on users' input by integrating that input into the algorithms, creates a more engaging and interactive user experience (as studied in Studies I and IV). Cumulatively, appropriate direct interaction with the system, along with deliberate and thoughtful explanation designs (including visualization design that fully considers the application environment) that can arouse user reflection or resonance, would potentially promote both user experience and user self-actualization.
