Does AI do more harm than good? Assessing innovativeness and complaining intentions for successful and failed Mechanical and Feeling AI services
Shopping on Amazon and booking holidays online are just two examples of AI-enabled services. Despite their great practical importance, the literature on AI-enabled services is scarce. In particular, no study has assessed the effect of a service's AI level on its perceived innovativeness, and empirical evidence on customer complaining behavior as a reaction to failed AI services is also missing. Using an online experiment (n = 437), our paper strives to close this research gap. Our results show that customers perceive a Feeling AI service (high degree of AI) as more innovative than a Mechanical AI service (low degree of AI). Moreover, customers using a failed Feeling AI service complain more than customers using a failed Mechanical AI service. Finally, customers receiving an AI Service Recovery complain less than customers receiving a Human Recovery. Our results highlight the importance of the AI level when creating and managing AI services.
An Approach to Help End Users Become Aware of Privacy Risks in Home Automation
Smart home devices such as voice assistants, smart lights, and smart video doorbells have become part of end users' daily lives. Many of these devices combine their features with other services and smart devices to create a simple and efficient user experience, due in part to end-user programming platforms like If This Then That (IFTTT). IFTTT offers trigger and action events that let end users connect two or more smart home devices via easy-to-create applets. However, these applets rarely highlight underlying risks related to confidentiality (leakage of sensitive information) or integrity (unauthorized access) violations. Prior work has shown that presenting users with violation scenarios makes them aware of the risks associated with specific applets, but it has not investigated whether end users can identify potential risks from the applet descriptions themselves and change their behavior. This thesis closes this gap by (1) presenting end users with "consequences" of using IFTTT applets to determine whether they can find the possible violations and articulate their reasoning, and (2) evaluating whether end users' behavior changes when applets are presented with those consequences. We conducted a user study with 20 participants to evaluate our approach of including consequences in applet descriptions. Our results show that adding potential consequences to the basic IFTTT applet description helps end users discover integrity and confidentiality violations, as well as the factors influencing their applet-usage decisions. Finally, we suggest a framework that automatically nudges end users when they want to use applets through end-user programming platforms, giving them a comprehensive understanding of applet use in different contexts.
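The approach described above, appending an explicit "consequence" to an IFTTT-style trigger-action applet description, can be sketched as follows. This is a minimal illustrative sketch only: the applet text, consequence wording, and helper function are invented for illustration and are not taken from the thesis's study materials or from the IFTTT platform.

```python
# Hypothetical sketch: augmenting an IFTTT-style applet description with a
# stated consequence so end users can spot confidentiality or integrity risks.
# All strings and the function name are illustrative assumptions.

def describe_with_consequence(trigger: str, action: str, consequence: str) -> str:
    """Build an "If <trigger>, then <action>" description and append its
    potential consequence, mirroring the augmented descriptions shown to
    study participants."""
    return (f"If {trigger}, then {action}.\n"
            f"Potential consequence: {consequence}")

print(describe_with_consequence(
    "your smart doorbell detects motion",
    "post a snapshot to your public blog",
    "images of visitors may leak sensitive information (confidentiality)",
))
```

A nudging framework like the one the thesis proposes could surface such a string at applet-installation time rather than leaving the risk implicit in the bare trigger-action pair.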