    (Why) Do We Trust AI?: A Case of AI-based Health Chatbots

    Automated chatbots powered by artificial intelligence (AI) can act as a ubiquitous point of contact, improving access to healthcare and empowering users to make effective decisions. However, despite the potential benefits, emerging literature suggests that apprehensions linked to the distinctive features of AI technology and the specific context of use (healthcare) could undermine consumer trust and hinder widespread adoption. Although the role of trust is considered pivotal to the acceptance of healthcare technologies, little research focuses on the contextual factors that drive trust in such AI-based Chatbots for Self-Diagnosis (AICSD). Accordingly, a contextual model based on the trust-in-technology framework was developed to understand the determinants of consumers’ trust in AICSD and its behavioral consequences. It was validated using a free simulation experiment in India (N = 202). Perceived anthropomorphism, perceived information quality, perceived explainability, disposition to trust technology, and perceived service quality influence consumers’ trust in AICSD. In turn, trust, privacy risk, health risk, and gender determine the intention to use. The research contributes by developing and validating a context-specific model for explaining trust in AICSD that could aid developers and marketers in enhancing consumers’ trust in and adoption of AICSD.

    Following the Robot – Investigating the Utilization and the Acceptance of AI-based Services

    In the past few years, there has been significant progress in the field of artificial intelligence (AI), with advancements in areas such as natural language processing and machine learning. AI systems are now being used in various industries and applications, from healthcare to finance, and are becoming more sophisticated and capable of handling complex tasks. The technology has the potential to assist in both private and professional decision-making. However, there are still challenges to be addressed, such as ensuring transparency and accountability in AI decision-making processes and addressing issues related to bias and ethics, and it is not yet certain whether all of these newly developed AI-based services will be accepted and used. This thesis addresses a research gap in the field of AI-based services by exploring the acceptance and utilization of such services from both individual and organizational perspectives. The research examines various factors that influence the acceptance of AI-based services and investigates users' perceptions of these services. The thesis poses four research questions: identifying the differences in utilizing AI-based services compared to human-based services for decision-making, identifying characteristics of acceptance and utilization across different user groups, prioritizing methods for promoting trust in AI-based services, and exploring the impact of AI-based services on an organization's knowledge. To achieve this, the study employs various research methods, such as surveys, experiments, interviews, and simulations, within five research papers. Research paper A focused on an organization that offers a financial robo-advisor as an AI-based service. This research paper measured advice-taking behavior in the interaction with robo-advisors based on the judge-advisor system and task-technology fit frameworks. 
The results show that the advice of robo-advisors is followed more than that of human advisors, and that this behavior is reflected in the task-advisor fit. Interestingly, the advisor's perceived expertise is the most influential factor in the task-advisor fit for both robo-advisors and human advisors. However, integrity is only significant for human advisors, while the user's perception of the ability to make decisions efficiently is only significant for robo-advisors. Research paper B examined the differences in advice utilization between AI-based and human advisors and explored the relationship between task, advisor, and advice utilization using the task-advisor fit, as in research paper A, but in the context of a guessing game. The research paper also analyzed the impact of advice similarity on utilization. The results indicated that judges tend to use advice from AI-based advisors more than from human advisors when the advice is similar to their own estimation. When the advice is vastly different from their estimation, the utilization rate is equal for both AI-based and human advisors. Research paper C investigated the different needs of user groups in the context of health chatbots. The increasing number of aging individuals who require considerable medical attention could be addressed by health chatbots capable of identifying diseases based on symptoms, yet existing chatbot applications are primarily used by younger generations. This research paper therefore investigated the factors affecting the adoption of health chatbots by older people using an extended Unified Theory of Acceptance and Use of Technology. To investigate how to promote AI-based services such as robo-advisors, research paper D evaluated the effectiveness of eleven measures to increase trust in AI-based advisory systems and found that noncommittal testing was the most effective, while implementing human traits had negligible effects. 
Additionally, the relative advantage of AI-based advising over that of human experts was measured in the context of financial planning. The results suggest that convenience is the most important advantage perceived by users. To analyze the impact of AI-based services on an organization's knowledge state, research paper E explored how organizations can effectively coordinate human and machine learning (ML). The results showed that ML can decrease an organization's need for humans’ explorative learning. The findings demonstrated that adjustments made by humans to ML systems are often beneficial but can become harmful under certain conditions. Additionally, relying on knowledge created by ML systems can facilitate organizational learning in turbulent environments, but it requires significant initial setup and coordination with humans. These findings offer new perspectives on organizational learning with ML and can guide organizations in optimizing resources for effective learning. In summary, the findings suggest that the acceptance and utilization of AI-based services can be influenced by the fit between the task and the service. However, organizations must carefully consider the user market and prioritize mechanisms to increase acceptance. Additionally, the implementation of AI-based services can positively affect an organization's ability to choose learning strategies or navigate turbulent environments, but it is crucial for humans to maintain domain knowledge of the task to reconfigure such services. This thesis enhances our understanding of the acceptance and utilization of AI-based services and provides valuable insights into how organizations can increase customers’ acceptance and usage of their AI-based services as well as implement and use AI-based services effectively.
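The advice-taking behavior examined in research papers A and B is commonly quantified in judge-advisor system (JAS) studies via a weight-of-advice (WOA) measure. The abstract does not state which metric the papers used, so the following is only an illustrative sketch of one standard WOA formulation:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """One common judge-advisor system (JAS) measure of advice utilization.

    WOA = (final - initial) / (advice - initial):
    0 means the judge ignored the advice, 1 means full adoption,
    values in between indicate partial weighting. The measure is
    undefined when the advice equals the judge's initial estimate.
    """
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Hypothetical example: a judge first estimates 100, an advisor
# (human or robo) suggests 140, and the judge revises to 130.
print(weight_of_advice(100, 140, 130))  # 0.75 -> the advice got 75% weight
```

Comparing mean WOA across advisor types (robo vs. human) is one way studies of this kind operationalize whose advice is "followed more."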