    Towards an Integrative Theoretical Framework of Interactive Machine Learning Systems

    Interactive machine learning (IML) is a learning process in which a user interacts with a system to iteratively define and optimise a model. Although recent years have seen a proliferation of IML systems in the fields of Human-Computer Interaction (HCI), Information Systems (IS), and Computer Science (CS), current research results are scattered, leading to a lack of integration of existing work on IML. Furthermore, because IML systems can serve diverging functionalities and purposes, uncertainty exists regarding the distinct underlying capabilities that constitute this class of systems. By reviewing the extensive IML literature, this paper suggests an integrative theoretical framework for IML systems to address these impediments. Reviewing 2,879 studies published in leading journals and conferences between 1966 and 2018, we found an extensive range of application areas that have implemented IML systems, as well as a need to standardise the evaluation of those systems. Our framework offers an essential step towards a theoretical foundation that integrates concepts and findings across different fields of research. The main contribution of this paper is organising and structuring the body of knowledge in IML for the advancement of the field. Furthermore, we suggest three opportunities for future IML research. From a practical point of view, our integrative theoretical framework can serve as a reference guide to inform the design and implementation of IML systems.
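
    To make the iterative define-and-optimise loop concrete, here is a minimal sketch of one IML round trip, assuming scikit-learn and a toy uncertainty-sampling strategy; `ask_user`, the synthetic data, and the query strategy are illustrative assumptions, not details from any of the reviewed systems.

```python
# Minimal sketch of an IML loop: the user labels the item the model is
# least sure about, and the model is re-optimised after each interaction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # unlabelled pool (toy data)
true_labels = (X[:, 0] + X[:, 1] > 0).astype(int)

def ask_user(i):
    """Stand-in for the human; in a real IML system this is a UI prompt."""
    return true_labels[i]

# seed with one example of each class so the first fit is well-posed
labelled = [int(np.argmax(true_labels)), int(np.argmin(true_labels))]
labels = [ask_user(i) for i in labelled]
model = LogisticRegression()

for _ in range(20):                              # interaction rounds
    model.fit(X[labelled], labels)               # optimise on current feedback
    proba = model.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)
    uncertainty[labelled] = np.inf               # don't re-ask labelled items
    i = int(np.argmin(uncertainty))              # query the most uncertain item
    labelled.append(i)
    labels.append(ask_user(i))                   # user feedback refines the model

print(f"accuracy after 20 interactions: {model.score(X, true_labels):.2f}")
```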

    Implementing an Intelligent Collaborative Agent as Teammate in Collaborative Writing: toward a Synergy of Humans and AI

    This paper aims to implement a hybrid form of group work by incorporating an intelligent collaborative agent into a Collaborative Writing process. In doing so, it addresses the overall research gap of establishing acceptance of AI in complementary hybrid work. To approach this aim, we follow a Design Science Research process. Based on expert interviews, we identify requirements for the agent to be considered a teammate, in light of Social Response Theory and the concept of the Uncanny Valley. Next, we derive design principles for implementing an agent as a teammate from the collected requirements. To evaluate the design principles and the human teammates’ perception of the agent, we instantiate a Collaborative Writing process via a web application incorporating the agent. The evaluation reveals a partly successful implementation of the developed design principles. Additionally, the results show the potential of hybrid collaboration teams to accept non-human teammates.

    Toward a Hybrid Intelligence System in Customer Service: Collaborative Learning of Human and AI

    Hybrid intelligence systems (HIS) enable human users and Artificial Intelligence (AI) to collaborate on activities in which they complement each other. In particular, they allow combining human-in-the-loop and computer-in-the-loop learning, ensuring a hybrid collaborative learning cycle. To design such a HIS, we implemented a prototype, based on formulated design principles (DPs), that teaches and learns from its human user while collaborating on a task. For implementation and evaluation, we selected a customer service use case, a top domain of research on AI applications. The prototype was evaluated with 31 expert and 30 novice customer service employees of an organization. We found that the prototype following the DPs successfully contributed to positive learning effects as well as a high continuance intention to use. The measured levels of satisfaction and continuance intention to use provide promising grounds for reusing our DPs and further developing our prototype for hybrid collaborative learning.
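
    As a toy illustration of such a hybrid collaborative learning cycle (an assumption for exposition, not the paper's prototype or its DPs), the sketch below has an agent suggest replies together with a rationale the human can learn from, while the human's accepted or corrected replies feed back as training signal for the agent.

```python
# Illustrative two-way learning cycle: the system teaches the human
# (suggestion + rationale) and learns from the human (accepted reply).
from dataclasses import dataclass, field

@dataclass
class HybridAgent:
    memory: dict = field(default_factory=dict)   # question -> accepted reply

    def suggest(self, question):
        """AI turn: propose a reply plus a rationale the human can learn from."""
        if question in self.memory:
            return self.memory[question], "a similar request was answered before"
        return None, "no similar request on record"

    def learn(self, question, final_reply):
        """Human turn: the accepted or edited reply becomes training signal."""
        self.memory[question] = final_reply

agent = HybridAgent()

def handle_ticket(question, human_edit):
    suggestion, rationale = agent.suggest(question)
    print(f"AI suggests: {suggestion!r} (because: {rationale})")
    final = human_edit(suggestion)               # human reviews and corrects
    agent.learn(question, final)                 # closes the learning cycle
    return final

handle_ticket("How do I reset my password?",
              lambda s: s or "Use the 'Forgot password' link on the login page.")
handle_ticket("How do I reset my password?", lambda s: s)  # agent has learned
```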

    Artificial intelligence in information systems research: A systematic literature review and research agenda

    AI has received increased attention from the information systems (IS) research community in recent years. There is, however, a growing concern that research on AI could experience a lack of cumulative knowledge building, a problem that has overshadowed IS research previously. This study addresses this concern by conducting a systematic literature review of AI research in IS between 2005 and 2020. The search strategy resulted in 1,877 studies, of which 98 were identified as primary studies, and a synthesis of key themes pertinent to this study is presented. In doing so, this study makes important contributions, namely (i) an identification of the currently reported business value and contributions of AI, (ii) research and practical implications on the use of AI, and (iii) opportunities for future AI research in the form of a research agenda.

    Visual Representation of Explainable Artificial Intelligence Methods: Design and Empirical Studies

    Explainability is increasingly considered a critical component of artificial intelligence (AI) systems, especially in high-stakes domains where AI systems’ decisions can significantly impact individuals. As a result, there has been a surge of interest in explainable artificial intelligence (XAI) to increase the transparency of AI systems by explaining their decisions to end-users. In particular, extensive research has focused on developing “local model-agnostic” explainable methods that generate explanations of individual predictions for any predictive model. While these explanations can support end-users in the use of AI systems through increased transparency, three significant challenges have hindered their design, implementation, and large-scale adoption in real applications. First, there is a lack of understanding of how end-users evaluate explanations. Critics frequently argue that explanations are based on researchers’ intuition instead of end-users’ needs, and there is insufficient evidence on whether end-users understand these explanations or trust XAI systems. Second, it is unclear what effect explanations have on trust when they disclose biases in AI systems’ decisions. Prior research investigating biased decisions has found conflicting evidence on explanations’ effects: explanations can either increase trust through perceived transparency or decrease trust as end-users perceive the system as biased. Moreover, it is unclear how contingency factors influence these opposing effects. Third, most XAI methods deliver static explanations that offer end-users limited information, resulting in an insufficient understanding of how AI systems make decisions and, in turn, lower trust. Furthermore, research has found that end-users perceive static explanations as not transparent enough, as these do not allow them to investigate the factors that influence a given decision. This dissertation addresses these challenges across three studies by focusing on the overarching research question of how to design visual representations of local model-agnostic XAI methods to increase end-users’ understanding and trust. The first challenge is addressed through an iterative design process that refines the representations of explanations from four well-established model-agnostic XAI methods, followed by an evaluation with end-users using eye-tracking technology and interviews. The second challenge is addressed by a study that takes a psychological contract violation (PCV) theory and social identity theory perspective to investigate the contingency factors behind the opposing effects of explanations on end-users’ trust. Specifically, this study investigates how end-users evaluate explanations of a gender-biased AI system while controlling for their awareness of gender discrimination in society. Finally, the third challenge is addressed through a design science research project to design an interactive XAI system that increases end-users’ understanding and trust. This dissertation makes several contributions to the ongoing research on improving the transparency of AI systems by explicitly emphasizing the end-user perspective on XAI. First, it contributes to practice by providing insights that help to improve the design of explanations of AI systems’ decisions. Additionally, it provides significant theoretical contributions by contextualizing PCV theory to gender-biased XAI systems and identifying the contingency factors that determine whether end-users experience a PCV. Moreover, it provides insights into how end-users cognitively evaluate explanations and extends the current understanding of the impact of explanations on trust. Finally, this dissertation contributes to the design knowledge of XAI systems by proposing guidelines for designing interactive XAI systems that give end-users more control over the information they receive, helping them better understand how AI systems make decisions.
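
    For readers unfamiliar with the "local model-agnostic" idea, the following minimal sketch, in the spirit of methods such as LIME but not taken from the dissertation, perturbs a single instance, queries an arbitrary black-box model, and fits an interpretable linear surrogate whose coefficients serve as the explanation; the black-box model and data are illustrative assumptions.

```python
# Local model-agnostic explanation sketch: only the model's predictions
# are queried, so any predictive model can be explained this way.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - 2 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=1).fit(X, y)  # any model works

def explain_locally(instance, n_samples=1000, scale=0.3):
    """Per-feature weights approximating the model around one instance."""
    noise = rng.normal(scale=scale, size=(n_samples, instance.shape[0]))
    neighbourhood = instance + noise                 # perturb the instance
    preds = black_box.predict_proba(neighbourhood)[:, 1]      # query only
    surrogate = Ridge().fit(neighbourhood - instance, preds)  # local linear fit
    return surrogate.coef_                           # the "explanation"

for feature, weight in enumerate(explain_locally(X[0])):
    print(f"feature {feature}: {weight:+.3f}")
```

    Note that full methods add refinements such as distance-weighting the perturbed samples; this sketch keeps only the core perturb-query-fit idea.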

    Designing AI-Based Systems for Qualitative Data Collection and Analysis

    With the continuously increasing impact of information systems (IS) on private and professional life, it has become crucial to integrate users into the IS development process. One of the critical reasons for failed IS projects is the inability to accurately meet user requirements, resulting from an incomplete or inaccurate collection of requirements during the requirements elicitation (RE) phase. While interviews are the most effective RE technique, they face several challenges that make them a questionable fit for the numerous, heterogeneous, and geographically distributed users of contemporary IS. Three significant challenges limit the involvement of a large number of users in IS development processes today. Firstly, there is a lack of tool support for conducting interviews with a wide audience. While initial studies show promising results in utilizing text-based conversational agents (chatbots) as interviewer substitutes, we lack knowledge for designing AI-based chatbots that leverage established interviewing techniques in the context of RE. If chatbot-based interviewing is applied successfully, vast amounts of qualitative data can be collected. Secondly, there is a need for tool support enabling the analysis of large amounts of qualitative interview data. Once again, while modern technologies such as machine learning (ML) promise a remedy, concrete implementations of automated analysis for unstructured qualitative data lag behind that promise. There is a need to design interactive ML (IML) systems that support the coding of qualitative data, centered on simple interaction formats for teaching the ML system and on transparent, understandable suggestions to support data analysis. Thirdly, while organizations rely on online feedback to inform requirements without explicitly conducting RE interviews (e.g., from app stores), we know little about the demographics of who gives feedback and what motivates them to do so. Using online feedback as a requirements source risks including solely the concerns and desires of vocal user groups. With this thesis, I tackle these three challenges in two parts. In part I, I address the first and second challenges by presenting and evaluating two innovative AI-based systems: a chatbot for requirements elicitation and an IML system to semi-automate qualitative coding. In part II, I address the third challenge by presenting results from a large-scale study on IS feedback engagement. With both parts, I contribute prescriptive knowledge for designing AI-based qualitative data collection and analysis systems and help establish a deeper understanding of the coverage of existing data collected from online sources. Besides providing concrete artifacts, architectures, and evaluations, I demonstrate the application of a chatbot interviewer to understand user values in smartphones and provide guidance for extending feedback coverage to underrepresented IS user groups.
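
    As a sketch of the kind of IML coding support described here (an illustrative assumption, not the thesis's artifact), the snippet below suggests a code for each interview snippet together with the terms behind the suggestion, and retrains on the researcher's confirmation or correction.

```python
# IML sketch for qualitative coding: transparent suggestions plus a
# simple accept/correct interaction that teaches the ML system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

snippets = ["The app crashes when I upload photos",
            "I love how fast the search is",
            "It crashes all the time on my phone",
            "Search results feel instant"]
codes = ["bug", "praise", "bug", "praise"]        # researcher's seed coding

model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(snippets, codes)

def suggest(snippet):
    """Suggest a code plus the snippet terms behind it (transparency)."""
    code = model.predict([snippet])[0]
    vec = model.named_steps["tfidfvectorizer"]
    clf = model.named_steps["multinomialnb"]
    vocab = vec.vocabulary_                       # term -> feature index
    weights = clf.feature_log_prob_[list(clf.classes_).index(code)]
    terms = {t for t in vec.build_analyzer()(snippet) if t in vocab}
    return code, sorted(terms, key=lambda t: -weights[vocab[t]])[:3]

def confirm(snippet, code):
    """Simple interaction format: accept or correct, then retrain."""
    snippets.append(snippet)
    codes.append(code)
    model.fit(snippets, codes)

print(suggest("the search is quick but uploads crash"))   # transparent suggestion
confirm("the search is quick but uploads crash", "bug")   # correction teaches the system
```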