
    Is Work System Theory a Practical Theory of Practice?

    This paper describes an exploration of whether ideas related to pragmatism, practical theory, and practice theory provide potentially useful directions for extending work system theory (WST), which is an outgrowth of an attempt to develop the work system method (WSM), a flexible systems analysis method for business professionals. After summarizing WST's basic premises and its two central frameworks, this paper uses a positioning map to explain reasons for considering relationships between WST and a number of topics related to practical issues and practice theory. Based on that positioning map, the subsequent sections discuss relationships between WST and UML, Goldkuhl's workpractice theory, and the more general notion of practice theory. A concluding section briefly addresses a set of questions related to whether WST is a practical theory of practice. This paper's comparisons of WST with the three theoretical perspectives for describing and understanding systems could be a step toward greater practical application of IS research related to the nature and evolution of activities, processes, routines, and practices involving the use of technology in organizational settings.

    A Computational Model of Non-Cooperation in Natural Language Dialogue

    A common assumption in the study of conversation is that participants fully cooperate in order to maximise the effectiveness of the exchange and ensure communication flow. This assumption persists even in situations in which the private goals of the participants are at odds: they may act strategically, pursuing their own agendas, but will still adhere to a number of linguistic norms or conventions which are implicitly accepted by a community of language users. However, in naturally occurring dialogue participants often depart from such norms, for instance by asking inappropriate questions, by declining to provide adequate answers, or by volunteering information that is not relevant to the conversation. These are examples of what we call linguistic non-cooperation. This thesis presents a systematic investigation of linguistic non-cooperation in dialogue. Given a specific activity, in a specific cultural context and time, the method proceeds by making explicit which linguistic behaviours are appropriate. This results in a set of rules: the global dialogue game. Non-cooperation is then measured as instances in which the actions of the participants are not in accordance with these rules. The dialogue game is formally defined in terms of discourse obligations: actions that participants are expected to perform at a given point in the dialogue based on the dialogue history. In this context, non-cooperation amounts to participants failing to act according to their obligations. We propose a general definition of linguistic non-cooperation and give a specific instance for political interview dialogues. Based on the latter, we present an empirical method which involves a coding scheme for the manual annotation of interview transcripts. The degree to which each participant cooperates is automatically determined by contrasting the annotated transcripts with the rules in the dialogue game for political interviews. The approach is evaluated on a corpus of broadcast political interviews and tested for correlation with human judgement on the same corpus. Further, we describe a model of conversational agents that incorporates the concepts and mechanisms above as part of their dialogue manager. This allows for the generation of conversations in which the agents exhibit varying degrees of cooperation by controlling how often they favour their private goals instead of discharging their discourse obligations.
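
    As a rough illustration of the mechanism described above, the sketch below encodes a toy dialogue game as a mapping from triggering acts to expected responses, and scores a participant's cooperation as the fraction of incurred obligations they discharge. The rule format, act labels, and Turn structure are invented for illustration; they are not the thesis's actual coding scheme or dialogue manager.

    ```python
    # Hypothetical sketch of obligation-based cooperation scoring.
    from dataclasses import dataclass

    # A toy dialogue game: each triggering act creates a discourse
    # obligation that the addressee is expected to discharge next.
    DIALOGUE_GAME = {
        "question": "answer",
        "accusation": "address",
        "request-clarify": "clarify",
    }

    @dataclass
    class Turn:
        speaker: str
        act: str  # annotated dialogue act

    def cooperation_degree(turns: list[Turn], participant: str) -> float:
        """Fraction of obligations `participant` discharges, walking
        the dialogue history in order."""
        pending: list[str] = []
        discharged = incurred = 0
        for turn in turns:
            if turn.speaker != participant:
                expected = DIALOGUE_GAME.get(turn.act)
                if expected:  # this act obliges the participant to respond
                    pending.append(expected)
                    incurred += 1
            elif pending:
                if turn.act == pending.pop(0):
                    discharged += 1
        return discharged / incurred if incurred else 1.0

    turns = [
        Turn("IR", "question"), Turn("IE", "answer"),       # cooperative
        Turn("IR", "question"), Turn("IE", "topic-shift"),  # evasive
    ]
    print(cooperation_degree(turns, "IE"))  # 0.5
    ```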

    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black-boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white-boxes (e.g., rule-based systems), were already concerned with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities in the design of AI interfaces. This requires an understanding of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are impacted by the system's decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, it examines how the content, type, and interface of explanations of DNN (black-box) and rule-based systems (white-box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the centre of the investigation. At the beginning of the dissertation, general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions are introduced. Subsequently, related work on the design and investigation of XAI for users is presented. This serves as a basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of this dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes and expectations towards AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. To this end, different levels of interactive XAI are presented and investigated in experiments with end-users, using two rule-based systems (i.e., white-box) and four systems based on DNN (i.e., black-box).
    These are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies, differing in the interactivity of the XAI system used, were conducted for this purpose. The results show that end-users' trust in and mental models of AI depend strongly on the context of use and on the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies also show that end-users in different application contexts of XAI express a desire for interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics to integrate explanations into everyday AI systems and thus enable the comprehensible handling of AI for all people.
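
    The white-box/black-box contrast drawn above can be made concrete with a small sketch: an intrinsically explainable rule-based system can return the very rule it fired as its explanation, whereas a DNN's decision requires a separate post-hoc XAI method. The rules and the medical flavour below are invented for illustration and do not come from the dissertation's systems.

    ```python
    # Hypothetical white-box classifier: the fired rule is the explanation.
    def rule_based_decision(symptoms: set[str]) -> tuple[str, str]:
        rules = [
            ({"fever", "cough"}, "flu suspected"),
            ({"rash"}, "dermatology referral"),
        ]
        for conditions, decision in rules:
            if conditions <= symptoms:  # all rule conditions are present
                return decision, f"IF {sorted(conditions)} THEN {decision}"
        return "no finding", "no rule matched the reported symptoms"

    decision, explanation = rule_based_decision({"fever", "cough", "fatigue"})
    print(decision)     # flu suspected
    print(explanation)  # the rule itself, intrinsically comprehensible
    ```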

    Understanding Collaborative Sensemaking for System Design — An Investigation of Musicians' Practice

    There is surprisingly little written in the information science and technology literature about the design of tools used to support the collaboration of creators. Understanding collaborative sensemaking through the use of language has traditionally been applied to non-work domains, but this method is also well-suited for informing hypotheses about the design of collaborative systems. The presence of ubiquitous mobile technology and the development of multi-user virtual spaces invite investigation of design based on naturalistic, real-world, creative group behaviors, including the collaborative work of musicians. This thesis considers the co-construction of new (musical) knowledge by small groups. Co-construction of new knowledge is critical to the definition of an information system because it emphasizes coordination and resource sharing among group members (versus individual members independently doing their own tasks and only coming together to collate their contributions as a final product). This work situates the locus of creativity in the process itself, rather than in the output (the musical result) or the individuals (members of the band). This thesis describes a way to apply quantitative observations to inform qualitative assessment of the characteristics of collaborative sensemaking in groups. Conversational data were obtained from nine face-to-face collaborative composing sessions involving three separate bands, producing 18 hours of recorded interactions. Topical characteristics of the discussion (objects, plans, properties, and performance), as well as emergent patterns of generative, evaluative, revision, and management conversational acts within the group, were seen as indicative of knowledge construction. The findings report the use of collaborative pathways: iterative cycles of generation, evaluation, and revision of temporary solutions used to move the collaboration forward. In addition, bracketing of temporary solutions served to help collaborators reuse content and offload attentional resources. Ambiguity in language, evaluation criteria, goal formation, and group awareness meant that existing knowledge representations were insufficient for making sense of incoming data and necessitated reformulating those representations. Further, strategic use of affective language was found to be instrumental in bridging knowledge gaps. Based on these findings, features of a collaborative system are proposed to help facilitate sensemaking routines at various stages of a creative task. This research contributes to the theoretical understanding of collaborative sensemaking during non-work, creative activities in order to inform the design of systems for supporting these activities. By studying an environment which forms a potential microcosm of virtual interaction between groups, it provides a framework for understanding and automating collaborative discussion content in terms of the features of dialogue.
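
    As a rough sketch of how such coded transcripts could be analysed, the snippet below tallies the four conversational acts named above and counts completed generation-evaluation-revision cycles, i.e., the collaborative pathways the findings describe. The detection logic is a hypothetical simplification, not the thesis's actual method.

    ```python
    # Hypothetical tally of coded acts and generate-evaluate-revise cycles.
    from collections import Counter

    ACTS = {"generative", "evaluative", "revision", "management"}

    def act_profile(coded_turns: list[str]) -> Counter:
        """Frequency of each conversational act in a session."""
        return Counter(t for t in coded_turns if t in ACTS)

    def count_pathways(coded_turns: list[str]) -> int:
        """Count completed generation -> evaluation -> revision cycles,
        ignoring interleaved management (coordination) talk."""
        order = ["generative", "evaluative", "revision"]
        stage = cycles = 0
        for act in coded_turns:
            if act == "management":
                continue  # coordination talk does not advance the cycle
            if act == order[stage]:
                stage += 1
                if stage == len(order):
                    cycles, stage = cycles + 1, 0
            elif act == order[0]:
                stage = 1  # a new proposal restarts the cycle
        return cycles

    session = ["generative", "evaluative", "management", "revision",
               "generative", "evaluative"]
    print(act_profile(session))     # frequencies of the four act types
    print(count_pathways(session))  # 1 completed cycle
    ```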