66 research outputs found

    Evaluating human-human communication protocols with miscommunication generation and model checking

    Abstract. Human-human communication is critical to safe operations in domains such as air transportation, where airlines develop communication procedures and train pilots on them with the goal of ensuring that verbal air traffic clearances are correctly heard and executed. Such communication protocols should be designed to be robust to miscommunication. However, they can fail in ways unanticipated by designers. In this work, we present a method for modeling human-human communication protocols using the Enhanced Operator Function Model with Communications (EOFMC), a task analytic modeling formalism that can be interpreted by a model checker. We describe how miscommunications can be generated from instantiated EOFMC models of human-human communication protocols. Using an air transportation example, we show how model checking can be used to evaluate whether a given protocol will ensure successful communication. Avenues of future research are explored.
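
    The paper's actual method relies on EOFMC models interpreted by a formal model checker, which is not reproduced here. As a loose illustration of the underlying idea, the toy Python sketch below exhaustively generates single-token miscommunications of an air traffic clearance and checks that a readback/hearback protocol detects every one before execution. The protocol model and all names are invented for illustration and are not EOFMC.

```python
# Toy sketch only -- not EOFMC or a real model checker. It exhaustively
# injects single-token miscommunications into a clearance exchange and
# checks that a readback/hearback protocol catches each one.
from itertools import product

VERBS = ["climb", "descend", "maintain"]
LEVELS = ["FL200", "FL210", "FL220"]

def miscommunications(sent):
    """Yield every clearance that differs from `sent` in one token."""
    verb, level = sent
    yield from ((v, level) for v in VERBS if v != verb)
    yield from ((verb, l) for l in LEVELS if l != level)

def hearback_detects(sent, heard):
    """The pilot reads back what was heard; the controller compares the
    readback with the clearance actually sent and corrects mismatches."""
    return heard != sent

# Model-checking-style exhaustive sweep over all reachable error states.
for sent in product(VERBS, LEVELS):
    for heard in miscommunications(sent):
        assert hearback_detects(sent, heard), (sent, heard)
print("readback/hearback catches every single-token miscommunication")
```

    A real model checker would explore the interleaved task behaviors of both humans rather than a fixed message set, but the verification question is the same: does any reachable miscommunication go undetected?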

    The Symmetry of Partner Modelling

    © 2016, International Society of the Learning Sciences, Inc. Collaborative learning has often been associated with the construction of a shared understanding of the situation at hand. The psycholinguistic mechanisms at work while establishing common ground are the object of scientific controversy. We postulate that collaborative tasks require some level of mutual modelling, i.e., that each partner needs some model of what the other partners know/want/intend at a given time. We use the term “some model” to stress that this model is not necessarily detailed or complete, but that we acquire some representation of the persons we interact with. The question we address is: Does the quality of the partner model depend upon the modeler’s ability to represent his or her partner? Upon the modelee’s ability to make his or her state clear to the modeler? Or rather, upon the quality of their interactions? We address this question by comparing the respective accuracies of the models built by different team members. We report on 5 experiments on collaborative problem solving or collaborative learning that vary in terms of task (how important it is to build an accurate model) and setting (how difficult it is to build an accurate model). In 4 studies, the accuracy of the model that A built about B was correlated with the accuracy of the model that B built about A, which seems to imply that the quality of interactions matters more than individual abilities when building mutual models. However, these findings do not rule out that individual abilities also contribute to the quality of the modelling process.
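
    The reported symmetry is, at bottom, a correlation across dyads between the accuracy of A's model of B and that of B's model of A. A minimal sketch of that analysis in Python follows; the accuracy scores and variable names are made up for illustration and are not the study's data.

```python
# Sketch of the symmetry analysis: correlate the accuracy of A's model
# of B with the accuracy of B's model of A across dyads. Invented data.
from scipy.stats import pearsonr

# One row per dyad: (accuracy of A's model of B, accuracy of B's model of A)
dyads = [(0.82, 0.79), (0.55, 0.61), (0.91, 0.88),
         (0.40, 0.47), (0.73, 0.70), (0.66, 0.59)]

a_models_b = [a for a, _ in dyads]
b_models_a = [b for _, b in dyads]

r, p = pearsonr(a_models_b, b_models_a)
# A strong positive r is consistent with model quality being a property
# of the interaction (shared by both partners) rather than of individuals.
print(f"r = {r:.2f}, p = {p:.3f}")
```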

    Task, Usability, and Error Analyses of Ambulance-based Telemedicine for Stroke Care

    Past research has established that telemedicine improves stroke care through decreased time to treatment and more accurate diagnoses. The goals of this study were to 1) study how clinicians complete stroke assessment using a telemedicine system integrated into ambulances, 2) determine potential errors and usability issues when using the system, and 3) develop recommendations to mitigate these issues. This study investigated use of a telemedicine platform to evaluate a stroke patient in an ambulance with a geographically distributed caregiving team comprising a paramedic, a nurse, and a neurologist. It first determined the tasks involved based on 13 observations of a simulated stroke involving 39 care providers. Based on these observational studies, a Hierarchical Task Analysis (HTA) was developed; subsequently, a heuristic evaluation was conducted to identify usability issues in the interface of the telemedicine system. This was followed by a Systematic Human Error Reduction and Prediction Approach (SHERPA) analysis to determine the possibility of human error while providing care using the telemedicine work system. The HTA yielded 6 primary subgoals organizing the 97 tasks required to complete the stroke evaluation. The heuristic evaluation found 123 unique heuristic violations, with an average severity of 2.38. One hundred thirty-one potential human errors were identified with SHERPA, the two most common being miscommunication and selecting an incorrect option. Several recommendations are proposed, including improved labeling, consistent formatting, rigid or suggested formatting for data input, automation of task structure and camera movement, and audio/visual improvements to support communication.
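
    The quantitative outputs above (counts of findings per error mode, an average severity) come from tabulating individual analyst records. The Python sketch below shows that bookkeeping in miniature; the records, categories, and severity scale are invented and are not the study's actual coding scheme.

```python
# Sketch of tallying usability findings: count error modes and compute
# mean severity. All records below are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    kind: str       # e.g., "miscommunication", "wrong selection"
    severity: int   # 1 (cosmetic) .. 4 (catastrophic); illustrative scale

findings = [
    Finding("miscommunication", 3),
    Finding("wrong selection", 2),
    Finding("miscommunication", 3),
    Finding("omitted step", 2),
    Finding("wrong selection", 3),
]

by_kind = Counter(f.kind for f in findings)
mean_severity = sum(f.severity for f in findings) / len(findings)

for kind, n in by_kind.most_common():
    print(f"{kind}: {n}")
print(f"mean severity: {mean_severity:.2f}")
```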

    Audiovisual prosody in interaction


    Proceedings of the Second FAROS Public Workshop, 30th September 2014, Espoo, Finland

    FAROS is an EC FP7-funded, three-year project to develop an approach for incorporating human factors into the Risk-Based Design of ships. The project consortium consists of 12 members from industry, academia, and research institutes. The second FAROS Public Workshop was held at the Dipoli Congress Centre in Otaniemi, Espoo, Finland, on 30 September 2014. The workshop included keynotes from industry, papers on risk models for aspects such as collision and grounding, fire, and the human element, descriptions of parametric ship models, and the overall approach being adopted in the FAROS project.

    An interdisciplinary concept for human-centered explainable artificial intelligence - Investigating the impact of explainable AI on end-users

    Since the 1950s, Artificial Intelligence (AI) applications have captivated people. However, this fascination has always been accompanied by disillusionment about the limitations of this technology. Today, machine learning methods such as Deep Neural Networks (DNN) are successfully used in various tasks. However, these methods also have limitations: their complexity makes their decisions no longer comprehensible to humans - they are black boxes. The research branch of Explainable AI (XAI) has addressed this problem by investigating how to make AI decisions comprehensible. This desire is not new. In the 1970s, developers of intrinsically explainable AI approaches, so-called white boxes (e.g., rule-based systems), were already dealing with AI explanations. Nowadays, with the increased use of AI systems in all areas of life, the design of comprehensible systems has become increasingly important. Developing such systems is part of Human-Centred AI (HCAI) research, which integrates human needs and abilities into the design of AI interfaces. This requires an understanding of how humans perceive XAI and how AI explanations influence the interaction between humans and AI. One of the open questions concerns the investigation of XAI for end-users, i.e., people who have no expertise in AI but interact with such systems or are affected by the systems' decisions. This dissertation investigates the impact of different levels of interactive XAI of white- and black-box AI systems on end-users' perceptions. Based on an interdisciplinary concept presented in this work, the dissertation examines how the content, type, and interface of explanations of DNN (black-box) and rule-based systems (white-box) are perceived by end-users. How XAI influences end-users' mental models, trust, self-efficacy, cognitive workload, and emotional state regarding the AI system is at the centre of the investigation. The dissertation first introduces general concepts regarding AI, explanations, and the psychological constructs of mental models, trust, self-efficacy, cognitive load, and emotions. Subsequently, related work regarding the design and investigation of XAI for users is presented. This serves as the basis for the concept of Human-Centered Explainable AI (HC-XAI) presented in this dissertation, which combines an XAI design approach with user evaluations. The author pursues an interdisciplinary approach that integrates knowledge from the research areas of (X)AI, Human-Computer Interaction, and Psychology. Based on this interdisciplinary concept, a five-step approach is derived and applied to illustrative surveys and experiments in the empirical part of the dissertation. To illustrate the first two steps, a persona approach for HC-XAI is presented, and based on that, a template for designing personas is provided. To illustrate the usage of the template, three surveys are presented that ask end-users about their attitudes towards and expectations of AI and XAI. The personas generated from the survey data indicate that end-users often lack knowledge of XAI and that their perception of it depends on demographic and personality-related characteristics. Steps three to five deal with the design of XAI for concrete applications. For this, different levels of interactive XAI are presented and investigated in experiments with end-users. For this purpose, two rule-based systems (i.e., white-box) and four systems based on DNN (i.e., black-box) are used.
These are applied for three purposes: cooperation & collaboration, education, and medical decision support. Six user studies were conducted, which differed in the interactivity of the XAI system used. The results show that end-users' trust in and mental models of AI depend strongly on the context of use and on the design of the explanation itself. For example, explanations mediated by a virtual agent are shown to promote trust. The content and type of explanations are also perceived differently by users. The studies further show that end-users in different application contexts of XAI desire interactive explanations. The dissertation concludes with a summary of the scientific contribution, points out limitations of the presented work, and gives an outlook on possible future research topics for integrating explanations into everyday AI systems, and thus making the handling of AI comprehensible to all people.
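    The white-box/black-box distinction the dissertation builds on can be made concrete in a few lines: in a rule-based (white-box) system, the decision and its explanation are the same artifact, namely the rule that fired, whereas a DNN needs a separate post-hoc explanation method. The Python sketch below shows the white-box case; the rules, thresholds, and medical framing are invented for illustration and are not taken from the dissertation's studies.

```python
# Illustration of an intrinsically explainable (white-box) system:
# the explanation is simply the rule that fired. Rules and inputs
# are invented for illustration.

RULES = [
    # (human-readable condition, predicate, decision)
    ("temperature > 38.0", lambda p: p["temp"] > 38.0, "refer to physician"),
    ("resting pulse > 100", lambda p: p["pulse"] > 100, "refer to physician"),
]
DEFAULT = "no action"

def decide(patient):
    """Return (decision, explanation) for a patient record."""
    for condition, predicate, decision in RULES:
        if predicate(patient):
            return decision, f"rule fired: {condition}"
    return DEFAULT, "no rule fired"

decision, explanation = decide({"temp": 38.6, "pulse": 72})
print(decision)     # refer to physician
print(explanation)  # rule fired: temperature > 38.0
```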

    Management: A bibliography for NASA managers (supplement 21)

    This bibliography lists 664 reports, articles, and other documents introduced into the NASA scientific and technical information system in 1986. Items are selected and grouped according to their usefulness to the manager as manager. Citations are grouped into ten subject categories: human factors and personnel issues; management theory and techniques; industrial management and manufacturing; robotics and expert systems; computers and information management; research and development; economics, costs, and markets; logistics and operations management; reliability and quality control; and legality, legislation, and policy.

    The Consequences of Mobility: Linguistic and Sociocultural Contact Zones

    • 

    corecore