
    Natural Language Generation enhances human decision-making with uncertain information

    Full text link
    Decision-making is often dependent on uncertain data, e.g. data associated with confidence scores or probabilities. We present a comparison of different information presentations for uncertain data and, for the first time, measure their effects on human decision-making. We show that the use of Natural Language Generation (NLG) improves decision-making under uncertainty, compared to state-of-the-art graphics-based representation methods. In a task-based study with 442 adults, we found that presentations using NLG led to 24% better decision-making on average than the graphical presentations, and to 44% better decision-making when NLG is combined with graphics. We also show that women achieve significantly better results when presented with NLG output (an 87% increase on average compared to graphical presentations). Comment: 54th annual meeting of the Association for Computational Linguistics (ACL), Berlin 2016

    Data-to-Text Generation Improves Decision-Making Under Uncertainty

    Get PDF
    Decision-making is often dependent on uncertain data, e.g. data associated with confidence scores or probabilities. This article presents a comparison of different information presentations for uncertain data and, for the first time, measures their effects on human decision-making, in the domain of weather forecast generation. We use a game-based setup to evaluate the different systems. We show that the use of Natural Language Generation (NLG) enhances decision-making under uncertainty, compared to state-of-the-art graphics-based representation methods. In a task-based study with 442 adults, we found that presentations using NLG led to 24% better decision-making on average than the graphical presentations, and to 44% better decision-making when NLG is combined with graphics. We also show that women achieve significantly better results when presented with NLG output (an 87% increase on average compared to graphical presentations). Finally, we present a further analysis of demographic data and its impact on decision-making, and we discuss implications for future NLG systems.
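
    To make the idea of an NLG presentation of uncertainty concrete, below is a minimal sketch (not taken from the paper) that turns a forecast probability into a verbal phrase alongside the raw number; the thresholds and wording are assumptions of this sketch, not the authors' actual rules.

        # Sketch: map a forecast probability to an NLG-style sentence.
        # Thresholds and phrases are illustrative assumptions only.

        def verbalise_probability(p: float) -> str:
            """Describe a precipitation probability in words and numbers."""
            if not 0.0 <= p <= 1.0:
                raise ValueError("probability must be in [0, 1]")
            if p < 0.10:
                phrase = "rain is very unlikely"
            elif p < 0.35:
                phrase = "rain is unlikely"
            elif p < 0.65:
                phrase = "rain is possible"
            elif p < 0.90:
                phrase = "rain is likely"
            else:
                phrase = "rain is almost certain"
            return f"There is a {p:.0%} chance of rain; {phrase}."

        print(verbalise_probability(0.72))  # There is a 72% chance of rain; rain is likely.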

    An information assistant system for the prevention of tunnel vision in crisis management

    Get PDF
    In the crisis management environment, tunnel vision is a set of biases in decision makers' cognitive processes which often leads to an incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges in the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the on-going crisis event. All information goes through the system before it reaches the user. The system enhances data quality, reduces data quantity, and presents the crisis information in a manner that prevents or alleviates the user's cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.
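
    As a rough illustration of the filtering step such an assistant implies (improve quality by removing duplicates, reduce quantity by ranking and capping what is shown), here is a minimal sketch; the relevance score, the cap, and all names are assumptions of this sketch, not the authors' implementation.

        # Sketch: deduplicate incoming crisis reports, rank by relevance,
        # and cap the number shown, to limit the operator's cognitive load.
        # The relevance score and the cap are illustrative assumptions.

        from dataclasses import dataclass

        @dataclass
        class Report:
            source: str
            text: str
            relevance: float  # assumed to come from an upstream scoring step

        def prepare_for_display(reports: list[Report], max_items: int = 10) -> list[Report]:
            unique = {r.text: r for r in reports}.values()          # drop exact duplicates
            ranked = sorted(unique, key=lambda r: r.relevance, reverse=True)
            return ranked[:max_items]                               # show only the top items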

    Challenges in Collaborative HRI for Remote Robot Teams

    Get PDF
    Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations, such as off-shore energy platforms. In order for these teams of robots to be truly beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a `mediator robot' that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study, which investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher trust overall if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result here, as well as other challenges and interaction techniques for human-robot collaboration. Comment: 9 pages. Peer-reviewed position paper accepted in the CHI 2019 Workshop: The Challenges of Working on Social Robots that Collaborate with People (SIRCHI2019), ACM CHI Conference on Human Factors in Computing Systems, May 2019, Glasgow, UK

    RankME: Reliable Human Ratings for Natural Language Generation

    Full text link
    Human evaluation for natural language generation (NLG) often suffers from inconsistent user ratings. While previous research tends to attribute this problem to individual user preferences, we show that the quality of human judgements can also be improved by experimental design. We present a novel rank-based magnitude estimation method (RankME), which combines the use of continuous scales and relative assessments. We show that RankME significantly improves the reliability and consistency of human ratings compared to traditional evaluation methods. In addition, we show that it is possible to evaluate NLG systems according to multiple, distinct criteria, which is important for error analysis. Finally, we demonstrate that RankME, in combination with Bayesian estimation of system quality, is a cost-effective alternative for ranking multiple NLG systems. Comment: Accepted to NAACL 2018 (The 2018 Conference of the North American Chapter of the Association for Computational Linguistics)
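
    To give a concrete sense of how relative magnitude-estimation ratings can be aggregated into a system ranking, here is a minimal sketch; normalising per rater with z-scores before averaging is an assumption of this sketch, not necessarily the RankME procedure, and the scores are invented.

        # Sketch: aggregate magnitude-estimation ratings into a system ranking.
        # Ratings are normalised per rater (z-scores) before averaging, so that
        # raters who use different ranges of the continuous scale are comparable.
        # This normalisation choice is an assumption of the sketch.

        from statistics import mean, pstdev

        ratings = {  # rater -> {system: magnitude estimate on a continuous scale}
            "r1": {"sysA": 80, "sysB": 55, "sysC": 60},
            "r2": {"sysA": 70, "sysB": 40, "sysC": 65},
        }

        def zscores(scores: dict[str, float]) -> dict[str, float]:
            mu, sd = mean(scores.values()), pstdev(scores.values())
            return {k: (v - mu) / sd if sd else 0.0 for k, v in scores.items()}

        per_system: dict[str, list[float]] = {}
        for rater_scores in ratings.values():
            for system, z in zscores(rater_scores).items():
                per_system.setdefault(system, []).append(z)

        ranking = sorted(per_system, key=lambda s: mean(per_system[s]), reverse=True)
        print(ranking)  # ['sysA', 'sysC', 'sysB']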

    E-mail and Direct Participation in Decision Making: A Literature Review

    Get PDF
    This paper reviews the literature on the effects of the use of e-mail on direct participation in decision making (PDM) in organisations. After a brief review of the organisational literature on participation, the paper groups e-mail theories of direct participation into three different theoretical perspectives. The paper then focuses on the role of e-mail in affecting task type and vertical and horizontal communication, and their consequences for PDM. Finally, the paper presents indications and open questions for future research. Keywords: email, e-mail, decision making, participation in decision making, literature review.

    The Sound of Falling Trees: Integrating Environmental Justice Principles into the Climate Change Framework for Reducing Emissions from Deforestation and Degradation (REDD)

    Get PDF
    Charitable giving is of great value to society. In particular, wealthy individuals and their families have the ability to make a significant impact on society. Many research papers and wealth briefings try to understand the multi-billion dollar global charitable giving market. These studies have provided valuable insights, but often miss the viewpoint of High Net Worth Individuals (HNWIs). Our comparative research provides a unique perspective on wealthy individuals in France and in the Netherlands. It is the first research to use the same methods in two different countries, which allows us to make solid comparisons. We asked 961 High Net Worth Individuals about their charitable giving behaviour and their knowledge of and interest in impact investing. What causes do our clients value most? How much do they give annually? And how does charitable giving relate to impact investing for the clients? Does the financial return or social return drive individuals to invest with impact? Please join us in this study to explore charitable giving from the giver's perspective.

    An AI Chatbot for Explaining Deep Reinforcement Learning Decisions of Service-oriented Systems

    Full text link
    Deep Reinforcement Learning (Deep RL) is increasingly used to cope with the open-world assumption in service-oriented systems. Deep RL was successfully applied to problems such as dynamic service composition, job scheduling, and offloading, as well as service adaptation. While Deep RL offers many benefits, understanding the decision-making of Deep RL is challenging because its learned decision-making policy essentially appears as a black box. Yet, understanding the decision-making of Deep RL is key to help service developers perform debugging, support service providers in complying with relevant legal frameworks, and help service users build trust. We introduce Chat4XAI to facilitate the understanding of the decision-making of Deep RL by providing natural-language explanations. Compared with visual explanations, the reported benefits of natural-language explanations include better understandability for non-technical users, increased user acceptance and trust, as well as more efficient explanations. Chat4XAI leverages modern AI chatbot technology and dedicated prompt engineering. Compared to earlier work on natural-language explanations using classical software-based dialogue systems, using an AI chatbot eliminates the need for eliciting and defining potential questions and answers up-front. We prototypically realize Chat4XAI using OpenAI's ChatGPT API and evaluate the fidelity and stability of its explanations using an adaptive service exemplar. Comment: To be published at 21st Int'l Conference on Service-Oriented Computing (ICSOC 2023), Rome, Italy, November 28-December 1, 2023, ser. LNCS, F. Monti, S. Rinderle-Ma, A. Ruiz Cortes, Z. Zheng, M. Mecella, Eds., Springer, 2023
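
    To illustrate the general prompting pattern this abstract describes (an AI chatbot asked to explain an RL agent's adaptation decision to a non-technical user), here is a minimal sketch against the OpenAI Python client; the prompt wording, model name, and decision payload are assumptions of this sketch, not the prompts or setup used by Chat4XAI.

        # Sketch: ask a chatbot to explain a Deep RL adaptation decision in
        # plain language. Prompt text, model name, and the decision payload
        # are illustrative assumptions, not taken from Chat4XAI.

        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        decision = {
            "state": {"avg_response_time_ms": 820, "failure_rate": 0.07},
            "action": "add one service replica",
            "q_values": {"no_op": 0.12, "add_replica": 0.81, "remove_replica": 0.03},
        }

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "You explain the decisions of a reinforcement learning "
                            "agent that adapts a service-oriented system. Answer for "
                            "a non-technical service user in two or three sentences."},
                {"role": "user",
                 "content": f"Explain why the agent chose this action: {decision}"},
            ],
        )
        print(response.choices[0].message.content)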
