
    Exploring the Effects of Persuasive Designs of Intelligent Advice-Giving Systems on Users’ Trust Perceptions, Advice Acceptance, and Reuse Intentions

    With artificial intelligence (AI) penetrating a broad range of industries, it affects our daily lives in increasingly profound ways, and interacting with AI-based systems for advice has become common practice. As advice-giving systems (AGS) become more cognitive and human-like, they can influence users’ decision-making to a new degree. It is therefore increasingly important to explore this new type of intelligent system and examine how users perceive and react to its persuasive influence. Based on the persuasion knowledge model, this paper identifies various persuasive designs (anthropomorphic features, explanation facilities, and intervention styles) and studies how they affect users’ knowledge levels, trust perceptions (cognitive, affective), and ultimately their acceptance of advice (behavioral trust) and reuse intentions. The research model was tested in an online experiment that collected 442 valid responses, and the findings generally support the proposed model. The study contributes to (1) the human-computer interaction literature, on the effectiveness of different persuasive design characteristics of intelligent AGS; (2) the traditional decision support systems literature, on the mechanisms users employ under the persuasive influence of this new type of intelligent AGS (persuasive decision-aid systems); (3) the trust-in-automation literature, by studying various types of trust toward intelligent AGS and their relationships; (4) the persuasion literature, by incorporating the persuasion knowledge model to understand users’ attitudes and behaviors toward intelligent agents; (5) the literature on algorithm aversion and algorithm appreciation, by resolving contradictory findings with a holistic theoretical framework; and (6) the anthropomorphism literature, by exploring how various aspects of anthropomorphism perceptions affect trust. The paper also offers insightful implications for practice.

    A Persistent Simulation Environment for Autonomous Systems

    The age of Autonomous Unmanned Aircraft Systems (AUAS) is creating new challenges for accreditation and certification, requiring new standards, policies, and procedures that sanction whether a UAS is safe to fly. Establishing a basis for certification of autonomous systems via research into trust and trustworthiness is the focus of Autonomy Teaming and TRAjectories for Complex Trusted Operational Reliability (ATTRACTOR), a new NASA Convergent Aeronautics Solutions (CAS) project. Simulation environments for testing and evaluating AUAS decision-making may be a low-cost way to help certify that various AUAS are trustworthy enough to be allowed to fly in current general and commercial aviation airspace. NASA is working to build a peer-to-peer persistent simulation (P3 Sim) environment. The P3 Sim will be a Massively Multiplayer Online (MMO) environment where AUAS avatars can interact with a complex dynamic environment and with each other. The effort aims to provide AUAS researchers with a low-cost, intuitive testing environment that will aid training for and assessment of decisions made by autonomous systems such as AUAS. This presentation focuses on the design approach and challenges faced in developing the P3 Sim environment in support of investigating the trustworthiness of autonomous systems.

    Current Concepts and Trends in Human-Automation Interaction

    This publication is freely accessible with the permission of the rights owner due to an Alliance licence and a national licence (funded by the DFG, German Research Foundation). The purpose of this panel was to provide a general overview and discussion of some of the most current and controversial concepts and trends in human-automation interaction. The panel was composed of eight researchers and practitioners, all well-known experts in the area, who offered differing views on a variety of human-automation topics. The concepts and trends discussed include: general taxonomies of stages and levels of automation and function allocation, individualized adaptive automation, automation-induced complacency, economic rationality and the use of automation, the potential utility of false alarms, the influence of different types of false alarms on trust and reliance, and a system-wide theory of trust in multiple automated aids.

    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: users should rely on the system to fill gaps in their own ability while recognizing signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with more system information, through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as knowledge about ML for decision-makers working with ML recommendations.

    Theoretical, Measured and Subjective Responsibility in Aided Decision Making

    When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in interaction with intelligent systems. In two laboratory experiments, participants performed a classification task, aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less than optimally on a system with superior classification capabilities and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems whose capabilities greatly exceed their own, their comparative causal responsibility will be small, even if the human is formally assigned major roles. Simply putting a human in the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model for predicting behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.

    Taking ownership: The story of a successful partnership for change in a Pacific Island science teacher education setting.

    This paper explores an example of a partnership approach that appears to be producing sustainable change in a Pacific Islands education setting. The people involved report on the way science education staff from the Solomon Islands School of Education (SOE) and staff from the Faculty of Education, University of Waikato (UOW), New Zealand worked together on the redevelopment of undergraduate science education courses for the SOE. Together we sought to identify the significant factors supporting the process. The development required significant change and posed a number of challenges, yet it resulted in local staff producing high-quality materials and programmes and taking ownership of ongoing development. More importantly, there was significant personal professional learning in both science education and initial teacher education for local Solomon Islands staff. Factors contributing to the success of the partnership are explored through the perceptions of the participants and include the quality of relationships, mutual respect, an emphasis on conceptual agreement when working together, and the involvement of local staff in decision-making.

    Towards a pragmatic approach for dealing with uncertainties in water management practice

    Management of water resources is afflicted with uncertainties. Nowadays it faces more, and new, uncertainties, since the pace and dimension of changes (e.g. climatic, demographic) are accelerating and are likely to increase even further in the future. Hence it is crucial to find pragmatic ways to deal with these uncertainties in water management. So far, decision-making under uncertainty in water management has been based either on the intuition, heuristics, and experience of water managers or on expert assessments, all of which are of only limited use to water managers in practice. We argue for an analytical yet pragmatic approach that enables practitioners to deal with uncertainties in a more explicit and systematic way and allows for better-informed decisions. Our approach is based on the concept of framing, referring to the different ways in which people make sense of the world and of uncertainties. We applied and tested recently developed parameters that aim to shed light on the framing of uncertainty in two sub-basins of the Rhine. We present and discuss the results of a series of stakeholder interactions in the two basins aimed at developing strategies for improving how uncertainties are dealt with. The strategies are synthesized in a cross-checking list based on the uncertainty framing parameters, as a hands-on tool for systematically identifying improvement options when dealing with uncertainty in water management practice. We conclude with suggestions for testing the developed checklist as a tool for decision aid in water management practice.
    Key words: water management, future uncertainties, framing of uncertainties, hands-on decision aid, tools for practice, robust strategies, social learning

    Future World Giving: Building Trust in Charitable Giving

    The focus of this report is trust. In order for people to give money to charity, it is clear that they must trust that charitable organisations are legitimate and will make effective use of their money. Governments have a vital role to play because they are responsible for the legislation and regulation that govern civil society organisations. This report argues that governments, particularly in emerging economies, should act now to create an enabling environment that encourages the next generation of increasingly affluent citizens to engage in giving to causes that have earned their trust.
