
    Heuristic standards for universal design in the face of technological diversity.

    CENTRAL PRINCIPLE Important technologies require validated standards for the design heuristics used to design and evaluate them, but not necessarily identical heuristics for every technology. BACKGROUND Heuristic standards provide a valuable toolkit with which to evaluate the accessibility of modern information society technologies (IST). But can we apply the same generic heuristic standards to all types of technological platforms, given their growing diversity (e.g. websites, social networking sites, blogs, virtual reality applications, ambient intelligence; Adams, 2007)? Or would it be wiser to expect that different technologies might require different, if overlapping, standards? Can we really expect to design the interface of a modern cell phone on the same basis as that of a tablet computer? Most impartial observers would probably say "no". How can we introduce a systematic and thorough approach to the diverse technologies already in use or predicted to appear? Work in our laboratory has explored two useful questions. First, how do computer-literate users perceive the different technologies? Second, how can different heuristic standards be developed where needed?
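
    To make the idea of overlapping standards concrete, here is a minimal sketch (the heuristic names are hypothetical, not taken from the paper) that models per-platform heuristic sets and derives the shared core and each platform's extras:

        # Minimal sketch: overlapping heuristic standards per platform.
        # Heuristic names are hypothetical placeholders, not from the paper.
        HEURISTICS = {
            "website": {"visibility_of_status", "error_prevention",
                        "consistency", "readable_text"},
            "cell_phone": {"visibility_of_status", "error_prevention",
                           "one_hand_reach", "glanceability"},
            "virtual_reality": {"visibility_of_status", "motion_comfort",
                                "spatial_audio_cues"},
        }

        # Core standard: heuristics shared by every platform (set intersection).
        core = set.intersection(*HEURISTICS.values())

        # Platform-specific standard: what each platform needs beyond the core.
        specific = {name: hs - core for name, hs in HEURISTICS.items()}

        print("shared core:", sorted(core))
        for name, extras in specific.items():
            print(f"{name} extras:", sorted(extras))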

    Guess the score, fostering collective intelligence in the class

    This paper proposes the use of serious games as a tool to enhance the collective intelligence of undergraduate and graduate students. The development of the social skills of individuals in a group is related to the performance of the collective intelligence of the group, manifested through the shared and collaborative development of intellectual tasks [1]. Guess the Score (GS) is a serious game implemented by means of an online tool, created to foster the development, collaboration and engagement of students. It has been designed with the intention of facilitating the development of individuals' social skills in a group in order to promote collective intelligence education. This paper concludes that the design of learning activities using serious games as a support tool in education generates awareness of the utility of gaming in the collective learning environment and fosters collective intelligence education.
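
    The abstract does not describe GS's mechanics, but the collective intelligence effect a score-guessing game can draw on is easy to illustrate; a minimal sketch with synthetic guesses (the numbers are illustrative, not from the paper):

        # Minimal sketch of the wisdom-of-crowds effect a score-guessing game
        # can exploit; the data and error metric are illustrative, not from GS.
        import random
        import statistics

        random.seed(42)
        true_score = 73                     # the score students must guess
        guesses = [random.gauss(true_score, 12) for _ in range(30)]  # 30 students

        individual_errors = [abs(g - true_score) for g in guesses]
        group_guess = statistics.median(guesses)  # aggregate the class's answers

        print(f"mean individual error: {statistics.mean(individual_errors):.1f}")
        print(f"group (median) error:  {abs(group_guess - true_score):.1f}")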

    Beneficial Artificial Intelligence Coordination by means of a Value Sensitive Design Approach

    This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown both to distill these common values and to provide a framework for stakeholder coordination.

    Empowerment or Engagement? Digital Health Technologies for Mental Healthcare

    We argue that while digital health technologies (e.g. artificial intelligence, smartphones, and virtual reality) present significant opportunities for improving the delivery of healthcare, the key concepts used to evaluate and understand their impact can obscure significant ethical issues related to patient engagement and experience. Specifically, we focus on the concept of empowerment and ask whether it is adequate for addressing some significant ethical concerns raised by digital health technologies for mental healthcare. We frame these concerns using five key principles of AI ethics (i.e. autonomy, beneficence, non-maleficence, justice, and explicability), which have their roots in the bioethical literature, in order to critically evaluate the role that digital health technologies will have in the future of digital healthcare.

    I Cannot Tell a Lie: Emotional Intelligence as a Predictor of Deceptive Behavior

    Research has identified that the perceived acceptability and likelihood of lying depend on the type of lie and on personality characteristics such as honesty, kindness, assertiveness, and Machiavellianism. However, this research has focused on individuals' experiences of their own emotions and has neglected to consider how an individual's understanding of others and their emotions influences deceptive behavior. I expanded upon this research during the summer of 2018 by investigating the relationship between emotional intelligence, personal intelligence, and the perceived acceptability and likelihood of telling four types of lies, distinguished from one another by their motivation (altruistic, conflict avoidance, social acceptance, or self-gain). Participants were 80 University of New Hampshire undergraduate students who completed an online survey consisting of both self-report and ability-based measures. Results suggest that scores on ability-based tests of personal intelligence may be useful in predicting an individual's likelihood of telling lies for the purpose of social acceptance. Results also show a significant negative correlation between self-reported likelihood of telling social-acceptance lies and personal intelligence, indicating that those with higher personal intelligence are less likely to tell social-acceptance lies.
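
    The reported analysis centers on a negative Pearson correlation between personal intelligence and the likelihood of social-acceptance lies; a minimal sketch of that computation on synthetic data (the values are illustrative, not the study's):

        # Illustrative Pearson correlation between two measures; the data are
        # synthetic stand-ins for personal-intelligence scores and self-reported
        # likelihood of telling social-acceptance lies.
        import random
        from scipy.stats import pearsonr

        random.seed(0)
        n = 80  # the study reports 80 undergraduate participants
        personal_intelligence = [random.gauss(100, 15) for _ in range(n)]
        # Simulate the reported negative relationship plus noise:
        lie_likelihood = [50 - 0.3 * (pi - 100) + random.gauss(0, 8)
                          for pi in personal_intelligence]

        r, p = pearsonr(personal_intelligence, lie_likelihood)
        print(f"r = {r:.2f}, p = {p:.4f}")  # expect a negative r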

    Human Error Management Paying Emphasis on Decision Making and Social Intelligence -Beyond the Framework of Man-Machine Interface Design-

    How a latent error or violation induces a serious accident has been reviewed, and a measure to address this has been proposed within the framework of decision making and the emotional intelligence (EI) and social intelligence (SI) of an organization and its members. It has been clarified that EI and SI play an important role in decision making. Violations frequently occur all over the world even though we clearly understand that we should not commit them, and a key to preventing them may lie in enhancing both social intelligence and reliability. The construction of a social structure or system that supports organizational efforts to enhance both would be essential. Traditional safety education emphasizes that attitudes toward safety can be changed by means of education; in spite of this, accidents and scandals occur frequently and never decrease. These problems must be approached on the basis of a full understanding of social intelligence and of the limited rationality of decision making. The social dilemma (we do not necessarily cooperate despite understanding the importance of cooperation, and sometimes decide against cooperative behavior; non-cooperation yields a desirable result for an individual, but if all take non-cooperative actions, an undesirable result is ultimately imposed on everyone) must be solved in some way, and the transition from a relief (closed) society to a global (reliability) society must be realized as a whole. A new social system, in which cooperative relations can be easily and reliably established, must be constructed to support such an approach and prevent violation-based accidents.
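
    The social dilemma described above has the structure of an N-player public goods game; a minimal sketch (the payoff parameters are illustrative assumptions) shows why defection pays individually while universal defection hurts everyone:

        # Minimal N-player public goods sketch of the social dilemma described
        # above; the payoff parameters are illustrative assumptions.
        def payoff(cooperators: int, n: int, i_cooperate: bool,
                   cost: float = 1.0, benefit: float = 1.6) -> float:
            """Each cooperator pays `cost`; the pooled benefit is shared by all."""
            pool = cooperators * benefit
            return pool / n - (cost if i_cooperate else 0.0)

        n = 10
        # Given 9 other cooperators, defecting is individually better...
        print(round(payoff(10, n, True), 2))   # I cooperate:   0.6
        print(round(payoff(9, n, False), 2))   # I defect:      1.44
        # ...but if everyone defects, all are worse off than full cooperation.
        print(round(payoff(0, n, False), 2))   # all defect:    0.0
        print(round(payoff(10, n, True), 2))   # all cooperate: 0.6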

    Visual Representation of Explainable Artificial Intelligence Methods: Design and Empirical Studies

    Explainability is increasingly considered a critical component of artificial intelligence (AI) systems, especially in high-stakes domains where AI systems' decisions can significantly impact individuals. As a result, there has been a surge of interest in explainable artificial intelligence (XAI), which aims to increase the transparency of AI systems by explaining their decisions to end-users. In particular, extensive research has focused on developing "local model-agnostic" explanation methods that generate explanations of individual predictions for any predictive model. While these explanations can support end-users through increased transparency, three significant challenges have hindered their design, implementation, and large-scale adoption in real applications. First, there is little understanding of how end-users evaluate explanations: critics argue that explanations reflect researchers' intuition rather than end-users' needs, and there is insufficient evidence on whether end-users understand these explanations or trust XAI systems. Second, it is unclear what effect explanations have on trust when they disclose biases in AI systems' decisions. Prior research on biased decisions has found conflicting evidence: explanations can either increase trust through perceived transparency or decrease trust as end-users perceive the system as biased, and it is unclear how contingency factors shape these opposing effects. Third, most XAI methods deliver static explanations that offer end-users limited information, resulting in an insufficient understanding of how AI systems make decisions and, in turn, lower trust; end-users also perceive static explanations as not transparent enough, since they cannot investigate the factors that influence a given decision.

    This dissertation addresses these challenges across three studies, focusing on the overarching research question of how to design visual representations of local model-agnostic XAI methods to increase end-users' understanding and trust. The first challenge is addressed through an iterative design process that refines the representations of explanations from four well-established model-agnostic XAI methods, followed by an evaluation with end-users using eye-tracking technology and interviews. The second challenge is addressed by a study that takes a psychological contract violation (PCV) theory and social identity theory perspective to investigate the contingency factors behind the opposing effects of explanations on end-users' trust; specifically, it examines how end-users evaluate explanations of a gender-biased AI system while controlling for their awareness of gender discrimination in society. The third challenge is addressed through a design science research project that designs an interactive XAI system for end-users to increase their understanding and trust.

    This dissertation makes several contributions to the ongoing research on improving the transparency of AI systems by explicitly emphasizing the end-user perspective on XAI. It contributes to practice by providing insights that help to improve the design of explanations of AI systems' decisions. It makes theoretical contributions by contextualizing PCV theory to gender-biased XAI systems and identifying the contingency factors that determine whether end-users experience a PCV, and it provides insights into how end-users cognitively evaluate explanations, extending the current understanding of the impact of explanations on trust. Finally, it contributes to the design knowledge of XAI systems by proposing guidelines for designing interactive XAI systems that give end-users more control over the information they receive, helping them better understand how AI systems make decisions.
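
    To illustrate the "local model-agnostic" idea the dissertation builds on, here is a simplified perturbation-based sketch (LIME-style in spirit, not the dissertation's own method): it explains a single prediction of a black-box model by measuring how perturbing each feature shifts the output:

        # Minimal local model-agnostic explanation sketch (a simplified
        # LIME-style idea): perturb one feature at a time around a single
        # instance and record the mean change in the black box's output.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))
        y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(scale=0.1, size=500)
        model = RandomForestRegressor(random_state=0).fit(X, y)  # the "black box"

        def explain_local(predict, x, n_samples=200, scale=0.5):
            """Per-feature local importance of the prediction at x."""
            base = predict(x[None, :])[0]
            importance = []
            for j in range(x.size):
                perturbed = np.tile(x, (n_samples, 1))
                perturbed[:, j] += rng.normal(scale=scale, size=n_samples)
                importance.append(np.mean(np.abs(predict(perturbed) - base)))
            return importance

        x = X[0]
        for j, imp in enumerate(explain_local(model.predict, x)):
            print(f"feature {j}: local importance {imp:.2f}")  # 0 and 2 dominate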

    The Usage and Evaluation of Anthropomorphic Form in Robot Design

    There are numerous examples illustrating the application of human shape in everyday products. The usage of anthropomorphic form has long been a basic design strategy, particularly in the design of intelligent service robots, so it is desirable to use anthropomorphic form not only in aesthetic design but also in interaction design. Proceeding from how anthropomorphism in various domains affects human perception, we assume that anthropomorphic form used in the appearance and interaction design of robots enriches the explanation of their function and creates familiarity with robots. In many of the cases we have found, however, misused anthropomorphic form leads to user disappointment or negative impressions of the robot. In order to use anthropomorphic form effectively, it is necessary to measure the similarity of an artifact to the human form (humanness) and then evaluate whether the usage of anthropomorphic form fits the artifact. The goal of this study is to propose a general evaluation framework of anthropomorphic form for robot design. We suggest three major steps for framing the evaluation: 'measuring anthropomorphic form in appearance', 'measuring anthropomorphic form in Human-Robot Interaction', and 'evaluating the accordance of the two former measurements'. This evaluation process endows a robot with a degree of humanness in appearance equivalent to the degree of humanness in its interaction ability, and thus ultimately facilitates user satisfaction.
    Keywords: Anthropomorphic Form; Anthropomorphism; Human-Robot Interaction; Humanness; Robot Design
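
    The framework's third step, checking the accordance of the two humanness measurements, can be read as a simple score comparison; a minimal sketch (the normalized scale and tolerance are hypothetical choices, not the paper's):

        # Minimal sketch of the three-step evaluation: score humanness of
        # appearance and of interaction on a common scale, then check their
        # accordance. The [0, 1] scale and tolerance are hypothetical choices.
        def evaluate_accordance(appearance: float, interaction: float,
                                tolerance: float = 0.15) -> str:
            """Both scores are assumed normalized to [0, 1]."""
            gap = appearance - interaction
            if abs(gap) <= tolerance:
                return "in accordance: appearance matches interaction ability"
            if gap > 0:
                return "over-anthropomorphized: looks more human than it can act"
            return "under-anthropomorphized: interaction outstrips appearance"

        print(evaluate_accordance(appearance=0.8, interaction=0.3))
        print(evaluate_accordance(appearance=0.5, interaction=0.55))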