10,366 research outputs found

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
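    At its core, a multi-step evaluation schema of this kind is an ordered checklist applied to a candidate system. The sketch below shows one way such a schema could be represented and run in code; every step name and question is a hypothetical placeholder for illustration, not the schema the paper actually proposes.

```python
# Minimal sketch of a multi-step ethical evaluation checklist.
# NOTE: all step names and questions are hypothetical placeholders,
# not the schema proposed in the paper.
from dataclasses import dataclass

@dataclass
class EvaluationStep:
    name: str
    question: str

SCHEMA = [
    EvaluationStep("autonomy", "Which decisions does the system take without human input?"),
    EvaluationStep("oversight", "Can a human operator intervene or override at any point?"),
    EvaluationStep("accountability", "Is responsibility for the system's actions clearly assigned?"),
]

def passes(answers: dict[str, bool]) -> bool:
    """The candidate system passes only if every step is answered affirmatively."""
    return all(answers.get(step.name, False) for step in SCHEMA)

print(passes({"autonomy": True, "oversight": True, "accountability": True}))  # True
print(passes({"autonomy": True, "oversight": False}))                         # False
```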

    Assessing the Decision-Making Process in Human-Robot Collaboration Using a Lego-like EEG Headset

    Human-robot collaboration (HRC) has become an emerging field in which the robot’s role has shifted from supportive machine to decision-making collaborator. A variety of factors can influence the effectiveness of decision-making during HRC, including system-related factors (e.g., robot capability) and human-related factors (e.g., individual knowledge). Because such contextual factors can significantly affect the human-robot decision-making process in collaborative settings, the present study adopts a Lego-like EEG headset to collect and examine human brain activity and uses multiple questionnaires to evaluate participants’ cognitive perceptions of the robot. A user study was conducted in which two levels of robot capability (high vs. low) were manipulated when providing system recommendations. Participants were also divided into two groups based on their computational thinking (CT) ability. The EEG results revealed that different levels of CT ability elicit different brainwave patterns, and that participants’ calibration of trust in the robot also modulates the resulting brain activity.
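    As a rough illustration of the kind of group-level EEG comparison the abstract describes, the sketch below computes relative band power per recording and compares the high- and low-CT groups. The sampling rate, band boundaries, synthetic signals, and two-sample t-test are all assumptions made for illustration; they are not the study’s actual analysis pipeline.

```python
# Minimal sketch: compare relative EEG band power between two groups.
# All parameters and the synthetic data are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid
from scipy.stats import ttest_ind

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg, fs=FS):
    """Relative power per frequency band for one EEG channel (1-D array)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    total = trapezoid(psd, freqs)
    return {
        name: trapezoid(psd[(freqs >= lo) & (freqs < hi)],
                        freqs[(freqs >= lo) & (freqs < hi)]) / total
        for name, (lo, hi) in BANDS.items()
    }

# Synthetic 10-second recordings stand in for the study's EEG data.
rng = np.random.default_rng(0)
high_ct = [band_power(rng.standard_normal(FS * 10)) for _ in range(12)]
low_ct  = [band_power(rng.standard_normal(FS * 10)) for _ in range(12)]

# Compare each band's relative power between the two CT groups.
for band in BANDS:
    t, p = ttest_ind([s[band] for s in high_ct], [s[band] for s in low_ct])
    print(f"{band}: t={t:.2f}, p={p:.3f}")
```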

    A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction

    Appropriate trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communication. However, a comprehensive understanding of the field is lacking, owing to the diversity of perspectives that shape it and the absence of a single definition of appropriate trust. To investigate this topic, this paper presents a systematic review identifying current practices in building appropriate trust, ways to measure it, types of tasks used, and the challenges associated with it. We also propose a Belief, Intentions, and Actions (BIA) mapping to study commonalities and differences in the concepts related to appropriate trust by (a) describing existing disagreements over defining appropriate trust, and (b) providing an overview of the concepts and definitions related to appropriate trust in AI in the existing literature. Finally, we discuss the challenges identified in studying appropriate trust and summarize our observations as current trends, potential gaps, and research opportunities for future work. Overall, the paper provides insight into the complex concept of appropriate trust in human-AI interaction and presents research opportunities to advance our understanding of this topic.
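    To make approaches like confidence scores and uncertainty communication concrete, the sketch below pairs an AI recommendation with an explicit confidence cue and flags low-confidence cases for manual review, one common way to help users calibrate their trust. The threshold and wording are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: communicate a confidence score alongside a recommendation
# so the user can calibrate trust. Threshold and phrasing are assumptions.
def present_recommendation(label: str, confidence: float,
                           threshold: float = 0.75) -> str:
    """Attach an explicit confidence cue; flag low-confidence cases."""
    if confidence >= threshold:
        return f"Recommendation: {label} (confidence {confidence:.0%})"
    return (f"Low-confidence suggestion: {label} "
            f"(confidence {confidence:.0%}); please verify manually")

print(present_recommendation("approve", 0.91))
print(present_recommendation("approve", 0.55))
```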