
    Virtual reality for safe testing and development in collaborative robotics: challenges and perspectives

    Collaborative robots (cobots) could help humans with tasks that are mundane, dangerous, or where direct human contact carries risk. Yet collaboration between humans and robots remains severely limited by concerns about the safety and comfort of human operators. In this paper, we outline the use of extended reality (XR) as a way to test and develop collaboration with robots. We focus on virtual reality (VR) for simulating collaboration scenarios and on the use of cobot digital twins. This is especially useful in situations that are difficult or even impossible to test safely in real life, such as dangerous scenarios. We describe using XR simulations as a means to evaluate collaboration with robots without putting humans in harm's way. We show how an XR setting enables combining human behavioural data, subjective self-reports, and biosignals indicating human comfort, stress, and cognitive load during collaboration. Several works demonstrate that XR can be used to train human operators and to provide them with augmented reality (AR) interfaces that enhance their performance with robots. We also provide a first attempt at what could become the basis for a human–robot collaboration testing framework, specifically for designing and testing factors affecting human–robot collaboration. The use of XR has the potential to change the way we design and test cobots, and train cobot operators, in a range of applications: from industry, through healthcare, to space operations.

    Responsible Human-Robot Interaction with Anthropomorphic Service Robots: State of the Art of an Interdisciplinary Research Challenge

    Anthropomorphic service robots are on the rise. The more capable they become, and the more regularly they are deployed in real-world settings, the more critical the responsible design of human-robot interaction (HRI) becomes, with special attention to human dignity, transparency, privacy, and robot compliance. In this paper we review the interdisciplinary state of the art relevant to the responsible design of HRI. Furthermore, directions for future research on the responsible design of HRI with anthropomorphic service robots are suggested.

    Assessing the Decision-Making Process in Human-Robot Collaboration Using a Lego-like EEG Headset

    Human-robot collaboration (HRC) has become an emerging field in which the role of the robotic agent has shifted from a supportive machine to a decision-making collaborator. A variety of factors can influence the effectiveness of decision-making processes during HRC, including system-related factors (e.g., robot capability) and human-related factors (e.g., individual knowledgeability). Because such contextual factors can significantly impact the human-robot decision-making process in collaborative contexts, the present study adopts a Lego-like EEG headset to collect and examine human brain activity and uses multiple questionnaires to evaluate participants' cognitive perceptions of the robot. A user study was conducted in which two levels of robot capability (high vs. low) were manipulated to provide system recommendations. The participants were also divided into two groups based on their computational thinking (CT) ability. The EEG results revealed that different levels of CT ability trigger different brainwaves, and that the participants' trust calibration of the robot also varies the resulting brain activity.

    Review of Research on Human Trust in Artificial Intelligence

    Artificial Intelligence (AI) represents today's most advanced technologies that aim to imitate human intelligence. Whether AI can successfully be integrated into society depends on whether it can gain users' trust. We conduct a comprehensive review of recent research on human trust in AI and uncover the significant role of AI's transparency, reliability, performance, and anthropomorphism in developing trust. We also review how trust is diversely built and calibrated, and how human and environmental factors affect human trust in AI. Based on the review, the most promising future research directions are proposed.

    Attribution Biases and Trust Development in Physical Human-Machine Coordination: Blaming Yourself, Your Partner or an Unexpected Event

    Reading partners' actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as self-serving and correspondence biases, lead people to misinterpret their partners' actions and falsely assign blame after an unexpected event. These biases in turn influence people's trust in their partners, including machine partners. The increasing capabilities and complexity of machines allow them to work physically with humans; however, these improvements may interfere with people's ability to accurately calibrate trust in machines and their capabilities, which requires an understanding of the effect of attribution biases on human-machine coordination. Specifically, the current thesis explores how the development of trust in a partner is influenced by attribution biases and by people's assignment of blame for a negative outcome. This study can also suggest how a machine partner should be designed to react to environmental disturbances and to report the appropriate level of information about external conditions. Masters Thesis, Human Systems Engineering, 201

    Bringing Human Robot Interaction towards Trust and Social Engineering

    Robots started their journey in books and movies; nowadays, they are becoming an important part of our daily lives: from industrial robots, through entertainment robots, to social robotics in fields like healthcare and education. An important aspect of social robotics is the human counterpart; there is, therefore, an interaction between humans and robots. Interactions among humans are often taken for granted since, from childhood, we learn how to interact with each other. In robotics, this interaction is still very immature, yet it is critical for a successful incorporation of robots into society. Human robot interaction (HRI) is the domain that works on improving these interactions. HRI encompasses many aspects, and a significant one is trust. Trust is the assumption that somebody or something is good and reliable, and it is critical for a developed society. Therefore, in a society in which robots can take part, the trust they generate will be essential for cohabitation. A downside of trust is overtrusting an entity; in other words, an insufficient alignment between the projected trust and the expectation of morally correct behaviour. This effect can negatively influence and damage the interactions between agents. In the case of humans, it is usually exploited by scammers, conmen, or social engineers, who take advantage of people's overtrust in order to manipulate them into performing actions that may not be beneficial for the victims. This thesis tries to shed light on the development of trust towards robots, and on how this trust could become overtrust and be exploited by social engineering techniques. More precisely, the following experiments have been carried out: (i) Treasure Hunt, in which the robot followed a social engineering framework: it gathered personal information from the participants, improved trust and rapport with them, and at the end exploited that trust to manipulate participants into performing a risky action. (ii) Wicked Professor, in which a very human-like robot tried to enforce its authority to make participants obey socially inappropriate requests. Most participants realized that the requests were morally wrong but eventually succumbed to the robot's authority, while still holding the robot morally responsible. (iii) Detective iCub, in which it was evaluated whether the robot could be endowed with the ability to detect when its human partner was lying. Deception detection is an essential skill for social engineers and for professionals in domains such as education, healthcare, and security. The robot achieved 75% accuracy in lie detection, and slight differences were found in the behaviour participants exhibited when interacting with a human versus a robot interrogator. Lastly, this thesis approaches the topic of privacy, a fundamental human value. With the integration of robotics and technology into our society, privacy will be affected in ways we are not used to. Robots have sensors able to record and gather all kinds of data, and it is possible that this information is transmitted via the internet without the user's knowledge. This is an important aspect to consider, since a violation of privacy can heavily impact trust. In summary, this thesis shows that robots are able to establish and improve trust during an interaction, to take advantage of overtrust, and to misuse it by applying different types of social engineering techniques, such as manipulation and authority. Moreover, robots can be enabled to pick up on different human cues to detect deception, which can help both social engineers and professionals in the human sector. Nevertheless, it is of the utmost importance to make roboticists, programmers, entrepreneurs, lawyers, psychologists, and other involved sectors aware that social robots can be highly beneficial for humans, but could also be exploited for malicious purposes.

    Preparing for Industrial Collaborative Robots: A Literature Review of Technology Readiness and Acceptance Models

    Collaborative robots (cobots) are an emerging technology increasingly being introduced into organisations. However, research investigating employee attitudes towards cobots, or assessing the factors that predict their acceptance, is limited. A literature review was conducted to identify reliable and parsimonious models of technology acceptance that hold relevance when applied to cobots. Understanding and facilitating employee acceptance of such technology is important if the improved productivity, job satisfaction, and cost savings associated with its implementation are to be achieved. The Technology Readiness Index (Parasuraman, 2000) and the Technology Acceptance Model (Davis, 1989) were considered the most appropriate starting points for empirically exploring cobot acceptance. Thesis (M.Psych (Organisational & Human Factors)) -- University of Adelaide, School of Psychology, 201
