
    A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction

    Picking up objects requested by a human user is a common task in human-robot interaction. When multiple objects match the user's verbal description, the robot needs to clarify which object the user is referring to before executing the action. Previous research has focused on perceiving the user's multimodal behaviour to complement verbal commands or on minimising the number of follow-up questions to reduce task time. In this paper, we propose a system for reference disambiguation based on visualisation and compare three methods for disambiguating natural language instructions. In a controlled experiment with a YuMi robot, we investigated real-time augmentations of the workspace in three conditions -- mixed reality, augmented reality, and a monitor as the baseline -- using objective measures such as time and accuracy, and subjective measures such as engagement, immersion, and display interference. Significant differences were found in accuracy and engagement between the conditions, but no differences were found in task time. Despite the higher error rates in the mixed reality condition, participants found that modality more engaging than the other two, but overall preferred the augmented reality condition over the monitor and mixed reality conditions.

    A Human-in-the-Loop Cyber-Physical System for Collaborative Assembly in Smart Manufacturing

    Industry 4.0 rose with the introduction of cyber-physical systems (CPS) and the Internet of Things (IoT) inside manufacturing systems. CPS represent self-controlled physical processes with tight networking capabilities and efficient interfaces for human interaction. The interactive dimension of CPS reaches its maximum when defined in terms of natural human-machine interfaces (NHMI), i.e., those reducing the technological barriers required for the interaction. This paper presents an NHMI that brings human decision-making capabilities inside the cybernetic control loop of a smart manufacturing assembly system. The interface allows the operator to control, coordinate, and cooperate with an industrial cobot during task execution.

    An Augmented Interface to Display Industrial Robot Faults

    Technology advancement is changing the way industrial factories have to face an increasingly complex and competitive market. The fourth industrial revolution (known as Industry 4.0) is also changing how human workers carry out tasks and actions. In fact, it is now possible to think of a scenario in which human operators and industrial robots work side by side, sharing the same environment and tools. To realize a safe work environment, workers should trust robots as much as they trust human operators. Such a goal is complex to achieve, especially when workers are under stress, such as when a fault occurs and the human operators are no longer able to understand what is happening with the industrial manipulator. Augmented Reality (AR) can help workers visualize robot faults in real time. This paper proposes an augmented system that assists human workers in recognizing and visualizing errors, improving their awareness of the system. The system has been tested using both an AR see-through device and a smartphone.

    Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces

    This paper contributes to a taxonomy of augmented reality and robotics based on a survey of 460 research papers. Augmented and mixed reality (AR/MR) have emerged as a new way to enhance human-robot interaction (HRI) and robotic interfaces (e.g., actuated and shape-changing interfaces). Recently, an increasing number of studies in HCI, HRI, and robotics have demonstrated how AR enables better interactions between people and robots. However, research often remains focused on individual explorations, and key design strategies and research questions are rarely analyzed systematically. In this paper, we synthesize and categorize this research field along the following dimensions: 1) approaches to augmenting reality; 2) characteristics of robots; 3) purposes and benefits; 4) classification of presented information; 5) design components and strategies for visual augmentation; 6) interaction techniques and modalities; 7) application domains; and 8) evaluation strategies. We formulate key challenges and opportunities to guide and inform future research in AR and robotics.

    Augmented Reality in Industry 4.0

    Since the origins of Augmented Reality (AR), industry has always been one of its prominent application domains. The recent advances in both portable and wearable AR devices, together with the new challenges introduced by the fourth industrial revolution (known as Industry 4.0), further enlarge the applicability of AR to improve productivity and enhance the user experience. This paper provides an overview of the most important applications of AR in the industrial domain. Key among the issues raised in this paper are the various applications of AR that enhance the user's ability to understand the movement of a mobile robot, the movements of a robot arm, and the forces applied by a robot. It is recommended that, in view of the rising need for user and data privacy, the technologies that form the basis of Industry 4.0 change the way they work to embrace data privacy.