
    Probabilistic model-checking of collaborative robots: a human injury assessment in agricultural applications

    Current technology makes it possible to automate a number of agricultural processes that were traditionally carried out by humans and can now be performed entirely by robotic platforms. However, certain tasks, such as soft fruit harvesting, still require human skills. In this case, the robot's job is to cooperate/collaborate with human workers to alleviate their physical workload and improve harvesting efficiency. To accomplish that in a safe and reliable way, the robot should incorporate a safety system whose main goal is to reduce the risk of harming human co-workers during close human-robot interaction (HRI). In this context, this paper presents a theoretical study addressing the safety risks of using collaborative robots in agricultural scenarios, especially in HRI situations where the robot's safety system is not completely reliable and a component may fail. The agricultural scenarios discussed in this paper include automatic harvesting, logistics operations, crop monitoring, and plant treatment using UV-C light. A human injury assessment is conducted by converting the HRI in each agricultural scenario into a formal mathematical representation, which is then implemented in a probabilistic model-checking tool. We use this tool to perform a sensitivity analysis that determines the probability that a human may be injured as a function of failures in the robot's safety system. The probabilistic modeling methodology presented in this work can serve safety engineers as a guideline for constructing their own HRI models and using the model-checking results to enhance the safety and reliability of their robots' safety system architectures.
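
    The paper performs this analysis in a probabilistic model-checking tool; as a rough illustration of the underlying idea, the Python sketch below sweeps the failure probability of a hypothetical safety-system component in a small discrete-time Markov chain of a single HRI episode. The states, the transition probabilities, and the injury_probability helper are all illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch of the sensitivity-analysis idea: a hand-rolled
# discrete-time Markov chain of one human-robot interaction episode.
# Every number here is a hypothetical placeholder, not a value from the paper.
import numpy as np

def injury_probability(p_fail, p_hazard=0.05, p_injury=0.6, steps=500):
    """P(reaching the absorbing 'injury' state within `steps` transitions)."""
    # States: 0 = normal operation, 1 = hazardous contact imminent,
    #         2 = safe stop (absorbing), 3 = human injured (absorbing).
    T = np.array([
        [1 - p_hazard,            p_hazard, 0.0,        0.0],                # normal
        [p_fail * (1 - p_injury), 0.0,      1 - p_fail, p_fail * p_injury],  # hazard
        [0.0,                     0.0,      1.0,        0.0],                # safe stop
        [0.0,                     0.0,      0.0,        1.0],                # injury
    ])
    dist = np.array([1.0, 0.0, 0.0, 0.0])           # episode starts in 'normal'
    dist = dist @ np.linalg.matrix_power(T, steps)  # propagate the distribution
    return dist[3]

# Sensitivity analysis: how does the injury probability grow as the
# safety component's failure probability increases?
for p_fail in (0.001, 0.01, 0.05, 0.1):
    print(f"p_fail={p_fail:5.3f}  P(injury) ~ {injury_probability(p_fail):.4f}")
```

    A dedicated probabilistic model checker such as PRISM or Storm would express the same chain in its own modeling language and verify a reachability property (e.g., the probability of eventually reaching the injury state) directly, rather than by repeated matrix multiplication as done here.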

    “You have no idea how much we love Casper” – Developing configurations of employees’ RPA implementation experiences

    Robotic process automation (RPA) is gaining popularity in industry and is leveraged to improve operational efficiency, quality of work, risk management, and compliance. Despite the increasing adoption of RPA in industry, academic research is lagging. In particular, despite the often drastic changes in employees' work tasks and processes, there is a lack of research exploring how human employees experience the implementation of RPA. This is important to understand because their experiences affect their interaction with the technology and, ultimately, their adoption and use, which is crucial to realising the benefits of RPA. To address this research gap, we conducted a case study in a financial institution in New Zealand and interviewed 18 employees to develop configurations of employees' RPA implementation experiences. Our findings may inform implementation and change management strategies and may also help line managers better accommodate employees' needs and leverage the potential of true human-robot collaboration.

    Structuring AI Teammate Communication: An Exploration of AI's Communication Strategies in Human-AI Teams

    In the past decades, artificial intelligence (AI) has been implemented in various domains, such as healthcare and the automotive industry, to support humans in their work. This has drawn increasing attention to human-AI teaming, in which AI collaborates closely with humans as a teammate. An AI teammate is expected to coordinate with humans by sharing task-related information, predicting other teammates' behaviors, and progressing team tasks accordingly. To carry out these team activities effectively, AI teammates must communicate with humans, for example by sharing updates and checking team progress. Even though communication is a core element of teamwork that helps achieve effective coordination, how to design and structure human-AI communication in teaming environments remains unclear. Given the context-dependent nature of communication, research on human-AI team communication needs to narrow down and focus on specific communication elements/components, such as the proactivity of communication and communication content. To that end, this dissertation explores how AI teammates' communication should be structured by modifying communication components across three studies, each of which examines a critical component of effective AI communication: (1) communication proactivity, (2) communication content (explanation), and (3) communication approach (verbal vs. non-verbal). Together, these studies provide insights into how AI teammates' communication can be integrated into teamwork and how to design AI teammate communication in human-AI teaming.

    Study 1 explores an important communication element, communication proactivity, and its impact on team processes and team performance. Communication proactivity here refers to whether an AI teammate proactively communicates with human teammates, i.e., proactively pushes information to them. Experimental analysis shows that an AI teammate's proactive communication plays a crucial role in shaping human perceptions, such as perceived teammate performance and satisfaction with the teammate. Importantly, as the human and the AI collaborate more, teams with a non-proactively communicating AI teammate increase team performance more than teams with a proactively communicating AI. This study identifies the positive impact of proactive AI communication at the initial stage of task coordination, as well as the potential need for flexibility in AI communication proactivity (i.e., once human and AI teammates' coordination pattern has formed, the AI can communicate non-proactively).

    Study 2 examines communication content by focusing on AI explanations and their impact on human perceptions in teaming environments. Results indicate that AI explanations, as part of communication content, do not always positively impact human trust in human-AI teaming. Instead, the impact of AI explanations on human perceptions depends on the specific collaboration scenario: explanations facilitate trust in the AI teammate when the AI explains why it disobeyed a human's order, but hinder trust when the AI explains why it lied to humans. In addition, an AI that explained why it ignored a human teammate's injury was perceived as more effective than an AI that provided no such explanation. These findings emphasize the context-dependent character of AI communication content, with a focus on AI's explanations of its actions.

    Study 3 investigates the AI's communication approach, manipulated as verbal vs. non-verbal communication. Results indicate that AI teammates' verbal/non-verbal communication does not impact human trust in the AI teammate, but does help humans maintain situation awareness during task coordination. In addition, an AI using non-verbal communication is perceived as having lower communication quality and lower performance. Importantly, an AI with non-verbal communication yields better team performance in human-human-AI teams than in human-AI-AI teams, whereas an AI with verbal communication yields better team performance in human-AI-AI teams than in human-human-AI teams. Together, these three studies address multiple research gaps in human-AI team communication and provide a holistic view of the design and structure of AI communication by examining three specific aspects of communication in human-AI teaming. In addition, each study proposes practical design implications for AI communication in human-AI teams, which will assist AI designers and developers in creating better AI teammates that support humans in teaming environments.

    Human-Machine Communication: Complete Volume. Volume 6

    This is the complete volume of HMC Volume 6.

    Human-Machine Communication: Complete Volume. Volume 4

    This is the complete volume of HMC Volume 4.