42 research outputs found
Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction
Meyer zu Borgsen S. Nonverbal Communication During Human-Robot Object Handover. Improving Predictability of Humanoid Robots by Gaze and Gestures in Close Interaction. Bielefeld: Universität Bielefeld; 2020. This doctoral thesis investigates the influence of nonverbal communication on human-robot object handover. Handing objects to one another is an everyday activity in which two individuals cooperatively interact. Such close interactions incorporate a great deal of nonverbal communication in order to create alignment in space and time. Understanding these communication cues and transferring them to robots is becoming increasingly important as, for example, service robots are expected to interact closely with humans in the near future. Their tasks often include delivering and taking objects; handover scenarios therefore play an important role in human-robot interaction. Much work in this field of research focuses on the speed, accuracy, and predictability of the robot's movement during object handover. Still, robots need to be enabled to interact closely with naive users, not only experts. In this work I present how nonverbal communication can be implemented in robots to facilitate smooth handovers. I conducted a study on people with different levels of experience exchanging objects with a humanoid robot. It became clear that especially users with little prior experience interacting with robots rely heavily on the communication cues they know from former interactions with humans. I added different gestures with the second arm, not directly involved in the transfer, to analyze their influence on synchronization, predictability, and human acceptance. Handing over an object follows a distinctive movement trajectory that serves not only to bring the object or hand to the position of exchange but also to socially signal the intention to exchange an object. Another common type of nonverbal communication is gaze. It allows guessing the focus of attention of an interaction partner and thus helps to predict the next action. To evaluate handover performance between human and robot, I applied the developed concepts to the humanoid robot Meka M1. By adding the humanoid robot head named Floka Head to the system, I created the Floka humanoid and implemented gaze strategies that aim to increase predictability and user comfort. This thesis contributes to the field of human-robot object handover by presenting study outcomes and concepts along with an implementation of improved software modules, resulting in a fully functional object-handing humanoid robot, from perception and prediction capabilities to behaviors enhanced by features of nonverbal communication.
Making intelligent systems team players: Overview for designers
This report is a guide and companion to NASA Technical Memorandum 104738, 'Making Intelligent Systems Team Players,' Volumes 1 and 2. The first two volumes of the Technical Memorandum provide comprehensive guidance to designers of intelligent systems for real-time fault management of space systems, with the objective of achieving more effective human interaction. This report provides an analysis of the material discussed in the Technical Memorandum. It clarifies what it means for an intelligent system to be a team player, and how such systems are designed. It identifies significant intelligent system design problems and their impacts on reliability and usability. Where common design practice is not effective in solving these problems, we make recommendations. Finally, we summarize the main points of the Technical Memorandum and identify where to look for further information.
Graph-based Reinforcement Learning meets Mixed Integer Programs: An application to 3D robot assembly discovery
Robot assembly discovery (RAD) is a challenging problem that lives at the intersection of resource allocation and motion planning. The goal is to combine a predefined set of objects to form something new while considering task execution with the robot-in-the-loop. In this work, we tackle the problem of building arbitrary, predefined target structures entirely from scratch using a set of Tetris-like building blocks and a robotic manipulator. Our novel hierarchical approach aims at efficiently decomposing the overall task into three feasible levels that benefit mutually from each other. On the high level, we run a classical mixed-integer program for global optimization of block-type selection and the blocks' final poses to recreate the desired shape. Its output is then exploited to efficiently guide the exploration of an underlying reinforcement learning (RL) policy. This RL policy draws its generalization properties from a flexible graph-based representation that is learned through Q-learning and can be refined with search. Moreover, it accounts for the necessary conditions of structural stability and robotic feasibility that cannot be effectively reflected in the previous layer. Lastly, a grasp and motion planner transforms the desired assembly commands into robot joint movements. We demonstrate our proposed method's performance on a set of competitive simulated RAD environments, showcase real-world transfer, and report performance and robustness gains compared to an unstructured end-to-end approach. Videos are available at https://sites.google.com/view/rl-meets-milp
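The high-level block-selection step described in the abstract can be illustrated with a toy stand-in: choosing block placements that exactly recreate a small target shape. The paper solves this with a mixed-integer program over block types and poses; the following hypothetical sketch instead uses backtracking exact cover on a 2x4 grid with 1x2 "domino" blocks, purely to show the flavor of the selection problem.

```python
# Toy stand-in for the high-level assembly step: pick block placements
# that exactly recreate a small target shape. The paper's actual method is
# a mixed-integer program (plus an RL layer); this is a simplified sketch.

def placements(shape, rows, cols):
    """All axis-aligned positions of a block given as relative cells."""
    out = []
    for r in range(rows):
        for c in range(cols):
            cells = [(r + dr, c + dc) for dr, dc in shape]
            if all(0 <= rr < rows and 0 <= cc < cols for rr, cc in cells):
                out.append(frozenset(cells))
    return out

def cover(target, options, chosen):
    """Fill every cell of `target` with disjoint placements, or return None."""
    if not target:
        return chosen
    cell = min(target)  # deterministic pivot cell
    for opt in options:
        if cell in opt and opt <= target:
            result = cover(target - opt, options, chosen + [opt])
            if result is not None:
                return result
    return None

ROWS, COLS = 2, 4
target = frozenset((r, c) for r in range(ROWS) for c in range(COLS))
blocks = [[(0, 0), (0, 1)], [(0, 0), (1, 0)]]  # horizontal / vertical domino
options = [p for b in blocks for p in placements(b, ROWS, COLS)]
solution = cover(target, options, [])
print(len(solution))  # -> 4: the 2x4 grid is covered by four dominoes
```

The real system additionally scores placements for stability and robotic feasibility in the RL layer, which this sketch omits entirely.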
Verified synthesis of optimal safety controllers for human-robot collaboration
We present a tool-supported approach for the synthesis, verification, and validation of the control software responsible for the safety of the human-robot interaction in manufacturing processes that use collaborative robots. In human-robot collaboration, software-based safety controllers are used to improve operational safety, e.g., by triggering shutdown mechanisms or emergency stops to avoid accidents. Complex robotic tasks and increasingly close human-robot interaction pose new challenges to controller developers and certification authorities. Key among these challenges is the need to assure the correctness of safety controllers under explicit (and preferably weak) assumptions. Our controller synthesis, verification, and validation approach is informed by the process, risk analysis, and relevant safety regulations for the target application. Controllers are selected from a design space of feasible controllers according to a set of optimality criteria, are formally verified against correctness criteria, and are translated into executable code and validated in a digital twin. The resulting controller can detect the occurrence of hazards, move the process into a safe state, and, in certain circumstances, return the process to an operational state from which it can resume its original task. We show the effectiveness of our software engineering approach through a case study involving the development of a safety controller for a manufacturing work cell equipped with a collaborative robot.
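The hazard-detection / safe-state / resume behavior described above can be pictured as a small finite-state machine. The sketch below is hypothetical: the state names and events are invented for illustration, and the paper synthesizes and formally verifies such controllers from a design space rather than hand-coding them.

```python
# Minimal hand-written sketch of a safety controller for a collaborative
# work cell, modeled as a finite-state machine. States and events are
# hypothetical; the paper *synthesizes* and verifies such controllers.

RUNNING, SAFE_STOP, EMERGENCY = "running", "safe_stop", "emergency"

class SafetyController:
    def __init__(self):
        self.state = RUNNING

    def on_event(self, event):
        if event == "human_too_close" and self.state == RUNNING:
            self.state = SAFE_STOP   # hazard detected: move process to a safe state
        elif event == "collision":
            self.state = EMERGENCY   # emergency stop: no automatic resume
        elif event == "zone_clear" and self.state == SAFE_STOP:
            self.state = RUNNING     # conditions allow resuming the original task
        return self.state

ctrl = SafetyController()
print(ctrl.on_event("human_too_close"))  # safe_stop
print(ctrl.on_event("zone_clear"))       # running
print(ctrl.on_event("collision"))        # emergency
```

Note the deliberate asymmetry: the controller returns to operation from a safe stop but not from an emergency stop, mirroring the "in certain circumstances" qualifier in the abstract.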
Development of an Evaluation Model for Social AI Personal Assistants in the Early Stage of Product Development
Doctoral dissertation, Department of Industrial Engineering, College of Engineering, Seoul National University Graduate School; February 2022. This dissertation aims to propose a user evaluation model for social AI personal assistants in the early stage of product development. Due to the rapid development of personal devices, the data generated from them are increasing explosively, and various personal AI services and products using these data are being launched. However, compared to the interest in AI personal assistant products, the market is still immature. In this situation, it is important to understand consumer expectations and perceptions deeply and to develop a product that can satisfy them, so that the product spreads quickly and general consumers can accept it easily. Accordingly, this dissertation proposes and validates a user evaluation model that can be used in the early stage of product development.
Before proposing this methodology, Chapter 2 investigates the main characteristics of social AI personal assistants, the importance of user evaluation in the early stage of product development, and the limitations of existing user evaluation models. Although various technology acceptance models and evaluation models for social AI personal assistant products have been proposed, evaluation models that can be applied in the initial stage of product development remain insufficient. Moreover, commonly used evaluation measures for assessing hedonic value turned out to be much scarcer than measures for utilitarian value. These observations serve as the starting points of this dissertation.
In Chapter 3, the evaluation measures used in previous studies related to social AI personal assistants are collected and carefully reviewed. Through a systematic review of 40 studies, the evaluation measures used in the past and the limitations of related research are investigated. The review shows that developing a prototype for evaluation is rarely easy, so most studies made use of products that had already been commercialized. In addition, all evaluation items used in previous studies are collected and used as the basis for the evaluation model proposed later. The analysis indicates that, given the purpose of a social AI personal assistant, its role in supporting the user emotionally through social interaction is important, yet commonly used evaluation measures related to hedonic value are still insufficient.
In Chapter 4, evaluation measures that can be used in the initial stage of product development for social AI personal assistants are selected. The selected measures are used to evaluate three types of social robots, and relationships among evaluation factors are derived from this evaluation. A process is proposed to gather various opinions related to social robots and to derive evaluation items, and a case study is conducted in which a total of 230 people evaluate concept images of three social robots using the finally selected evaluation items. The results show that consumers' attitudes toward the products are built through a utilitarian dimension and a hedonic dimension. In addition, there are positive relationships between ease of use and utility in the utilitarian dimension, and among aesthetic pleasure, attractiveness of personality, and affective value in the hedonic dimension. Moreover, the evaluation model derived in this study shows superior explanatory power compared to previously proposed technology acceptance models.
In Chapter 5, the model is validated again by applying the evaluation measures and the relationships among evaluation factors derived in Chapter 4 to other products. 100 UX experts with expertise in the field of social AI personal assistants and 100 users who often use voice assistant services watched two concept videos of a voice assistant service designed to help users during mobile phone onboarding, and evaluated these concepts. Since the evaluation results show no significant difference between the UX expert and real user groups, a structural equation model analysis is conducted using the combined data from both groups. The results are similar to those in Chapter 4, suggesting that the model can be generalized to social AI personal assistant products and applied in future research.
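The kind of relationship the dissertation tests between evaluation factors, for example the positive link between ease of use and utility, can be illustrated in miniature with a plain Pearson correlation. The scores below are synthetic and the function is a generic textbook formula; the dissertation itself fits a structural equation model to real survey data.

```python
# Toy illustration of checking a relationship between two evaluation
# factors (perceived ease of use vs. perceived utility) via Pearson
# correlation. Scores are synthetic 5-point-scale respondent means; the
# dissertation uses structural equation modeling on real survey data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ease_of_use = [4.1, 3.5, 4.8, 2.9, 3.9, 4.4]  # synthetic scores
utility     = [4.0, 3.2, 4.6, 3.0, 3.7, 4.5]
r = pearson(ease_of_use, utility)
print(round(r, 2))  # strongly positive for these synthetic scores
```

A full replication would instead estimate path coefficients between latent factors, which requires an SEM library and the original questionnaire data.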
This dissertation proposes evaluation measures, and relationships among evaluation factors, that can be applied when conducting user evaluation in the initial stage of social AI personal assistant development. In addition, case studies using social AI personal assistant products and services were conducted to validate them. With these findings, researchers who need to conduct user evaluations to clarify product concepts in the early stages of product development should be able to apply the evaluation measures effectively. The significance of this dissertation will become clearer if further research compares finished social AI personal assistant products with the video-type stimuli used in the early stage of development.
Chapter 1 Introduction 1
1.1 Background and motivation 1
1.2 Research objectives 5
1.3 Dissertation outline 7
Chapter 2 Literature review 9
2.1 Social AI personal assistant 9
2.2 User centered design process 13
2.3 Technology acceptance models 16
2.4 Evaluation measures for social AI personal assistant 22
2.5 Existing evaluation methodologies for social AI personal assistant 27
Chapter 3 Collection of existing evaluation measures for social AI personal assistants 40
3.1 Background 40
3.2 Methodology 43
3.3 Result 51
3.4 Discussion 60
Chapter 4 Development of an evaluation model for social AI personal assistants 63
4.1 Background 63
4.2 Methodology 66
4.2.1 Developing evaluation measures for social AI personal assistants 68
4.2.2 Conducting user evaluation for social robots 74
4.3 Result 77
4.3.1 Descriptive statistics 77
4.3.2 Hypothesis development and testing 80
4.3.3 Comparison with existing technology acceptance models 88
4.4 Discussion 93
Chapter 5 Verification of an evaluation model with voice assistant services 95
5.1 Background 95
5.2 Methodology 98
5.2.1 Design of evaluation questionnaires for voice assistant services 99
5.2.2 Validation of relationship among evaluation factors 103
5.3 Result 108
5.3.1 Descriptive statistics 108
5.3.2 Hypothesis development and testing 111
5.3.3 Comparison with existing technology acceptance models 118
5.4 Discussion 121
Chapter 6 Conclusion 124
6.1 Summary of this study 124
6.2 Contribution of this study 126
6.3 Limitation and future work 128
Bibliography 129
Appendix A. Evaluation measures for social AI personal assistant collected in Chapter 4 146
Appendix B. Questionnaires for evaluation of social robots 154
Appendix C. Questionnaires for evaluation of voice assistant service 166
A Corroborative Approach to Verification and Validation of Human-Robot Teams
We present an approach for the verification and validation (V&V) of robot assistants in the context of human-robot interactions (HRI), to demonstrate their trustworthiness through corroborative evidence of their safety and functional correctness. Key challenges include the complex and unpredictable nature of the real world in which assistant and service robots operate, the limitations of available V&V techniques when used individually, and the consequent lack of confidence in the V&V results. Our approach, called corroborative V&V, addresses these challenges by combining several different V&V techniques; in this paper we use formal verification (model checking), simulation-based testing, and user validation in experiments with a real robot. We demonstrate our corroborative V&V approach through a handover task, the most critical part of a complex cooperative manufacturing scenario, for which we propose safety and liveness requirements to verify and validate. We construct formal models, simulations, and an experimental test rig for the HRI. To capture requirements we use temporal logic properties, assertion checkers, and textual descriptions. This combination of approaches allows V&V of the HRI task at different levels of modelling detail and thoroughness of exploration, thus overcoming the individual limitations of each technique. Should the resulting V&V evidence present discrepancies, an iterative process across the different techniques takes place, refining and improving the assets (i.e., the system and requirement models) so that they represent the HRI task more faithfully, until corroboration between the techniques is achieved. Corroborative V&V therefore affords a systematic approach to 'meta-V&V,' in which different V&V techniques can be used to corroborate and check one another, increasing the level of certainty in the results of V&V.
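One of the techniques combined above, assertion checking over simulated traces, can be sketched concretely. The property and the event encoding below are illustrative inventions, not the paper's actual requirements: a safety check that the robot never releases the object before the human has grasped it.

```python
# Sketch of a trace-level assertion checker, one of the V&V techniques the
# corroborative approach combines. Property and event names are
# hypothetical, chosen to resemble a handover safety requirement.

def robot_never_releases_early(trace):
    """Safety check: no 'release' event may precede 'human_grasp'."""
    human_has_object = False
    for event in trace:
        if event == "human_grasp":
            human_has_object = True
        elif event == "release" and not human_has_object:
            return False  # violation: released before the human grasped
    return True

good = ["approach", "offer", "human_grasp", "release", "retreat"]
bad  = ["approach", "offer", "release", "human_grasp"]
print(robot_never_releases_early(good), robot_never_releases_early(bad))  # True False
```

In the corroborative setting, the same requirement would also be stated as a temporal logic property for the model checker and validated in user experiments, so the three techniques can cross-check one another.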