Integrating Flow Theory and Adaptive Robot Roles: A Conceptual Model of Dynamic Robot Role Adaptation for the Enhanced Flow Experience in Long-term Multi-person Human-Robot Interactions
In this paper, we introduce a novel conceptual model for a robot's behavioral
adaptation in its long-term interaction with humans, integrating dynamic robot
role adaptation with principles of flow experience from psychology. This
conceptualization introduces a hierarchical interaction objective grounded in
the flow experience, serving as the overarching adaptation goal for the robot.
This objective intertwines both cognitive and affective sub-objectives and
incorporates individual and group-level human factors. The dynamic role
adaptation approach is a cornerstone of our model, highlighting the robot's
ability to fluidly adapt its support roles - from leader to follower - with the
aim of maintaining equilibrium between activity challenge and user skill,
thereby fostering the user's optimal flow experiences. Moreover, this work
delves into a comprehensive exploration of the limitations and potential
applications of our proposed conceptualization. Our model places a particular
emphasis on the multi-person HRI paradigm, a dimension of HRI that is both
under-explored and challenging. In doing so, we aspire to extend the
applicability and relevance of our conceptualization within the HRI field,
contributing to the future development of adaptive social robots capable of
sustaining long-term interactions with humans.
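The challenge-skill equilibrium at the core of this model can be illustrated with a toy decision rule. This is a minimal sketch under assumed normalized inputs; the thresholds, role names, and function are hypothetical and not part of the paper's formal conceptualization:

```python
def select_role(challenge: float, skill: float, band: float = 0.2) -> str:
    """Pick a support role that nudges the user back toward flow.

    challenge, skill: normalized estimates in [0, 1] (assumed inputs).
    band: half-width of the flow channel around challenge == skill.
    """
    gap = challenge - skill
    if gap > band:       # task too hard, user trending toward anxiety: robot leads
        return "leader"
    if gap < -band:      # task too easy, user trending toward boredom: robot steps back
        return "follower"
    return "peer"        # inside the flow channel: keep the current balance
```

For a group, the same rule could be applied to aggregated estimates (e.g., the group's mean skill), which is where the model's individual- and group-level human factors would enter.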
Selecting Metrics to Evaluate Human Supervisory Control Applications
The goal of this research is to develop a methodology to select supervisory control metrics. This
methodology is based on cost-benefit analyses and generic metric classes. In the context of this research,
a metric class is defined as the set of metrics that quantify a certain aspect or component of a system.
Generic metric classes are developed because metrics are mission-specific, but metric classes are
generalizable across different missions. Cost-benefit analyses are utilized because each metric set has
advantages, limitations, and costs; thus, the added value of different sets for a given context can be
calculated to select the set that maximizes value and minimizes costs. This report summarizes the
findings of the first part of this research effort that has focused on developing a supervisory control metric
taxonomy that defines generic metric classes and categorizes existing metrics. Future research will focus
on applying cost-benefit analysis methodologies to metric selection.
Five main metric classes have been identified that apply to supervisory control teams composed
of humans and autonomous platforms: mission effectiveness, autonomous platform behavior efficiency,
human behavior efficiency, human behavior precursors, and collaborative metrics. Mission effectiveness
measures how well the mission goals are achieved. Autonomous platform and human behavior efficiency
measure the actions and decisions made by the humans and the automation that compose the team.
Human behavior precursors measure human initial state, including certain attitudes and cognitive
constructs that can be the cause of and drive a given behavior. Collaborative metrics address three
different aspects of collaboration: collaboration between the human and the autonomous platform they are
controlling, collaboration among humans that compose the team, and autonomous collaboration among
platforms. These five metric classes have been populated with metrics and measuring techniques from
the existing literature.
Which specific metrics should be used to evaluate a system will depend on many factors, but as a
rule of thumb, we propose that, at a minimum, one metric from each class should be used to provide a
multi-dimensional assessment of the human-automation team. To determine the impact of not following such a principled approach on our own research, we evaluated recent large-scale
supervisory control experiments conducted in the MIT Humans and Automation Laboratory. The results
show that prior to adopting this metric classification approach, we were fairly consistent in measuring
mission effectiveness and human behavior through such metrics as reaction times and decision
accuracies. However, despite our supervisory control focus, we were remiss in gathering attention
allocation metrics and collaboration metrics, and we often gathered too many correlated metrics that were
redundant and wasteful. This meta-analysis of our experimental shortcomings reflects those in the general
research population in that we tended to gravitate to popular metrics that are relatively easy to gather,
without a clear understanding of exactly what aspect of the systems we were measuring and how the
various metrics informed an overall research question.
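The rule of thumb above (at least one metric from each class) lends itself to a simple coverage check. A minimal sketch, assuming a hand-built mapping from the five classes to example metrics from the taxonomy; the class keys and metric names here are illustrative, not the report's actual catalog:

```python
# Hypothetical mapping: each of the five metric classes to example metrics.
METRIC_CLASSES = {
    "mission_effectiveness": {"mission score", "targets found"},
    "autonomous_platform_behavior": {"path efficiency", "replan count"},
    "human_behavior": {"reaction time", "decision accuracy"},
    "human_behavior_precursors": {"workload", "situation awareness"},
    "collaborative": {"team coordination", "shared situation awareness"},
}

def missing_classes(chosen: set) -> list:
    """Return the metric classes not covered by a chosen metric set."""
    return [cls for cls, members in METRIC_CLASSES.items()
            if not (chosen & members)]

# A set of popular, easy-to-gather metrics leaves three classes uncovered:
gaps = missing_classes({"reaction time", "decision accuracy", "mission score"})
```

A check like this flags exactly the kind of gap the meta-analysis describes: heavy coverage of mission effectiveness and human behavior, with attention-allocation and collaboration metrics missing.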
From Artificial Intelligence (AI) to Intelligence Augmentation (IA): Design Principles, Potential Risks, and Emerging Issues
We typically think of artificial intelligence (AI) as focusing on empowering machines with human capabilities so that they can function on their own, but, in truth, much of AI focuses on intelligence augmentation (IA), which is to augment human capabilities. We propose a framework for designing intelligence augmentation (IA) systems that addresses six central questions about IA: why, what, who/whom, how, when, and where. To address the how aspect, we introduce four guiding principles: simplification, interpretability, human-centeredness, and ethics. The what aspect includes an IA architecture that goes beyond the direct interactions between humans and machines by introducing their indirect relationships through data and domain. The architecture also points to the directions for operationalizing the IA design simplification principle. We further identify some potential risks and emerging issues in IA design and development to suggest new questions for future IA research and to foster its positive impact on humanity.
Advances in Human-Robot Interaction
Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.
A theoretical and practical approach to a persuasive agent model for change behaviour in oral care and hygiene
There is an increased use of persuasive agents in behaviour change interventions due to the agent's features of sociability, reactivity, autonomy, and proactivity. However, many interventions have been unsuccessful, particularly in the domain of oral care. Psychological reactance has been identified as one of the major reasons for these
unsuccessful behaviour change interventions. This study proposes a formal persuasive agent model that leads to psychological reactance reduction in order to achieve an improved behaviour change intervention in oral care and hygiene. Agent-based
simulation methodology is adopted for the development of the proposed model. Evaluation of the model was conducted in two phases: verification and validation. The verification process involves simulation trace and stability analysis. The validation, in turn, was carried out using a user-centred approach, by developing an agent-based application based on the belief-desire-intention architecture. This study
contributes an agent model made up of interrelated cognitive and behavioural factors. Furthermore, the simulation traces provide insights into the interactions among the identified factors, clarifying their roles in behaviour change intervention. The simulation result showed that as time increases, psychological reactance decreases towards zero. Similarly, the model validation result showed that the percentage of respondents who experienced psychological reactance towards behaviour change in oral care and hygiene was reduced from 100 percent to 3 percent. The contribution made in this thesis would enable agent application and behaviour change intervention designers to make scientific reasoning and predictions. Likewise, it provides a guideline for software designers on the development of agent-based applications that minimize psychological reactance.
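The reported simulation trace (reactance decaying towards zero over time) can be mimicked with a one-line proportional-decay update. This is a sketch only: the update rule, decay rate, and function name are invented for illustration and are not the thesis's formal agent model:

```python
def simulate_reactance(r0: float = 1.0, decay: float = 0.3, steps: int = 20) -> list:
    """Trace psychological reactance over time under proportional decay."""
    trace = [r0]
    for _ in range(steps):
        trace.append(trace[-1] * (1 - decay))  # each step removes a fixed fraction
    return trace

trace = simulate_reactance()
# the trace decreases monotonically toward zero, as in the reported result
```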
Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload
Explanations given by automation are often used to promote automation
adoption. However, it remains unclear whether explanations promote acceptance
of automated vehicles (AVs). In this study, we conducted a within-subject
experiment in a driving simulator with 32 participants, using four different
conditions: (1) no explanation, (2) explanation given before the AV acted, (3) explanation given after the AV acted, and (4) the option for the driver to approve or disapprove the AV's action after hearing the explanation. We
examined four AV outcomes: trust, preference for AV, anxiety and mental
workload. Results suggest that explanations provided before an AV acted were
associated with higher trust in and preference for the AV, but there was no
difference in anxiety and workload. These results have important implications
for the adoption of AVs.
Mitigating User Frustration through Adaptive Feedback based on Human-Automation Etiquette Strategies
The objective of this study is to investigate the effects of feedback and user frustration in human-computer interaction (HCI) and examine how to mitigate user frustration through feedback based on human-automation etiquette strategies. User frustration in HCI indicates a negative feeling that occurs when efforts to achieve a goal are impeded. User frustration impacts not only the communication with the computer itself, but also productivity, learning, and cognitive workload. Affect-aware systems have been studied to recognize user emotions and respond in different ways. Affect-aware systems need to be adaptive systems that change their behavior depending on users' emotions. Adaptive systems have four categories of adaptation. Previous research has focused primarily on function allocation and, to a lesser extent, information content and task scheduling. However, the fourth approach, changing the interaction style, is the least explored because of the interplay of human factors considerations. Three interlinked studies were conducted to investigate the consequences of user frustration and explore mitigation techniques. Study 1 showed that delayed feedback from the system led to higher user frustration, anger, cognitive workload, and physiological arousal. In addition, delayed feedback decreased task performance and system usability in a human-robot interaction (HRI) context. Study 2 evaluated a possible approach of mitigating user frustration by applying human-human etiquette strategies in a tutoring context. The results of Study 2 showed that changing etiquette strategies led to changes in performance, motivation, confidence, and satisfaction. The most effective etiquette strategies changed when users were frustrated. Based on these results, an adaptive tutoring system prototype was developed and evaluated in Study 3.
By utilizing a rule set derived from Study 2, the tutor was able to apply different automation etiquette strategies to target and improve motivation, confidence, satisfaction, and performance under different levels of user frustration. This work establishes that changing the interaction style alone of a computer tutor can affect a user's motivation, confidence, satisfaction, and performance. Furthermore, the beneficial effect of changing etiquette strategies is greater when users are frustrated. This work provides a basis for future work to develop affect-aware adaptive systems to mitigate user frustration.
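A rule set of the kind derived from Study 2 could take the following shape, mapping an estimated frustration level to an etiquette strategy. The strategy names follow politeness-theory categories commonly used in human-automation etiquette work, but the thresholds, mapping, and function are hypothetical, not the study's actual rules:

```python
def pick_etiquette_strategy(frustration: float) -> str:
    """Map a normalized frustration estimate in [0, 1] to a feedback style."""
    if frustration > 0.6:
        return "positive politeness"   # warm, encouraging feedback for frustrated users
    if frustration > 0.3:
        return "negative politeness"   # indirect, face-saving suggestions
    return "bald on-record"            # direct instruction when the user is calm
```

An adaptive tutor would re-evaluate this rule as its frustration estimate changes, switching styles mid-session rather than committing to one.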
Do Androids Dream of Bad News?
Breaking bad news is one of the toughest things to do in any field dealing with client care. As automation and technology increasingly interweave with human experience, there is growing concern about whether automated agents ("AAs") would be adequate to perform such a complex emotional act. In this paper, I draw from the literature in psychology and computer science to understand how individuals might react to AAs and address some of the strengths and limitations of AAs. I raise several legal and empirical issues that future designers and users of AAs must consider, including disclosure of and liability for an AA's presence.
Distributed Dynamic Hierarchical Task Assignment for Human-Robot Teams
This work implements a joint task architecture for human-robot collaborative task execution using a hierarchical task planner. This architecture allows humans and robots to work together as teammates in the same environment while following several task constraints: 1) sequential order, 2) non-sequential, and 3) alternative execution constraints. Both the robot and the human are aware of each other's current state and allocate their next task based on the task tree. On-table tasks, such as setting up a tea table or playing a color sequence matching game, validate the task architecture. The robot maintains an updated representation of its human teammate's task. Using this knowledge, it is also able to continuously detect the human teammate's intention towards each sub-task and coordinate with the teammate. While performing a joint task, there can be situations in which tasks overlap or do not overlap. We designed a dialogue-based conversation between humans and robots to resolve conflict in the case of overlapping tasks. Evaluating the human-robot task architecture is the next concern after validating it. Trust and trustworthiness are some of the most critical metrics to explore. A study was conducted between humans and robots to create a homophily situation. Homophily occurs when a person feels biased towards another person because of social similarities. We conducted this study to determine whether humans can form a homophilic relationship with robots and whether there is a connection between homophily and trust. We found a correlation between homophily and trust in human-robot interactions. Furthermore, we designed a pipeline by which the robot learns a task by observing the human teammate's hand movement while conversing. The robot then constructs the tree by itself using a GA learning framework.
This removes the need for a programmer to manually specify, revise, or update the task tree each time, making the architecture more flexible, realistic, efficient, and dynamic. Additionally, our architecture allows the robot to comprehend the context of a situation by conversing with a human teammate and observing the surroundings. The robot can find a link between the context of the situation and the surrounding objects using an ontology approach and can perform the desired task accordingly. We therefore propose a human-robot distributed joint task management architecture that addresses design, improvement, and evaluation under multiple constraints.
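The three constraint types named above can be made concrete with a small task-tree node. The structure, method, and example task names below are an illustrative reconstruction, not the dissertation's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TaskNode:
    name: str
    constraint: str = "sequential"   # "sequential" | "non_sequential" | "alternative"
    children: list = field(default_factory=list)

    def next_candidates(self, done: set) -> list:
        """Sub-tasks an agent may claim next, given completed task names."""
        pending = [c.name for c in self.children if c.name not in done]
        if self.constraint == "sequential":
            return pending[:1]       # only the next task in the fixed order
        if self.constraint == "alternative" and len(pending) < len(self.children):
            return []                # one alternative already taken; the rest are void
        return pending               # non-sequential: any pending task, any order

# Sequential tea-table example: after the cups are placed, only pouring is open.
tea = TaskNode("set_tea_table", "sequential",
               [TaskNode("place_cups"), TaskNode("pour_tea"), TaskNode("add_sugar")])
```

With a shared `done` set, both the robot and the human can query the same tree to allocate their next task, which is the coordination pattern the architecture describes.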
A Survey of Multi-Agent Human-Robot Interaction Systems
This article presents a survey of literature in the area of Human-Robot
Interaction (HRI), specifically on systems containing more than two agents
(i.e., having multiple humans and/or multiple robots). We identify three core
aspects of "Multi-agent" HRI systems that are useful for understanding how
these systems differ from dyadic systems and from one another. These are the
Team structure, Interaction style among agents, and the system's Computational
characteristics. Under these core aspects, we present five attributes of HRI
systems, namely Team size, Team composition, Interaction model, Communication
modalities, and Robot control. These attributes are used to characterize and
distinguish one system from another. We populate resulting categories with
examples from recent literature along with a brief discussion of their
applications and analyze how these attributes differ from the case of dyadic
human-robot systems. We summarize key observations from the current literature,
and identify challenges and promising areas for future research in this domain.
In order to realize the vision of robots being part of the society and
interacting seamlessly with humans, there is a need to expand research on
multi-human -- multi-robot systems. Not only do these systems require
coordination among several agents, they also involve multi-agent and indirect
interactions which are absent from dyadic HRI systems. Adding multiple agents
in HRI systems requires advanced interaction schemes, behavior understanding
and control methods to allow natural interactions among humans and robots. In
addition, research on human behavioral understanding in mixed human-robot teams
also requires more attention. This will help formulate and implement effective
robot control policies in HRI systems with large numbers of heterogeneous
robots and humans; a team composition reflecting many real-world scenarios.
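The five attributes the survey uses to characterize systems could be captured as a simple record for coding surveyed systems. The class, field values, and `is_dyadic` helper below are illustrative, not the article's actual coding scheme:

```python
from dataclasses import dataclass

@dataclass
class HRISystem:
    team_size: int                  # total number of agents in the system
    team_composition: str           # e.g. "multi-human multi-robot"
    interaction_model: str          # e.g. "peer-to-peer", "supervisory"
    communication_modalities: tuple  # e.g. ("speech", "gesture")
    robot_control: str              # e.g. "centralized", "distributed"

    @property
    def is_dyadic(self) -> bool:
        """Dyadic systems (one human, one robot) are the baseline the survey contrasts."""
        return self.team_size == 2

system = HRISystem(5, "multi-human multi-robot", "peer-to-peer",
                   ("speech", "gesture"), "distributed")
```

Coding each surveyed system this way makes the article's comparisons (e.g., how multi-agent systems differ from dyadic ones) straightforward filters over the records.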