Metrics to Evaluate Human Teaching Engagement From a Robot's Point of View
This thesis was motivated by the study of how robots can be taught by humans, with an
emphasis on allowing people without programming skills to teach robots. Its focus was to
investigate what criteria could or should be used by a robot to evaluate whether a human
teacher is (or could potentially be) a good teacher in robot learning by demonstration. In
effect, the aim was to choose the teacher who can maximize the benefit to the robot when it
learns by imitation/demonstration.
The study approached this topic by taking a snapshot of the technology at a point in time,
to see whether a representative example of research-laboratory robot technology is capable
of assessing teaching quality. Within this snapshot, the study evaluated how humans observe
teaching quality, in an attempt to establish measurement metrics that can be translated into
rules or algorithms useful from a robot's point of view.
To evaluate teaching quality, the study looked at the teacher-student relationship from a
human-human interaction perspective. Two factors were considered important in defining a
good teacher: engagement and immediacy. A further literature review was carried out on the
detailed elements of engagement and immediacy. The study also explored physical effort as a
possible metric for measuring a teacher's level of engagement.
An exploratory experiment was conducted to evaluate which modality participants prefer
when teaching a robot that can be taught by voice, gesture demonstration, or physical
manipulation. The findings suggested that participants had no preference among the
modalities in terms of the human effort required to complete the task. However, there was a
significant difference in how much participants enjoyed each input modality, and a marginal
difference in the robot's perceived ability to imitate.
A main experiment was then conducted to study the detailed elements that might be used by a
robot to identify a “good” teacher. It was carried out as two sub-experiments: the first
recorded the teachers' activities, and the second analysed how humans evaluate perceived
engagement when assessing another human teaching a robot. The results suggested that when
humans teach a robot (human-robot interaction), human evaluators also look for some of the
immediacy cues that occur in human-human interaction when judging engagement.
R&D for computational cognitive and social models: foundations for model evaluation through verification and validation (final LDRD report).
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research-oriented, others are intended to support or provide insight for people involved in high-consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve?

In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V toward a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline.

How to achieve this accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that must be addressed as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high-consequence decision-making.