62 research outputs found
Analysis of human-robot spatial behaviour applying a qualitative trajectory calculus
The analysis and understanding of human-robot joint spatial behaviour (JSB), such as guiding, approaching, departing, or coordinating movements in narrow spaces, together with its communicative and dynamic aspects, are key requirements on the road towards more intuitive interaction, safe encounters, and appealing living with mobile robots. This endeavour demands appropriate models and methodologies to represent JSB and facilitate its analysis. In this paper, we adopt a qualitative trajectory calculus (QTC) as a formal foundation for the analysis and representation of such spatial behaviour of a human and a robot, based on a compact encoding of the relative trajectories of two interacting agents in a sequential model. We present this QTC together with a distance measure and a probabilistic behaviour model, and outline its usage in an actual JSB study. We argue that the proposed QTC coding scheme and the derived methodologies for analysis and modelling are flexible and extensible enough to be adapted for a variety of other scenarios and studies.
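The "compact encoding of the relative trajectories of two interacting agents in a sequential model" described above can be sketched minimally as follows. This is a hypothetical reconstruction of the basic QTC idea, not the paper's implementation: each agent's motion relative to the other is reduced to a qualitative symbol (approaching "-", receding "+", stable "0"), and consecutive states with duplicates collapsed form the sequential model. The `EPS` noise threshold and helper names are assumptions for illustration.

```python
import math

EPS = 1e-3  # noise threshold for "no change"; an assumed value, not from the paper

def _sign(d_now, d_prev, eps=EPS):
    """'-' if the distance shrank (approaching), '+' if it grew (receding),
    '0' if it stayed the same within eps."""
    if d_now < d_prev - eps:
        return "-"
    if d_now > d_prev + eps:
        return "+"
    return "0"

def qtc_b_state(k_prev, k_now, l_prev, l_now):
    """Basic QTC state for agents k and l as a pair of symbols:
    (movement of k w.r.t. l, movement of l w.r.t. k)."""
    s_k = _sign(math.dist(k_now, l_prev), math.dist(k_prev, l_prev))
    s_l = _sign(math.dist(l_now, k_prev), math.dist(l_prev, k_prev))
    return (s_k, s_l)

def state_chain(k_traj, l_traj):
    """Sequential model: the sequence of QTC states along two synchronous
    trajectories, with consecutive duplicates collapsed."""
    chain = []
    for t in range(1, len(k_traj)):
        s = qtc_b_state(k_traj[t - 1], k_traj[t], l_traj[t - 1], l_traj[t])
        if not chain or chain[-1] != s:
            chain.append(s)
    return chain

# Example: a human walks towards a stationary robot, then stops.
chain = state_chain([(0, 0), (0.5, 0), (1, 0), (1, 0)], [(2, 0)] * 4)
# chain == [("-", "0"), ("0", "0")]
```

A distance measure between two such state chains (e.g. an edit distance over the symbol pairs) then allows interactions to be compared, which is roughly the role the paper's distance measure plays.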
The Impact of Social Expectation towards Robots on Human-Robot Interactions
This work is presented in defence of the thesis that it is possible to measure, in an explicit and succinct manner, the social expectations and perceptions that humans have of robots, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted within this body of work focused on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, and the other related to the degree of control they expected to exert over the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension.
This questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemic behaviours was correlated with their initial social expectations of the robot. If participants expected the robot to be more of a social equal, they preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle.
The questionnaire was also deployed in two long-term studies. In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the beginning of the study impacted not only how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the previous one, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.
Human-Machine Communication: Complete Volume. Volume 6
This is the complete volume of HMC Volume 6.
Staying engaged in child-robot interaction: A quantitative approach to studying preschoolers' engagement with robots and tasks during second-language tutoring
Introduction: Covid-19 has shown that our traditional way of teaching is becoming increasingly dependent on digital tools. In recent years (2020-2021), teachers have had to teach children online, and parents have had to supervise their children's learning activities. Digital tools that can support education, such as social robots, would have been extremely useful to teachers. Unlike tablets, robots can use their bodies to behave in ways similar to teachers, for example by gesturing while talking, which helps children concentrate better and benefits their learning outcomes. Moreover, robots, more than tablets, enable children to engage in social interaction, which is especially important when learning a second language (L2). This was the subject of my PhD project, which was part of the Horizon 2020 L2TOR project1, in which six different universities and two companies collaborated to investigate whether a robot could teach preschoolers words from a second language. One of the key questions in this project was how we could design robot behaviour that keeps children engaged. Children's engagement is important so that they remain willing to work with the robot over longer periods of time. To answer this question, I conducted several studies investigating the effect of the robot on children's engagement with the robot, as well as the children's perception of the robot. 1The L2TOR project made a substantial contribution within the human-robot interaction field to the movement towards open science. All L2TOR publications, project deliverables, source code, and data have been made publicly available via the website www.l2tor.eu and via www.github.nl/l2tor, and most studies were preregistered.
Human-Machine Communication: Complete Volume. Volume 1
This is the complete volume of HMC Volume 1
Human-robot spatial interaction using probabilistic qualitative representations
Current human-aware navigation approaches use a predominantly metric representation of the interaction, which makes them susceptible to changes in the environment. In order to accomplish reliable navigation in ever-changing, human-populated environments, the presented work aims to abstract from the underlying metric representation by using Qualitative Spatial Relations (QSR), namely the Qualitative Trajectory Calculus (QTC), for Human-Robot Spatial Interaction (HRSI). So far, this form of representing HRSI has been used to analyse different types of interactions offline. This work extends the representation to classify the interaction type online using incrementally updated QTC state chains, create a belief about the state of the world, and transform this high-level descriptor into low-level movement commands. By using QSRs, the system becomes invariant to changes in the environment, which is essential for any form of long-term deployment of a robot, but, most importantly, it also allows the transfer of knowledge between similar encounters in different environments to facilitate interaction learning. To create a robust qualitative representation of the interaction, the essence of the movement of the human in relation to the robot, and vice versa, is encoded in two new variants of QTC especially designed for HRSI and evaluated in several user studies. To enable interaction learning and facilitate reasoning, they are employed in a probabilistic framework using Hidden Markov Models (HMMs) for online classification and for evaluation of their appropriateness for the task of human-aware navigation.
In order to create a system for an autonomous robot, a perception pipeline for the detection and tracking of humans in the vicinity of the robot is described, which serves as an enabling technology to create incrementally updated QTC state chains in real time using the robot's sensors. Using this framework, the abstraction and generalisability of the QTC-based framework is tested with data from a different study for the classification of automatically generated state chains, which shows the benefits of using such a high-level description language. The detriment of using qualitative states to encode interaction is the severe loss of information that would be necessary to generate behaviour from them. To overcome this issue, so-called Velocity Costmaps are introduced, which restrict the sampling space of a reactive local planner to only allow the generation of trajectories that correspond to the desired QTC state. This results in a flexible and agile behaviour generation that is able to produce inherently safe paths. In order to classify the current interaction type online and predict the current state for action selection, the HMMs are evolved into a particle filter especially designed to work with QSRs of any kind. This online belief generation is the basis for a flexible action selection process based on data acquired using Learning from Demonstration (LfD) to encode human judgement into the used model. Thereby, the generated behaviour is not only sociable but also legible, and it ensures a high experienced comfort, as shown in the experiments conducted. LfD itself is a rather underused approach when it comes to human-aware navigation, but it is facilitated by the qualitative model and allows the exploitation of expert knowledge for model generation. Hence, the presented work bridges the gap between the speed and flexibility of a sampling-based reactive approach, by using the particle filter and fast action selection, and the legibility of deliberative planners, by using high-level information based on expert knowledge about the unfolding of an interaction.
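The Velocity Costmap idea sketched in this abstract can be illustrated with a toy example. This is a hypothetical reconstruction, not the thesis implementation: candidate velocities of a sampling-based local planner are pruned so that only those whose short forward projection realises the desired QTC symbol (robot relative to human) survive. The horizon `dt`, threshold `eps`, and function names are assumptions for illustration.

```python
import math
import random

def projected_qtc_symbol(robot_pos, vel, human_pos, dt=0.5, eps=1e-3):
    """QTC symbol the robot would produce w.r.t. a (static) human if it
    followed velocity `vel` for one short horizon `dt`:
    '-' approaching, '+' receding, '0' distance unchanged."""
    nxt = (robot_pos[0] + vel[0] * dt, robot_pos[1] + vel[1] * dt)
    d_prev = math.dist(robot_pos, human_pos)
    d_now = math.dist(nxt, human_pos)
    if d_now < d_prev - eps:
        return "-"
    if d_now > d_prev + eps:
        return "+"
    return "0"

def restrict_samples(samples, robot_pos, human_pos, desired_symbol):
    """Core of the Velocity Costmap idea: prune the planner's velocity
    samples to those consistent with the desired QTC state."""
    return [v for v in samples
            if projected_qtc_symbol(robot_pos, v, human_pos) == desired_symbol]

# A reactive planner might draw velocity samples like this:
random.seed(42)
samples = [(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
           for _ in range(200)]
# Keep only velocities that move the robot towards the human at (3, 0).
towards = restrict_samples(samples, (0.0, 0.0), (3.0, 0.0), "-")
```

Any trajectory generated from the surviving samples then corresponds to the desired qualitative state by construction, which is what makes the resulting behaviour inherently consistent with the modelled interaction.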
Privacy-Sensitive Robotics
This dissertation focuses on personal privacy in human-robot interaction, which we call "privacy-sensitive robotics." Our understanding of "privacy" is very broad, including not just information privacy but also physical, psychological, and social privacy. We begin by surveying the scholarly literature on privacy and talking about why it applies to interactions with robots. We then make five contributions to help launch privacy-sensitive robotics as an emerging area of research --- one from a literature review, three from empirical studies, and one about the future of privacy-sensitive robotics research:
1. We begin by presenting the current state of the art in privacy protection technologies (whether or not they were designed as such) from the literature.
2. Our first study found differences in usability and user preference between three different interfaces for specifying user privacy preferences to a robot.
3. Our next study showed how the contextual "framing" of an action affects whether people see it as a privacy violation.
4. Our third and final study documents the process of forming beliefs about the robot's sensing capabilities and identifies some key aspects of this process for further study.
5. Finally, we give a set of recommendations for developing privacy-sensitive robotics as a research area.
These five contributions are linked by the goal of privacy-sensitive robotics research: to enable a future in which robotics technology upholds and respects our privacy. We close with a call to action for potential privacy-sensitive robotics researchers.