Modeling Group Dynamics for Personalized Robot-Mediated Interactions
The field of human-human-robot interaction (HHRI) uses social robots to
positively influence how humans interact with each other. This objective
requires models of human understanding that consider multiple humans in an
interaction as a collective entity and represent the group dynamics that exist
within it. Understanding group dynamics is important because these can
influence the behaviors, attitudes, and opinions of each individual within the
group, as well as the group as a whole. Such an understanding is also useful
when personalizing an interaction between a robot and the humans in its
environment, where a group-level model can facilitate the design of robot
behaviors that are tailored to a given group, the dynamics that exist within
it, and the specific needs and preferences of the individual interactants. In
this paper, we highlight the need for group-level models of human understanding
in human-human-robot interaction research and show how these models can inform
the development of personalization techniques. We survey existing models of group
dynamics and categorize them into models of social dominance, affect, social
cohesion, and conflict resolution. We examine the important features these
models utilize, evaluate their potential to capture interpersonal aspects of a
social interaction, and assess their value for personalization techniques.
Finally, we identify directions for future work, and make a case for models of
relational affect as an approach that can better capture group-level
understanding of human-human interactions and be useful in personalizing
human-human-robot interactions.
Tactile sensing: a machine learning approach
This thesis addresses the problem of tactile sensing in a robot. We
construct an artificial finger and use machine learning to acquire the
ability to recognise textures and predict slip. The finger has randomly
distributed strain gauges and polyvinylidene fluoride (PVDF) films embedded in silicone.
In the texture recognition task, several machine learning algorithms
such as naive Bayes, decision trees, and naive Bayes trees have been
trained to distinguish materials sensed by the artificial finger. Different
textures induce different intensities of vibrations in the silicone. Consequently, textures can be distinguished by the presence of different frequencies in the signal. The data from the finger are preprocessed and
the Fourier coefficients of the sensor outputs are used to train classifiers.
We show that the learned classifiers generalise well to unseen datasets.
Our classifiers can distinguish between different materials such as carpet,
flooring vinyls, tiles, sponge, wood and polyvinyl-chloride (PVC) woven
mesh with an accuracy of 95 ± 4%.
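As a minimal sketch of the texture-recognition pipeline described above, the following assumes NumPy and scikit-learn: Fourier-coefficient magnitudes are extracted from windowed sensor signals and fed to a naive Bayes classifier. The synthetic signals, window length, and channel count are illustrative stand-ins for the finger's recordings, not values from the thesis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

def fourier_features(window, n_coeffs=32):
    """Magnitudes of the first n_coeffs Fourier coefficients per sensor channel."""
    spectrum = np.abs(np.fft.rfft(window, axis=0))  # (freq_bins, n_sensors)
    return spectrum[:n_coeffs].ravel()

def fake_window(freq, window_len=256, n_sensors=4):
    """Synthetic stand-in for one recording: each texture excites a dominant frequency."""
    tone = np.sin(2 * np.pi * freq * np.arange(window_len) / window_len)[:, None]
    return tone + 0.3 * rng.standard_normal((window_len, n_sensors))

textures = {0: 5.0, 1: 13.0, 2: 29.0}  # label -> dominant vibration frequency
X = np.stack([fourier_features(fake_window(f))
              for f in textures.values() for _ in range(40)])
y = np.repeat(list(textures), 40)

# Cross-validated accuracy of a naive Bayes classifier on the Fourier features.
print(cross_val_score(GaussianNB(), X, y, cv=5).mean())
```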
In the slip prediction task, we predict a slip by studying temporal
patterns in the multidimensional time-series data about the finger-object
contact. The multidimensional time series is analysed using probabilistic clustering, which transforms the data into a sequence of symbols that is used to train a hidden Markov model (HMM) classifier. Experimental results show that the classifier can predict a slip, at least 100 ms before the slip takes place, with an accuracy of 96% on unseen datasets.
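The slip-prediction pipeline lends itself to a similar sketch: frames of the multidimensional contact signal are quantized into discrete symbols by a probabilistic clustering model, and one HMM per class scores a candidate sequence, with the more likely model deciding the label. This assumes hmmlearn's CategoricalHMM and scikit-learn's GaussianMixture; the drifting synthetic signals stand in for real slip trials.

```python
import numpy as np
from hmmlearn.hmm import CategoricalHMM
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for finger-object contact trials: (T, n_channels) arrays.
stable = [rng.standard_normal((120, 6)) for _ in range(20)]
slip = [np.cumsum(rng.standard_normal((120, 6)), axis=0) for _ in range(20)]

# Probabilistic clustering turns each time step into one of 8 discrete symbols.
gmm = GaussianMixture(n_components=8, random_state=0).fit(np.concatenate(stable + slip))

def to_symbols(sequences):
    return [gmm.predict(seq).reshape(-1, 1) for seq in sequences]

def fit_hmm(symbol_seqs, n_states=4, n_symbols=8):
    model = CategoricalHMM(n_components=n_states, n_features=n_symbols, random_state=0)
    return model.fit(np.concatenate(symbol_seqs), lengths=[len(s) for s in symbol_seqs])

# One HMM per class; a sequence is labelled by whichever model scores it higher.
hmm_stable = fit_hmm(to_symbols(stable))
hmm_slip = fit_hmm(to_symbols(slip))

test = to_symbols([np.cumsum(rng.standard_normal((120, 6)), axis=0)])[0]
print("slip" if hmm_slip.score(test) > hmm_stable.score(test) else "no slip")
```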
ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion
In this letter, we introduce ViHOPE, a novel framework for estimating the 6D
pose of an in-hand object using visuotactile perception. Our key insight is
that the accuracy of the 6D object pose estimate can be improved by explicitly
completing the shape of the object. To this end, we introduce a novel
visuotactile shape completion module that uses a conditional Generative
Adversarial Network to complete the shape of an in-hand object based on
volumetric representation. This approach improves over prior works that
directly regress visuotactile observations to a 6D pose. By explicitly
completing the shape of the in-hand object and jointly optimizing the shape
completion and pose estimation tasks, we improve the accuracy of the 6D object
pose estimate. We train and test our model on a synthetic dataset and compare
it with the state-of-the-art. In the visuotactile shape completion task, we
outperform the state-of-the-art by 265% using the Intersection over Union (IoU) metric
and achieve 88% lower Chamfer Distance. In the visuotactile pose estimation
task, we present results that suggest our framework reduces position and
angular errors by 35% and 64%, respectively. Furthermore, we ablate our
framework to confirm the gain on the 6D object pose estimate from explicitly
completing the shape. Ultimately, we show that our framework produces models
that are robust to sim-to-real transfer on a real-world robot platform.
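A conditional GAN over voxel grids, as the abstract describes, might look roughly like the following PyTorch sketch: the generator completes a partial visuotactile occupancy grid, and the discriminator scores the (partial, completed) pair. The layer sizes and the 32³ resolution are illustrative assumptions, not ViHOPE's actual architecture.

```python
import torch
import torch.nn as nn

class VoxelCompletionGenerator(nn.Module):
    """Completes a partial visuotactile occupancy grid (encoder-decoder over 32^3 voxels)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 32^3 -> 16^3
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 16^3 -> 8^3
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 8^3 -> 16^3
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 16^3 -> 32^3
        )

    def forward(self, partial_voxels):
        return self.dec(self.enc(partial_voxels))

class VoxelDiscriminator(nn.Module):
    """Scores a (condition, candidate) voxel pair, i.e. a conditional discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 8 ** 3, 1),
        )

    def forward(self, partial, candidate):
        return self.net(torch.cat([partial, candidate], dim=1))

partial = torch.rand(2, 1, 32, 32, 32)           # batch of partial occupancy grids
completed = VoxelCompletionGenerator()(partial)  # same shape, filled in
score = VoxelDiscriminator()(partial, completed)
# Per the abstract, the completed shape also feeds a pose head trained jointly.
```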
Hierarchical Graph Neural Networks for Proprioceptive 6D Pose Estimation of In-hand Objects
Robotic manipulation, in particular in-hand object manipulation, often
requires an accurate estimate of the object's 6D pose. To improve the accuracy
of the estimated pose, state-of-the-art approaches in 6D object pose estimation
use observational data from one or more modalities, e.g., RGB images, depth,
and tactile readings. However, existing approaches make limited use of the
underlying geometric structure of the object captured by these modalities,
thereby increasing their reliance on visual features. This results in poor
performance when presented with objects that lack such visual features or when
visual features are simply occluded. Furthermore, current approaches do not
take advantage of the proprioceptive information embedded in the position of
the fingers. To address these limitations, in this paper: (1) we introduce a
hierarchical graph neural network architecture for combining multimodal (vision
and touch) data that allows for a geometrically informed 6D object pose
estimation, (2) we introduce a hierarchical message passing operation that
flows the information within and across modalities to learn a graph-based
object representation, and (3) we introduce a method that incorporates
proprioceptive information into the in-hand object representation. We evaluate our
model on a diverse subset of objects from the YCB Object and Model Set, and
show that our method substantially outperforms existing state-of-the-art work
in accuracy and robustness to occlusion. We also deploy our proposed framework
on a real robot and qualitatively demonstrate successful transfer to real
settings.
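To make the hierarchical message-passing idea concrete, here is a minimal PyTorch sketch (not the paper's architecture): messages are first exchanged within the vision and touch graphs, then across them, and the pooled node features are regressed to a 7D pose (translation plus quaternion). Fingertip proprioception could enter as extra features on the touch nodes; all shapes and names here are invented for illustration.

```python
import torch
import torch.nn as nn

class MessagePass(nn.Module):
    """Mean-aggregation message passing over an edge list (illustrative operator)."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)

    def forward(self, x, edges):  # x: (N, dim); edges: (2, E) src/dst node indices
        src, dst = edges
        m = self.msg(torch.cat([x[src], x[dst]], dim=-1))  # one message per edge
        agg = torch.zeros_like(x).index_add_(0, dst, m)    # sum messages per node
        deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(dst.size(0), 1))
        return torch.relu(x + agg / deg.clamp(min=1))

class HierarchicalPoseNet(nn.Module):
    """Within-modality passes first, then a cross-modality pass, then pose regression."""
    def __init__(self, dim=64):
        super().__init__()
        self.vision_mp = MessagePass(dim)
        self.touch_mp = MessagePass(dim)
        self.cross_mp = MessagePass(dim)
        self.pose_head = nn.Linear(dim, 7)  # translation (3) + quaternion (4)

    def forward(self, vis, tac, vis_e, tac_e, cross_e):
        vis, tac = self.vision_mp(vis, vis_e), self.touch_mp(tac, tac_e)
        x = torch.cat([vis, tac], dim=0)      # cross edges index this joint node set
        x = self.cross_mp(x, cross_e)
        return self.pose_head(x.mean(dim=0))  # pool all nodes into one pose estimate

vis, tac = torch.rand(10, 64), torch.rand(4, 64)  # vision / touch node features
vis_e, tac_e = torch.randint(0, 10, (2, 30)), torch.randint(0, 4, (2, 8))
cross_e = torch.randint(0, 14, (2, 20))           # links across the 14 joint nodes
pose = HierarchicalPoseNet()(vis, tac, vis_e, tac_e, cross_e)  # -> shape (7,)
```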
"How Did They Come Across?" Lessons Learned from Continuous Affective Ratings
Social distance, or perception of the other, is recognized as a dynamic
dimension of an interaction, but it has yet to be widely explored or understood.
Through CORAE, a novel web-based open-source tool for COntinuous Retrospective
Affect Evaluation, we collected retrospective ratings of interpersonal
perceptions between 12 participant dyads. In this work, we explore how
different aspects of these interactions are reflected in the collected ratings,
through a discourse analysis of individual and social behavior of the
interactants. We found that different events observed in the ratings can be
mapped to complex interaction phenomena, shedding light on relevant interaction
features that may play a role in interpersonal understanding and grounding.
This paves the way for better, more seamless human-robot interactions, where
affect is interpreted as highly dynamic and contingent on interaction history.
CORAE: A Tool for Intuitive and Continuous Retrospective Evaluation of Interactions
This paper introduces CORAE, a novel web-based open-source tool for
COntinuous Retrospective Affect Evaluation, designed to capture continuous
affect data about interpersonal perceptions in dyadic interactions. Grounded in
behavioral ecology perspectives of emotion, this approach replaces valence as
the relevant rating dimension with approach and withdrawal, reflecting the
degree to which behavior is perceived as increasing or decreasing social
distance. We conducted a study to experimentally validate the efficacy of our
platform with 24 participants. The tool's effectiveness was tested in the
context of dyadic negotiation, revealing insights about how interpersonal
dynamics evolve over time. We find that the continuous affect rating method is
consistent with individuals' perception of the overall interaction. This paper
contributes to the growing body of research on affective computing and offers a
valuable tool for researchers interested in investigating the temporal dynamics
of affect and emotion in social interactions.