Affect Recognition in Hand-Object Interaction Using Object-sensed Tactile and Kinematic Data
We investigate the recognition of the affective states of a person performing an action with an object by processing the object-sensed data. We focus on sequences of basic actions, such as grasping and rotating, which are constituents of daily-life interactions. iCube, a 5 cm cube, was used to collect tactile and kinematic data consisting of tactile maps (without information on the pressure applied to the surface) and rotations. We conduct two studies: classification of i) emotions and ii) vitality forms. In both, the participants perform a semi-structured task composed of basic actions. For emotion recognition, 237 trials by 11 participants associated with anger, sadness, excitement, and gratitude were used to train models on 10 hand-crafted features. The classifier accuracy reaches up to 82.7%. Interestingly, the same classifier, when trained exclusively on the tactile data, performs on par with its counterpart trained on all 10 features. For the second study, 1135 trials by 10 participants were used to classify two vitality forms. The best-performing model differentiated gentle actions from rude ones with an accuracy of 84.85%. The results also confirm that people touch objects differently when performing these basic actions with different affective states and attitudes.
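As a rough illustration of the kind of pipeline such a study implies, the sketch below trains a standard classifier on a 237 × 10 feature matrix with cross-validation. The placeholder data, the choice of scikit-learn, and the SVM classifier are assumptions for illustration only; the paper's actual features and models are not reproduced here.

```python
# Hypothetical sketch: classifying affective states from 10 hand-crafted
# tactile/kinematic features, as in the iCube study. Feature values, labels,
# and the choice of classifier are assumptions, not the paper's code.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 237                          # number of trials reported in the abstract
X = rng.normal(size=(n_trials, 10))     # placeholder for 10 hand-crafted features
y = rng.integers(0, 4, size=n_trials)   # anger, sadness, excitement, gratitude

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```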
Perception is Only Real When Shared: A Mathematical Model for Collaborative Shared Perception in Human-Robot Interaction
In everyday collaborative tasks, partners have to build a shared understanding of their environment by aligning their perceptions and establishing a common ground. This is one of the aims of shared perception: revealing characteristics of the individual perception to others with whom we share the same environment. In this regard, social cognitive processes, such as joint attention and perspective-taking, form a shared perception. From a Human-Robot Interaction (HRI) perspective, robots would benefit from the ability to establish shared perception with humans and a common understanding of the environment with their partners. In this work, we wanted to assess whether a robot, by considering the differences in perception between itself and its partner, could be more effective in its helping role, and to what extent this improves task completion and the interaction experience. For this purpose, we designed a mathematical model for collaborative shared perception that aims to maximise the collaborators’ knowledge of the environment when there are asymmetries in perception. Moreover, we instantiated and tested our model in a real HRI scenario. The experiment consisted of a cooperative game in which participants had to build towers of Lego bricks, while the robot took the role of a suggester. In particular, we conducted experiments using two different robot behaviours. In one condition, based on shared perception, the robot gave suggestions by considering the partner’s point of view and using its inference about their common ground to select the most informative hint. In the other condition, the robot simply indicated the brick that would have yielded the highest score from its individual perspective. The adoption of shared perception in the selection of suggestions led to better performance in all the instances of the game where the visual information was not a priori common to both agents. However, the subjective evaluation of the robot’s behaviour did not change between conditions.
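A minimal sketch of the core idea, selecting the hint that is most informative given the robot's belief about what the partner can already perceive, is shown below. The brick representation and the informativeness score are hypothetical illustrations, not the published model.

```python
# Illustrative sketch of selecting the "most informative" hint under
# asymmetric perception. The data structures and the scoring rule are
# assumptions for illustration, not the paper's mathematical model.
from dataclasses import dataclass

@dataclass
class Brick:
    name: str
    points: int                # value of the brick in the game
    visible_to_partner: float  # robot's belief that the partner already sees it

def pick_hint(bricks):
    # Individual perspective: point at the highest-scoring brick.
    best_individual = max(bricks, key=lambda b: b.points)
    # Shared perception: weight each brick's value by how likely it is
    # to be new information for the partner.
    best_shared = max(bricks, key=lambda b: b.points * (1.0 - b.visible_to_partner))
    return best_individual, best_shared

bricks = [Brick("red", 5, 0.9), Brick("blue", 4, 0.1), Brick("green", 2, 0.0)]
individual, shared = pick_hint(bricks)
print("individual suggestion:", individual.name)  # red: highest points
print("shared-perception suggestion:", shared.name)  # blue: valuable and likely unseen
```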
Shared perception is different from individual perception: a new look on context dependency
Human perception is based on unconscious inference, where sensory input integrates with prior information. This phenomenon, known as context dependency, helps in facing the uncertainty of the external world with predictions built upon previous experience. On the other hand, human perceptual processes are inherently shaped by social interactions. However, how the mechanisms of context dependency are affected is to date unknown. If using previous experience (priors) is beneficial in individual settings, it could represent a problem in social scenarios where other agents might not have the same priors, causing a perceptual misalignment on the shared environment. The present study addresses this question. We studied context dependency in an interactive setting with the humanoid robot iCub, which acted as a stimulus demonstrator. Participants reproduced the lengths shown by the robot in two conditions: one with iCub behaving socially and another with iCub acting as a mechanical arm. The different behaviour of the robot significantly affected the use of priors in perception. Moreover, the social robot positively impacted perceptual performance by enhancing accuracy and reducing participants’ overall perceptual errors. Finally, the observed phenomenon has been modelled following a Bayesian approach to deepen and explore a new concept of shared perception.
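The context-dependency effect described here is commonly modelled as Bayesian integration of the current sensory measurement with a prior built from the stimulus history; the sketch below shows that standard Gaussian formulation. The variances and the prior-update rule are illustrative assumptions, not the parameters fitted in the study.

```python
# Minimal sketch of Bayesian context dependency: the perceived length is a
# precision-weighted average of the sensed length and a prior built from the
# history of stimuli. Variances and the prior update are illustrative
# assumptions, not the fitted parameters of the study.
import numpy as np

def perceive(lengths, sigma_sensory=1.0, sigma_prior=2.0):
    prior_mean = lengths[0]
    estimates = []
    for length in lengths:
        w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)  # weight on the senses
        estimate = w * length + (1 - w) * prior_mean              # Gaussian posterior mean
        estimates.append(estimate)
        prior_mean = np.mean(lengths[:len(estimates)])            # prior tracks stimulus history
    return np.array(estimates)

stimuli = np.array([6.0, 10.0, 14.0, 8.0, 12.0])
print(perceive(stimuli))  # estimates are pulled toward the running mean: context dependency
```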
Visuomotor adaptation to a visual rotation is gravity dependent
Humans perform vertical and horizontal arm motions with different temporal patterns. The specific velocity profiles are chosen by the central nervous system by integrating the gravitational force field to minimize energy expenditure. However, what happens when a visuomotor rotation is applied, so that a motion performed in the horizontal plane is perceived as vertical? We investigated the dynamics of the adaptation of the spatial and temporal properties of a pointing motion during prolonged exposure to a 90-degree visuomotor rotation, where a horizontal movement was associated with vertical visual feedback. We found that participants immediately adapted the spatial parameters of motion to the conflicting visual scene in order to keep their arm trajectory straight. In contrast, the initially symmetric velocity profiles specific to horizontal motion were progressively modified during the conflict exposure, becoming more asymmetric and similar to those appropriate for a vertical motion. Importantly, this visual effect, which increased with repetitions, was not followed by a consistent aftereffect when the conflicting visual feedback was absent (catch and washout trials). In a control experiment, we demonstrated that an intrinsic representation of the temporal structure of perceived vertical motions could provide the error signal allowing for this progressive adaptation of motion timing. These findings suggest that gravity strongly constrains motor learning and the reweighting process between visual and proprioceptive sensory inputs, leading to the selection of a motor plan that is suboptimal in terms of energy expenditure.
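One common way to quantify the symmetry of a velocity profile is the relative time to peak velocity (time of the velocity peak divided by movement duration), which is roughly 0.5 for symmetric profiles and shifts away from 0.5 for the asymmetric profiles associated with vertical motion. The sketch below computes this measure on synthetic profiles; both the profiles and the choice of metric are assumptions for illustration, not the analysis reported in the paper.

```python
# Illustrative measure of velocity-profile (a)symmetry: relative time to peak
# velocity (t_peak / movement duration). ~0.5 indicates a symmetric profile;
# values away from 0.5 indicate the asymmetric profiles reported for vertical
# motions. Synthetic profiles and the metric are assumptions for illustration.
import numpy as np

def relative_time_to_peak(velocity, dt):
    t_peak = np.argmax(velocity) * dt
    duration = (len(velocity) - 1) * dt
    return t_peak / duration

t = np.linspace(0.0, 1.0, 201)
symmetric = np.sin(np.pi * t)                     # peak at mid-movement -> ratio ~ 0.5
asymmetric = np.sqrt(np.sin(np.pi * t)) * (1 - t) # earlier peak -> ratio < 0.5

dt = t[1] - t[0]
print(relative_time_to_peak(symmetric, dt))
print(relative_time_to_peak(asymmetric, dt))
```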
Moody Learners -- Explaining Competitive Behaviour of Reinforcement Learning Agents
Designing the decision-making processes of artificial agents that are involved in competitive interactions is a challenging task. In a competitive scenario, the agent not only has to deal with a dynamic environment but is also directly affected by the opponents' actions. Observing the Q-values of the agent is a common way of explaining its behaviour; however, they do not show the temporal relation between the selected actions. We address this problem by proposing the Moody framework. We evaluate our model by performing a series of experiments using the competitive multiplayer Chef's Hat card game and discuss how our model allows the agents to obtain a holistic representation of the competitive dynamics within the game.
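To illustrate the point that a snapshot of Q-values hides the temporal relation between selected actions, the sketch below runs a tiny tabular Q-learning loop, logs the Q-value of each chosen action over time, and aggregates them per episode. The toy environment and the aggregation are assumptions for illustration and are not the Moody framework itself.

```python
# Tiny illustration of why a single Q-table snapshot hides temporal structure:
# log the Q-value of each selected action over training and summarise it per
# episode. Toy environment and aggregation are illustrative assumptions only.
import random
from collections import defaultdict

random.seed(0)
ACTIONS = [0, 1]
Q = defaultdict(float)            # Q[(state, action)]
alpha, gamma, eps = 0.1, 0.9, 0.2
trace = []                        # (episode, step, chosen action, its Q-value)

def step(state, action):
    # Toy dynamics: action 1 in state 0 pays off, everything else does not.
    reward = 1.0 if (state == 0 and action == 1) else 0.0
    return (state + 1) % 2, reward

for episode in range(20):
    state = 0
    for t in range(10):
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        trace.append((episode, t, action, Q[(state, action)]))
        state = next_state

# A per-episode summary of the selected actions' Q-values keeps the temporal
# relation that a single snapshot of the Q-table would lose.
for ep in (0, 10, 19):
    vals = [q for e, _, _, q in trace if e == ep]
    print(ep, round(sum(vals) / len(vals), 3))
```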