Shared perception is different from individual perception: a new look on context dependency
Human perception is based on unconscious inference, in which sensory input is integrated with prior information. This phenomenon, known as context dependency, helps us face the uncertainty of the external world with predictions built upon previous experience. At the same time, human perceptual processes are inherently shaped by social interactions, yet how such interactions affect the mechanisms of context dependency is to date unknown. While relying on previous experience - priors - is beneficial in individual settings, it could be problematic in social scenarios, where other agents might not share the same priors, leading to perceptual misalignment on the shared environment. The present study addresses this question. We studied context dependency in an interactive setting with the humanoid robot iCub, which acted as a stimulus demonstrator. Participants reproduced the lengths shown by the robot in two conditions: one in which iCub behaved socially and another in which it acted as a mechanical arm. The robot's different behaviors significantly affected the use of priors in perception. Moreover, the social robot positively impacted perceptual performance by enhancing accuracy and reducing participants' overall perceptual errors. Finally, the observed phenomenon was modelled following a Bayesian approach to deepen and explore a new concept of shared perception.

Comment: 14 pages, 9 figures, 1 table. IEEE Transactions on Cognitive and Developmental Systems, 202
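The Bayesian account of context dependency summarized above has a standard textbook form: each perceptual estimate is a precision-weighted average of the noisy sensory measurement and a prior built from previously seen stimuli, so estimates regress toward the prior mean (the central-tendency bias). The sketch below illustrates this general mechanism; all numbers are hypothetical and are not the study's fitted parameters.

```python
# Minimal sketch of Gaussian prior-likelihood integration, the standard
# Bayesian model of context dependency. Parameter values are illustrative
# assumptions, not values from the paper.

def bayes_estimate(measurement, sigma_sensory, prior_mean, sigma_prior):
    """Posterior mean of two Gaussians: a precision-weighted average."""
    w_sensory = 1.0 / sigma_sensory**2   # precision of the sensory input
    w_prior = 1.0 / sigma_prior**2       # precision of the prior
    return (w_sensory * measurement + w_prior * prior_mean) / (w_sensory + w_prior)

# A reliable (narrow) prior pulls estimates toward its mean, producing the
# central-tendency bias; a broad prior leaves estimates near the raw input.
stimuli = [8.0, 10.0, 12.0, 14.0]              # hypothetical lengths shown
prior_mean = sum(stimuli) / len(stimuli)        # prior learned from exposure

strong_prior = [bayes_estimate(s, 2.0, prior_mean, 1.0) for s in stimuli]
weak_prior = [bayes_estimate(s, 2.0, prior_mean, 10.0) for s in stimuli]
```

Under this model, a change in how strongly participants weight the prior (here, `sigma_prior`) is exactly the kind of effect the abstract attributes to the robot's social versus mechanical behavior.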
Can Human-Robot Interaction Promote the Same Depth of Social Information Processing as Human-Human Interaction?
Recent studies on human-robot interaction have suggested that humanoid robots have considerable potential in social cognition research. However, the authors are not aware of any studies regarding social information processing in human-robot interactions. To address this issue, we considered two types of social interaction tasks (initiating and responding joint attention tasks) and two types of interaction partners (robot and human partners). Distinguishing between these types of joint attention (JA) is important because they are thought to reflect unique but common constellations of processes in human social cognition and social learning. Thirty-seven participants were recruited for the current study (Study 1: 20 participants; Study 2: 17 participants), and they performed a picture-recognition social information processing task with either robot or human partners. The results of Study 1 suggested that participants who interacted with a humanoid robot achieved better recognition memory performance in the initiating JA condition than in the responding JA condition. The results of Study 2 suggested that human-human and human-robot interactions resulted in no quantifiable differences in recognition memory. We discuss the implications of our results for the utility of humanoid robots in social cognition studies and future research questions on human-robot interactions.

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2014R1A1A1005390 and NRF-2016R1E1A2020733).