Telerobotic Pointing Gestures Shape Human Spatial Cognition
This paper explored whether human beings can understand gestures
produced by telepresence robots and, if so, whether they can derive the meaning
conveyed by telerobotic gestures when processing spatial information. We
conducted two experiments over Skype in the present study. Participants were
presented with a robotic interface that had arms, which were teleoperated by an
experimenter. The robot could point to virtual locations that represented
certain entities. In Experiment 1, the experimenter described spatial locations
of fictitious objects sequentially in two conditions: speech condition (SO,
verbal descriptions clearly indicated the spatial layout) and speech and
gesture condition (SR, verbal descriptions were ambiguous but accompanied by
robotic pointing gestures). Participants were then asked to recall the objects'
spatial locations. We found that the number of spatial locations recalled in
the SR condition was on par with that in the SO condition, suggesting that
telerobotic pointing gestures compensated for ambiguous speech during the
processing of spatial information. In Experiment 2, the experimenter described spatial
locations non-sequentially in the SR and SO conditions. Surprisingly, the
number of spatial locations recalled in the SR condition was even higher than
that in the SO condition, suggesting that telerobotic pointing gestures were
more powerful than speech in conveying spatial information when information was
presented in an unpredictable order. The findings provide evidence that human
beings are able to comprehend telerobotic gestures, and importantly, integrate
these gestures with co-occurring speech. This work promotes engaging remote
collaboration among humans through a robot intermediary.
Comment: 27 pages, 7 figures
Activity in Inferior Parietal and Medial Prefrontal Cortex Signals the Accumulation of Evidence in a Probability Learning Task
In an uncertain environment, probabilities are key to predicting future events and making adaptive choices. However, little is known about how humans learn such probabilities and where and how they are encoded in the brain, especially when they concern more than two outcomes. During functional magnetic resonance imaging (fMRI), young adults learned the probabilities of uncertain stimuli through repetitive sampling. Stimuli represented payoffs, and participants had to predict their occurrence to maximize their earnings. Choices indicated loss and risk aversion but unbiased estimation of probabilities. The BOLD response in the medial prefrontal cortex and angular gyri increased linearly with the probability of the currently observed stimulus, untainted by its value. Connectivity analyses during rest and task revealed that these regions belonged to the default mode network. The activation of past outcomes in memory is evoked as a possible mechanism to explain the engagement of the default mode network in probability learning. A BOLD response relating to value was detected only at decision time, mainly in the striatum. It is concluded that activity in the inferior parietal and medial prefrontal cortex reflects the amount of evidence accumulated in favor of competing and uncertain outcomes.