Human and Machine Action Prediction Independent of Object Information
Predicting other people's actions is key to successful social interactions,
enabling us to adjust our own behavior to the consequences of others' future
actions. Studies on action recognition have focused on the importance of
individual visual features of the objects involved in an action and of its context.
Humans, however, recognize actions on unknown objects or even when objects are
imagined (pantomime). Other cues must thus compensate for the lack of recognizable
visual object features. Here, we focus on the role of inter-object relations
that change during an action. We designed a virtual reality setup and tested
recognition speed for 10 different manipulation actions with 50 subjects. All
objects were abstracted by emulated cubes, so the actions could not be inferred
from object information. Instead, subjects had to rely only on the information
carried by the changes in the spatial relations between those cubes. In spite
of these constraints, our results show that subjects were able to predict
actions, on average, within less than 64% of an action's duration. We
employed a computational model, an enriched Semantic Event Chain (eSEC),
which incorporates the information of spatial relations, specifically (a) objects'
touching/untouching, (b) static spatial relations between objects, and (c)
dynamic spatial relations between objects (a schematic sketch of this encoding
is given below). Trained on the same actions as those
observed by subjects, the model successfully predicted actions even better than
humans. Information-theoretic analysis shows that eSECs use individual cues
optimally, whereas humans presumably rely mostly on a mixed-cue strategy,
which takes longer to reach recognition. Providing a better cognitive basis
for action recognition may, on the one hand, improve our understanding of
related human pathologies and, on the other hand, help to build robots for
conflict-free human-robot cooperation. Our results open new avenues here.

Comment: This paper includes 31 pages, 11 figures and 1 table.
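To make the relational encoding more concrete, the following Python sketch illustrates how an event-chain-style column of pairwise relations (touching/untouching, static, and dynamic relations) could be derived from object positions. The class and function names, relation categories, and distance threshold are illustrative assumptions and do not reproduce the published eSEC implementation.

```python
# Hypothetical sketch of an event-chain-style encoding of pairwise object
# relations over time. Names, relation sets, and the distance threshold are
# illustrative assumptions, not the authors' eSEC implementation.
from dataclasses import dataclass
from itertools import combinations
import numpy as np

TOUCH_THRESHOLD = 0.01  # assumed contact distance in meters


@dataclass
class ObjectState:
    name: str
    position: np.ndarray  # 3D position of the (abstracted) cube


def touching(a: ObjectState, b: ObjectState) -> str:
    """Cue (a): touching / untouching relation."""
    return "T" if np.linalg.norm(a.position - b.position) < TOUCH_THRESHOLD else "U"


def static_relation(a: ObjectState, b: ObjectState) -> str:
    """Cue (b): coarse static spatial relation (above / below / beside)."""
    dz = a.position[2] - b.position[2]
    if dz > TOUCH_THRESHOLD:
        return "above"
    if dz < -TOUCH_THRESHOLD:
        return "below"
    return "beside"


def dynamic_relation(prev_dist: float, curr_dist: float) -> str:
    """Cue (c): dynamic relation from the change of pairwise distance."""
    if curr_dist < prev_dist:
        return "approaching"
    if curr_dist > prev_dist:
        return "receding"
    return "stable"


def encode_frame(prev: dict, curr: dict) -> dict:
    """One column of the event chain: relation triple for every object pair."""
    column = {}
    for a, b in combinations(sorted(curr), 2):
        prev_dist = np.linalg.norm(prev[a].position - prev[b].position)
        curr_dist = np.linalg.norm(curr[a].position - curr[b].position)
        column[(a, b)] = (
            touching(curr[a], curr[b]),
            static_relation(curr[a], curr[b]),
            dynamic_relation(prev_dist, curr_dist),
        )
    return column
```

Accumulating such columns over a recording and keeping only the frames in which some pairwise relation changes yields a compact chain of relational events, which is the general idea behind Semantic Event Chains; how the eSEC model matches such chains to learned actions is described in the paper itself.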