Understanding Objects and Actions: a VR Experiment

Abstract

The human capability to interpret actions and to recognize objects is still far ahead of that of any technical system. Thus, a deeper understanding of how humans interpret human (inter)actions lies at the core of building better artificial cognitive systems. Here, we present results from a first series of perceptual experiments that show how humans are able to infer scenario classes, as well as individual actions and objects, from computer animations of everyday situations. The animations were created from a unique corpus of real-life recordings made in the European project POETICON, using motion-capture technology and advanced VR programming that allowed for full control over all aspects of the final rendered data.