Learning policies for embodied virtual agents through demonstration


Although many powerful AI and machine learning techniques exist, it remains difficult to quickly create AI for embodied virtual agents that produces visually lifelike behavior. This is important for applications (e.g., games, simulators, interactive displays) where an agent must behave in a manner that appears human-like. We present a novel technique for learning reactive policies that mimic demonstrated human behavior. The user demonstrates the desired behavior by dictating the agent's actions during an interactive animation. Later, when the agent is to behave autonomously, the recorded data is generalized to form a continuous state-to-action mapping. Combined with an appropriate animation algorithm (e.g., motion capture), the learned policies realize stylized and natural-looking agent behavior. We empirically demonstrate the efficacy of our technique for quickly producing policies which result in lifelike virtual agent behavior.
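The core idea in the abstract — record (state, action) pairs during an interactive demonstration, then generalize them into a continuous state-to-action mapping — can be sketched as follows. The abstract does not name the regressor used; distance-weighted k-nearest-neighbor interpolation is one plausible stand-in, shown here purely as an illustrative assumption.

```python
import math

def make_policy(demos, k=3):
    """Generalize recorded (state, action) pairs into a continuous
    state-to-action mapping. 'demos' is a list of (state, action)
    tuples, each a fixed-length tuple of floats.

    NOTE: distance-weighted k-NN is an assumed regressor, not
    necessarily the one used in the paper."""
    def policy(state):
        # Rank demonstrations by Euclidean distance to the query state
        # and keep the k closest.
        ranked = sorted(demos, key=lambda sa: math.dist(state, sa[0]))[:k]
        # Inverse-distance weights (epsilon avoids division by zero
        # when the query exactly matches a demonstrated state).
        weights = [1.0 / (math.dist(state, s) + 1e-9) for s, _ in ranked]
        total = sum(weights)
        dim = len(ranked[0][1])
        # Blend the neighbors' actions into one continuous action.
        return tuple(
            sum(w * a[i] for w, (_, a) in zip(weights, ranked)) / total
            for i in range(dim)
        )
    return policy
```

For example, with two recorded demonstrations `[((0.0,), (0.0,)), ((1.0,), (1.0,))]` and `k=2`, querying the policy at the unseen state `(0.5,)` returns the blended action `(0.5,)`, giving the reactive, continuous behavior the abstract describes.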


This paper is indexed in CiteSeerX.
