Dynamic changes in complex, real-time environments, such as modern video games, can violate an agent's expectations. We describe a system that responds competently to such violations by changing its own goals, using an algorithm based on a conceptual model for goal-driven autonomy. We describe this model, clarify when such behavior is beneficial, and explain how our system (which employs an HTN planner) partially instantiates and diverges from this model. Finally, we describe a pilot evaluation of its performance for controlling agent behavior in a team shooter game. We claim that the ability to self-select goals can, under some conditions, improve plan execution performance in a dynamic environment.
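The goal-driven autonomy cycle the abstract alludes to can be sketched as a loop of discrepancy detection, explanation, goal formulation, and goal management. The following is a minimal, hypothetical sketch; all class and method names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a goal-driven autonomy (GDA) cycle.
# None of these names come from the paper; the policies are placeholders.
from dataclasses import dataclass, field

@dataclass
class GDAAgent:
    goals: list = field(default_factory=list)

    def detect_discrepancy(self, expected, observed):
        # A discrepancy is any mismatch between expected and observed state.
        return {k: (expected[k], observed.get(k))
                for k in expected if observed.get(k) != expected[k]}

    def explain(self, discrepancy):
        # Placeholder explanation: attribute each mismatch to an unknown cause.
        return {k: "unexplained-change" for k in discrepancy}

    def formulate_goal(self, explanation):
        # Map each explained discrepancy to a candidate goal.
        return [f"restore:{k}" for k in explanation]

    def manage_goals(self, new_goals):
        # Simple policy: newly formulated goals preempt the current goal stack.
        self.goals = new_goals + self.goals
        return self.goals[0] if self.goals else None

    def step(self, expected, observed):
        discrepancy = self.detect_discrepancy(expected, observed)
        if not discrepancy:
            return self.goals[0] if self.goals else None
        return self.manage_goals(self.formulate_goal(self.explain(discrepancy)))

agent = GDAAgent(goals=["hold-position"])
# The agent expected the flag to be safe but observes it captured,
# so it self-selects a new goal instead of continuing its old plan.
print(agent.step({"flag": "safe"}, {"flag": "captured"}))  # → restore:flag
```

In a full GDA system each of these placeholder policies would be far richer (e.g. model-based explanation, utility-driven goal management), but the control flow of the cycle is the same.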