Learning Finite State Machine Controllers from Motion Capture Data

By Marco Gillies

Abstract

With characters in computer games and interactive media increasingly being based on real actors, the individuality of an actor's performance should be reflected not only in the appearance and animation of the character but also in the artificial intelligence that governs the character's behavior and interactions with the environment. Machine learning methods applied to motion capture data provide a way of doing this. This paper presents a method for learning the parameters of a Finite State Machine controller. The method learns both the transition probabilities of the Finite State Machine and how to select animations based on the current state.
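
The paper's own learning procedure is not reproduced here. As a minimal illustration only, the sketch below estimates Finite State Machine transition probabilities by maximum-likelihood counting over behaviour-labelled state sequences, which is one standard way such parameters can be learned from annotated motion capture data. The state labels and the `estimate_transition_probabilities` helper are hypothetical and not taken from the paper.

```python
from collections import defaultdict

def estimate_transition_probabilities(state_sequences):
    """Maximum-likelihood estimate of FSM transition probabilities.

    state_sequences: lists of state labels (e.g. behaviour labels
    annotated on motion capture clips). Illustrative sketch only,
    not the method described in the paper.
    """
    # Count observed transitions current -> next across all sequences.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in state_sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1

    # Normalise counts into per-state probability distributions.
    probabilities = {}
    for state, next_counts in counts.items():
        total = sum(next_counts.values())
        probabilities[state] = {nxt: c / total for nxt, c in next_counts.items()}
    return probabilities

# Hypothetical behaviour labels segmented from motion capture data.
sequences = [
    ["idle", "walk", "walk", "gesture", "idle"],
    ["idle", "gesture", "walk", "idle"],
]
print(estimate_transition_probabilities(sequences))
```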

Publisher: IEEE
Year: 2009
OAI identifier: oai:eprints.gold.ac.uk:2290
